Sample records for test case model

  1. Validation Test Report for the CRWMS Analysis and Logistics Visually Interactive Model (CALVIN) Version 3.0, 10074-VTR-3.0-00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Gillespie

    2000-07-27

    This report describes the tests performed to validate the CRWMS "Analysis and Logistics Visually Interactive" Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M&O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M&O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: spent nuclear fuel (SNF) and reactivity calculations; options for altering reactor life; adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); transportation cask shipping to and storage at an Interim Storage Facility (ISF); reactor pool allocation options; and disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. All of the test case results agree with the CALVIN V3.0 results within the bounds of the acceptance criteria. Therefore, it is concluded that the CALVIN V3.0 calculation models and options tested in this report are validated.

  2. Test Cases for Flutter of the Benchmark Models Rectangular Wings on the Pitch and Plunge Apparatus

    NASA Technical Reports Server (NTRS)

    Bennett, Robert M.

    2000-01-01

    The supercritical airfoil was chosen as a relatively modern airfoil for comparison. The B0012 model was tested first. Three different types of flutter instability boundaries were encountered: a classical flutter boundary, a transonic stall flutter boundary at angle of attack, and a plunge instability near M = 0.9 at zero angle of attack. This test was made in air and was Transonic Dynamics Tunnel (TDT) Test 468. The BSCW model (for Benchmark SuperCritical Wing) was tested next as TDT Test 470. It was tested using both air and a heavy gas, R-12, as test media. The effect of a transition strip on flutter was evaluated in air. The B64A010 model was subsequently tested as TDT Test 493. Some further analysis of the experimental data for the B0012 wing is presented. Transonic calculations using the parameters for the B0012 wing in a two-dimensional typical section flutter analysis are given. These data are supplemented with data from the Benchmark Active Controls Technology model (BACT) given in the next chapter of this document. The BACT model was of the same planform and airfoil as the B0012 model, but with spoilers and a trailing edge control. It was tested in the heavy gas R-12, and was instrumented mostly at the 60 percent span. The flutter data obtained on PAPA and the static aerodynamic test cases from BACT serve as additional data for the B0012 model. All three types of flutter are included in the BACT Test Cases. In this report several test cases are selected to illustrate trends for a variety of different conditions with emphasis on transonic flutter. Cases are selected for classical and stall flutter for the BSCW model, for classical flutter and plunge instability for the B64A010 model, and for classical flutter for the B0012 model. Test Cases are also presented for BSCW at static angles of attack. Only the mean pressures and the real and imaginary parts of the first harmonic of the pressures are included in the data for the test cases, but digitized time histories have been archived. The data for the test cases are available as separate electronic files. An overview of the model and tests is given, the standard formulary for these data is listed, and some sample results are presented.

  3. Model Based Analysis and Test Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  4. Airside HVAC BESTEST. Adaptation of ASHRAE RP 865 Airside HVAC Equipment Modeling Test Cases for ASHRAE Standard 140. Volume 1, Cases AE101-AE445

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neymark, J.; Kennedy, M.; Judkoff, R.

    This report documents a set of diagnostic analytical verification cases for testing the ability of whole building simulation software to model the air distribution side of typical heating, ventilating and air conditioning (HVAC) equipment. These cases complement the unitary equipment cases included in American National Standards Institute (ANSI)/American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, which test the ability to model the heat-transfer fluid side of HVAC equipment.

  5. Formal methods for test case generation

    NASA Technical Reports Server (NTRS)

    Rushby, John (Inventor); De Moura, Leonardo Mendonça (Inventor); Hamon, Gregoire (Inventor)

    2011-01-01

    The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within allotted time and memory. The approach has been applied to use of the model checkers of SRI's SAL system and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.
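
    To make the core idea concrete (this is an illustrative sketch, not SRI's SAL-based implementation): given an explicit finite-state model, extending existing tests toward uncovered coverage targets can be phrased as a breadth-first search seeded *from* the states reached by previous tests. All names below are hypothetical.

      from collections import deque

      def extend_tests(transitions, initial, existing_tests, targets):
          """Extend existing test prefixes to reach uncovered coverage targets:
          search starts *from* every state reached by a previous test and stops
          when each target state is reached by some extended input sequence.

          transitions: dict state -> list of (input, next_state)
          """
          frontier, seen = deque(), set()
          for test in existing_tests + [[]]:          # [] = the empty test prefix
              state = initial
              for inp in test:
                  state = dict(transitions[state])[inp]
              if state not in seen:
                  seen.add(state)
                  frontier.append((state, list(test)))
          new_tests, remaining = [], set(targets)
          while frontier and remaining:
              state, prefix = frontier.popleft()
              if state in remaining:
                  remaining.discard(state)
                  new_tests.append(prefix)            # extended test hits a target
              for inp, nxt in transitions.get(state, []):
                  if nxt not in seen:
                      seen.add(nxt)
                      frontier.append((nxt, prefix + [inp]))
          return new_tests

      # Toy model: cover state "D", reusing the existing test ["x"] as a prefix.
      fsm = {"A": [("x", "B"), ("y", "C")], "B": [("z", "D")], "C": [], "D": []}
      print(extend_tests(fsm, "A", [["x"]], {"D"}))   # -> [['x', 'z']]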

  6. Black-Box System Testing of Real-Time Embedded Systems Using Random and Search-Based Testing

    NASA Astrophysics Data System (ADS)

    Arcuri, Andrea; Iqbal, Muhammad Zohaib; Briand, Lionel

    Testing real-time embedded systems (RTES) is in many ways challenging. Thousands of test cases can potentially be executed on an industrial RTES. Given the magnitude of testing at the system level, only a fully automated approach can really scale up to test industrial RTES. In this paper we take a black-box approach and model the RTES environment using the UML/MARTE international standard. Our main motivation is to provide a more practical approach to the model-based testing of RTES by allowing system testers, who are often not familiar with the system design but know the application domain well enough, to model the environment to enable test automation. Environment models can support the automation of three tasks: the code generation of an environment simulator, the selection of test cases, and the evaluation of their expected results (oracles). In this paper, we focus on the second task (test case selection) and investigate three test automation strategies using inputs from UML/MARTE environment models: Random Testing (baseline), Adaptive Random Testing, and Search-Based Testing (using Genetic Algorithms). Based on one industrial case study and three artificial systems, we show how, in general, no technique is better than the others. Which test selection technique to use is determined by the failure rate (testing stage) and the execution time of test cases. Finally, we propose a practical process to combine the use of all three test strategies.
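
    Of the three strategies compared, Adaptive Random Testing is the least standard; the sketch below shows its common fixed-size-candidate-set variant, under the simplifying assumption that a test case is a vector of numeric environment parameters (the paper's UML/MARTE-derived inputs are richer).

      import random

      def adaptive_random_tests(n_tests, dim, n_candidates=10, lo=0.0, hi=1.0):
          """Fixed-size-candidate-set ART: each new test is the random candidate
          farthest (by min-distance) from all previously selected tests, which
          spreads the test suite evenly across the input space."""
          def dist(a, b):
              return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

          tests = [[random.uniform(lo, hi) for _ in range(dim)]]  # first: pure random
          while len(tests) < n_tests:
              candidates = [[random.uniform(lo, hi) for _ in range(dim)]
                            for _ in range(n_candidates)]
              best = max(candidates,
                         key=lambda c: min(dist(c, t) for t in tests))
              tests.append(best)
          return tests

      suite = adaptive_random_tests(n_tests=20, dim=3)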

  7. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.

  8. Test Cases for the Benchmark Active Controls: Spoiler and Control Surface Oscillations and Flutter

    NASA Technical Reports Server (NTRS)

    Bennett, Robert M.; Scott, Robert C.; Wieseman, Carol D.

    2000-01-01

    As a portion of the Benchmark Models Program at NASA Langley, a simple generic model was developed for active controls research and was called BACT, for Benchmark Active Controls Technology model. This model was based on the previously tested Benchmark Models rectangular wing with the NACA 0012 airfoil section that was mounted on the Pitch and Plunge Apparatus (PAPA) for flutter testing. The BACT model had an upper surface spoiler, a lower surface spoiler, and a trailing edge control surface for use in flutter suppression and dynamic response excitation. Previous experience with flutter suppression indicated a need for measured control surface aerodynamics for accurate control law design. Three different types of flutter instability boundaries had also been determined for the NACA 0012/PAPA model: a classical flutter boundary, a transonic stall flutter boundary at angle of attack, and a plunge instability near M = 0.9. Therefore, an extensive set of steady and control surface oscillation data was generated spanning the range of the three types of instabilities. This information was subsequently used to design control laws to suppress each flutter instability. There have been three tests of the BACT model. The objective of the first test, TDT Test 485, was to generate a data set of steady and unsteady control surface effectiveness data, and to determine the open loop dynamic characteristics of the control systems including the actuators. Unsteady pressures, loads, and transfer functions were measured. The other two tests, TDT Test 502 and TDT Test 518, were primarily oriented towards active controls research, but some data supplementary to the first test were obtained. Dynamic response of the flexible system to control surface excitation and open loop flutter characteristics were determined during Test 502. Loads were not measured during the last two tests. During these tests, a database of over 3000 data sets was obtained. A reasonably extensive subset of the data sets from the first two tests has been chosen for Test Cases for computational comparisons, concentrating on static conditions and cases with harmonically oscillating control surfaces. Several flutter Test Cases from both tests have also been included. Some aerodynamic comparisons with the BACT data have been made using computational fluid dynamics codes at the Navier-Stokes level (and in the accompanying chapter SC). Some mechanical and active control studies have been presented. In this report several Test Cases are selected to illustrate trends for a variety of different conditions with emphasis on transonic flow effects. Cases for static angles of attack, static trailing-edge and upper-surface spoiler deflections are included for a range of conditions near those for the oscillation cases. Cases for trailing-edge control and upper-surface spoiler oscillations for a range of Mach numbers, angles of attack, and static control deflections are included. Cases for all three types of flutter instability are selected. In addition, some cases are included for dynamic response measurements during forced oscillations of the controls on the flexible mount. An overview of the model and tests is given, and the standard formulary for these data is listed. Some sample data and sample results of calculations are presented. Only the static pressures and the first harmonic real and imaginary parts of the pressures are included in the data for the Test Cases, but digitized time histories have been archived. The data for the Test Cases are also available as separate electronic files.

  9. Experimental Applications of Automatic Test Markup Language (ATML)

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; McCartney, Patrick; Gorringe, Chris

    2012-01-01

    The authors describe challenging use-cases for Automatic Test Markup Language (ATML), and evaluate solutions. The first case uses ATML Test Results to deliver active features to support test procedure development and test flow, and bridging mixed software development environments. The second case examines adding attributes to Systems Modelling Language (SysML) to create a linkage for deriving information from a model to fill in an ATML document set. Both cases are outside the original concept of operations for ATML but are typical when integrating large heterogeneous systems with modular contributions from multiple disciplines.

  10. Enhancing HIV Testing and Treatment among Men Who Have Sex with Men in China: A Pilot Model with Two-Rapid Tests, Single Blood Draw Session, and Intensified Case Management in Six Cities in 2013.

    PubMed

    Zhang, Dapeng; Lu, Hongyan; Zhuang, Minghua; Wu, Guohui; Yan, Hongjing; Xu, Jun; Wei, Xiaoli; Li, Chengmei; Meng, Sining; Fu, Xiaojing; Qi, Jinlei; Wang, Peng; Luo, Mei; Dai, Min; Yip, Ray; Sun, Jiangping; Wu, Zunyou

    2016-01-01

    To explore models to improve HIV testing, linkage to care and treatment among men who have sex with men (MSM) in cooperation with community-based organizations (CBOs) in China. We introduced a new model for HIV testing services targeting MSM in six cities in 2013. These models introduced provision of rapid HIV testing by CBO staff and streamlined processes for HIV screening, confirmation of initial reactive screening results, and linkage to care among diagnosed people. We monitored attrition along each step of the continuum of care from screening to treatment and compared program performance between 2012 and 2013. According to the providers of the two rapid tests (HIV screening), four different service delivery models were examined in 2013: Model A = first screen at CDC, second at CDC (CDC+CDC); Model B = first and second screens at CBOs (CBO+CBO); Model C = first screen at CBO, second at hospital (CBO+Hosp); and Model D = first screen at CBO, second at CDC (CBO+CDC). Logistic regressions were performed to assess the advantages of the different screening models for case finding and case management. Compared to 2012, the number of HIV screening tests performed for MSM increased 35.8% in 2013 (72,577 in 2013 vs. 53,455 in 2012). We observed a 5.6% increase in the proportion of cases screened reactive receiving HIV confirmatory tests (93.9% in 2013 vs. 89.2% in 2012, χ2 = 48.52, p<0.001) and a 65% reduction in loss to CD4 cell count tests (15% in 2013 vs. 43% in 2012, χ2 = 628.85, p<0.001). Regarding linkage to care and treatment, the 2013 pilot showed that Model D had the highest rate of loss between reactive screening and confirmatory testing among the four models, with 18.1% fewer receiving a second screening test and a further 5.9% loss among those receiving HIV confirmatory tests. Model B and Model C showed lower losses (0.8% and 1.3%) for newly diagnosed HIV positives receiving CD4 cell count tests, and higher rates of HIV positives referred to designated ART hospitals (88.0% and 93.3%) than Model A and Model D (4.6% and 5.7% loss for CD4 cell count tests, and 68.9% and 64.4% referred to designated ART hospitals). The proportion of screening-reactive cases commenced on ART was highest in Model C: 52.8%, compared to 38.9%, 34.2% and 21.1% in Models A, B and D, respectively. Using Model A as the reference group, the multivariate logistic regression results also showed the advantages of Models B, C and D, which increased CD4 cell count testing, referral to designated ART hospitals and initiation of ART, when controlling for program city and other factors. This study has demonstrated that involvement of CBOs in HIV rapid testing provision, streamlining of testing and care procedures, and early hospital case management can improve testing, linkage to, and retention in care and treatment among MSM in China.

  11. Pre-Test Assessment of the Use Envelope of the Normal Force of a Wind Tunnel Strain-Gage Balance

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2016-01-01

    The relationship between the aerodynamic lift force generated by a wind tunnel model, the model weight, and the measured normal force of a strain-gage balance is investigated to better understand the expected use envelope of the normal force during a wind tunnel test. First, the fundamental relationship between normal force, model weight, lift curve slope, model reference area, dynamic pressure, and angle of attack is derived. Then, based on this fundamental relationship, the use envelope of a balance is examined for four typical wind tunnel test cases. The first case looks at the use envelope of the normal force during the test of a light wind tunnel model at high subsonic Mach numbers. The second case examines the use envelope of the normal force during the test of a heavy wind tunnel model in an atmospheric low-speed facility. The third case reviews the use envelope of the normal force during the test of a floor-mounted semi-span model. The fourth case discusses the normal force characteristics during the test of a rotated full-span model. The wind tunnel model's lift-to-weight ratio is introduced as a new parameter that may be used for a quick pre-test assessment of the use envelope of the normal force of a balance. The parameter is derived as a function of the lift coefficient, the dimensionless dynamic pressure, and the dimensionless model weight. Lower and upper bounds of the use envelope of a balance are defined using the model's lift-to-weight ratio. Finally, data from a pressurized wind tunnel is used to illustrate both application and interpretation of the model's lift-to-weight ratio.
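
    The abstract names the ingredients of the derivation without stating it; one consistent small-angle reading (with a hypothetical reference pressure q_ref introduced for the nondimensionalization) is

      \[
      F_N \approx q\,S\,\frac{dC_L}{d\alpha}\,\alpha - W\cos\alpha,
      \qquad
      \frac{L}{W} = \frac{q\,S\,C_L}{W} = \frac{C_L\,\hat{q}}{\hat{W}},
      \qquad
      \hat{q} = \frac{q}{q_{\mathrm{ref}}}, \quad
      \hat{W} = \frac{W}{q_{\mathrm{ref}}\,S},
      \]

    where F_N is the balance normal force, q the dynamic pressure, S the model reference area, W the model weight, and α the angle of attack.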

  12. What Different Kinds of Stratification Can Reveal about the Generalizability of Data-Mined Skill Assessment Models

    ERIC Educational Resources Information Center

    Sao Pedro, Michael A.; Baker, Ryan S. J. d.; Gobert, Janice D.

    2013-01-01

    When validating assessment models built with data mining, generalization is typically tested at the student-level, where models are tested on new students. This approach, though, may fail to find cases where model performance suffers if other aspects of those cases relevant to prediction are not well represented. We explore this here by testing if…

  13. Mathematical Basis and Test Cases for Colloid-Facilitated Radionuclide Transport Modeling in GDSA-PFLOTRAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimus, Paul William

    This report provides documentation of the mathematical basis for a colloid-facilitated radionuclide transport modeling capability that can be incorporated into GDSA-PFLOTRAN. It also provides numerous test cases against which the modeling capability can be benchmarked once the model is implemented numerically in GDSA-PFLOTRAN. The test cases were run using a 1-D numerical model developed by the author, and the inputs and outputs from the 1-D model are provided in an electronic spreadsheet supplement to this report so that all cases can be reproduced in GDSA-PFLOTRAN, and the outputs can be directly compared with the 1-D model. The cases include examples of all potential scenarios in which colloid-facilitated transport could result in the accelerated transport of a radionuclide relative to its transport in the absence of colloids. Although it cannot be claimed that all the model features that are described in the mathematical basis were rigorously exercised in the test cases, the goal was to test the features that matter the most for colloid-facilitated transport; i.e., slow desorption of radionuclides from colloids, slow filtration of colloids, and equilibrium radionuclide partitioning to colloids that is strongly favored over partitioning to immobile surfaces, resulting in a substantial fraction of radionuclide mass being associated with mobile colloids.

  14. Test-Case Generation using an Explicit State Model Checker: Final Report

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Gao, Jimin

    2003-01-01

    In the project 'Test-Case Generation using an Explicit State Model Checker' we have extended an existing tools infrastructure for formal modeling to export Java code so that we can use the NASA Ames tool Java Pathfinder (JPF) for test case generation. We have completed a translator from our source language RSML(exp -e) to Java and conducted initial studies of how JPF can be used as a testing tool. In this final report, we provide a detailed description of the translation approach as implemented in our tools.

  15. Classification of hyperspectral imagery using MapReduce on an NVIDIA graphics processing unit (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ramirez, Andres; Rahnemoonfar, Maryam

    2017-04-01

    A hyperspectral image provides a multidimensional figure rich in data, consisting of hundreds of spectral dimensions. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational time. In order to overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyze a hyperspectral image through the usage of parallel hardware and a parallel programming model, which is simpler to handle compared to other low-level parallel programming models. Additionally, Hadoop was used as an open-source version of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU systems and tested them against the following test cases: a CPU and GPU test case, a CPU test case, and a test case where no dimensional reduction was applied.
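
    As a schematic of the map/reduce split this abstract describes (a plain-Python stand-in, not the authors' Hadoop/GPU pipeline; the class-mean spectra and band count are invented):

      from functools import reduce

      # Toy per-pixel minimum-distance classifier expressed as map/reduce.
      CLASS_MEANS = {"water": [0.1, 0.2, 0.1], "soil": [0.4, 0.5, 0.6]}

      def mapper(spectrum):
          """Map: classify one pixel's spectrum, emit (label, 1)."""
          label = min(CLASS_MEANS,
                      key=lambda c: sum((s - m) ** 2
                                        for s, m in zip(spectrum, CLASS_MEANS[c])))
          return (label, 1)

      def reducer(counts, pair):
          """Reduce: aggregate per-class pixel counts."""
          label, n = pair
          counts[label] = counts.get(label, 0) + n
          return counts

      pixels = [[0.11, 0.19, 0.12], [0.39, 0.52, 0.61], [0.12, 0.22, 0.09]]
      print(reduce(reducer, map(mapper, pixels), {}))  # {'water': 2, 'soil': 1}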

  16. A Model Independent S/W Framework for Search-Based Software Testing

    PubMed Central

    Baik, Jongmoon

    2014-01-01

    In Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT. If the type of a model has changed from one to another, all functions of a search technique must be reimplemented because the types of models are different even if the same search technique has been applied. It requires too much time and effort to implement the same algorithm over and over again. We propose a model-independent software framework for SBST, which can reduce redundant works. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves the productivity by about 50% when changing the type of a model. PMID:25302314
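
    The framework's central design idea is to write the search algorithm once against an abstract model interface, so switching model types means implementing only that interface. A minimal sketch (interface and class names are hypothetical, not the paper's API):

      import random
      from abc import ABC, abstractmethod

      class TestModel(ABC):
          """What the framework's search algorithms depend on; porting to a
          new model type means implementing only these three methods."""
          @abstractmethod
          def random_candidate(self): ...
          @abstractmethod
          def neighbors(self, candidate): ...
          @abstractmethod
          def fitness(self, candidate): ...  # higher = closer to the test goal

      def hill_climb(model: TestModel, iterations=100):
          """Model-independent search: runs unchanged for any TestModel."""
          best = model.random_candidate()
          for _ in range(iterations):
              nxt = max(model.neighbors(best), key=model.fitness, default=best)
              if model.fitness(nxt) <= model.fitness(best):
                  break
              best = nxt
          return best

      class IntegerInputModel(TestModel):
          """Toy model under test: find an input x with x * x == 49."""
          def random_candidate(self):
              return random.randint(-100, 100)
          def neighbors(self, x):
              return [x - 1, x + 1]
          def fitness(self, x):
              return -abs(x * x - 49)

      print(hill_climb(IntegerInputModel()))  # converges to 7 or -7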

  17. Dynamic Fracture of Concrete. Part 1

    DTIC Science & Technology

    1990-02-14

    ...unnotched) by Mindess and the Charpy type impact tests by Shah. In both cases, dynamic finite element modeling with the adjusted constitutive equations for the... [Snippet fragments; the source also lists: Modeling Shah's Charpy Impact Tests; Figure 7.20, Specimen Configuration and Finite Element Model for Concrete and Mortar Beam Impact.]

  18. Practical Results from the Application of Model Checking and Test Generation from UML/SysML Models of On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Faria, J. M.; Mahomad, S.; Silva, N.

    2009-05-01

    The deployment of complex safety-critical applications requires rigorous techniques and powerful tools for both the development and V&V stages. Model-based technologies are increasingly being used to develop safety-critical software, and arguably, turning to them can bring significant benefits to such processes, along with new challenges. This paper presents the results of a research project in which we extended current V&V methodologies to apply to UML/SysML models, aiming to answer the demands related to validation issues. Two quite different but complementary approaches were investigated: (i) model checking and (ii) extraction of robustness test cases from the same models. These two approaches do not overlap, and when combined they provide a wider-reaching model/design validation ability than either one alone, thus offering improved safety assurance. Results are encouraging, even though they fell short of the desired outcome in the case of model checking and are not yet fully mature in the case of robustness test-case extraction. For model checking, it was verified that the automatic model validation process can become fully operational, and even expanded in scope, once tool vendors (inevitably) help improve the XMI standard interoperability situation. The robustness test-case extraction methodology produced interesting early results, but it needs further systematisation and consolidation effort in order to produce results in a more predictable fashion and to reduce reliance on experts' heuristics. Finally, further improvement and innovation opportunities became apparent for both approaches: circumventing the current limitations in XMI interoperability on the one hand, and bringing test-case specification onto the same graphical level as the models themselves, then automating the generation of executable test cases from standard UML notation, on the other.

  19. Predicate Argument Structure Analysis for Use Case Description Modeling

    NASA Astrophysics Data System (ADS)

    Takeuchi, Hironori; Nakamura, Taiga; Yamaguchi, Takahira

    In a large software system development project, many documents are prepared and updated frequently. In such a situation, support is needed for looking through these documents easily to identify inconsistencies and to maintain traceability. In this research, we focus on the requirements documents such as use cases and consider how to create models from the use case descriptions in unformatted text. In the model construction, we propose a few semantic constraints based on the features of the use cases and use them for a predicate argument structure analysis to assign semantic labels to actors and actions. With this approach, we show that we can assign semantic labels without enhancing any existing general lexical resources such as case frame dictionaries and design a less language-dependent model construction architecture. By using the constructed model, we consider a system for quality analysis of the use cases and automated test case generation to keep the traceability between document sets. We evaluated the reuse of the existing use cases and generated test case steps automatically with the proposed prototype system from real-world use cases in the development of a system using a packaged application. Based on the evaluation, we show how to construct models with high precision from English and Japanese use case data. Also, we could generate good test cases for about 90% of the real use cases through the manual improvement of the descriptions based on the feedback from the quality analysis system.
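
    A minimal sketch of constraint-based semantic labeling of a predicate-argument structure, assuming PropBank-style argument labels (A0/A1) from an off-the-shelf semantic role labeler and a hypothetical role lexicon; the paper's actual constraints are richer:

      # Semantic constraint of the kind proposed: a grammatical agent (A0) that
      # names a known system role is labeled Actor, its predicate an Action.
      KNOWN_ROLES = {"user", "system", "administrator"}   # assumed role lexicon

      def label_step(predicate, args):
          """args: {SRL label: argument text}, e.g. {"A0": "The user", ...}."""
          labels = {}
          words = args.get("A0", "").lower().split()
          subject = words[-1] if words else ""            # crude head-word pick
          if subject in KNOWN_ROLES:
              labels["Actor"] = subject
              labels["Action"] = predicate
          if "A1" in args:                                # patient/theme argument
              labels["Object"] = args["A1"]
          return labels

      # "The user enters the password" -> PAS from an SRL parser
      print(label_step("enter", {"A0": "The user", "A1": "the password"}))
      # -> {'Actor': 'user', 'Action': 'enter', 'Object': 'the password'}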

  1. Fuel Assembly Shaker Test Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klymyshyn, Nicholas A.; Sanborn, Scott E.; Adkins, Harold E.

    This report describes the modeling of a PWR fuel assembly under dynamic shock loading in support of the Sandia National Laboratories (SNL) shaker test campaign. The focus of the test campaign is on evaluating the response of used fuel to shock and vibration loads that can occur during highway transport. Modeling began in 2012 using an LS-DYNA fuel assembly model that was first created for modeling impact scenarios. SNL's proposed test scenario was simulated through analysis, and the calculated results helped guide the instrumentation and other aspects of the testing. During FY 2013, the fuel assembly model was refined to better represent the test surrogate. Analysis of the proposed loads suggested the frequency band needed to be lowered to attempt to excite the lower natural frequencies of the fuel assembly. Despite SNL's expansion of lower frequency components in their five shock realizations, pretest predictions suggested a very mild dynamic response to the test loading. After testing was completed, one specific shock case was modeled, using recorded accelerometer data to excite the model. Direct comparison of predicted strain in the cladding was made to the recorded strain gauge data. The magnitudes of both sets of strain (calculated and recorded) are very low compared to the expected yield strength of the Zircaloy-4 material. The model was accurate enough to predict that no yielding of the cladding was expected, but its precision at predicting micro strains is questionable. The SNL test data offer some opportunity for validation of the finite element model, but the specific loading conditions of the testing only excite the fuel assembly to respond in a limited manner. For example, the test accelerations were not strong enough to substantially drive the fuel assembly out of contact with the basket. Under this test scenario, the fuel assembly model does a reasonable job of approximating actual fuel assembly response, a claim that can be verified through direct comparison of model results to recorded test results. This does not offer validation for the fuel assembly model in all conceivable cases, such as high kinetic energy shock cases where the fuel assembly might lift off the basket floor and strike the basket ceiling. This type of nonlinear behavior was not witnessed in testing, so the model does not have a basis for validation in cases that substantially alter the fuel assembly response range. This leads to a gap in knowledge that is identified through this modeling study. The SNL shaker testing loaded a surrogate fuel assembly with a certain set of artificially-generated time histories. One thing all the shock cases had in common was an elimination of low frequency components, which reduces the rigid body dynamic response of the system. It is not known whether the SNL test cases effectively bound all highway transportation scenarios, or whether significantly greater rigid body motion than was tested is credible. This knowledge gap could be filled through modeling the vehicle dynamics of a used fuel conveyance, or by collecting acceleration time history data from an actual conveyance under highway conditions.

  2. The Dynamical Core Model Intercomparison Project (DCMIP-2016): Results of the Supercell Test Case

    NASA Astrophysics Data System (ADS)

    Zarzycki, C. M.; Reed, K. A.; Jablonowski, C.; Ullrich, P. A.; Kent, J.; Lauritzen, P. H.; Nair, R. D.

    2016-12-01

    The 2016 Dynamical Core Model Intercomparison Project (DCMIP-2016) assesses the modeling techniques for global climate and weather models and was recently held at the National Center for Atmospheric Research (NCAR) in conjunction with a two-week summer school. Over 12 different international modeling groups participated in DCMIP-2016 and focused on the evaluation of the newest non-hydrostatic dynamical core designs for future high-resolution weather and climate models. The paper highlights the results of the third DCMIP-2016 test case, which is an idealized supercell storm on a reduced-radius Earth. The supercell storm test permits the study of a non-hydrostatic moist flow field with strong vertical velocities and associated precipitation. This test assesses the behavior of global modeling systems at extremely high spatial resolution and is used in the development of next-generation numerical weather prediction capabilities. In this regime the effective grid spacing is very similar to the horizontal scale of convective plumes, emphasizing resolved non-hydrostatic dynamics. The supercell test case sheds light on the physics-dynamics interplay and highlights the impact of diffusion on model solutions.

  3. Comparative effectiveness of incorporating a hypothetical DCIS prognostic marker into breast cancer screening.

    PubMed

    Trentham-Dietz, Amy; Ergun, Mehmet Ali; Alagoz, Oguzhan; Stout, Natasha K; Gangnon, Ronald E; Hampton, John M; Dittus, Kim; James, Ted A; Vacek, Pamela M; Herschorn, Sally D; Burnside, Elizabeth S; Tosteson, Anna N A; Weaver, Donald L; Sprague, Brian L

    2018-02-01

    Due to limitations in the ability to identify non-progressive disease, ductal carcinoma in situ (DCIS) is usually managed similarly to localized invasive breast cancer. We used simulation modeling to evaluate the potential impact of a hypothetical test that identifies non-progressive DCIS. A discrete-event model simulated a cohort of U.S. women undergoing digital screening mammography. All women diagnosed with DCIS underwent the hypothetical DCIS prognostic test. Women with test results indicating progressive DCIS received standard breast cancer treatment and a decrement to quality of life corresponding to the treatment. If the DCIS test indicated non-progressive DCIS, no treatment was received and women continued routine annual surveillance mammography. A range of test performance characteristics and prevalence of non-progressive disease were simulated. Analysis compared discounted quality-adjusted life years (QALYs) and costs for test scenarios to base-case scenarios without the test. Compared to the base case, a perfect prognostic test resulted in a 40% decrease in treatment costs, from $13,321 to $8,005 per DCIS case. A perfect test produced 0.04 additional QALYs (16 days) for women diagnosed with DCIS, added to the base case of 5.88 QALYs per DCIS case. The results were sensitive to the performance characteristics of the prognostic test, the proportion of DCIS cases that were non-progressive in the model, and the frequency of mammography screening in the population. A prognostic test that identifies non-progressive DCIS would substantially reduce treatment costs but result in only modest improvements in quality of life when averaged over all DCIS cases.

  4. Revisiting the Rossby Haurwitz wave test case with contour advection

    NASA Astrophysics Data System (ADS)

    Smith, Robert K.; Dritschel, David G.

    2006-09-01

    This paper re-examines a basic test case used for spherical shallow-water numerical models, and underscores the need for accurate, high resolution models of atmospheric and ocean dynamics. The Rossby-Haurwitz test case, first proposed by Williamson et al. [D.L. Williamson, J.B. Drake, J.J. Hack, R. Jakob, P.N. Swarztrauber, A standard test set for numerical approximations to the shallow-water equations on the sphere, J. Comput. Phys. (1992) 221-224], has been examined using a wide variety of shallow-water models in previous papers. Here, two contour-advective semi-Lagrangian (CASL) models are considered, and results are compared with previous test results. We go further by modifying this test case in a simple way to initiate a rapid breakdown of the basic wave state. This breakdown is accompanied by the formation of sharp potential vorticity gradients (fronts), placing far greater demands on the numerics than the original test case does. We also go further by examining other dynamical fields besides the height and potential vorticity, to assess how well the models deal with gravity waves. Such waves are sensitive to the presence or not of sharp potential vorticity gradients, as well as to numerical parameter settings. In particular, large time steps (convenient for semi-Lagrangian schemes) can seriously affect gravity waves but can also have an adverse impact on the primary fields of height and velocity. These problems are exacerbated by a poor resolution of potential vorticity gradients.
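
    For reference, the initial wind field of this test case, as usually stated following Williamson et al. (1992), with zonal wavenumber R = 4 and ω = K = 7.848 × 10⁻⁶ s⁻¹:

      \[
      u = a\omega\cos\varphi
        + aK\cos^{R-1}\varphi\,\bigl(R\sin^2\varphi - \cos^2\varphi\bigr)\cos R\lambda,
      \qquad
      v = -aKR\cos^{R-1}\varphi\,\sin\varphi\,\sin R\lambda,
      \]

    where a is the Earth's radius, λ the longitude, and φ the latitude.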

  5. Application of conditional moment tests to model checking for generalized linear models.

    PubMed

    Pan, Wei

    2002-06-01

    Generalized linear models (GLMs) are increasingly being used in daily data analysis. However, model checking for GLMs with correlated discrete response data remains difficult. In this paper, through a case study on marginal logistic regression using a real data set, we illustrate the flexibility and effectiveness of using conditional moment tests (CMTs), along with other graphical methods, to do model checking for generalized estimation equation (GEE) analyses. Although CMTs provide an array of powerful diagnostic tests for model checking, they were originally proposed in the econometrics literature and, to our knowledge, have never been applied to GEE analyses. CMTs cover many existing tests, including the (generalized) score test for an omitted covariate, as special cases. In summary, we believe that CMTs provide a class of useful model checking tools.

  6. Attribution of horizontal and vertical contributions to spurious mixing in an Arbitrary Lagrangian-Eulerian ocean model

    NASA Astrophysics Data System (ADS)

    Gibson, Angus H.; Hogg, Andrew McC.; Kiss, Andrew E.; Shakespeare, Callum J.; Adcroft, Alistair

    2017-11-01

    We examine the separate contributions to spurious mixing from horizontal and vertical processes in an ALE ocean model, MOM6, using reference potential energy (RPE). The RPE is a global diagnostic which changes only due to mixing between density classes. We extend this diagnostic to a sub-timestep timescale in order to individually separate contributions to spurious mixing through horizontal (tracer advection) and vertical (regridding/remapping) processes within the model. We both evaluate the overall spurious mixing in MOM6 against previously published output from other models (MOM5, MITGCM and MPAS-O), and investigate impacts on the components of spurious mixing in MOM6 across a suite of test cases: a lock exchange, internal wave propagation, and a baroclinically-unstable eddying channel. The split RPE diagnostic demonstrates that the spurious mixing in a lock exchange test case is dominated by horizontal tracer advection, due to the spatial variability in the velocity field. In contrast, the vertical component of spurious mixing dominates in an internal waves test case. MOM6 performs well in this test case owing to its quasi-Lagrangian implementation of ALE. Finally, the effects of model resolution are examined in a baroclinic eddies test case. In particular, the vertical component of spurious mixing dominates as horizontal resolution increases, an important consideration as global models evolve towards higher horizontal resolutions.
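
    The diagnostic at the center of this study is commonly defined as follows (MOM6's discrete form may differ in detail):

      \[
      \mathrm{RPE} = g \int_V \rho^{*}(z)\, z \,\mathrm{d}V,
      \]

    where ρ*(z) is the density field adiabatically re-sorted into its statically stable, minimum-potential-energy arrangement. Because adiabatic rearrangement leaves RPE unchanged, any growth in RPE diagnoses mixing across density classes, which is what allows the authors to attribute that growth separately to tracer advection and to regridding/remapping.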

  7. Do Test Design and Uses Influence Test Preparation? Testing a Model of Washback with Structural Equation Modeling

    ERIC Educational Resources Information Center

    Xie, Qin; Andrews, Stephen

    2013-01-01

    This study introduces Expectancy-value motivation theory to explain the paths of influences from perceptions of test design and uses to test preparation as a special case of washback on learning. Based on this theory, two conceptual models were proposed and tested via Structural Equation Modeling. Data collection involved over 870 test takers of…

  8. Testing MODFLOW-LGR for simulating flow around buried Quaternary valleys - synthetic test cases

    NASA Astrophysics Data System (ADS)

    Vilhelmsen, T. N.; Christensen, S.

    2009-12-01

    In this study the Local Grid Refinement (LGR) method developed for MODFLOW-2005 (Mehl and Hill, 2005) is utilized to describe groundwater flow in areas containing buried Quaternary valley structures. The tests are conducted as comparative analyses between simulations run with a globally refined model, a locally refined model, and a globally coarse model, respectively. The models vary from simple one-layer models to more complex ones with up to 25 model layers. The comparisons of accuracy are conducted within the locally refined area and focus on water budgets, simulated heads, and simulated particle traces. Simulations made with the globally refined model are used as reference (regarded as "true" values). As expected, for all test cases the application of local grid refinement resulted in more accurate results than when using the globally coarse model. A significant advantage of utilizing MODFLOW-LGR was that it allows increased numbers of model layers to better resolve complex geology within local areas. This resulted in more accurate simulations than when using either a globally coarse model grid or a locally refined model with lower geological resolution. Improved accuracy in the latter case could not be expected beforehand, because the difference in geological resolution between the coarse parent model and the refined child model contradicts the assumptions of the Darcy-weighted interpolation used in MODFLOW-LGR. With respect to model runtimes, it was sometimes found that the runtime for the locally refined model is much longer than for the globally refined model. This was the case even when the closure criteria were relaxed compared to the globally refined model. These results contradict those presented by Mehl and Hill (2005). Furthermore, in the complex cases it took some testing (model runs) to identify the closure criteria and the damping factor that secured convergence, accurate solutions, and reasonable runtimes. For our cases this is judged to be a serious disadvantage of applying MODFLOW-LGR. Another disadvantage in the studied cases was that the MODFLOW-LGR results proved to be somewhat dependent on the correction method used at the parent-child model interface. This indicates that when applying MODFLOW-LGR there is a need for thorough and case-specific considerations regarding the choice of correction method. References: Mehl, S., and M.C. Hill (2005), MODFLOW-2005, the U.S. Geological Survey modular ground-water model - documentation of shared node Local Grid Refinement (LGR) and the Boundary Flow and Head (BFH) Package, U.S. Geological Survey Techniques and Methods 6-A12.

  9. Retrospective cost adaptive Reynolds-averaged Navier-Stokes k-ω model for data-driven unsteady turbulent simulations

    NASA Astrophysics Data System (ADS)

    Li, Zhiyong; Hoagg, Jesse B.; Martin, Alexandre; Bailey, Sean C. C.

    2018-03-01

    This paper presents a data-driven computational model for simulating unsteady turbulent flows where sparse measurement data are available. The model uses the retrospective cost adaptation (RCA) algorithm to automatically adjust the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and the measurements. The RCA-RANS k-ω model is verified for steady flow using a pipe-flow test case and for unsteady flow using a surface-mounted-cube test case. Measurements used for adaptation of the verification cases are obtained from baseline simulations with known closure coefficients. These verification test cases demonstrate that the RCA-RANS k-ω model can successfully adapt the closure coefficients to improve agreement between the simulated flow field and a set of sparse flow-field measurements. Furthermore, the RCA-RANS k-ω model improves agreement between the simulated flow and the baseline flow at locations at which measurements do not exist. The RCA-RANS k-ω model is also validated with experimental data from two test cases: steady pipe flow, and unsteady flow past a square cylinder. In both test cases, the adaptation improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses the standard values of the k-ω closure coefficients. For the steady pipe flow, adaptation is driven by mean stream-wise velocity measurements at 24 locations along the pipe radius. The RCA-RANS k-ω model reduces the average velocity error at these locations by over 35%. For the unsteady flow over a square cylinder, adaptation is driven by time-varying surface pressure measurements at 2 locations on the square cylinder. The RCA-RANS k-ω model reduces the average surface-pressure error at these locations by 88.8%.
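
    The retrospective cost adaptation algorithm itself is not reproduced here; the sketch below only shows the general shape of the idea, closure coefficients nudged to reduce the misfit against sparse measurements, using plain finite-difference gradient descent on a toy linear stand-in for the RANS solve.

      import numpy as np

      def adapt_closure_coefficients(simulate, measured, theta0,
                                     step=0.05, iters=50, eps=1e-3):
          """Schematic coefficient adaptation (a stand-in for RCA): adjust the
          closure-coefficient vector theta to reduce the squared misfit between
          simulate(theta) and the sparse sensor measurements."""
          theta = np.asarray(theta0, dtype=float)
          for _ in range(iters):
              cost = np.sum((simulate(theta) - measured) ** 2)
              grad = np.zeros_like(theta)
              for i in range(len(theta)):         # finite-difference gradient
                  t = theta.copy()
                  t[i] += eps
                  grad[i] = (np.sum((simulate(t) - measured) ** 2) - cost) / eps
              theta -= step * grad
          return theta

      # Toy "solver": sensor predictions depend linearly on the coefficients.
      H = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
      measured = H @ np.array([0.09, 0.075])      # data from "true" coefficients
      print(adapt_closure_coefficients(lambda th: H @ th, measured, [0.05, 0.05]))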

  10. Computer aided system engineering and analysis (CASE/A) modeling package for ECLS systems - An overview

    NASA Technical Reports Server (NTRS)

    Dalee, Robert C.; Bacskay, Allen S.; Knox, James C.

    1990-01-01

    An overview of the CASE/A-ECLSS series modeling package is presented. CASE/A is an analytical tool that has delivered engineering productivity gains during ECLSS design activities. A component verification program was performed to assure component modeling validity, based on test data from the Phase II comparative test program completed at the Marshall Space Flight Center. An integrated plotting feature has been added to the program which allows the operator to analyze on-screen data trends or get hard-copy plots from within the CASE/A operating environment. New command features in the areas of schematic, output, and model management, and component data editing have been incorporated to enhance the engineer's productivity during a modeling program.

  11. Possibilities of rock constitutive modelling and simulations

    NASA Astrophysics Data System (ADS)

    Baranowski, Paweł; Małachowski, Jerzy

    2018-01-01

    The paper deals with the problem of rock finite element modelling and simulation. The main intention of the authors was to present the possibilities of different approaches to rock constitutive modelling. For this purpose granite was selected, because its mechanical properties are widely characterized in the literature. Two significantly different constitutive material models were implemented to simulate granite fracture in various configurations: the Johnson-Holmquist ceramic model, which is very often used for predicting the behavior of rock and other brittle materials, and a simple linear elastic model with brittle failure, which can be used for simulating glass fracture. Four cases with different loading conditions were chosen to compare the aforementioned constitutive models: a uniaxial compression test, a notched three-point-bending test, a copper ball impacting a block, and a small-scale blasting test.

  12. Propeller aircraft interior noise model. II - Scale-model and flight-test comparisons

    NASA Technical Reports Server (NTRS)

    Willis, C. M.; Mayes, W. H.

    1987-01-01

    A program for predicting the sound levels inside propeller-driven aircraft arising from sidewall transmission of airborne exterior noise is validated through comparisons of predictions with both scale-model test results and measurements obtained in flight tests on a turboprop aircraft. The program produced unbiased predictions for the case of the scale-model tests, with a standard deviation of errors of about 4 dB. For the case of the flight tests, the predictions revealed a bias of 2.62-4.28 dB (depending upon whether or not the data for the fourth harmonic were included) and the standard deviation of the errors ranged between 2.43 and 4.12 dB. The analytical model is shown to be capable of taking changes in the flight environment into account.

  13. The evaluation of the National Long Term Care Demonstration. 4. Case management under channeling.

    PubMed Central

    Phillips, B R; Kemper, P; Applebaum, R A

    1988-01-01

    The channeling demonstration involved provision of comprehensive case management and direct service expansion. This article considers the former. Under both models, comprehensive case management was implemented largely as intended; moreover, channeling substantially increased the receipt of comprehensive case management. However, channeling was not a pure test of the effect of comprehensive case management: roughly 10-20 percent of control group members received comparable case management services. This was particularly the case for the financial control model. Thus, the demonstration was not a test of case management compared to no case management; rather, it compared channeling case management to the existing community care system, which already was providing comprehensive case management to some of the population eligible for channeling. PMID:3130331

  14. Theoretical Models, Assessment Frameworks and Test Construction.

    ERIC Educational Resources Information Center

    Chalhoub-Deville, Micheline

    1997-01-01

    Reviews the usefulness of proficiency models influencing second language testing. Findings indicate that several factors contribute to the lack of congruence between models and test construction and make a case for distinguishing between theoretical models. Underscores the significance of an empirical, contextualized and structured approach to the…

  15. MATTS- A Step Towards Model Based Testing

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.

    2016-08-01

    In this paper we describe a model-based approach to testing of on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's? To solve these problems, the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential, but time- and resource-consuming activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can also be partially automated. The basic idea behind the presented study was to start from a formal model (e.g. State Machines), generate abstract test cases, and then convert them to concrete executable test cases (input and expected output pairs). The generated concrete test cases were applied to an on-board software. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
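
    A minimal sketch of the generation step described above: abstract test cases derived from a formal State Machine model, here by greedy all-transitions coverage. The on-board mode machine is invented for illustration.

      def transition_coverage_tests(fsm, initial):
          """Generate abstract test cases (sequences of (input, expected-state)
          pairs) until every transition of the state machine is covered.

          fsm: dict state -> {input: next_state}
          """
          uncovered = {(s, i) for s, trans in fsm.items() for i in trans}
          tests = []
          while uncovered:
              state, seq = initial, []
              while True:   # greedy walk along still-uncovered transitions
                  choices = [i for i in fsm.get(state, {})
                             if (state, i) in uncovered]
                  if not choices:
                      break
                  inp = choices[0]
                  uncovered.discard((state, inp))
                  state = fsm[state][inp]
                  seq.append((inp, state))   # expected state doubles as oracle
              if not seq:                    # leftovers unreachable this way;
                  break                      # a real tool would re-plan
              tests.append(seq)
          return tests

      fsm = {"OFF": {"power_on": "STANDBY"},
             "STANDBY": {"start": "RUNNING", "power_off": "OFF"},
             "RUNNING": {"stop": "STANDBY"}}
      for case in transition_coverage_tests(fsm, "OFF"):
          print(case)   # one abstract test covering all four transitions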

  16. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction: External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods: We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results: The permutation test indicated that the validation and development set were homogenous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion: The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
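
    A schematic of the procedure under discussion, assuming each patient is an (outcome, predicted-risk) pair; ties and degenerate permutations are handled crudely for brevity, so this is an illustration of the idea rather than the authors' exact test.

      import random

      def c_statistic(y, p):
          """c-statistic (AUC): probability a random event patient has a higher
          predicted risk than a random non-event patient (ties count 1/2)."""
          pos = [pi for yi, pi in zip(y, p) if yi == 1]
          neg = [pi for yi, pi in zip(y, p) if yi == 0]
          if not pos or not neg:
              return 0.5   # undefined in a degenerate split; neutral value
          wins = sum((a > b) + 0.5 * (a == b) for a in pos for b in neg)
          return wins / (len(pos) * len(neg))

      def permutation_test(dev, val, n_perm=1000):
          """Is the observed c-statistic drop from development to validation
          larger than expected if patients were exchangeable between settings?
          dev, val: lists of (outcome, predicted_risk) pairs."""
          def c_of(pairs):
              ys, ps = zip(*pairs)
              return c_statistic(ys, ps)

          observed = c_of(dev) - c_of(val)
          pooled, n_dev, extreme = dev + val, len(dev), 0
          for _ in range(n_perm):
              random.shuffle(pooled)
              if c_of(pooled[:n_dev]) - c_of(pooled[n_dev:]) >= observed:
                  extreme += 1
          return extreme / n_perm   # one-sided p-value for the observed drop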

  18. A controlled experiment in ground water flow model calibration

    USGS Publications Warehouse

    Hill, M.C.; Cooley, R.L.; Pollock, D.W.

    1998-01-01

    Nonlinear regression was introduced to ground water modeling in the 1970s, but has been used very little to calibrate numerical models of complicated ground water systems. Apparently, nonlinear regression is thought by many to be incapable of addressing such complex problems. With what we believe to be the most complicated synthetic test case used for such a study, this work investigates using nonlinear regression in ground water model calibration. Results of the study fall into two categories. First, the study demonstrates how systematic use of a well-designed nonlinear regression method can indicate the importance of different types of data and can lead to successive improvement of models and their parameterizations. Our method differs from previous methods presented in the ground water literature in that (1) weighting is more closely related to expected data errors than is usually the case; (2) defined diagnostic statistics allow for more effective evaluation of the available data, the model, and their interaction; and (3) prior information is used more cautiously. Second, our results challenge some commonly held beliefs about model calibration. For the test case considered, we show that (1) field-measured values of hydraulic conductivity are not as directly applicable to models as their use in some geostatistical methods implies; (2) a unique model does not necessarily need to be identified to obtain accurate predictions; and (3) in the absence of obvious model bias, model error was normally distributed. The complexity of the test case involved implies that the methods used and conclusions drawn are likely to be powerful in practice.
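
    The flavor of such a calibration can be conveyed with a toy example: weighted nonlinear least squares against a synthetic forward model, with weights tied to expected observation errors, as the abstract emphasizes. The forward model and all numbers below are hypothetical stand-ins, not the study's synthetic test case.

        import numpy as np
        from scipy.optimize import least_squares

        # Toy forward model: simulated head change at distance x for (a, b).
        def forward(params, x):
            a, b = params
            return a * np.exp(-x / b)

        x_obs = np.linspace(10.0, 500.0, 12)
        true = np.array([5.0, 150.0])
        sigma = 0.1 * np.ones_like(x_obs)        # expected data errors
        rng = np.random.default_rng(1)
        h_obs = forward(true, x_obs) + rng.normal(0.0, sigma)

        # Weighted residuals: each misfit is scaled by its expected error,
        # so well-measured observations pull harder on the regression.
        def residuals(params):
            return (forward(params, x_obs) - h_obs) / sigma

        fit = least_squares(residuals, x0=[1.0, 50.0])
        print("estimated parameters:", fit.x)

    Diagnostic statistics (weighted residual plots, parameter correlations) would then be examined on `fit`, in the spirit of the evaluation step the abstract describes.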

  19. DIAGNOSTIC TOOL DEVELOPMENT AND APPLICATION THROUGH REGIONAL CASE STUDIES

    EPA Science Inventory

    Case studies are a useful vehicle for developing and testing conceptual models, classification systems, diagnostic tools and models, and stressor-response relationships. Furthermore, case studies focused on specific places or issues of interest to the Agency provide an excellent ...

  20. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model, and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case, which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. When these tests are satisfied, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct model validating uncertainty sets, or to perform uncertainty tradeoffs among them, with a specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example, which includes a comparison of candidate model validating sets, is given.

  1. Building Physics Test Cases | Buildings | NREL

    Science.gov Websites

    Describes the building physics test cases in BESTEST-EX, in which the model inputs that describe the house are specified for the participating programs. (The page includes a diagram giving an overview of the BESTEST-EX physics case process, with a box labeled "BESTEST-EX Document" on its left side.)

  2. An approach to verification and validation of a reliable multicasting protocol: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. This initial version did not handle off-nominal cases such as network partitions or site failures. Meanwhile, the V&V team concurrently developed a formal model of the requirements using a variant of SCR-based state tables. Based on these requirements tables, the V&V team developed test cases to exercise the implementation. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test in the model and implementation agreed, then the test either found a potential problem or verified a required behavior. However, if the execution of a test was different in the model and implementation, then the differences helped identify inconsistencies between the model and implementation. In either case, the dialogue between both teams drove the co-evolution of the model and implementation. We have found that this interactive, iterative approach to development allows software designers to focus on delivery of nominal functionality while the V&V team can focus on analysis of off-nominal cases. Testing serves as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP. Although RMP has provided our research effort with a rich set of test cases, it also has practical applications within NASA. For example, RMP is being considered for use in the NASA EOSDIS project due to its significant performance benefits in applications that need to replicate large amounts of data to many network sites.
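
    A minimal sketch of the model-based testing idea described above: a state-table model and an implementation are driven with the same event sequences and their outputs compared, so any disagreement exposes either a model or an implementation defect. The protocol, states, and the seeded bug below are invented for illustration; this is not RMP.

        import itertools

        # Requirements model as a state table: (state, event) -> (next, output)
        MODEL = {
            ("idle", "join"):    ("member", "ack"),
            ("member", "send"):  ("member", "deliver"),
            ("member", "leave"): ("idle", "ack"),
        }

        class Implementation:
            """Toy implementation with a seeded off-nominal bug."""
            def __init__(self):
                self.state = "idle"
            def step(self, event):
                if self.state == "member" and event == "leave":
                    self.state = "idle"
                    return "nack"               # bug: model requires "ack"
                key = (self.state, event)
                if key in MODEL:
                    self.state, out = MODEL[key]
                    return out
                return "error"

        def run_model(events):
            state, outs = "idle", []
            for e in events:
                state, out = MODEL.get((state, e), (state, "error"))
                outs.append(out)
            return outs

        # Exhaustively generate short event sequences and compare behaviors.
        for events in itertools.product(["join", "send", "leave"], repeat=3):
            impl = Implementation()
            got = [impl.step(e) for e in events]
            if got != run_model(events):
                print("divergence on", events, "impl:", got)

    Here exhaustive enumeration stands in for the suspected-behavior test generation the abstract describes; the comparison loop plays the role of keeping model and implementation in fidelity.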

  3. A case study on modeling and independent practice cycles in teaching beginning science inquiry

    NASA Astrophysics Data System (ADS)

    Sadeghpour-Kramer, Margaret Ann Plattenberger

    With increasing pressure to produce high standardized test scores, school systems will be looking for the surest ways to increase scores. Decision makers uninformed about the value of inquiry science may recommend more direct teaching methods and curricula in the hope that students will more quickly accumulate factual information for high test scores. This researcher and other proponents of inquiry science suggest that the best preparation for any test is the ability to use all available information and problem solving skills to think through to a solution. This study proposes to test the theory that inquiry problem solving skills need to be modeled and practiced in increasingly independent situations to be learned. Students tend to copy what they have been led to believe is correct, and to avoid continued copying, their skills must be applied in new situations requiring independent practice and improvement. This study follows ten sixth grade students, selected for maximum variation, as they participate in a series of five cycles of modeling and practicing inquiry science investigations as part of an ongoing unit on water quality. The cycles were designed to make the students increasingly independent in their use of inquiry. The results showed that all ten students made significant progress from copying teacher modeling in investigation #1 towards independent inquiry, with nine of the ten achieving acceptable to good beginning independent inquiry in investigation #5. Each case was analyzed independently using such case study methodology as pattern matching, case study protocols, and theoretical propositions. Constant comparison and other case study methods were used in a cross-case analysis. Eight cases confirmed a matching set of propositions and the hypothesis, in literal replication, and the other two cases confirmed a set of propositions and the hypothesis through theoretical replication. The study suggests to educators that repeated cycles of modeling and increasingly independent practice serve three purposes: first, to develop independent inquiry skills by providing multiple opportunities with intermittent modeling; second, to repeat the modeling initially in very similar situations and then encourage transfer to new situations; and third, to provide repeated modeling for those students who do not grasp the concepts as quickly as do their classmates.

  4. Unsteady Computational Tests of a Non-Equilibrium Turbulence Model

    NASA Astrophysics Data System (ADS)

    Jirasek, Adam; Hamlington, Peter; Lofthouse, Andrew; USAFA Collaboration; CU Boulder Collaboration

    2017-11-01

    A non-equilibrium turbulence model is assessed on simulations of three practically-relevant unsteady test cases; oscillating channel flow, transonic flow around an oscillating airfoil, and transonic flow around the Benchmark Super-Critical Wing. The first case is related to piston-driven flows while the remaining cases are relevant to unsteady aerodynamics at high angles of attack and transonic speeds. Non-equilibrium turbulence effects arise in each of these cases in the form of a lag between the mean strain rate and Reynolds stresses, resulting in reduced kinetic energy production compared to classical equilibrium turbulence models that are based on the gradient transport (or Boussinesq) hypothesis. As a result of the improved representation of unsteady flow effects, the non-equilibrium model provides substantially better agreement with available experimental data than do classical equilibrium turbulence models. This suggests that the non-equilibrium model may be ideally suited for simulations of modern high-speed, high angle of attack aerodynamics problems.
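
    To make the contrast concrete, an equilibrium (Boussinesq) closure and a generic relaxation-type lag can be written as follows; this is an illustrative formalization in standard notation, not necessarily the specific model assessed here:

        Equilibrium (Boussinesq):   \tau_{ij} = 2 \nu_t S_{ij} - \frac{2}{3} k \, \delta_{ij}

        Non-equilibrium (lagged):   \frac{D \tau_{ij}}{D t} = -\frac{1}{T} \left( \tau_{ij} - \tau_{ij}^{\mathrm{eq}} \right)

    When the mean strain rate S_{ij} varies on time scales comparable to the turbulence time scale T, the stress \tau_{ij} trails its equilibrium value \tau_{ij}^{\mathrm{eq}}, which is one way the reduced kinetic energy production described above can arise.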

  5. Twenty Years On!: Updating the IEA BESTEST Building Thermal Fabric Test Cases for ASHRAE Standard 140

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, R.; Neymark, J.

    2013-07-01

    ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs applies the IEA BESTEST building thermal fabric test cases and example simulation results originally published in 1995. These software accuracy test cases and their example simulation results, which comprise the first test suite adapted for the initial 2001 version of Standard 140, are approaching their 20th anniversary. In response to the evolution of the state of the art in building thermal fabric modeling since the test cases and example simulation results were developed, work is commencing to update the normative test specification and themore » informative example results.« less

  6. Twenty Years On!: Updating the IEA BESTEST Building Thermal Fabric Test Cases for ASHRAE Standard 140: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, R.; Neymark, J.

    2013-07-01

    ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs applies the IEA BESTEST building thermal fabric test cases and example simulation results originally published in 1995. These software accuracy test cases and their example simulation results, which comprise the first test suite adapted for the initial 2001 version of Standard 140, are approaching their 20th anniversary. In response to the evolution of the state of the art in building thermal fabric modeling since the test cases and example simulation results were developed, work is commencing to update the normative test specification and the informative example results.

  7. Modeling the Declining Positivity Rates for Human Immunodeficiency Virus Testing in New York State.

    PubMed

    Martin, Erika G; MacDonald, Roderick H; Smith, Lou C; Gordon, Daniel E; Lu, Tao; O'Connell, Daniel A

    2015-01-01

    New York health care providers have experienced declining percentages of positive human immunodeficiency virus (HIV) tests among patients. Furthermore, observed positivity rates are lower than expected on the basis of the national estimate that one-fifth of HIV-infected residents are unaware of their infection. We used mathematical modeling to evaluate whether this decline could be a result of declining numbers of HIV-infected persons who are unaware of their infection, a quantity that cannot be measured directly. A stock-and-flow mathematical model of HIV incidence, testing, and diagnosis was developed. The model includes stocks for uninfected, infected and unaware (in 4 disease stages), and diagnosed individuals. Inputs came from published literature and time series (2006-2009) for estimated new infections, newly diagnosed HIV cases, living diagnosed cases, mortality, and diagnosis rates in New York. Primary model outcomes were the percentage of HIV-infected persons unaware of their infection and the percentage of HIV tests with a positive result (HIV positivity rate). In the base case, the estimated percentage of unaware HIV-infected persons declined from 14.2% in 2006 (range, 11.9%-16.5%) to 11.8% in 2010 (range, 9.9%-13.1%). The HIV positivity rate, assuming testing occurred independent of risk, was 0.12% in 2006 (range, 0.11%-0.15%) and 0.11% in 2010 (range, 0.10%-0.13%). The observed HIV positivity rate was more than 4 times the expected positivity rate based on the model. HIV test positivity is a readily available indicator, but it cannot distinguish causes of underlying changes. Findings suggest that the percentage of unaware HIV-infected New Yorkers is lower than the national estimate and that the observed HIV test positivity rate is greater than expected if infected and uninfected individuals tested at the same rate, indicating that testing efforts are appropriately targeting undiagnosed cases.
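
    A minimal sketch of a stock-and-flow model of this type, with a single aggregated unaware stock for simplicity (the paper's model tracks four disease stages); every rate below is an illustrative placeholder, not a New York estimate.

        # Stocks: unaware infected (U), diagnosed (D). Flows: new infections,
        # diagnosis through testing, and mortality. Euler steps of one year.
        new_inf, diag_rate, mort_u, mort_d = 4000.0, 0.12, 0.01, 0.02
        tests_per_year = 2_000_000.0
        U, D = 50_000.0, 100_000.0

        for year in range(2006, 2011):
            unaware_frac = U / (U + D)
            diagnoses = diag_rate * U
            # Positivity if testing were independent of risk: diagnoses/tests.
            positivity = diagnoses / tests_per_year
            print(year, f"unaware={unaware_frac:.1%}", f"positivity={positivity:.2%}")
            U += new_inf - diagnoses - mort_u * U
            D += diagnoses - mort_d * D

    Comparing the model's risk-independent positivity against the observed rate is the core of the argument the abstract makes about targeted testing.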

  8. Determination of the Underlying Task Scheduling Algorithm for an Ada Runtime System

    DTIC Science & Technology

    1989-12-01

    was also curious as to how well I could model the test cases with Ada programs. In particular, I wanted to see whether I could model the equal arrival... parameter relationships required to detect the execution of individual algorithms. These test cases were modeled using Ada programs. Then, the... results were analyzed to determine whether the Ada programs were capable of revealing the task scheduling algorithm used by the Ada run-time system. This

  9. Model-Invariant Hybrid Computations of Separated Flows for RCA Standard Test Cases

    NASA Technical Reports Server (NTRS)

    Woodruff, Stephen

    2016-01-01

    NASA's Revolutionary Computational Aerosciences (RCA) subproject has identified several smooth-body separated flows as standard test cases to emphasize the challenge these flows present for computational methods and their importance to the aerospace community. Results of computations of two of these test cases, the NASA hump and the FAITH experiment, are presented. The computations were performed with the model-invariant hybrid LES-RANS formulation, implemented in the NASA code VULCAN-CFD. The model-invariant formulation employs gradual LES-RANS transitions and compensation for model variation to provide more accurate and efficient hybrid computations. Comparisons revealed that the LES-RANS transitions employed in these computations were sufficiently gradual that the compensating terms were unnecessary. Agreement with experiment was achieved only after reducing the turbulent viscosity to mitigate the effect of numerical dissipation. The stream-wise evolution of peak Reynolds shear stress was employed as a measure of turbulence dynamics in separated flows useful for evaluating computations.

  10. Estimation of tunnel blockage from wall pressure signatures: A review and data correlation

    NASA Technical Reports Server (NTRS)

    Hackett, J. E.; Wilsden, D. J.; Lilley, D. E.

    1979-01-01

    A method is described for estimating low speed wind tunnel blockage, including model volume, bubble separation, and viscous wake effects. A tunnel-centerline source/sink distribution is derived from measured wall pressure signatures using fast algorithms to solve the inverse problem in three dimensions. Blockage may then be computed throughout the test volume. Correlations using scaled models or tests in two tunnels were made in all cases. In many cases model reference area exceeded 10% of the tunnel cross-sectional area. Good correlations were obtained regarding model surface pressures, lift, drag, and pitching moment. It is shown that blockage-induced velocity variations across the test section are relatively unimportant but axial gradients should be considered when model size is determined.
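
    The inverse step can be sketched as a small linear least-squares problem: centerline point sources induce axial velocity perturbations at the wall taps, and source strengths are recovered from the measured signature. The geometry and signal below are invented for illustration, not taken from the paper.

        import numpy as np

        U_inf, R = 50.0, 2.0                  # freestream (m/s), wall radius (m)
        x_src = np.linspace(-1.0, 3.0, 9)     # candidate source stations
        x_tap = np.linspace(-4.0, 8.0, 40)    # wall pressure tap stations

        # Axial velocity at a wall tap due to a unit point source on the axis.
        def u_axial(x, x0):
            r2 = (x - x0) ** 2 + R ** 2
            return (x - x0) / (4.0 * np.pi * r2 ** 1.5)

        A = np.array([[u_axial(x, x0) for x0 in x_src] for x in x_tap])

        # Synthetic "measured" signature from a known source/sink pair + noise.
        # (Linearized Bernoulli would relate this to Cp ~ -2 u / U_inf.)
        q_true = np.zeros(len(x_src)); q_true[2], q_true[6] = 8.0, -8.0
        rng = np.random.default_rng(0)
        u_meas = A @ q_true + rng.normal(0.0, 1e-4, len(x_tap))

        q_est, *_ = np.linalg.lstsq(A, u_meas, rcond=None)
        # With strengths known, blockage velocity anywhere in the test volume
        # follows by summing the same kernel at interior points.
        print(np.round(q_est, 2))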

  11. A dual memory theory of the testing effect.

    PubMed

    Rickard, Timothy C; Pan, Steven C

    2017-06-05

    A new theoretical framework for the testing effect (the finding that retrieval practice is usually more effective for learning than are other strategies) is proposed, the empirically supported tenet of which is that separate memories form as a consequence of study and test events. A simplest-case quantitative model is derived from that framework for the case of cued recall. With no free parameters, that model predicts both proportion correct in the test condition and the magnitude of the testing effect across 10 experiments conducted in our laboratory, experiments that varied with respect to material type, retention interval, and performance in the restudy condition. The model also provides the first quantitative accounts of (a) the testing effect as a function of performance in the restudy condition, (b) the upper bound magnitude of the testing effect, (c) the effect of correct answer feedback, (d) the testing effect as a function of retention interval for the cases of feedback and no feedback, and (e) the effect of prior learning method on subsequent learning through testing. Candidate accounts of several other core phenomena in the literature, including test-potentiated learning, recognition versus cued recall training effects, cued versus free recall final test effects, and other select transfer effects, are also proposed. Future prospects and relations to other theories are discussed.
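
    The separate-memories tenet admits a simplest-case formalization: if study and test events lay down independent traces retrieved with probabilities p_s and p_t, the probability of correct cued recall in the test condition is

        P(\text{correct}) = p_s + p_t - p_s p_t ,

    and the testing effect is the amount by which this exceeds the retrieval probability produced by restudy alone. This independence form is offered here only as a hedged illustration of the framework's flavor; the authors' actual parameterization may differ.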

  12. Modeling and design of challenge tests: Inflammatory and metabolic biomarker study examples.

    PubMed

    Gabrielsson, Johan; Hjorth, Stephan; Vogg, Barbara; Harlfinger, Stephanie; Gutierrez, Pablo Morentin; Peletier, Lambertus; Pehrson, Rikard; Davidsson, Pia

    2015-01-25

    Given the complexity of pharmacological challenge experiments, it is perhaps not surprising that design and analysis, and in turn interpretation and communication of results from a quantitative point of view, are often suboptimal. Here we report an inventory of common designs sampled from anti-inflammatory, respiratory and metabolic disease drug discovery studies, all of which are based on animal models of disease involving pharmacological and/or patho/physiological interaction challenges. The corresponding data are modeled and analyzed quantitatively, the merits of the respective approaches discussed, and inferences made with respect to future design improvements. Although our analysis is limited to these disease model examples, the challenge approach is generally applicable to the vast majority of pharmacological intervention studies. In the present five Case Studies, results from pharmacodynamic effect models from different therapeutic areas were explored and analyzed according to five typical designs. Plasma exposures of test compounds were assayed by either liquid chromatography/mass spectrometry or ligand binding assays. To describe how drug intervention can regulate diverse processes, turnover models of test compound-challenger interaction, transduction processes, and biophase time courses were applied to biomarker responses in eosinophil count, IL6 response, paw-swelling, TNFα response and glucose turnover in vivo. Case Study 1 shows results from intratracheal administration of Sephadex, which is a glucocorticoid-sensitive model of airway inflammation in rats. Eosinophils in bronchoalveolar fluid were obtained at different time points via destructive sampling and then regressed by mixed-effects modeling. A biophase function of the Sephadex time course was inferred from the modeled eosinophil time courses. In Case Study 2, a mouse model showed how the time course of the response to a cytokine (IL1β) challenge was altered with or without drug intervention. Anakinra reversed the IL1β-induced cytokine IL6 response in a dose-dependent manner. This Case Study contained time courses of test compound (drug), challenger (IL1β) and cytokine response (IL6), which resulted in high parameter precision. Case Study 3 illustrates collagen-induced arthritis progression in the rat. Swelling scores (based on severity of hind paw swelling) were used to describe arthritis progression after the challenge and the inhibitory effect of two doses of an orally administered test compound. In Case Study 4, a cynomolgus monkey model of lipopolysaccharide (LPS)-induced TNFα synthesis and/or release was investigated. This model provides integrated information on pharmacokinetics and in vivo potency of the test compounds. Case Study 5 contains data from an oral glucose tolerance test in rats, where the challenger is the same as the pharmacodynamic response biomarker (glucose). It is therefore convenient to model the extra input of glucose simultaneously with baseline data and during intervention of a glucose-lowering compound at different dose levels. Typically, time-series analyses of challenger- and biomarker-time data are necessary if an accurate and precise estimate of the pharmacodynamic properties of a test compound is sought. Erosion of data, resulting in the single-point assessment of drug action after a challenge test, should generally be avoided. This is particularly relevant for situations where one expects time-curve shifts, tolerance/rebound, impact of disease, or hormetic concentration-response relationships to occur. Copyright © 2014 Elsevier B.V. All rights reserved.
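
    The core of such a turnover (indirect response) analysis can be sketched as follows: a challenger transiently stimulates biomarker production and a test compound inhibits that stimulation, so the full response-time course, not a single point, carries the pharmacodynamic information. All parameters and exposure shapes below are illustrative only.

        import numpy as np
        from scipy.integrate import solve_ivp

        kin, kout, Emax, ec50, ic50 = 10.0, 1.0, 5.0, 2.0, 1.0   # illustrative
        def challenger(t):                   # transient challenge exposure
            return 8.0 * np.exp(-0.5 * t)
        def drug(t, dose):                   # mono-exponential drug exposure
            return dose * np.exp(-0.2 * t)

        def dRdt(t, R, dose):
            stim = Emax * challenger(t) / (ec50 + challenger(t))
            inhib = 1.0 - drug(t, dose) / (ic50 + drug(t, dose))
            return kin * (1.0 + stim * inhib) - kout * R

        for dose in (0.0, 1.0, 10.0):
            sol = solve_ivp(dRdt, (0.0, 24.0), [kin / kout], args=(dose,),
                            max_step=0.1)
            print(f"dose={dose}: peak response {sol.y[0].max():.1f}")

    Fitting such a model to the sampled biomarker time course is what the abstract contrasts with single-point assessment of drug action.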

  13. Improvement of shallow landslide prediction accuracy using soil parameterisation for a granite area in South Korea

    NASA Astrophysics Data System (ADS)

    Kim, M. S.; Onda, Y.; Kim, J. K.

    2015-01-01

    The SHALSTAB model was applied to rainfall-induced shallow landslides to evaluate soil properties, including the effect of soil depth, for a granite area in the Jinbu region, Republic of Korea. Soil depth measured by a knocking pole test, two soil parameters from a direct shear test (a and b), and one soil parameter from a triaxial compression test (c) were collected to determine the input parameters for the model. Experimental soil data were used for the first simulation (Case I), and soil data representing the effect of the measured soil depth and of the average soil depth (derived from the Case I soil data) were used in the second (Case II) and third (Case III) simulations, respectively. All simulations were analysed using receiver operating characteristic (ROC) analysis to determine the accuracy of prediction. The ROC results for the first simulation showed low values, under 0.75, possibly due to the internal friction angle and particularly the cohesion value. Soil parameters calculated from a stochastic hydro-geomorphological model were then applied to the SHALSTAB model. The ROC accuracy values for Case II and Case III were higher than for the first simulation. Our results clearly demonstrate that the accuracy of shallow landslide prediction can be improved when the soil parameters represent the effect of soil thickness.
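
    The ROC step is straightforward to reproduce in outline: a continuous instability index is swept over thresholds and compared with mapped landslide cells. The arrays here are synthetic stand-ins for the SHALSTAB output and the landslide inventory.

        import numpy as np

        rng = np.random.default_rng(0)
        landslide = rng.random(5000) < 0.1            # mapped scars (labels)
        # Synthetic instability index, higher where slides occurred.
        index = rng.normal(0.0, 1.0, 5000) + 1.2 * landslide

        order = np.argsort(-index)                    # sweep thresholds
        tpr = np.cumsum(landslide[order]) / landslide.sum()
        fpr = np.cumsum(~landslide[order]) / (~landslide).sum()
        auc = np.trapz(tpr, fpr)
        print(f"ROC AUC = {auc:.3f}")   # values above ~0.75 would mirror Cases II/III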

  14. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 2. CDMP Test Case Report.

    DTIC Science & Technology

    1985-11-01

    Integrated Information Support System (IISS), Volume V - Common Data Model Subsystem, Part 2 - CDMP Test Case Report. General Electric Company, Production Resources Consulting, One River Road, Schenectady, NY.

  15. FAST Model Calibration and Validation of the OC5-DeepCwind Floating Offshore Wind System Against Wave Tank Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  16. Fuel assembly shaker and truck test simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klymyshyn, Nicholas A.; Jensen, Philip J.; Sanborn, Scott E.

    2014-09-30

    This study continues the modeling support of the SNL shaker table task from 2013 and includes analysis of the SNL 2014 truck test campaign. Detailed finite element models of the fuel assembly surrogate used by SNL during testing form the basis of the modeling effort. Additional analysis was performed to characterize and filter the accelerometer data collected during the SNL testing. The detailed fuel assembly finite element model was modified to improve the performance and accuracy of the original surrogate fuel assembly model in an attempt to achieve a closer agreement with the low strains measured during testing. The revised model was used to recalculate the shaker table load response from the 2013 test campaign. As it happened, the results remained comparable to the values calculated with the original fuel assembly model. From this it is concluded that the original model was suitable for the task and the improvements to the model were not able to bring the calculated strain values down to the extremely low level recorded during testing. The model needs more precision to calculate strains that are so close to zero. The truck test load case had an even lower magnitude than the shaker table case. Strain gage data from the test was compared directly to locations on the model. Truck test strains were lower than the shaker table case, but the model achieved a better relative agreement of 100-200 microstrains (or 0.0001-0.0002 mm/mm). The truck test data included a number of accelerometers at various locations on the truck bed, surrogate basket, and surrogate fuel assembly. This set of accelerometers allowed an evaluation of the dynamics of the conveyance system used in testing. It was discovered that the dynamic load transference through the conveyance has a strong frequency-range dependency. This suggests that different conveyance configurations could behave differently and transmit different magnitudes of loads to the fuel even when traveling down the same road at the same speed. It is recommended that the SNL conveyance system used in testing be characterized through modal analysis and frequency response analysis to provide context and assist in the interpretation of the strain data that was collected during the truck test campaign.

  17. 75 FR 53371 - Liquefied Natural Gas Facilities: Obtaining Approval of Alternative Vapor-Gas Dispersion Models

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-31

    ... factors as the approved models, are validated by experimental test data, and receive the Administrator's... stage of the MEP involves applying the model against a database of experimental test cases including..., particularly the requirement for validation by experimental test data. That guidance is based on the MEP's...

  18. Airside HVAC BESTEST: HVAC Air-Distribution System Model Test Cases for ASHRAE Standard 140

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, Ronald; Neymark, Joel; Kennedy, Mike D.

    This paper summarizes recent work to develop new airside HVAC equipment model analytical verification test cases for ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs. The analytical verification test method allows comparison of simulation results from a wide variety of building energy simulation programs with quasi-analytical solutions, further described below. Standard 140 is widely cited for evaluating software for use with performance-path energy efficiency analysis, in conjunction with well-known energy-efficiency standards including ASHRAE Standard 90.1, the International Energy Conservation Code, and other international standards. Airside HVAC equipment is a common area of modelling not previously explicitly tested by Standard 140. Integration of the completed test suite into Standard 140 is in progress.

  19. Path generation algorithm for UML graphic modeling of aerospace test software

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Traditionally, aerospace software testing engineers rely on their own experience and on communication with the software developers to describe the software under test and to write test cases by hand, which is time-consuming, inefficient, and prone to gaps. With the high-reliability model-based testing (MBT) tools developed by our company, a single modeling pass can automatically generate the test case documents, efficiently and accurately. Describing a process accurately with a UML model depends on the paths that can be reached through it, yet existing path generation algorithms are either too simple, unable to combine branch paths and loops into complete paths, or too cumbersome, generating arrangements of paths that are meaningless and superfluous for aerospace software testing. Drawing on our aerospace engineering experience, we developed a path generation algorithm tailored to UML graphic models of aerospace software.
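
    The kind of algorithm at issue can be sketched as bounded path enumeration over the control-flow graph extracted from a UML model: every branch is covered and each loop edge is traversed at most a fixed number of times, so the path set stays finite and each path remains a meaningful test case. The graph below is a toy example, not an aerospace model.

        # Control-flow graph: node -> successors. The "B" -> "B" edge is a loop.
        GRAPH = {"start": ["A"], "A": ["B", "C"], "B": ["B", "D"],
                 "C": ["D"], "D": []}
        MAX_EDGE_VISITS = 2      # bound loop traversals to keep paths finite

        def enumerate_paths(node, path, edge_count, out):
            if not GRAPH[node]:                  # exit node: emit one test path
                out.append(path)
                return
            for nxt in GRAPH[node]:
                edge = (node, nxt)
                if edge_count.get(edge, 0) < MAX_EDGE_VISITS:
                    counts = dict(edge_count)
                    counts[edge] = counts.get(edge, 0) + 1
                    enumerate_paths(nxt, path + [nxt], counts, out)

        paths = []
        enumerate_paths("start", ["start"], {}, paths)
        for p in paths:                          # each path becomes a test case
            print(" -> ".join(p))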

  20. Lessons Learned During Instrument Testing for the Thermal Infrared Sensor (TIRS)

    NASA Technical Reports Server (NTRS)

    Peabody, Hume L.; Otero, Veronica; Neuberger, David

    2013-01-01

    The Thermal InfraRed Sensor (TIRS) instrument, set to launch on the Landsat Data Continuity Mission in 2013, features a passively cooled telescope and IR detectors which are actively cooled by a two-stage cryocooler. In order to proceed to the instrument level test campaign, at least one full functional test was required, necessitating a thermal vacuum test to sufficiently cool the detectors and demonstrate performance. This was fairly unique in that this test occurred before the Pre Environmental Review (PER), but yielded significant knowledge gains before the planned instrument level test. During the pre-PER test, numerous discrepancies were found between the model and the actual hardware, which were revealed by poor correlation between model predictions and test data. With the inclusion of pseudo-balance points, the test also provided an opportunity to perform a pre-correlation to test data prior to the instrument level test campaign. Various lessons were learned during this test related to modeling and design of both the flight hardware and the Ground Support Equipment and test setup. The lessons learned in the pre-PER test resulted in a better test setup for the instrument level test and the completion of the final instrument model correlation in a shorter period of time. Upon completion of the correlation, the flight predictions were generated, including the full suite of off-nominal cases as well as some new cases defined by the spacecraft. For some of these new cases, some components now revealed limit exceedances, in particular for a portion of the hardware that could not be tested due to its size and chamber limitations. Further lessons were learned during the completion of flight predictions. With a correlated detailed instrument model, significant efforts were made to generate a reduced model suitable for observatory level analyses. This proved a major effort, both to generate an appropriate network and to convert the final model to the required format, and yielded additional lessons learned. In spite of all the challenges encountered by TIRS, the instrument was successfully delivered to the spacecraft and will soon be tested at observatory level in preparation for a successful mission launch.

  1. Nemesis Autonomous Test System

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Lee, Cin-Young; Horvath, Gregory A.; Clement, Bradley J.

    2012-01-01

    A generalized framework has been developed for systems validation that can be applied to both traditional and autonomous systems. The framework consists of an automated test case generation and execution system called Nemesis that rapidly and thoroughly identifies flaws or vulnerabilities within a system. By applying genetic optimization and goal-seeking algorithms on the test equipment side, a "war game" is conducted between a system and its complementary nemesis. The end result of the war games is a collection of scenarios that reveals any undesirable behaviors of the system under test. The software provides a reusable framework to evolve test scenarios with genetic algorithms, using an operational model of the system under test. It can automatically generate and execute test cases that reveal flaws in behaviorally complex systems. Genetic algorithms focus the exploration of tests on the set of test cases that most effectively reveals the flaws and vulnerabilities of the system under test. The framework leverages advances in state- and model-based engineering, which are essential in defining the behavior of autonomous systems. It also uses goal networks to describe test scenarios.
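
    In outline, the war-game loop looks like the following: candidate test vectors are scored by how badly they stress a simulated system under test, and the best are kept and mutated. The system model and fitness function here are stand-ins invented for illustration, not Nemesis internals.

        import random
        random.seed(3)

        def system_under_test(cmds):
            """Toy operations model: returns an anomaly count for a command sequence."""
            level, anomalies = 0, 0
            for c in cmds:
                level += {"fill": 2, "drain": -3, "hold": 0}[c]
                if level < 0 or level > 6:      # undesirable behavior revealed
                    anomalies += 1
            return anomalies

        def mutate(cmds):
            i = random.randrange(len(cmds))
            return cmds[:i] + [random.choice(["fill", "drain", "hold"])] + cmds[i+1:]

        pop = [[random.choice(["fill", "drain", "hold"]) for _ in range(8)]
               for _ in range(30)]
        for gen in range(40):
            pop.sort(key=system_under_test, reverse=True)  # goal-seeking: maximize anomalies
            pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
        print(system_under_test(pop[0]), pop[0])           # worst-case scenario found

    The surviving scenarios play the role of the "collection of scenarios that reveals any undesirable behaviors" described above.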

  2. Effect of Turbulence Models on Two Massively-Separated Benchmark Flow Cases

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2003-01-01

    Two massively-separated flow cases (the 2-D hill and the 3-D Ahmed body) were computed with several different turbulence models in the Reynolds-averaged Navier-Stokes code CFL3D as part of participation in a turbulence modeling workshop held in Poitiers, France in October, 2002. Overall, results were disappointing, but were consistent with results from other RANS codes and other turbulence models at the workshop. For the 2-D hill case, those turbulence models that predicted separation location accurately ended up yielding a too-long separation extent downstream. The one model that predicted a shorter separation extent in better agreement with LES data did so only by coincidence: its prediction of earlier reattachment was due to a too-late prediction of the separation location. For the Ahmed body, two slant angles were computed, and CFD performed fairly well for one of the cases (the larger slant angle). Both turbulence models tested in this case were very similar to each other. For the smaller slant angle, CFD predicted massive separation, whereas the experiment showed reattachment about half-way down the center of the face. These test cases serve as reminders that state-of-the-art CFD is currently not a reliable predictor of massively-separated flow physics, and that further validation studies in this area would be beneficial.

  3. Artificial neural network model to distinguish follicular adenoma from follicular carcinoma on fine needle aspiration of thyroid.

    PubMed

    Savala, Rajiv; Dey, Pranab; Gupta, Nalini

    2018-03-01

    To distinguish follicular adenoma (FA) and follicular carcinoma (FC) of thyroid in fine needle aspiration cytology (FNAC) is a challenging problem. In this article, we attempted to build an artificial neural network (ANN) model from the cytological and morphometric features of the FNAC smears of thyroid to distinguish FA from FC. The cytological features and morphometric analysis were done on the FNAC smears of histology-proven cases of FA (26) and FC (31). The cytological features were analysed semi-quantitatively by two independent observers (RS and PD). These data were used to make an ANN model to differentiate FA versus FC on FNAC material. The performance of this ANN model was assessed by analysing the confusion matrix and the receiver operating characteristic (ROC) curve. There were 39 cases in the training set and 9 cases each in the validation and test sets. In the test group, the ANN model successfully distinguished all cases (9/9) of FA and FC. The area under the ROC curve was 1. The present ANN model is efficient in diagnosing follicular adenoma and carcinoma cases on cytology smears without any error. In the future, this ANN model may be able to diagnose follicular adenoma and carcinoma cases on thyroid aspirates. This study has immense potential. This is an open-ended ANN model: more parameters and more cases can be included to make the model much stronger. © 2017 Wiley Periodicals, Inc.
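
    For flavor, a small neural network of the kind described can be assembled in a few lines. The feature values below are synthetic placeholders, and the feature set (e.g., nuclear morphometry) is only a guess at the study's cytological measurements; only the 26/31 case split and the 9-case test set mirror the abstract.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 57                                   # 26 FA + 31 FC, as in the study
        y = np.array([0] * 26 + [1] * 31)        # 0 = FA, 1 = FC
        # Hypothetical morphometric features; values are synthetic.
        X = rng.normal(0.0, 1.0, (n, 4)) + y[:, None] * 0.9

        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=9, stratify=y, random_state=0)
        ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        ann.fit(X_tr, y_tr)
        print("test accuracy:", ann.score(X_te, y_te))

    With only 57 cases, a perfect test-set result (as reported) should be read cautiously; the abstract itself notes the model is open-ended and strengthens as cases are added.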

  4. Practical Formal Verification of Diagnosability of Large Models via Symbolic Model Checking

    NASA Technical Reports Server (NTRS)

    Cavada, Roberto; Pecheur, Charles

    2003-01-01

    This document reports on the activities carried out during a four-week visit by Roberto Cavada to the NASA Ames Research Center. The main goal was to test the practical applicability of the proposed framework, in which a diagnosability problem is reduced to a Symbolic Model Checking problem. Section 2 contains a brief explanation of the major techniques currently used in Symbolic Model Checking, and of how these techniques can be tuned to obtain good performance when using Model Checking tools. Diagnosability analysis is performed on large, structured models of real plants. Section 3 describes how these plants are modeled, and how models can be simplified to improve the performance of Symbolic Model Checkers. Section 4 reports scalability results. Three test cases are briefly presented, and several parameters and techniques have been applied to those test cases in order to produce comparison tables. Furthermore, a comparison between several Model Checkers is reported. Section 5 summarizes the application of diagnosability verification to a real application. Several properties have been tested, and the results have been highlighted. Finally, section 6 draws some conclusions and outlines future lines of research.

  5. Tests of multiplicative models in psychology: a case study using the unified theory of implicit attitudes, stereotypes, self-esteem, and self-concept.

    PubMed

    Blanton, Hart; Jaccard, James

    2006-01-01

    Theories that posit multiplicative relationships between variables are common in psychology. A. G. Greenwald et al. recently presented a theory that explicated relationships between group identification, group attitudes, and self-esteem. Their theory posits a multiplicative relationship between concepts when predicting a criterion variable. Greenwald et al. suggested analytic strategies to test their multiplicative model that researchers might assume are appropriate for testing multiplicative models more generally. The theory and analytic strategies of Greenwald et al. are used as a case study to show the strong measurement assumptions that underlie certain tests of multiplicative models. It is shown that the approach used by Greenwald et al. can lead to declarations of theoretical support when the theory is wrong as well as rejection of the theory when the theory is correct. A simple strategy for testing multiplicative models that makes weaker measurement assumptions than the strategy proposed by Greenwald et al. is suggested and discussed.
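
    The zero-point issue can be made concrete with a small regression. In a model with a product term, an additive shift of one predictor (z replaced by z + c) leaves the interaction coefficient and its test unchanged, but the coefficient on x becomes b1 - c*b3, so any procedure that interprets lower-order coefficients as "main effects" inherits the arbitrariness of the measurement scale. A synthetic sketch (variables and data invented here):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        x, z = rng.normal(size=n), rng.normal(size=n)
        y = 1.0 + 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(size=n)

        def fit(xv, zv):
            X = sm.add_constant(np.column_stack([xv, zv, xv * zv]))
            return sm.OLS(y, X).fit()

        m1 = fit(x, z)
        m2 = fit(x, z + 5.0)   # arbitrary zero-point shift, as with many scales
        print(m1.params[3], m2.params[3])  # interaction estimate: unchanged
        print(m1.params[1], m2.params[1])  # coefficient on x: b1 vs b1 - 5*b3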

  6. Test case for VVER-1000 complex modeling using MCU and ATHLET

    NASA Astrophysics Data System (ADS)

    Bahdanovich, R. B.; Bogdanova, E. V.; Gamtsemlidze, I. D.; Nikonov, S. P.; Tikhomirov, G. V.

    2017-01-01

    The correct modeling of processes occurring in the core of the reactor is very important. In the design and operation of nuclear reactors it is necessary to cover the entire range of reactor physics. Very often the calculations are carried out within the framework of only one domain, for example structural analysis, neutronics (NT), or thermal hydraulics (TH). However, this is not always adequate, as the impact of related physical processes occurring simultaneously can be significant. It is therefore recommended to perform coupled calculations. The paper provides a test case for the coupled neutronics-thermal hydraulics calculation of a VVER-1000 using the precise neutron code MCU and the system engineering code ATHLET. The model is based on the fuel assembly (type 2M). A test case for the calculation of power distribution, fuel and coolant temperature, coolant density, etc. has been developed. It is assumed that the test case will be used for simulation of the VVER-1000 reactor and in calculations using other programs, for example for cross-verification of codes. A detailed description of the codes (MCU, ATHLET), the geometry and material composition of the model, and an iterative calculation scheme is given in the paper. A script in the Perl language was written to couple the codes.
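
    The iterative scheme for such a coupled calculation can be sketched generically: the neutronics solve returns a power distribution for given fuel and coolant conditions, the thermal-hydraulics solve returns updated conditions for that power, and the pair is iterated to a fixed point. The two "solvers" below are algebraic stand-ins for MCU and ATHLET, with invented feedback coefficients.

        import numpy as np

        def neutronics(t_fuel, rho_cool):
            """Stand-in for the neutron transport solve: power per axial node."""
            base = np.array([0.8, 1.1, 1.2, 1.1, 0.8])
            # Doppler and coolant-density feedback, linearized for illustration.
            fb = 1.0 - 2e-4 * (t_fuel - 900.0) + 0.3 * (rho_cool - 0.72)
            p = base * fb
            return p / p.mean()                       # normalized power

        def thermal_hydraulics(power):
            """Stand-in for the TH solve: nodal fuel temperature and density."""
            t_fuel = 700.0 + 350.0 * power
            rho_cool = 0.75 - 0.04 * np.cumsum(power) / len(power)
            return t_fuel, rho_cool

        t_fuel, rho_cool = np.full(5, 900.0), np.full(5, 0.72)  # initial guess
        for it in range(50):
            power = neutronics(t_fuel, rho_cool)
            t_new, rho_new = thermal_hydraulics(power)
            if np.max(np.abs(t_new - t_fuel)) < 0.1:            # convergence test
                break
            # Under-relaxation stabilizes the fixed-point (Picard) iteration.
            t_fuel = 0.5 * t_fuel + 0.5 * t_new
            rho_cool = 0.5 * rho_cool + 0.5 * rho_new
        print("stopped after", it + 1, "iterations")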

  7. Investigation of different modeling approaches for computational fluid dynamics simulation of high-pressure rocket combustors

    NASA Astrophysics Data System (ADS)

    Ivancic, B.; Riedmann, H.; Frey, M.; Knab, O.; Karl, S.; Hannemann, K.

    2016-07-01

    The paper summarizes technical results and first highlights of the cooperation between DLR and Airbus Defence and Space (DS) within the work package "CFD Modeling of Combustion Chamber Processes" conducted in the frame of the Propulsion 2020 Project. Within the addressed work package, DLR Göttingen and Airbus DS Ottobrunn have identified several test cases where adequate test data are available and which can be used for proper validation of the computational fluid dynamics (CFD) tools. In this paper, the first test case, the Penn State chamber (RCM1), is discussed. Presenting the simulation results from three different tools, it is shown that the test case can be computed properly with steady-state Reynolds-averaged Navier-Stokes (RANS) approaches. The achieved simulation results reproduce the measured wall heat flux as an important validation parameter very well but also reveal some inconsistencies in the test data which are addressed in this paper.

  8. Vector-model-supported approach in prostate plan optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base, retrieving similar radiotherapy cases, was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iteration with vector-model-supported optimization by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and iteration number without compromising the plan quality.
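
    The retrieval step of such a vector-model approach can be sketched simply: each case is encoded as a feature vector, and the most similar reference case supplies its stored planning parameters as the optimization starting point. The features, weights, and parameter vectors below are illustrative inventions, not the study's DICOM-derived features.

        import numpy as np

        rng = np.random.default_rng(0)
        # Reference database: 100 cases x hypothetical geometric/physiologic
        # features (e.g., target volume, overlap with rectum/bladder, separation).
        reference = rng.normal(0.0, 1.0, (100, 6))
        plan_params = rng.uniform(0.0, 1.0, (100, 4))  # stored planning settings

        def retrieve(test_case):
            a = reference / np.linalg.norm(reference, axis=1, keepdims=True)
            b = test_case / np.linalg.norm(test_case)
            best = int(np.argmax(a @ b))               # cosine similarity
            return best, plan_params[best]             # reuse as starting point

        idx, params = retrieve(rng.normal(0.0, 1.0, 6))
        print("most similar reference case:", idx, "-> initial parameters:", params)

    Starting from retrieved parameters rather than defaults is what bypasses the trial-and-error phase and drives the roughly 50% planning-time reduction reported above.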

  9. Modeling the dissipation rate in rotating turbulent flows

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Raj, Rishi; Gatski, Thomas B.

    1990-01-01

    A variety of modifications to the modeled dissipation rate transport equation that have been proposed during the past two decades to account for rotational strains are examined. The models are subjected to two crucial test cases: the decay of isotropic turbulence in a rotating frame and homogeneous shear flow in a rotating frame. It is demonstrated that these modifications do not yield substantially improved predictions for these two test cases and in many instances give rise to unphysical behavior. An alternative proposal, based on the use of the tensor dissipation rate, is made for the development of improved models.
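
    For reference, the baseline modeled dissipation-rate equation for homogeneous turbulence, to which the rotational modifications discussed above are applied, takes the standard form

        \frac{d\varepsilon}{dt} = C_{\varepsilon 1} \, \frac{\varepsilon}{k} \, \mathcal{P} \; - \; C_{\varepsilon 2} \, \frac{\varepsilon^{2}}{k} ,

    where \mathcal{P} is the turbulence production and k the turbulent kinetic energy. Rotational corrections typically make C_{\varepsilon 1} or C_{\varepsilon 2} depend on a rotation parameter such as \Omega k / \varepsilon; the exact functional forms vary by model and are not reproduced here.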

  10. Magazine Influence on Cartridge Case Ejection Patterns with Glock Pistols.

    PubMed

    Kerkhoff, Wim; Alberink, Ivo; Mattijssen, Erwin J A T

    2018-01-01

    In this study, the cartridge case ejection patterns of six different Glock model pistols (one specimen per model) were compared under three conditions: firing with a loaded magazine, an empty magazine, and without magazine. The distances, covered by the ejected cartridge cases given these three conditions, were compared for each of the six models. A significant difference was found between the groups of data for each of the tested specimens. This indicates that it is important that, to reconstruct a shooting scene incident based on the ejection patterns of a pistol, test shots are fired with the same pistol type and under the correct magazine condition. © 2017 American Academy of Forensic Sciences.

  11. Towards a suite of test cases and a pycomodo library to assess and improve numerical methods in ocean models

    NASA Astrophysics Data System (ADS)

    Garnier, Valérie; Honnorat, Marc; Benshila, Rachid; Boutet, Martial; Cambon, Gildas; Chanut, Jérome; Couvelard, Xavier; Debreu, Laurent; Ducousso, Nicolas; Duhaut, Thomas; Dumas, Franck; Flavoni, Simona; Gouillon, Flavien; Lathuilière, Cyril; Le Boyer, Arnaud; Le Sommer, Julien; Lyard, Florent; Marsaleix, Patrick; Marchesiello, Patrick; Soufflet, Yves

    2016-04-01

    The COMODO group (http://www.comodo-ocean.fr) gathers developers of global and limited-area ocean models (NEMO, ROMS_AGRIF, S, MARS, HYCOM, S-TUGO) with the aim of addressing well-identified numerical issues. In order to evaluate existing models, to improve numerical approaches, methods, and concepts (such as effective resolution), to assess the behavior of numerical models in complex hydrodynamical regimes, and to propose guidelines for the development of future ocean models, a benchmark suite is proposed that covers both idealized test cases dedicated to targeted properties of numerical schemes and more complex test cases allowing evaluation of the coherence of the kernel. The benchmark suite is built to study separately, then together, the main components of an ocean model: the continuity and momentum equations, the advection-diffusion of tracers, the vertical coordinate design, and the time stepping algorithms. The test cases are chosen for their simplicity of implementation (analytic initial conditions), for their capacity to focus on one or a few schemes or parts of the kernel, for the availability of analytical solutions or accurate diagnoses, and lastly for simulating a key oceanic process in a controlled environment. Idealized test cases (advection-diffusion of tracers, upwelling, lock exchange, baroclinic vortex, adiabatic motion along bathymetry) allow verification of properties of numerical schemes, and bring to light numerical issues that remain undetected in realistic configurations (trajectory of a barotropic vortex, current-topography interaction). As the complexity of the simulated dynamics grows (internal wave, unstable baroclinic jet), the sharing of the same experimental designs by different existing models is useful for measuring model sensitivity to numerical choices (Soufflet et al., 2016). Lastly, test cases help in understanding the submesoscale influence on the dynamics (Couvelard et al., 2015). Such a benchmark suite is a valuable test bed for continuing research on numerical approaches, as well as an efficient tool for maintaining an oceanic code and assuring users of a validated model across a range of hydrodynamical regimes. Thanks to a common netCDF format, the suite is completed by a python library that encompasses all the tools and metrics used to assess the efficiency of the numerical methods. References - Couvelard X., F. Dumas, V. Garnier, A.L. Ponte, C. Talandier, A.M. Treguier (2015). Mixed layer formation and restratification in presence of mesoscale and submesoscale turbulence. Ocean Modelling, Vol 96-2, p 243-253. doi:10.1016/j.ocemod.2015.10.004. - Soufflet Y., P. Marchesiello, F. Lemarié, J. Jouanno, X. Capet, L. Debreu, R. Benshila (2016). On effective resolution in ocean models. Ocean Modelling, in press. doi:10.1016/j.ocemod.2015.12.004

  12. Developing a quality by design approach to model tablet dissolution testing: an industrial case study.

    PubMed

    Yekpe, Ketsia; Abatzoglou, Nicolas; Bataille, Bernard; Gosselin, Ryan; Sharkawi, Tahmer; Simard, Jean-Sébastien; Cournoyer, Antoine

    2018-07-01

    This study applied the concept of Quality by Design (QbD) to tablet dissolution. Its goal was to propose a quality control strategy to model dissolution testing of solid oral dose products according to International Conference on Harmonization guidelines. The methodology involved the following three steps: (1) a risk analysis to identify the material- and process-related parameters impacting the critical quality attributes of dissolution testing, (2) an experimental design to evaluate the influence of design factors (attributes and parameters selected by risk analysis) on dissolution testing, and (3) an investigation of the relationship between design factors and dissolution profiles. Results show that (a) in the case studied, the two parameters impacting dissolution kinetics are active pharmaceutical ingredient particle size distributions and tablet hardness and (b) these two parameters could be monitored with PAT tools to predict dissolution profiles. Moreover, based on the results obtained, modeling dissolution is possible. The practicality and effectiveness of the QbD approach were demonstrated through this industrial case study. Implementing such an approach systematically in industrial pharmaceutical production would reduce the need for tablet dissolution testing.

  13. Practical Applications of a Building Method to Construct Aerodynamic Database of Guided Missile Using Wind Tunnel Test Data

    NASA Astrophysics Data System (ADS)

    Kim, Duk-hyun; Lee, Hyoung-Jin

    2018-04-01

    A study of an efficient aerodynamic database modeling method was conducted. The creation of a database using the periodicity and symmetry characteristics of missile aerodynamic coefficients was investigated to minimize the number of wind tunnel test cases. In addition, studies of how to generate the aerodynamic database when the periodicity changes due to the installation of a protuberance, and of how to conduct a zero calibration, were carried out. Depending on the missile configuration, the required number of test cases changes, and some tests can be omitted. A database of aerodynamic coefficients as a function of control surface deflection angle can be constructed using phase shifts. The validity of the modeling method was demonstrated by confirming that aerodynamic coefficients calculated with it agreed with the wind tunnel test results.
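
    The bookkeeping behind such a database can be sketched as follows, assuming a cruciform missile for which a coefficient is (as is typical for such configurations) periodic in roll angle with a 90-degree period and mirror-symmetric about zero, so only the 0-45 degree wedge needs testing. The angles and values are illustrative; coefficients that are odd in roll would additionally need a sign flip under the mirror step.

        import numpy as np

        measured_phi = np.array([0.0, 15.0, 30.0, 45.0])   # tested roll angles (deg)
        measured_cn = np.array([0.00, 0.12, 0.20, 0.24])   # illustrative coefficient

        def coeff(phi_deg):
            """Fill the full 0-360 deg table from the measured 0-45 deg wedge."""
            phi = phi_deg % 90.0              # 90-deg periodicity (cruciform fins)
            if phi > 45.0:
                phi = 90.0 - phi              # mirror symmetry within the period
            return np.interp(phi, measured_phi, measured_cn)

        for phi in (10.0, 60.0, 100.0, 350.0):
            print(phi, "->", round(float(coeff(phi)), 3))

    The same mirroring idea, applied along the control-deflection axis, is what the phase-shift construction of the deflection database exploits.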

  14. The influence of prototype testing in three-dimensional aortic models on fenestrated endograft design.

    PubMed

    Taher, Fadi; Falkensammer, Juergen; McCarte, Jamie; Strassegger, Johann; Uhlmann, Miriam; Schuch, Philipp; Assadian, Afshin

    2017-06-01

    The fenestrated Anaconda endograft (Vascutek/Terumo, Inchinnan, UK) is intended for the treatment of abdominal aortic aneurysms with an insufficient infrarenal landing zone. The endografts are custom-made with use of high-resolution, 1-mm-slice computed tomography angiography images. For every case, a nonsterile prototype and a three-dimensional (3D) model of the patient's aorta are constructed to allow the engineers as well as the physician to test-implant the device and to review the fit of the graft. The aim of this investigation was to assess the impact of 3D model construction and prototype testing on the design of the final sterile endograft. A prospectively held database on fenestrated endovascular aortic repair patients treated at a single institution was completed with data from the Vascutek engineers' prototype test results as well as the product request forms. Changes to endograft design based on prototype testing were assessed and are reported for all procedures. Between April 1, 2013, and August 18, 2015, 60 fenestrated Anaconda devices were implanted. Through prototype testing, engineers were able to identify and report potential risks to technical success related to use of the custom device for the respective patient. Theoretical concerns about endograft fit in the rigid model were expressed in 51 cases (85.0%), and the engineers suggested potential changes to the design of 21 grafts (35.0%). Thirteen cases (21.7%) were eventually modified after the surgeon's testing of the prototype. A second prototype was ordered in three cases (5.0%) because of extensive changes to endograft design, such as inclusion of an additional fenestration. Technical success rates were comparable for grafts that showed a perfect fit from the beginning and cases in which prototype testing resulted in a modification of graft design. Planning and construction of fenestrated endografts for complex aortic anatomies where exact fit and positioning of the graft are paramount to allow cannulation of the aortic branches are challenging. In the current series, approximately one in five endografts was modified after prototype testing in an aortic model. Eventually, success rates were comparable between the groups of patients with a good primary prototype fit and those in which the endograft design was altered. Prototype testing in 3D aortic models is a valuable tool to test the fit of a custom-made endograft before implantation. This may help avoid potentially debilitating adverse events associated with misaligned fenestrations and unconnected aortic branches. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  15. Bayes factors based on robust TDT-type tests for family trio design.

    PubMed

    Yuan, Min; Pan, Xiaoqing; Yang, Yaning

    2015-06-01

    The adaptive transmission disequilibrium test (aTDT) and the MAX3 test are two robust and efficient association tests for case-parent family trio data. Both tests incorporate information on common genetic models, including the recessive, additive and dominant models, and are efficient in power and robust to genetic model specification. The aTDT uses the departure from Hardy-Weinberg disequilibrium to identify the potential genetic model underlying the data and then applies the corresponding TDT-type test, while the MAX3 test is defined as the maximum of the absolute values of the three TDT-type tests under the three common genetic models. In this article, we propose three robust Bayes procedures, the aTDT-based Bayes factor, the MAX3-based Bayes factor and Bayes model averaging (BMA), for association analysis with the case-parent trio design. The asymptotic distributions of the aTDT under the null and alternative hypotheses are derived in order to calculate its Bayes factor. Extensive simulations show that the Bayes factors and the p-values of the corresponding tests are generally consistent, and that these Bayes factors are robust to genetic model specification, especially so when the priors on the genetic models are equal. When equal priors are used for the underlying genetic models, the Bayes factor method based on the aTDT is more powerful than those based on MAX3 and Bayes model averaging. When the prior places a small (large) probability on the true model, the Bayes factor based on the aTDT (BMA) is more powerful. Analysis of simulated rheumatoid arthritis (RA) data from GAW15 is presented to illustrate applications of the proposed methods.
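    As a rough sketch of the two robust summaries named above, the snippet below combines per-model TDT-type z-statistics into MAX3 and averages per-model Bayes factors under equal priors. All numbers are invented; in practice the statistics are computed from the trio transmission counts.

```python
# Hypothetical per-model TDT-type z-statistics (recessive/additive/dominant).
z = {"recessive": 1.1, "additive": 2.4, "dominant": 2.0}

# MAX3: maximum absolute TDT-type statistic over the three genetic models.
max3 = max(abs(v) for v in z.values())

# Bayes model averaging over the three models with equal priors; the
# per-model Bayes factors here are likewise made up for illustration.
bf_per_model = {"recessive": 0.8, "additive": 9.5, "dominant": 4.2}
priors = {m: 1.0 / 3.0 for m in bf_per_model}
bf_bma = sum(priors[m] * bf_per_model[m] for m in bf_per_model)

print(max3, bf_bma)
```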

  16. NONPARAMETRIC MANOVA APPROACHES FOR NON-NORMAL MULTIVARIATE OUTCOMES WITH MISSING VALUES

    PubMed Central

    He, Fanyin; Mazumdar, Sati; Tang, Gong; Bhatia, Triptish; Anderson, Stewart J.; Dew, Mary Amanda; Krafty, Robert; Nimgaonkar, Vishwajit; Deshpande, Smita; Hall, Martica; Reynolds, Charles F.

    2017-01-01

    Between-group comparisons often entail many correlated response variables. The multivariate linear model, with its assumption of multivariate normality, is the accepted standard tool for these tests. When this assumption is violated, the nonparametric multivariate Kruskal-Wallis (MKW) test is frequently used. However, this test requires complete cases with no missing values in response variables. Deletion of cases with missing values likely leads to inefficient statistical inference. Here we extend the MKW test to retain information from partially-observed cases. Results of simulated studies and analysis of real data show that the proposed method provides adequate coverage and superior power to complete-case analyses. PMID:29416225
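    For orientation, here is a minimal sketch of the complete-case MKW statistic that the paper extends, in its usual rank-based quadratic form; this is an illustration under that standard formulation, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def mkw_statistic(groups):
    """groups: list of (n_i x p) arrays of complete cases (k groups).
    Returns the rank-based quadratic-form statistic, asymptotically
    chi-square with p*(k-1) degrees of freedom under the null."""
    X = np.vstack(groups)
    n, p = X.shape
    R = np.apply_along_axis(stats.rankdata, 0, X)  # rank each variable
    Rbar = R.mean(axis=0)                          # equals (n + 1) / 2
    S = (R - Rbar).T @ (R - Rbar) / (n - 1)        # pooled rank covariance
    W, start = 0.0, 0
    for g in groups:
        ni = len(g)
        d = R[start:start + ni].mean(axis=0) - Rbar
        W += ni * float(d @ np.linalg.solve(S, d))
        start += ni
    return W

rng = np.random.default_rng(0)
g1, g2 = rng.normal(size=(30, 3)), rng.normal(size=(25, 3)) + 0.5
print(mkw_statistic([g1, g2]))  # compare to chi-square with 3*(2-1) = 3 df
```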

  17. Mathematical Models of IABG Thermal-Vacuum Facilities

    NASA Astrophysics Data System (ADS)

    Doring, Daniel; Ulfers, Hendrik

    2014-06-01

    IABG in Ottobrunn, Germany, operates thermal-vacuum facilities of different sizes and complexities as a service for space-testing of satellites and components. One aspect of these tests is the qualification of the thermal control system that keeps all onboard components within their safe operating temperature band. As not all possible operation / mission states can be simulated within a sensible test time, usually a subset of important and extreme states is tested at TV facilities to validate the thermal model of the satellite, which is then used to model all other possible mission states. With advances in the precision of customer thermal models, simple assumptions of the test environment (e.g. everything black & cold, one solar constant of light from this side) are no longer sufficient, as real space simulation chambers do deviate from this ideal. For example, the mechanical adapters which support the spacecraft are usually not actively cooled. To enable IABG to provide a model that is sufficiently detailed and realistic for current system tests, Munich engineering company CASE developed ESATAN models for the two larger chambers. CASE has many years of experience in thermal analysis for space-flight systems and ESATAN. The two models represent the rather simple (and therefore very homogeneous) 3m-TVA and the extremely complex space simulation test facility and its solar simulator. The cooperation of IABG and CASE built up extensive knowledge of the facilities' thermal behaviour. This is the key to optimally supporting customers with their test campaigns in the future. The ESARAD part of the models contains all relevant information with regard to geometry (CAD data), surface properties (optical measurements) and solar irradiation for the sun simulator. The temperature of the actively cooled thermal shrouds is measured and mapped to the thermal mesh to create the temperature field in the ESATAN part as boundary conditions. Both models comprise switches to easily establish multiple possible set-ups (e.g. exclude components like the motion system or enable / disable the solar simulator). Both models were validated by comparing calculated results (thermal balance temperatures for simple passive test articles) with measured temperatures generated in actual tests in these facilities. This paper presents information about the chambers, the modelling approach, properties of the models and their performance in the validation tests.

  18. 42 CFR § 512.100 - EPM episodes being tested.

    Code of Federal Regulations, 2010 CFR

    2017-10-01

    ... SERVICES (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS EPISODE PAYMENT MODEL Episode Payment Model Participants § 512.100 EPM episodes being tested. (a) Initiation of an episode. An episode is... under an EPM anchor MS-DRG and, in the case of the AMI model, with an AMI ICD-10-CM diagnosis code if...

  19. FAST Model Calibration and Validation of the OC5-DeepCwind Floating Offshore Wind System Against Wave Tank Test Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  20. Modeling Multibody Stage Separation Dynamics Using Constraint Force Equation Methodology

    NASA Technical Reports Server (NTRS)

    Tartabini, Paul V.; Roithmayr, Carlos M.; Toniolo, Matthew D.; Karlgaard, Christopher D.; Pamadi, Bandu N.

    2011-01-01

    This paper discusses the application of the constraint force equation methodology and its implementation for multibody separation problems using three specially designed test cases. The first test case involves two rigid bodies connected by a fixed joint, the second case involves two rigid bodies connected with a universal joint, and the third test case is that of Mach 7 separation of the X-43A vehicle. For the first two cases, the solutions obtained using the constraint force equation method compare well with those obtained using industry-standard benchmark codes. For the X-43A case, the constraint force equation solutions show reasonable agreement with the flight-test data. Use of the constraint force equation method facilitates the analysis of stage separation in end-to-end simulations of launch vehicle trajectories.
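    As a toy illustration of the constraint force idea, the sketch below shrinks the first test case to two point masses rigidly joined in one dimension and solves the augmented (KKT) system for the accelerations and the joint constraint force in one step; all numbers are hypothetical.

```python
import numpy as np

m1, m2 = 1200.0, 800.0           # masses of the two bodies (kg), made up
f = np.array([5000.0, 0.0])      # external forces (N): thrust on body 1 only

# Fixed joint in 1-D: x1 - x2 = const  =>  a1 - a2 = 0  =>  A @ a = 0
A = np.array([[1.0, -1.0]])
M = np.diag([m1, m2])

# Augmented system: [[M, A^T], [A, 0]] [a; lam] = [f; 0]
K = np.block([[M, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.append(f, 0.0))
a, lam = sol[:2], sol[2]         # common acceleration and constraint force

print(a, lam)  # both bodies accelerate at 2.5 m/s^2; the joint carries
               # 2000 N (sign per the chosen constraint convention)
```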

  1. Mixed Model Association with Family-Biased Case-Control Ascertainment.

    PubMed

    Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L

    2017-01-05

    Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ² = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ² = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  2. SU-E-T-131: Artificial Neural Networks Applied to Overall Survival Prediction for Patients with Periampullary Carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Y; Yu, J; Yeung, V

    Purpose: Artificial neural networks (ANN) can be used to discover complex relations within datasets to help with medical decision making. This study aimed to develop an ANN method to predict two-year overall survival of patients with peri-ampullary cancer (PAC) following resection. Methods: Data were collected from 334 patients with PAC following resection treated in our institutional pancreatic tumor registry between 2006 and 2012. The dataset contains 14 variables including age, gender, T-stage, tumor differentiation, positive-lymph-node ratio, positive resection margins, chemotherapy, radiation therapy, and tumor histology. After censoring for two-year survival analysis, 309 patients were left, of which 44 patients (∼15%) were randomly selected to form the testing set. The remaining 265 cases were randomly divided into a training set (211 cases, ∼80% of 265) and a validation set (54 cases, ∼20% of 265) 20 times to build 20 ANN models. Each ANN has one hidden layer with 5 units. The 20 ANN models were ranked according to their concordance index (c-index) of prediction on the validation sets. To further improve prediction, the top 10% of ANN models were selected, and their outputs averaged for prediction on the testing set. Results: By random division, the 44 cases in the testing set and the remaining 265 cases have approximately equal two-year survival rates, 36.4% and 35.5% respectively. The 20 ANN models, which were trained and validated on the 265 cases, yielded mean c-indexes of 0.59 and 0.63 on the validation sets and the testing set, respectively. The c-index was 0.72 when the two best ANN models (top 10%) were used in prediction on the testing set. The c-index of Cox regression analysis was 0.63. Conclusion: ANN improved survival prediction for patients with PAC. More patient data and further analysis of additional factors may be needed for a more robust model, which will help guide physicians in providing optimal post-operative care. This project was supported by PA CURE Grant.
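    A minimal sketch of the described ensemble scheme, with synthetic data standing in for the registry and AUC serving as the c-index for the binary two-year-survival outcome; the split sizes follow the abstract, everything else is assumed.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in: 309 patients, 14 variables, binary 2-year survival.
X, y = make_classification(n_samples=309, n_features=14, random_state=0)
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=44,
                                                random_state=0)

models = []
for seed in range(20):                       # 20 random train/val splits
    X_tr, X_val, y_tr, y_val = train_test_split(X_dev, y_dev, test_size=0.2,
                                                random_state=seed)
    net = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                        random_state=seed).fit(X_tr, y_tr)
    auc = roc_auc_score(y_val, net.predict_proba(X_val)[:, 1])
    models.append((auc, net))

# Keep the top 10% (2 of 20) by validation c-index and average their outputs.
top = [net for _, net in sorted(models, key=lambda t: -t[0])[:2]]
p_test = np.mean([net.predict_proba(X_test)[:, 1] for net in top], axis=0)
print("ensemble test AUC:", roc_auc_score(y_test, p_test))
```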

  3. Stability analysis of an F/A-18 E/F cable mount model

    NASA Technical Reports Server (NTRS)

    Thompson, Nancy; Farmer, Moses

    1994-01-01

    A full-span F/A-18 E/F cable mounted wind tunnel model is part of a flutter clearance program at the NASA Langley Transonic Dynamics Tunnel. Parametric analysis of this model using GRUMCBL software was conducted to assess stability for wind tunnel tests. Two configurations of the F/A-18 E/F were examined. The parameters examined were pulley-cable friction, Mach number, dynamic pressure, cable geometry, center of gravity location, cable tension, snubbing the model, drag, and test medium. For the nominal cable geometry (Cable Geometry 1), Configuration One was unstable for cases with higher pulley-cable friction coefficients. A new cable geometry (Cable Geometry 3) was determined in which Configuration One was stable for all cases evaluated. Configuration Two with the nominal center of gravity position was found to be unstable for cases with higher pulley-cable friction coefficients; however, the model was stable when the center of gravity moved forward 1/2. The model was tested using the cable mount system during the initial wind tunnel entry and was stable as predicted.

  4. A comparison of theory and experiment for coupled rotor body stability of a bearingless rotor model in hover and forward flight

    NASA Technical Reports Server (NTRS)

    Mirick, Paul H.

    1988-01-01

    Seven cases were selected for correlation from a 1/5.86 Froude-scale experiment that examined several rotor designs which were being considered for full-scale flight testing as part of the Bearingless Main Rotor (BMR) program. The model rotor hub used in these tests consisted of back-to-back C-beams as flexbeam elements with a torque tube for pitch control. The first four cases selected from the experiment were hover tests which examined the effects on rotor stability of variations in hub-to-flexbeam coning, hub-to-flexbeam pitch, flexbeam-to-blade coning, and flexbeam-to-blade pitch. The final three cases were selected from the forward flight tests of optimum rotor configuration as defined during the hover test. The selected cases examined the effects of variations in forward speed, rotor speed, and shaft angle. Analytical results from Bell Helicopter Textron, Boeing Vertol, Sikorsky Aircraft, and the U.S. Army Aeromechanics Laboratory were compared with the data and the correlations ranged from poor-to-fair to fair-to-good.

  5. Nonparametric estimation and testing of fixed effects panel data models

    PubMed Central

    Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi

    2009-01-01

    In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335

  6. Structural Dynamics Modeling of HIRENASD in Support of the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Wieseman, Carol; Chwalowski, Pawel; Heeg, Jennifer; Boucke, Alexander; Castro, Jack

    2013-01-01

    An Aeroelastic Prediction Workshop (AePW) was held in April 2012 using three aeroelasticity case study wind tunnel tests for assessing the capabilities of various codes in making aeroelasticity predictions. One of these case studies was known as the HIRENASD model that was tested in the European Transonic Wind Tunnel (ETW). This paper summarizes the development of a standardized enhanced analytical HIRENASD structural model for use in the AePW effort. The modifications to the HIRENASD finite element model were validated by comparing modal frequencies, evaluating modal assurance criteria, comparing leading edge, trailing edge and twist of the wing with experiment and by performing steady and unsteady CFD analyses for one of the test conditions on the same grid, and identical processing of results.

  7. A new fit-for-purpose model testing framework: Decision Crash Tests

    NASA Astrophysics Data System (ADS)

    Tolson, Bryan; Craig, James

    2016-04-01

    Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have identified that a good standard framework for model testing, the Klemeš Crash Tests (KCTs), which are the classic model validation procedures from Klemeš (1986) that Andréassian et al. (2009) renamed as KCTs, has yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCTs and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing whether the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests or DCTs. Key DCT elements are i) the model purpose (i.e., the decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions, and ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration, etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model-building case study on the Grand River in Ontario, Canada. A hypothetical binary decision scenario is analysed (whether or not to upgrade the existing flood control structure) under two different sets of model-building decisions. In one case, we show the set of model-building decisions has a low probability of correctly supporting the upgrade decision. In the other case, we show evidence suggesting another set of model-building decisions has a high probability of correctly supporting the decision. The proposed DCT framework focuses on what model users typically care about: the management decision in question. The DCT framework will often be very strict and will produce easy-to-interpret results enabling clear unsuitability determinations. In the past, hydrologic modelling progress has necessarily meant new models and model-building methods. Continued progress in hydrologic modelling requires finding clear evidence to motivate researchers to disregard unproductive models and methods, and the DCT framework is built to produce this kind of evidence. References: Andréassian, V., C. Perrin, L. Berthet, N. Le Moine, J. Lerat, C. Loumagne, L. Oudin, T. Mathevet, M.-H. Ramos, and A. Valéry (2009), Crash tests for a standardized evaluation of hydrological models. Hydrology and Earth System Sciences, 13, 1757-1764. Klemeš, V. (1986), Operational testing of hydrological simulation models. Hydrological Sciences Journal, 31 (1), 13-24.

  8. Constrained inversion as a hypothesis testing tool, what can we learn about the lithosphere?

    NASA Astrophysics Data System (ADS)

    Moorkamp, Max; Fishwick, Stewart; Jones, Alan G.

    2017-04-01

    Inversion of geophysical data constrained by a reference model is typically used to guide the inversion of low-resolution data towards a geologically plausible solution. For example, a migrated seismic section can provide the location of lithological boundaries for potential field inversions. Here we consider the inversion of long-period magnetotelluric data constrained by models generated through surface wave inversion. In this case, we do not consider the surface wave model inherently better in any sense, and we do not simply want to guide the magnetotelluric inversion towards this model; instead, we want to test the hypothesis that both datasets can be explained by models with similar structure. If the hypothesis test is successful, i.e. we can fit the observations with a conductivity model with structural similarity to the seismic model, we have found an alternative explanation compared to the individual inversion and can use the differences to learn about the resolution of the magnetotelluric data and improve our interpretation. Conversely, if the test refutes our hypothesis of coincident structure, we have found features in the models that are sensed fundamentally differently by the two methods, which is potentially instructive on the nature of the anomalies. We use an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons, together with a tomographic model for the region, to illustrate and test this approach. Here, various conductive structures have been identified that bridge the Moho. Furthermore, the thickness of the lithosphere inferred from the different methods differs. In both cases the question is to what extent this is a result of the ill-posed nature of inversion and to what extent these differences can be reconciled. Thus this dataset is an ideal test case for our hypothesis testing approach. Finally, we will demonstrate how we can use the results of the constrained inversion to extract conductivity-velocity relationships in the region and gain further insight into the composition and thermal structure of the lithosphere.

  9. Prediction of Acoustic Loads Generated by Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Perez, Linamaria; Allgood, Daniel C.

    2011-01-01

    NASA Stennis Space Center is one of the nation's premier facilities for conducting large-scale rocket engine testing. As liquid rocket engines vary in size, so do the acoustic loads that they produce. When these acoustic loads reach very high levels they may cause damage both to humans and to structures surrounding the testing area. To prevent such damage, prediction tools are used to estimate the spectral content and levels of the acoustics generated by the rocket engine plumes and to model their propagation through the surrounding atmosphere. Prior to the current work, two different acoustic prediction tools were being implemented at Stennis Space Center, each having its own advantages and disadvantages depending on the application. Therefore, a new prediction tool was created, using the NASA SP-8072 handbook as a guide, which replicates the same prediction methods as the previous codes but eliminates the drawbacks the individual codes had. Aside from replicating the previous modeling capability in a single framework, additional modeling functions were added, thereby expanding the current modeling capability. To verify that the new code could reproduce the same predictions as the previous codes, two verification test cases were defined. These verification test cases also served as validation cases, as the predicted results were compared to actual test data.

  11. Thermal Testing and Model Correlation of the Magnetospheric Multiscale (MMS) Observatories

    NASA Technical Reports Server (NTRS)

    Kim, Jong S.; Teti, Nicholas M.

    2015-01-01

    The Magnetospheric Multiscale (MMS) mission is a Solar Terrestrial Probes mission comprising four identically instrumented spacecraft that will use Earth's magnetosphere as a laboratory to study the microphysics of three fundamental plasma processes: magnetic reconnection, energetic particle acceleration, and turbulence. This paper presents the complete thermal balance (TB) test performed on the first of four observatories to go through thermal vacuum (TV) and the minibalance testing that was performed on the subsequent observatories to provide a comparison of all four. The TV and TB tests were conducted in a thermal vacuum chamber at the Naval Research Laboratory (NRL) in Washington, D.C. with the vacuum level higher than 1.3 × 10⁻⁴ Pa (10⁻⁶ torr) and the surrounding temperature achieving -180 degrees Celsius. Three TB test cases were performed that included hot operational science, cold operational science and a cold survival case. In addition to the three balance cases, a two-hour eclipse and a four-hour eclipse simulation were performed during the TV test to provide additional transient data points that represent the orbit in eclipse (or Earth's shadow). The goal was to perform testing such that the flight orbital environments could be simulated as closely as possible. A thermal model correlation between the thermal analysis and the test results was completed. Over 400 1-Wire temperature sensors, 200 thermocouples and 125 flight thermistor temperature sensors recorded data during TV and TB testing. These temperature versus time profiles and their agreements with the analytical results obtained using Thermal Desktop and SINDA/FLUINT are discussed. The model correlation for the thermal mathematical model (TMM) is conducted based on the numerical analysis results and the test data. The philosophy of model correlation was to correlate the model to within 3 degrees Celsius of the test data using the standard deviation and mean deviation error calculation. The individual temperature error goal is to be within 5 degrees Celsius, and the heater power goal is to be within 5 percent of test data. The results of the model correlation are discussed and the effect of some material and interface parameters on the temperature profiles is presented.

  12. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    PubMed Central

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179

  13. Considerations for test design to accommodate energy-budget models in ecotoxicology: a case study for acetone in the pond snail Lymnaea stagnalis.

    PubMed

    Barsi, Alpar; Jager, Tjalling; Collinet, Marc; Lagadic, Laurent; Ducrot, Virginie

    2014-07-01

    Toxicokinetic-toxicodynamic (TKTD) modeling offers many advantages in the analysis of ecotoxicity test data. Calibration of TKTD models, however, places different demands on test design compared with classical concentration-response approaches. In the present study, useful complementary information is provided regarding test design for TKTD modeling. A case study is presented for the pond snail Lymnaea stagnalis exposed to the narcotic compound acetone, in which the data on all endpoints were analyzed together using a relatively simple TKTD model called DEBkiss. Furthermore, the influence of the data used for calibration on accuracy and precision of model parameters is discussed. The DEBkiss model described toxic effects on survival, growth, and reproduction over time well, within a single integrated analysis. Regarding the parameter estimates (e.g., no-effect concentration), precision rather than accuracy was affected depending on which data set was used for model calibration. In addition, the present study shows that the intrinsic sensitivity of snails to acetone stays the same across different life stages, including the embryonic stage. In fact, the data on egg development allowed for selection of a unique metabolic mode of action for the toxicant. Practical and theoretical considerations for test design to accommodate TKTD modeling are discussed in the hope that this information will aid other researchers to make the best possible use of their test animals. © 2014 SETAC.

  14. Universal Verification Methodology Based Register Test Automation Flow.

    PubMed

    Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu

    2016-05-01

    In today's SoC design, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in a shorter time. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or to integrate the models into a test-bench environment, because doing so requires knowledge of SystemVerilog and the UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe a register specification in the IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template which is translated to an IP-XACT description, from which register models can be easily generated using commercial tools. On the other hand, we also automate all the steps involved in integrating the test-bench and generating test cases, so that designers may use the register model without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing a test sequence for each type of register test case. With the proposed flow, designers can save a considerable amount of time in verifying the functionality of registers.
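    A toy sketch of the spreadsheet-to-IP-XACT step: register rows are read from a CSV spreadsheet export and emitted as an abbreviated IP-XACT-style register description. The column names and the trimmed-down, namespace-free XML are illustrative assumptions, not the authors' exact format.

```python
import csv, io
import xml.etree.ElementTree as ET

# Stand-in for a spreadsheet export; real register specs carry more columns
# (fields, reset values, access policies per field, and so on).
spec = io.StringIO("""name,offset,width,access
CTRL,0x00,32,read-write
STATUS,0x04,32,read-only
""")

block = ET.Element("addressBlock")
for row in csv.DictReader(spec):
    reg = ET.SubElement(block, "register")
    ET.SubElement(reg, "name").text = row["name"]
    ET.SubElement(reg, "addressOffset").text = row["offset"]
    ET.SubElement(reg, "size").text = row["width"]
    ET.SubElement(reg, "access").text = row["access"]

print(ET.tostring(block, encoding="unicode"))
```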

  15. Testing and Modeling of a 3-MW Wind Turbine Using Fully Coupled Simulation Codes (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaCava, W.; Guo, Y.; Van Dam, J.

    This poster describes the NREL/Alstom Wind testing and model verification of the Alstom 3-MW wind turbine located at NREL's National Wind Technology Center. NREL, in collaboration with ALSTOM Wind, is studying a 3-MW wind turbine installed at the National Wind Technology Center (NWTC). The project analyzes the turbine design using a state-of-the-art simulation code validated with detailed test data. This poster describes the testing and the model validation effort, and provides conclusions about the performance of the unique drive train configuration used in this wind turbine. The 3-MW machine has been operating at the NWTC since March 2011, and drive train measurements will be collected through the spring of 2012. The NWTC testing site has particularly turbulent wind patterns that allow for the measurement of large transient loads and the resulting turbine response. This poster describes the 3-MW turbine test project, the instrumentation installed, and the load cases captured. The design of a reliable wind turbine drive train increasingly relies on the use of advanced simulation to predict structural responses in a varying wind field. This poster presents a fully coupled, aero-elastic and dynamic model of the wind turbine. It also shows the methodology used to validate the model, including the use of measured tower modes, model-to-model comparisons of the power curve, and mainshaft bending predictions for various load cases. The drivetrain is designed to transmit only torque to the gearbox, eliminating non-torque moments that are known to cause gear misalignment. Preliminary results show that the drivetrain is able to divert bending loads in extreme loading cases, and that a significantly smaller bending moment is induced on the mainshaft compared to a three-point mounting design.

  16. Time-series-based hybrid mathematical modelling method adapted to forecast automotive and medical waste generation: Case study of Lithuania.

    PubMed

    Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras

    2018-05-01

    The aim of the study was to create a hybrid forecasting method that could produce higher accuracy forecasts than previously used 'pure' time series methods. The latter methods had already been tested on total automotive waste, hazardous automotive waste, and total medical waste generation, but demonstrated at least a 6% error rate in different cases, and efforts were made to decrease it further. The newly developed hybrid models used a random start generation method to incorporate the advantages of different time series methods, which helped to increase the accuracy of forecasts by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not increase the accuracy of total automotive waste generation forecasts. The developed models' abilities to produce short- and mid-term forecasts were tested using the prediction horizon.

  17. A longitudinal test of the demand-control model using specific job demands and specific job control.

    PubMed

    de Jonge, Jan; van Vegchel, Natasja; Shimazu, Akihito; Schaufeli, Wilmar; Dormann, Christian

    2010-06-01

    Supportive studies of the demand-control (DC) model were more likely to measure specific demands combined with a corresponding aspect of control. This study presents a longitudinal test of Karasek's (Adm Sci Q. 24:285-308, 1979) job strain hypothesis including specific measures of job demands and job control, and both self-reported and objectively recorded well-being. The job strain hypothesis was tested among 267 health care employees from a two-wave Dutch panel survey with a 2-year time lag. Significant demand/control interactions were found for mental and emotional demands, but not for physical demands. The association between job demands and job satisfaction was positive in case of high job control, whereas this association was negative in case of low job control. In addition, the relation between job demands and psychosomatic health symptoms/sickness absence was negative in case of high job control and positive in case of low control. Longitudinal support was found for the core assumption of the DC model with specific measures of job demands and job control as well as self-reported and objectively recorded well-being.

  18. INNOVATIVE INSTRUMENTATION AND ANALYSIS OF THE TEMPERATURE MEASUREMENT FOR HIGH TEMPERATURE GASIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seong W. Lee

    During this reporting period, the literature survey, covering the gasifier temperature measurement literature, ultrasonic applications and their background in cleaning, and the spray coating process, was completed. The gasifier simulator (cold model) testing has been successfully conducted. Four factors (blower voltage, ultrasonic application, injection time intervals, particle weight) were considered as significant factors that affect the temperature measurement. Analysis of Variance (ANOVA) was applied to analyze the test data. The analysis shows that all four factors are significant to the temperature measurements in the gasifier simulator (cold model). The regression analysis for the case with the normalized room temperature shows that a linear model fits the temperature data with 82% accuracy (18% error). The regression analysis for the case without the normalized room temperature shows 72.5% accuracy (27.5% error). The nonlinear regression analysis indicates a better fit than that of the linear regression: the nonlinear regression model's accuracy is 88.7% (11.3% error) for the normalized room temperature case. The hot model thermocouple sleeve design and fabrication are completed. The gasifier simulator (hot model) design and fabrication are completed. System tests of the gasifier simulator (hot model) have been conducted and some modifications have been made. Based on the system tests and analysis of the results, the gasifier simulator (hot model) has met the proposed design requirements and is ready for system testing. The ultrasonic cleaning method is under evaluation and will be further studied for the gasifier simulator (hot model) application. The progress of this project has been on schedule.
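    As a small illustration of the linear-versus-nonlinear comparison reported above, the sketch below fits linear and quadratic least-squares models to synthetic temperature readings against a single factor and compares their R² values. The data and the factor (blower voltage) are stand-ins, and the abstract's "accuracy" metric is not specified in detail, so R² is used here as one common fit measure.

```python
import numpy as np

rng = np.random.default_rng(2)
v = np.linspace(2.0, 10.0, 40)   # hypothetical blower voltage settings (V)
T = 20.0 + 1.5 * v + 0.4 * v**2 + rng.normal(scale=1.0, size=v.size)

def r2(y, yhat):
    """Coefficient of determination (one common fit metric)."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

lin = np.polyval(np.polyfit(v, T, 1), v)    # linear regression
quad = np.polyval(np.polyfit(v, T, 2), v)   # nonlinear (quadratic) regression
print(f"linear R^2 = {r2(T, lin):.3f}, quadratic R^2 = {r2(T, quad):.3f}")
```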

  19. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    NASA Astrophysics Data System (ADS)

    Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML state machine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.
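    The core of such a generator can be sketched as follows: given a state machine, collect for each transition a shortest event path from the initial state, which yields an edge-coverage test suite; each event list would then be converted into an executable script. The tiny GUI model is hypothetical, and the UML/Uppaal specifics are omitted.

```python
from collections import deque

transitions = {            # state -> {event: next_state}; made-up GUI model
    "Home": {"menu": "Menu"},
    "Menu": {"select": "Dose", "back": "Home"},
    "Dose": {"confirm": "Home", "back": "Menu"},
}

def shortest_event_path(src, target_edge):
    """BFS over states; return the event sequence that fires target_edge."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        state, path = queue.popleft()
        for event, nxt in transitions[state].items():
            if (state, event) == target_edge:
                return path + [event]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [event]))

# One test case per transition gives edge coverage of the state machine.
suite = [shortest_event_path("Home", (s, ev))
         for s in transitions for ev in transitions[s]]
for case in suite:
    print(case)   # each event list becomes one executable test script
```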

  20. W-8 Acoustic Casing Treatment Test Overview

    NASA Technical Reports Server (NTRS)

    Bozak, Rick; Podboy, Gary; Dougherty, Robert

    2017-01-01

    During February 2017, aerodynamic and acoustic testing was performed on a scale-model high bypass ratio turbofan rotor, R4, in an internal flow component test facility. An overview of the testing completed is presented.

  1. The Vanishing Tetrad Test: Another Test of Model Misspecification

    ERIC Educational Resources Information Center

    Roos, J. Micah

    2014-01-01

    The Vanishing Tetrad Test (VTT) (Bollen, Lennox, & Dahly, 2009; Bollen & Ting, 2000; Hipp, Bauer, & Bollen, 2005) is an extension of the Confirmatory Tetrad Analysis (CTA) proposed by Bollen and Ting (Bollen & Ting, 1993). VTT is a powerful tool for detecting model misspecification and can be particularly useful in cases in which…
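    For concreteness, the quantity the VTT examines can be sketched as follows: for four observed variables, a tetrad is a difference of products of covariances, and a one-factor model implies that all three such tetrads vanish in the population. The snippet uses synthetic one-factor data; the actual VTT adds a formal simultaneous test of the model-implied tetrads.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=1000)                          # latent factor
loadings = np.array([0.9, 0.8, 0.7, 0.6])
X = np.outer(f, loadings) + rng.normal(scale=0.5, size=(1000, 4))
S = np.cov(X, rowvar=False)

# The three tetrads among variables 1-4; all vanish under a one-factor model.
t1 = S[0, 1] * S[2, 3] - S[0, 2] * S[1, 3]
t2 = S[0, 1] * S[2, 3] - S[0, 3] * S[1, 2]
t3 = S[0, 2] * S[1, 3] - S[0, 3] * S[1, 2]
print(t1, t2, t3)  # all near zero for these data
```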

  2. Preliminary dynamic tests of a flight-type ejector

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1992-01-01

    A thrust augmenting ejector was tested to provide experimental data to assist in the assessment of theoretical models to predict duct and ejector fluid-dynamic characteristics. Eleven full-scale thrust augmenting ejector tests were conducted in which a rapid increase in the ejector nozzle pressure ratio was effected through a unique bypass/burst-disk subsystem. The present work examines two cases representative of the test performance window. In the first case, the primary nozzle pressure ratio (NPR) increased 36 percent from one unchoked (NPR = 1.29) primary flow condition to another (NPR = 1.75) over a 0.15-second interval. The second case involves choked primary flow conditions, where a 17 percent increase in primary nozzle flowrate (from NPR = 2.35 to NPR = 2.77) occurred over approximately 0.1 seconds. Transient signal treatment of the present dataset is discussed and initial interpretations of the results are compared with theoretical predictions for a similar STOVL ejector model.

  3. Using Generalized Additive Models to Analyze Single-Case Designs

    ERIC Educational Resources Information Center

    Shadish, William; Sullivan, Kristynn

    2013-01-01

    Many analyses for single-case designs (SCDs)--including nearly all the effect size indicators--currently assume no trend in the data. Regression and multilevel models allow for trend, but usually test only linear trend and have no principled way of knowing if higher order trends should be represented in the model. This paper shows how Generalized…

  4. Mining Peripheral Arterial Disease Cases from Narrative Clinical Notes Using Natural Language Processing

    PubMed Central

    Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G.; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J.; Arruda-Olson, Adelaide M.

    2016-01-01

    Objective: Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm to billing code algorithms, using ankle-brachial index (ABI) test results as the gold standard. Methods: We compared the performance of the NLP algorithm to 1) results of the gold standard ABI; 2) previously validated algorithms based on relevant ICD-9 diagnostic codes (simple model); and 3) a combination of ICD-9 codes with procedural codes (full model). A dataset of 1,569 PAD patients and controls was randomly divided into training (n=935) and testing (n=634) subsets. Results: We iteratively refined the NLP algorithm in the training set, including narrative note sections, note types and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP: 91.8%, full model: 81.8%, simple model: 83%, P<.001), PPV (NLP: 92.9%, full model: 74.3%, simple model: 79.9%, P<.001), and specificity (NLP: 92.5%, full model: 64.2%, simple model: 75.9%, P<.001). Conclusions: A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. PMID:28189359

  5. Toward Verification of USM3D Extensions for Mixed Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Frink, Neal T.; Ding, Ejiang; Parlette, Edward B.

    2013-01-01

    The unstructured tetrahedral grid cell-centered finite volume flow solver USM3D has recently been extended to handle mixed element grids composed of hexahedral, prismatic, pyramidal, and tetrahedral cells. Presently, two turbulence models, namely, baseline Spalart-Allmaras (SA) and Menter Shear Stress Transport (SST), support mixed element grids. This paper provides an overview of the various numerical discretization options available in the newly enhanced USM3D. Using the SA model, the flow solver extensions are verified on three two-dimensional test cases available on the Turbulence Modeling Resource website at the NASA Langley Research Center. The test cases are zero pressure gradient flat plate, planar shear, and bump-in-channel. The effect of cell topologies on the flow solution is also investigated using the planar shear case. Finally, the assessment of various cell and face gradient options is performed on the zero pressure gradient flat plate case.

  6. Low Speed and High Speed Correlation of SMART Active Flap Rotor Loads

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi B. R.

    2010-01-01

    Measured, open loop and closed loop data from the SMART rotor test in the NASA Ames 40- by 80- Foot Wind Tunnel are compared with CAMRAD II calculations. One open loop high-speed case and four closed loop cases are considered. The closed loop cases include three high-speed cases and one low-speed case. Two of these high-speed cases include a 2 deg flap deflection at 5P case and a test maximum-airspeed case. This study follows a recent, open loop correlation effort that used a simple correction factor for the airfoil pitching moment Mach number. Compared to the earlier effort, the current open loop study considers more fundamental corrections based on advancing blade aerodynamic conditions. The airfoil tables themselves have been studied. Selected modifications to the HH-06 section flap airfoil pitching moment table are implemented. For the closed loop condition, the effect of the flap actuator is modeled by increased flap hinge stiffness. Overall, the open loop correlation is reasonable, thus confirming the basic correctness of the current semi-empirical modifications; the closed loop correlation is also reasonable considering that the current flap model is a first generation model. Detailed correlation results are given in the paper.

  7. Universal Versus Targeted Screening for Lynch Syndrome: Comparing Ascertainment and Costs Based on Clinical Experience.

    PubMed

    Erten, Mujde Z; Fernandez, Luca P; Ng, Hank K; McKinnon, Wendy C; Heald, Brandie; Koliba, Christopher J; Greenblatt, Marc S

    2016-10-01

    Strategies to screen colorectal cancers (CRCs) for Lynch syndrome are evolving rapidly; the optimal strategy remains uncertain. We compared targeted versus universal screening of CRCs for Lynch syndrome. In 2010-2011, we employed targeted screening (age < 60 and/or Bethesda criteria). From 2012 to 2014, we screened all CRCs. Immunohistochemistry for the four mismatch repair proteins was done in all cases, followed by other diagnostic studies as indicated. We modeled the diagnostic costs of detecting Lynch syndrome and estimated the 5-year costs of preventing CRC by colonoscopy screening, using a system dynamics model. Using targeted screening, 51/175 (29 %) cancers fit criteria and were tested by immunohistochemistry; 15/51 (29 %, or 8.6 % of all CRCs) showed suspicious loss of ≥1 mismatch repair protein. Germline mismatch repair gene mutations were found in 4/4 cases sequenced (11 suspected cases did not have germline testing). Using universal screening, 17/292 (5.8 %) screened cancers had abnormal immunohistochemistry suspicious for Lynch syndrome. Germline mismatch repair mutations were found in only 3/10 cases sequenced (7 suspected cases did not have germline testing). The mean cost to identify Lynch syndrome probands was ~$23,333/case for targeted screening and ~$175,916/case for universal screening at our institution. Estimated costs to identify and screen probands and relatives were: targeted, $9798/case and universal, $38,452/case. In real-world Lynch syndrome management, incomplete clinical follow-up was the major barrier to genetic testing. Targeted screening costs 2- to 7.5-fold less than universal screening and rarely misses Lynch syndrome cases. Future changes in testing costs will likely change the optimal algorithm.

  8. Testing of transition-region models: Test cases and data

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.; Dinavahi, Surya; Iyer, Venkit

    1991-01-01

    Mean flow quantities in the laminar turbulent transition region and in the fully turbulent region are predicted with different models incorporated into a 3-D boundary layer code. The predicted quantities are compared with experimental data for a large number of different flows and the suitability of the models for each flow is evaluated.

  9. Computer program for Stirling engine performance calculations

    NASA Technical Reports Server (NTRS)

    Tew, R. C., Jr.

    1983-01-01

    The thermodynamic characteristics of the Stirling engine were analyzed and modeled on a computer to support its development as a possible alternative to the automobile spark ignition engine. The computer model is documented. The documentation includes a user's manual, symbols list, a test case, comparison of model predictions with test results, and a description of the analytical equations used in the model.

  10. A closed form slug test theory for high permeability aquifers.

    PubMed

    Ostendorf, David W; DeGroot, Don J; Dunaj, Philip J; Jakubowski, Joseph

    2005-01-01

    We incorporate a linear estimate of casing friction into the analytical slug test theory of Springer and Gelhar (1991) for high-permeability aquifers. The modified theory elucidates the influence of inertia and casing friction on consistent, closed-form equations for the free surface, pressure, and velocity fluctuations for overdamped and underdamped conditions. A consistent, but small, correction for kinetic energy is included as well. A characteristic velocity linearizes the turbulent casing shear stress so that an analytical solution for attenuated, phase-shifted pressure fluctuations fits a single parameter (the damping frequency) to transducer data from any depth in the casing. Underdamped slug tests of 0.3, 0.6, and 1 m amplitudes at five transducer depths in a 5.1 cm diameter PVC well 21 m deep in the Plymouth-Carver Aquifer yield a consistent hydraulic conductivity of 1.5 × 10⁻³ m/s. The Springer and Gelhar (1991) model underestimates the hydraulic conductivity for these tests by as much as 25% by improperly ascribing smooth turbulent casing friction to the aquifer. The match point normalization of Butler (1998) agrees with our fitted hydraulic conductivity, however, when friction is included in the damping frequency. Zurbuchen et al. (2002) use a numerical model to establish a similar sensitivity of hydraulic conductivity to nonlinear casing friction.
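    A minimal sketch of the single-parameter fit described above: an underdamped water level response in a simplified linearized form, with the damping frequency fitted to synthetic transducer data. The functional form is abbreviated and all numbers are illustrative, not values from the Plymouth-Carver tests.

```python
import numpy as np
from scipy.optimize import curve_fit

w = 2.1    # rad/s; oscillation frequency, assumed known and held fixed here
h0 = 0.6   # m; initial slug displacement

def response(t, b):
    """Underdamped response, simplified to h0 * exp(-b t) * cos(w t)."""
    return h0 * np.exp(-b * t) * np.cos(w * t)

t = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(1)
data = response(t, 0.35) + rng.normal(scale=0.01, size=t.size)  # synthetic

(b_fit,), _ = curve_fit(response, t, data, p0=[0.2])
print(f"fitted damping frequency: {b_fit:.3f} 1/s")
```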

  11. Reflection on design and testing of pancreatic alpha-amylase inhibitors: an in silico comparison between rat and rabbit enzyme models

    PubMed Central

    2012-01-01

    Background: Inhibitors of pancreatic alpha-amylase are potential drugs to treat diabetes and obesity. In order to find compounds that would be effective amylase inhibitors, in vitro and in vivo models are usually used. The accuracy of models is limited, but these tools are nonetheless valuable. In vitro models can be used in large screenings involving thousands of chemicals that are tested to find potential lead compounds. In vivo models are still used as a preliminary means of testing a compound's behavior in the whole organism. In the case of alpha-amylase inhibitors, both rats and rabbits could be chosen as in vivo models. The question was which animal could represent the human pancreatic alpha-amylase more accurately. Results: As there is no crystal structure of these enzymes, a molecular modeling study was done in order to compare the rabbit and rat enzymes with the human one. The overall result is that the rabbit enzyme could probably be a better choice in this regard, but in the case of large ligands, which could make putative interactions with the −4 subsite of pancreatic alpha-amylase, interpretation of results should be made cautiously. Conclusion: Molecular modeling tools could be used to choose the most suitable model enzyme that would help to identify new enzyme inhibitors. In the case of alpha-amylase, three-dimensional structures of animal enzymes show differences with the human one which should be taken into account when testing potential new drugs. PMID:23352052

  12. Wellbore Seal Repair Using Nanocomposite Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stormont, John

    2016-08-31

    Nanocomposite wellbore repair materials have been developed, tested, and modeled through an integrated program of laboratory testing and numerical modeling. Numerous polymer-cement nanocomposites were synthesized as candidate wellbore repair materials using various combinations of base polymers and nanoparticles. Based on tests of bond strength to steel and cement, ductility, stability, flowability, and penetrability in openings of 50 microns and less, we identified Novolac epoxy reinforced with multi-walled carbon nanotubes and/or alumina nanoparticles to be a superior wellbore seal material compared to conventional microfine cements. A system was developed for testing damaged and repaired wellbore specimens comprised of a cement sheath cast on a steel casing. The system allows independent application of confining pressures and casing pressures while gas flow is measured through the specimens along the wellbore axis. Repair with the nanocomposite epoxy base material was successful in dramatically reducing the flow through flaws of various sizes and types, and restoring the specimen to a condition comparable to intact. In contrast, repair of damaged specimens with microfine cement was less effective, and the repair degraded with application of stress. Post-test observations confirm the complete penetration and sealing of flaws using the nanocomposite epoxy base material. A number of modeling efforts have supported the material development and testing efforts. We have modeled the steel-repair material interface behavior in detail during slant shear tests, which we used to characterize the bond strength of candidate repair materials. A numerical model of the laboratory testing of damaged wellbore specimens was developed. This investigation found that microannulus permeability can satisfactorily be described by a joint model. Finally, a wellbore model has been developed that can be used to evaluate the response of the wellbore system (casing, cement, and microannulus), including the use of either cement or a nanocomposite in the microannulus to represent a repaired system. This wellbore model was successfully coupled with a field-scale model of CO2 injection, to enable predictions of stresses and strains in the wellbore subjected to subsurface changes (i.e. domal uplift) associated with fluid injection.

  13. Parallelization of elliptic solver for solving 1D Boussinesq model

    NASA Astrophysics Data System (ADS)

    Tarwidi, D.; Adytia, D.

    2018-03-01

    In this paper, a parallel implementation of an elliptic solver for the 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by applying a staggered-grid scheme to the continuity, momentum, and elliptic equations of the model. The tridiagonal system emerging from the numerical scheme of the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared-memory architecture using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two numerical test cases, the propagation of a solitary wave and of a standing wave, are used to evaluate the parallel program, and the numerical results are verified against the analytical solutions. The best speedup for the solitary and standing wave test cases is about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, both obtained with 8 threads. The best efficiency of the parallel program is 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
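
    The cyclic reduction step is what exposes the parallelism: at each level, every even-stride equation is updated independently of the others, so a level's updates can be distributed across threads. Below is a minimal NumPy sketch of the algorithm for a system of size n = 2^m - 1 (Python stands in for illustration; the paper's implementation is OpenMP on shared-memory multicore processors, and all names here are ours).

    ```python
    import numpy as np

    def cyclic_reduction(a, b, c, d):
        """Solve a tridiagonal system (sub-, main-, super-diagonal a, b, c;
        right-hand side d) of size n = 2**m - 1 by cyclic reduction. The
        index vector i at each level is the loop a parallel version shares
        out among threads."""
        n = len(b)
        m = int(np.log2(n + 1))
        assert 2**m - 1 == n, "cyclic reduction needs n = 2**m - 1"
        # ghost rows at positions 0 and n+1 so i-h and i+h never leave the array
        A = np.zeros(n + 2); B = np.ones(n + 2); C = np.zeros(n + 2); D = np.zeros(n + 2)
        A[1:n+1], B[1:n+1], C[1:n+1], D[1:n+1] = a, b, c, d
        for l in range(1, m):                    # forward elimination levels
            h, s = 2**(l - 1), 2**l
            i = np.arange(s, n + 1, s)           # independent equations at this level
            al = A[i] / B[i - h]
            ga = C[i] / B[i + h]
            B[i] -= al * C[i - h] + ga * A[i + h]
            D[i] -= al * D[i - h] + ga * D[i + h]
            A[i] = -al * A[i - h]
            C[i] = -ga * C[i + h]
        x = np.zeros(n + 2)
        for l in range(m, 0, -1):                # back substitution, level-parallel too
            h = 2**(l - 1)
            i = np.arange(h, n + 1, 2**l)
            x[i] = (D[i] - A[i] * x[i - h] - C[i] * x[i + h]) / B[i]
        return x[1:n+1]

    # quick self-check on a diffusion-like system
    n = 2**10 - 1
    a = np.full(n, -1.0); a[0] = 0.0
    c = np.full(n, -1.0); c[-1] = 0.0
    b = np.full(n, 2.0)
    d = np.random.default_rng(0).random(n)
    x = cyclic_reduction(a, b, c, d)
    residual = a * np.r_[0.0, x[:-1]] + b * x + c * np.r_[x[1:], 0.0] - d
    print(np.abs(residual).max())                # ~1e-13
    ```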

  14. Consistency of QSAR models: Correct split of training and test sets, ranking of models and performance parameters.

    PubMed

    Rácz, A; Bajusz, D; Héberger, K

    2015-01-01

    Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
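
    Sum of ranking differences reduces to a few lines once the merit values are tabulated: rank the objects by each model, rank them by a reference (commonly the row average when no gold standard exists), and sum the absolute rank differences per model. A minimal sketch with invented RMSE values follows; the full SRD procedure also validates the scores against random rankings (the CRRN step), which is omitted here.

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def srd(scores, reference=None):
        """Sum of ranking differences: scores is (n_objects, n_models);
        the reference ranking defaults to the row average. A smaller SRD
        means closer agreement with the reference ranking."""
        scores = np.asarray(scores, float)
        ref = scores.mean(axis=1) if reference is None else np.asarray(reference, float)
        r_ref = rankdata(ref)
        return np.array([np.abs(rankdata(col) - r_ref).sum() for col in scores.T])

    # three models' RMSE on six validation folds (hypothetical numbers)
    rmse = np.array([[0.31, 0.42, 0.30],
                     [0.28, 0.40, 0.29],
                     [0.35, 0.45, 0.33],
                     [0.30, 0.41, 0.31],
                     [0.33, 0.44, 0.32],
                     [0.29, 0.43, 0.28]])
    print(srd(rmse))   # the model with the smallest SRD ranks most consistently
    ```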

  15. A moist aquaplanet variant of the Held–Suarez test for atmospheric model dynamical cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thatcher, Diana R.; Jablonowski, Christiane

    2016-04-04

    A moist idealized test case (MITC) for atmospheric model dynamical cores is presented. The MITC is based on the Held–Suarez (HS) test that was developed for dry simulations on “a flat Earth” and replaces the full physical parameterization package with a Newtonian temperature relaxation and Rayleigh damping of the low-level winds. This new variant of the HS test includes moisture and thereby sheds light on the nonlinear dynamics–physics moisture feedbacks without the complexity of full-physics parameterization packages. In particular, it adds simplified moist processes to the HS forcing to model large-scale condensation, boundary-layer mixing, and the exchange of latent and sensible heat between the atmospheric surface and an ocean-covered planet. Using a variety of dynamical cores of the National Center for Atmospheric Research (NCAR)'s Community Atmosphere Model (CAM), this paper demonstrates that the inclusion of the moist idealized physics package leads to climatic states that closely resemble aquaplanet simulations with complex physical parameterizations. This establishes that the MITC approach generates reasonable atmospheric circulations and can be used for a broad range of scientific investigations. This paper provides examples of two application areas. First, the test case reveals the characteristics of the physics–dynamics coupling technique and reproduces coupling issues seen in full-physics simulations. In particular, it is shown that sudden adjustments of the prognostic fields due to moist physics tendencies can trigger undesirable large-scale gravity waves, which can be remedied by a more gradual application of the physical forcing. Second, the moist idealized test case can be used to intercompare dynamical cores. These examples demonstrate the versatility of the MITC approach, and suggestions are made for further application areas. Furthermore, the new moist variant of the HS test can be considered a test case of intermediate complexity.
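
    The simplified moist physics can be captured in a few lines per time step: relax the temperature toward an equilibrium profile, then remove any supersaturation and heat the air by L/cp per unit of condensed water. A cartoon of that split step, with a single constant relaxation timescale and a one-pass saturation adjustment; the actual MITC forcing uses the spatially varying HS coefficients and the full formulation of the paper.

    ```python
    import numpy as np

    L, CP, RV = 2.5e6, 1004.0, 461.5        # J/kg, J/(kg K), J/(kg K)

    def qsat(T, p):
        """Approximate saturation specific humidity via Clausius-Clapeyron."""
        es = 610.78 * np.exp(-(L / RV) * (1.0 / T - 1.0 / 273.16))
        return 0.622 * es / p

    def moist_idealized_step(T, q, p, T_eq, dt, tau=40.0 * 86400.0):
        """One split physics step: Newtonian relaxation toward T_eq, then
        large-scale condensation of the supersaturated moisture."""
        T = T + dt * (T_eq - T) / tau            # HS-style temperature relaxation
        dq = np.maximum(q - qsat(T, p), 0.0)     # excess over saturation
        return T + (L / CP) * dq, q - dq         # latent heating, vapor removal

    T, q = moist_idealized_step(np.array([288.0]), np.array([0.012]),
                                np.array([1.0e5]), np.array([285.0]), 1800.0)
    print(T, q)   # condensation warms the parcel and dries q back toward saturation
    ```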

  17. Optimal design of studies of influenza transmission in households. II: comparison between cohort and case-ascertained studies.

    PubMed

    Klick, B; Nishiura, H; Leung, G M; Cowling, B J

    2014-04-01

    Both case-ascertained household studies, in which households are recruited after an 'index case' is identified, and household cohort studies, where a household is enrolled before the start of the epidemic, may be used to test and estimate the protective effect of interventions used to prevent influenza transmission. A simulation approach parameterized with empirical data from household studies was used to evaluate and compare the statistical power of four study designs: a cohort study with routine virological testing of household contacts of an infected index case, a cohort study where only household contacts with acute respiratory illness (ARI) are sampled for virological testing, a case-ascertained study with routine virological testing of household contacts, and a case-ascertained study where only household contacts with ARI are sampled for virological testing. We found that a case-ascertained study with ARI-triggered testing would be the most powerful design, while a cohort design testing only household contacts with ARI would be the least powerful. Sensitivity analysis demonstrated that these conclusions varied with model parameters including the serial interval and the risk of influenza virus infection from outside the household.
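
    The simulation logic is easy to sketch for one of the four designs. The snippet below estimates power for a case-ascertained study with routine virological testing of contacts, under invented values for household size, secondary attack risk, and intervention effect; the paper's simulations are parameterized from empirical household data and additionally model the serial interval, within-household correlation, and community infection risk, none of which appear here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def power_case_ascertained(n_households=100, contacts=3, sar=0.15,
                               effect=0.5, n_sim=2000):
        """Monte Carlo power: households with one index case are randomized
        1:1 to intervention or control and every contact is tested, so each
        contact's infection status is observed. 'effect' is the relative
        reduction in the secondary attack risk 'sar'."""
        hits, m = 0, n_households // 2
        n = m * contacts                                   # contacts per arm
        for _ in range(n_sim):
            inf_c = rng.binomial(contacts, sar, m).sum()   # control arm
            inf_t = rng.binomial(contacts, sar * (1 - effect), m).sum()
            p1, p2 = inf_c / n, inf_t / n
            pbar = (inf_c + inf_t) / (2 * n)
            se = np.sqrt(pbar * (1 - pbar) * 2 / n)
            if se > 0 and abs(p1 - p2) / se > 1.96:        # two-sided z test
                hits += 1
        return hits / n_sim

    print(power_case_ascertained())
    ```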

  18. Unconditional or Conditional Logistic Regression Model for Age-Matched Case-Control Data?

    PubMed

    Kuo, Chia-Ling; Duan, Yinghui; Grady, James

    2018-01-01

    Matching on demographic variables is commonly used in case-control studies to adjust for confounding at the design stage. There is a presumption that matched data need to be analyzed by matched methods. Conditional logistic regression has become a standard for matched case-control data to tackle the sparse-data problem. The sparse-data problem, however, may not be a concern for loose-matching data, where the matching between cases and controls is not unique and one case can be matched to other controls without substantially changing the association. Data matched on a few demographic variables are clearly loose-matching data, and we hypothesize that unconditional logistic regression is a proper method for their analysis. To address the hypothesis, we compare unconditional and conditional logistic regression models in terms of precision of estimates and hypothesis testing, using simulated matched case-control data. Our results support our hypothesis; however, the unconditional model is not as robust as the conditional model to matching distortion, in which the matching process makes cases and controls similar not only on the matching variables but also on exposure status. When the study design involves other complex features or the computational burden is high, matching in loose-matching data can be ignored, with negligible loss in testing and estimation, if the distributions of the matching variables are not extremely different between cases and controls.
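
    The comparison is straightforward to reproduce in outline. The sketch below simulates 1:1 age-matched pairs with a true exposure odds ratio of 2 and fits both models; it assumes statsmodels' ConditionalLogit (present in recent statsmodels releases) and is a stand-in for the paper's fuller simulation design. With loose matching of this kind, both estimates should land near log 2.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.conditional_models import ConditionalLogit

    rng = np.random.default_rng(0)
    n_pairs, beta = 500, np.log(2.0)            # true exposure log-odds ratio

    # one case and one control per pair, loosely matched on age (decade bands)
    age = np.repeat(rng.integers(3, 8, n_pairs) * 10, 2)   # shared within pair
    pair = np.repeat(np.arange(n_pairs), 2)
    y = np.tile([1, 0], n_pairs)                            # case, control
    # exposure prevalence drifts with age, plus the case-control effect
    p_exp = 1.0 / (1.0 + np.exp(-(-1.0 + 0.02 * age + beta * y)))
    x = rng.binomial(1, p_exp)

    # unconditional model: adjust for the matching variable directly
    uncond = sm.Logit(y, sm.add_constant(np.column_stack([x, age]))).fit(disp=0)
    # conditional model: pair-specific intercepts removed by conditioning
    cond = ConditionalLogit(y, x[:, None], groups=pair).fit()

    print("unconditional log-OR:", uncond.params[1])
    print("conditional   log-OR:", cond.params[0])
    ```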

  19. Verification of a Constraint Force Equation Methodology for Modeling Multi-Body Stage Separation

    NASA Technical Reports Server (NTRS)

    Tartabini, Paul V.; Roithmayr, Carlos; Toniolo, Matthew D.; Karlgaard, Christopher; Pamadi, Bandu N.

    2008-01-01

    This paper discusses the verification of the Constraint Force Equation (CFE) methodology and its implementation in the Program to Optimize Simulated Trajectories II (POST2) for multibody separation problems using three specially designed test cases. The first test case involves two rigid bodies connected by a fixed joint; the second case involves two rigid bodies connected with a universal joint; and the third test case is that of Mach 7 separation of the Hyper-X vehicle. For the first two cases, the POST2/CFE solutions compared well with those obtained using industry standard benchmark codes, namely AUTOLEV and ADAMS. For the Hyper-X case, the POST2/CFE solutions were in reasonable agreement with the flight test data. The CFE implementation in POST2 facilitates the analysis and simulation of stage separation as an integral part of POST2 for seamless end-to-end simulations of launch vehicle trajectories.

  20. On the Relationship Between Classical Test Theory and Item Response Theory: From One to the Other and Back.

    PubMed

    Raykov, Tenko; Marcoulides, George A

    2016-04-01

    The frequently neglected and often misunderstood relationship between classical test theory and item response theory is discussed for the unidimensional case with binary measures and no guessing. It is pointed out that popular item response models can be directly obtained from classical test theory-based models by accounting for the discrete nature of the observed items. Two distinct observational equivalence approaches are outlined that render the item response models from corresponding classical test theory-based models, and can each be used to obtain the former from the latter models. Similarly, classical test theory models can be furnished using the reverse application of either of those approaches from corresponding item response models.
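
    One of the two observational-equivalence routes has a particularly compact algebraic form in the normal-ogive case: a standardized factor loading lambda and threshold tau from the CTT-based model map to the 2PL discrimination a = lambda/sqrt(1 - lambda^2) and difficulty b = tau/lambda, and the mapping inverts cleanly. A sketch of the round trip, with illustrative numbers rather than ones from the article:

    ```python
    import numpy as np

    def ctt_to_irt(lam, tau):
        """Map a binary item's standardized loading and threshold to 2PL
        discrimination and difficulty (normal-ogive metric, no guessing)."""
        a = lam / np.sqrt(1.0 - lam**2)
        return a, tau / lam

    def irt_to_ctt(a, b):
        """Reverse direction: recover the loading and threshold from a, b."""
        lam = a / np.sqrt(1.0 + a**2)
        return lam, lam * b

    a, b = ctt_to_irt(0.7, 0.35)
    print(a, b)              # ~0.980, 0.5
    print(irt_to_ctt(a, b))  # round-trips to (0.7, 0.35)
    ```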

  1. Incorporating single-side sparing in models for predicting parotid dose sparing in head and neck IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Lulin, E-mail: lulin.yuan@duke.edu; Wu, Q. Jackie; Yin, Fang-Fang

    2014-02-15

    Purpose: Sparing of a single-side parotid gland is common practice in head-and-neck (HN) intensity modulated radiation therapy (IMRT) planning. It is a special case of the dose-sparing tradeoff between different organs-at-risk. The authors describe an improved mathematical model for predicting achievable dose sparing in the parotid glands in HN IMRT planning that incorporates single-side sparing considerations based on patient anatomy and learning from prior plan data. Methods: Among 68 HN cases analyzed retrospectively, 35 cases had physician-prescribed single-side parotid sparing preferences. The single-side sparing model was trained with the cases that had single-side sparing preferences, while the standard model was trained with the remainder of the cases. A receiver operating characteristic (ROC) analysis was performed to determine the best criterion separating the two case groups, using the physician's single-side sparing prescription as ground truth. The final predictive model (combined model) takes the single-side sparing into account by switching between the standard and single-side sparing models according to the single-side sparing criterion. The models were tested with 20 additional cases. The significance of the improvement in prediction accuracy of the combined model over the standard model was evaluated using the Wilcoxon rank-sum test. Results: Using the ROC analysis, the best single-side sparing criterion is (1) the predicted median dose of one parotid is higher than 24 Gy; and (2) that of the other is higher than 7 Gy. This criterion gives a true positive rate of 0.82 and a false positive rate of 0.19. For the bilateral sparing cases, the combined and the standard models performed equally well, with the median of the prediction errors for parotid median dose being 0.34 Gy for both models (p = 0.81). For the single-side sparing cases, the standard model overestimates the median dose by 7.8 Gy on average, while the predictions by the combined model differ from the actual values by only 2.2 Gy (p = 0.005). Similarly, the sum of residuals between the modeled and the actual plan DVHs is the same for the bilateral sparing cases for both models (p = 0.67), while the standard model predicts significantly higher DVHs than the combined model for the single-side sparing cases (p = 0.01). Conclusions: The combined model for predicting parotid sparing that takes single-side sparing into account improves the prediction accuracy over the previous model.
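
    The model-switching logic reduces to a one-line rule once the two predicted parotid median doses are in hand. A minimal sketch of the criterion quoted above (function and argument names are ours, not the paper's):

    ```python
    def choose_parotid_model(median_dose_a_gy, median_dose_b_gy):
        """Apply the ROC-derived criterion: flag a case for the single-side
        sparing model when one predicted parotid median dose exceeds 24 Gy
        and the other exceeds 7 Gy; otherwise use the standard model."""
        hi = max(median_dose_a_gy, median_dose_b_gy)
        lo = min(median_dose_a_gy, median_dose_b_gy)
        return "single-side" if (hi > 24.0 and lo > 7.0) else "standard"

    print(choose_parotid_model(26.1, 9.3))   # single-side
    print(choose_parotid_model(22.0, 5.2))   # standard
    ```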

  3. CoopEUS Case Study: Tsunami Modelling and Early Warning Systems for Near Source Areas (Mediterranean, Juan de Fuca).

    NASA Astrophysics Data System (ADS)

    Beranzoli, Laura; Best, Mairi; Chierici, Francesco; Embriaco, Davide; Galbraith, Nan; Heeseman, Martin; Kelley, Deborah; Pirenne, Benoit; Scofield, Oscar; Weller, Robert

    2015-04-01

    There is a need for tsunami modeling and early warning systems for near-source areas; this is a common public safety threat in the Mediterranean and the Juan de Fuca/NE Pacific coast of North America, regions covered by the EMSO, OOI, and ONC ocean observatories. Through the CoopEUS international cooperation project, a number of environmental research infrastructures have come together to coordinate efforts on environmental challenges; this tsunami case study tackles one such challenge. There is a mutual need for tsunami event field data and modeling to deepen our experience in testing methodology and developing real-time data processing. Tsunami field data are already available for past events; part of this use case compares these data for compatibility, performs gap analysis, and groundtruths the models. It also reviews the sensors needed and harmonizes instrument settings. Sensor metadata and registries are compared, harmonized, and aligned. Data policies and access are also compared and assessed for gaps. Modelling algorithms are compared and tested against archived and real-time data. This case study will then be extended to other related tsunami data and model sources globally with similar geographic and seismic scenarios.

  4. Gifted and Talented Education: A National Test Case in Peoria.

    ERIC Educational Resources Information Center

    Fetterman, David M.

    1986-01-01

    This article presents a study of a program in Peoria, Illinois, for the gifted and talented that serves as a national test case for gifted education and minority enrollment. It was concluded that referral, identification, and selection were appropriate for the program model but that inequalities resulted from socioeconomic variables. (Author/LMO)

  5. Testing the EKC hypothesis by considering trade openness, urbanization, and financial development: the case of Turkey.

    PubMed

    Ozatac, Nesrin; Gokmenoglu, Korhan K; Taspinar, Nigar

    2017-07-01

    This study investigates the environmental Kuznets curve (EKC) hypothesis for the case of Turkey from 1960 to 2013 by considering energy consumption, trade, urbanization, and financial development variables. Although previous literature examines various aspects of the EKC hypothesis for the case of Turkey, our model augments the basic model with several covariates to develop a better understanding of the relationships among the variables and to avoid omitted-variable bias. The results of the bounds test and the error correction model under an autoregressive distributed lag (ARDL) framework suggest long-run relationships among the variables as well as evidence of the EKC and the scale effect in Turkey. A conditional Granger causality test reveals that there are causal relationships among the variables. Our findings have policy implications, including the imposition of a "polluter pays" mechanism, such as the implementation of a carbon tax for pollution trading, to raise the urban population's awareness of the importance of adopting renewable energy and to support clean, environmentally friendly technology.
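
    The core EKC specification behind such tests is a quadratic in income: emissions first rise and then fall if the income coefficient is positive and the squared-income coefficient is negative. The sketch below checks that shape on synthetic data with plain OLS; the study itself works in an ARDL bounds-testing framework with the additional covariates named above, which is substantially more involved.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    T = 54                                      # annual data, 1960-2013
    ln_gdp = np.linspace(7.5, 9.5, T) + rng.normal(0, 0.05, T)
    ln_co2 = 1.2 * ln_gdp - 0.065 * ln_gdp**2 + rng.normal(0, 0.03, T)

    X = sm.add_constant(np.column_stack([ln_gdp, ln_gdp**2]))
    res = sm.OLS(ln_co2, X).fit()
    b1, b2 = res.params[1], res.params[2]
    print("EKC shape:", b1 > 0 and b2 < 0,
          " turning point at ln(GDP) =", -b1 / (2 * b2))
    ```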

  6. Analysis of messy data with heteroscedastic in mean models

    NASA Astrophysics Data System (ADS)

    Trianasari, Nurvita; Sumarni, Cucu

    2016-02-01

    In data analysis, we are often faced with data that do not meet some of the assumptions; such data are often called messy data. This problem is a consequence of outliers that bias the estimation. To analyze messy data, there are three approaches: standard analysis, data transformation, and non-standard methods of analysis. Simulations were conducted to determine the comparative performance of three test procedures for comparing means when the variances are not homogeneous. The data for each scenario were simulated 500 times, and the mean-comparison tests were then performed using three methods: the Welch test, mixed models, and the Welch test on ranks (Welch-r). Data generation was done in R version 3.1.2. Based on the simulation results, all three methods can be used in both the normal and the heteroscedastic case. The three methods work very well on balanced or unbalanced data when there is no violation of the homogeneity-of-variance assumption. For balanced data, the three methods still show excellent performance despite violation of that assumption, even when the degree of heterogeneity is high: the power of the tests stays above 90 percent, best for the Welch method (98.4%) and the Welch-r method (97.8%). For unbalanced data, the Welch method is very good in the case of positively paired heterogeneity, with 98.2% power; the mixed-models method is very good in the case of highly heterogeneous negative-negative pairing; and the Welch-r method works very well in both cases. However, if the level of heterogeneity of variance is very high, the power of all methods decreases, especially for the mixed-models method. The methods that still work well enough (power above 50%) for balanced data are the Welch-r method (62.6%) and the Welch method (58.6%). For unbalanced data, the Welch-r method works well enough in the cases of highly heterogeneous positive-positive or negative-negative pairing, with power of 68.8% and 51%, respectively; the Welch method performs well enough only in the case of highly heterogeneous positive-positive pairing, with power of 64.8%; and the mixed-models method is good in the case of highly heterogeneous negative pairing, with 54.6% power. In general, when variances are not homogeneous, the Welch method applied to ranked data (Welch-r) performs better than the other methods.
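
    For two groups the idea is simple to demonstrate: apply Welch's unequal-variance statistic to the raw data and to the rank-transformed data, and compare power by simulation. This is a deliberate reduction of the paper's setting, which compares several group means and includes mixed models as a third procedure; the sample sizes and variances below are invented.

    ```python
    import numpy as np
    from scipy.stats import ttest_ind, rankdata

    rng = np.random.default_rng(42)

    def power(n1=10, n2=30, shift=1.0, s1=1.0, s2=3.0, on_ranks=False, n_sim=4000):
        """Power of Welch's test under unequal variances; on_ranks=True applies
        Welch's statistic to ranks (the 'Welch-r' idea, two-group version)."""
        hits = 0
        for _ in range(n_sim):
            x = rng.normal(0.0, s1, n1)
            y = rng.normal(shift, s2, n2)
            if on_ranks:
                r = rankdata(np.concatenate([x, y]))
                x, y = r[:n1], r[n1:]
            if ttest_ind(x, y, equal_var=False).pvalue < 0.05:
                hits += 1
        return hits / n_sim

    print("Welch  :", power())
    print("Welch-r:", power(on_ranks=True))
    ```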

  7. NREL Improves Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2012-01-01

    This technical highlight describes NREL research to develop the Building Energy Simulation Test for Existing Homes (BESTEST-EX), intended to increase the quality and accuracy of energy analysis tools for the building retrofit market. Researchers at the National Renewable Energy Laboratory (NREL) have developed a new test procedure that enables software developers to evaluate the performance of their audit tools in modeling energy use and savings in existing homes when utility bills are available for model calibration. Similar to NREL's previous energy analysis tests, such as HERS BESTEST and other BESTEST suites included in ANSI/ASHRAE Standard 140, BESTEST-EX compares software simulation findings to reference results generated with state-of-the-art simulation tools such as EnergyPlus, SUNREL, and DOE-2.1E. The BESTEST-EX methodology (1) tests software predictions of retrofit energy savings in existing homes, (2) ensures building physics calculations and utility bill calibration procedures perform to a minimum standard, and (3) quantifies the impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX includes building physics and utility bill calibration test cases. In the utility bill calibration test cases, participants are given input ranges and synthetic utility bills; software tools use the utility bills to calibrate key model inputs and predict energy savings for the retrofit cases. Participants' energy savings predictions using calibrated models are then compared to NREL predictions made with state-of-the-art building energy simulation programs.
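
    The calibration step itself is ordinary inverse modeling: adjust the uncertain audit inputs within their stated ranges until the simulated monthly bills match the synthetic ones, then predict savings with the calibrated inputs. A deliberately toy sketch, with a linear degree-day model standing in for a full simulation engine and all numbers invented:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # toy monthly heating model: bill = UA * HDD + base; UA and base are the
    # uncertain audit inputs, HDD the monthly heating degree-days
    hdd = np.array([600.0, 500, 400, 200, 80, 10, 0, 0, 60, 250, 450, 580])
    bills = 0.9 * hdd + 210.0 + np.random.default_rng(3).normal(0, 15, 12)

    fit = least_squares(lambda p: p[0] * hdd + p[1] - bills,
                        x0=[1.5, 100.0],
                        bounds=([0.5, 50.0], [2.5, 400.0]))   # given input ranges
    ua, base = fit.x
    savings = (ua - 0.6 * ua) * hdd.sum()   # assumed retrofit: 40% lower UA
    print("calibrated UA=%.2f base=%.0f predicted savings=%.0f" % (ua, base, savings))
    ```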

  8. Borehole measurement of the hydraulic properties of low-permeability rock

    NASA Astrophysics Data System (ADS)

    Novakowski, Kentner S.; Bickerton, Gregory S.

    1997-11-01

    Hydraulic tests conducted in low-permeability media are subject to numerous influences and processes, many of which manifest in a nonunique fashion. To explore the accuracy and meaning of the interpretation of hydraulic tests conducted under such conditions, two semianalytical models are developed in which variable wellbore storage, variable temperature, and test method are considered. The formation is assumed to be of uniform permeability and uniform storativity in both models. To investigate uncertainty in the use of these models, a comparison is conducted with similar models that account for nonuniform formation properties such as a finite skin, double porosity, and fractional flow. Using the models for a finite skin and double porosity as baseline cases, the results show that the interpretation of slug tests is normally nonunique when tests are conducted in material of low permeability. Provided that a lower bound is defined for storativity, the uncertainty in a given interpretation conducted with the model for a uniform medium can be established by comparison with a fit to the data obtained using the model incorporating a finite skin. It was also found that the degree of uncertainty can be diminished by conducting the test using an open-hole period followed by a shut-in period (similar to a drill stem test). Determination of the degree of uncertainty was found to be case specific and must be defined by using at least a comparison between the model for uniform media and that for a finite skin. To illustrate the use of the slug test model and determine the degree of uncertainty that accrues with its use, a field example, potentially influenced by variable wellbore storage, is presented and interpreted.
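
    A uniform-formation interpretation is itself only a curve fit, which is where the non-uniqueness enters: the sketch below fits a Hvorslev-type exponential head recovery and converts the basic time lag to hydraulic conductivity for an assumed well geometry. Comparable fits to the same record can often be obtained with finite-skin or double-porosity models at different parameter values. The record and geometry here are hypothetical, and the shape-factor formula assumes a fully screened interval with L_e/R > 8.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # normalized head recovery from a slug test (synthetic record)
    t = np.linspace(0.0, 3600.0, 40)                             # s
    h = np.exp(-t / 900.0) + np.random.default_rng(12).normal(0, 0.01, t.size)

    decay = lambda tt, t0: np.exp(-tt / t0)
    (t0,), _ = curve_fit(decay, t, h, p0=[600.0])

    rc, R, Le = 0.05, 0.075, 2.0   # casing radius, borehole radius, screen length (m)
    K = rc**2 * np.log(Le / R) / (2.0 * Le * t0)                 # Hvorslev estimate
    print("basic time lag t0 = %.0f s, K = %.2e m/s" % (t0, K))
    ```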

  9. Effectiveness of back-to-back testing

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.

    1987-01-01

    Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with the relatively high correlation the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. Implications of this finding are that the multiversion software development is a feasible and cost effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
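
    The mechanics of back-to-back testing are easy to sketch: drive two or more functionally equivalent components with the same random inputs and record every disagreement, each of which implies a failure of at least one component. The snippet below shows the harness with two trivially equivalent sort routines; as the abstract notes, the gain erodes when components share correlated faults, which no harness of this kind can detect.

    ```python
    import random

    def sort_ref(xs):                 # reference component
        return sorted(xs)

    def sort_alt(xs):                 # functionally equivalent second version
        out = list(xs)
        out.sort()
        return out

    def back_to_back(components, n_cases=10000, seed=7):
        """Run every component on the same random inputs; any case where the
        outputs differ is evidence that at least one component failed."""
        rng = random.Random(seed)
        disagreements = []
        for _ in range(n_cases):
            case = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
            outputs = {tuple(comp(case)) for comp in components}
            if len(outputs) > 1:
                disagreements.append(case)
        return disagreements

    print(len(back_to_back([sort_ref, sort_alt])))   # 0: the versions agree
    ```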

  10. How Participatory Should Environmental Governance Be? Testing the Applicability of the Vroom-Yetton-Jago Model in Public Environmental Decision-Making

    NASA Astrophysics Data System (ADS)

    Lührs, Nikolas; Jager, Nicolas W.; Challies, Edward; Newig, Jens

    2018-02-01

    Public participation is potentially useful to improve public environmental decision-making and management processes. In corporate management, the Vroom-Yetton-Jago normative decision-making model has served as a tool to help managers choose appropriate degrees of subordinate participation for effective decision-making given varying decision-making contexts. But does the model recommend participatory mechanisms that would actually benefit environmental management? This study empirically tests the improved Vroom-Jago version of the model in the public environmental decision-making context. To this end, the key variables of the Vroom-Jago model are operationalized and adapted to a public environmental governance context. The model is tested using data from a meta-analysis of 241 published cases of public environmental decision-making, yielding three main sets of findings: (1) The Vroom-Jago model proves limited in its applicability to public environmental governance due to limited variance in its recommendations. We show that adjustments to key model equations make it more likely to produce meaningful recommendations. (2) We find that in most of the studied cases, public environmental managers (implicitly) employ levels of participation close to those that would have been recommended by the model. (3) An ANOVA revealed that such cases, which conform to model recommendations, generally perform better on stakeholder acceptance and environmental standards of outputs than those that diverge from the model. Public environmental management thus benefits from carefully selected and context-sensitive modes of participation.

  12. Economic Crisis and Marital Problems in Turkey: Testing the Family Stress Model

    ERIC Educational Resources Information Center

    Aytac, Isik A.; Rankin, Bruce H.

    2009-01-01

    This paper applied the family stress model to the case of Turkey in the wake of the 2001 economic crisis. Using structural equation modeling and a nationally representative urban sample of 711 married women and 490 married men, we tested whether economic hardship and the associated family economic strain on families resulted in greater marital…

  13. Analysis of the return period and correlation between the reservoir-induced seismic frequency and the water level based on a copula: A case study of the Three Gorges reservoir in China

    NASA Astrophysics Data System (ADS)

    Liu, Xiaofei; Zhang, Qiuwen

    2016-11-01

    Studies have considered the many factors involved in the mechanism of reservoir-induced seismicity. Focusing on the correlation between reservoir-induced seismicity and the water level, this study proposes to utilize copula theory to build a correlation model to analyze their relationship and perform a risk analysis. The sequence of reservoir-induced seismic events from 2003 to 2011 in the Three Gorges reservoir in China is used as a case study to test this new methodology. We construct four correlation models based on the Gumbel, Clayton, and Frank copulas and the M-copula, and employ four methods to test the goodness of fit: Q-Q plots, the Kolmogorov-Smirnov (K-S) test, the minimum distance (MD) test and the Akaike Information Criterion (AIC) test. Comparing the four models, the M-copula model fits the sample better than the other three. Based on the M-copula model, we find that for a sudden drawdown of the water level the probability of a decrease in seismic frequency increases markedly, whereas for a sudden rise of the water level the probability of an increase in seismic frequency increases markedly, with the former effect being greater than the latter. The seismic frequency is mainly distributed in the low-frequency region (Y ⩽ 20) for the low water level and in the middle-frequency region (20 < Y ≤ 80) for both the medium and high water levels; the high-frequency region (Y > 80) is the least likely. For the conditional return period, the period of the high-frequency seismicity is much longer than those of the normal- and medium-frequency seismicity, and the high water level shortens the periods.
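
    The building blocks are compact enough to sketch. Below, a Gumbel copula, one of the four candidates above (the paper ultimately prefers the M-copula), is fitted through the Kendall's-tau moment relation theta = 1/(1 - tau) and used to turn a joint exceedance probability into a return period; the water-level and frequency series are synthetic stand-ins for the Three Gorges data.

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    def gumbel_cdf(u, v, theta):
        """Gumbel copula C(u, v; theta), theta >= 1."""
        return np.exp(-(((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta)))

    rng = np.random.default_rng(5)
    level = rng.gamma(9.0, 18.0, 108)                  # monthly water level (synthetic)
    freq = 0.4 * level + rng.gamma(2.0, 10.0, 108)     # monthly seismic frequency

    tau, _ = kendalltau(level, freq)
    theta = 1.0 / (1.0 - tau)                          # Gumbel moment estimate

    # probability and return period of both series exceeding their 90th percentiles
    u0 = v0 = 0.90
    p_joint = 1.0 - u0 - v0 + gumbel_cdf(u0, v0, theta)      # P(U>u0, V>v0)
    print("theta=%.2f  P=%.4f  T=%.1f years" % (theta, p_joint, 1.0 / (12 * p_joint)))
    ```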

  14. Lattice Boltzmann scheme for mixture modeling: analysis of the continuum diffusion regimes recovering Maxwell-Stefan model and incompressible Navier-Stokes equations.

    PubMed

    Asinari, Pietro

    2009-11-01

    A finite difference lattice Boltzmann scheme for homogeneous mixture modeling, which recovers the Maxwell-Stefan diffusion model in the continuum limit without the restriction of the mixture-averaged diffusion approximation, was recently proposed [P. Asinari, Phys. Rev. E 77, 056706 (2008)]. The theoretical basis is the Bhatnagar-Gross-Krook-type kinetic model for gas mixtures [P. Andries, K. Aoki, and B. Perthame, J. Stat. Phys. 106, 993 (2002)]. In the present paper, the recovered macroscopic equations in the continuum limit are systematically investigated by varying the ratio between the characteristic diffusion speed and the characteristic barycentric speed. It turns out that the diffusion speed must be at least one order of magnitude (in terms of Knudsen number) smaller than the barycentric speed in order to recover the Navier-Stokes equations for mixtures in the incompressible limit. Some further numerical tests are also reported. In particular, (1) the solvent and dilute test cases are considered, because they are limiting cases in which the Maxwell-Stefan model reduces automatically to the Fickian case. Moreover, (2) some tests based on the Stefan diffusion tube are reported, proving the complete capabilities of the proposed scheme in solving Maxwell-Stefan diffusion problems. The proposed scheme agrees well with the expected theoretical results.

  15. Mining peripheral arterial disease cases from narrative clinical notes using natural language processing.

    PubMed

    Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J; Arruda-Olson, Adelaide M

    2017-06-01

    Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm with billing code algorithms, using ankle-brachial index test results as the gold standard. We compared the performance of the NLP algorithm to (1) results of gold standard ankle-brachial index; (2) previously validated algorithms based on relevant International Classification of Diseases, Ninth Revision diagnostic codes (simple model); and (3) a combination of International Classification of Diseases, Ninth Revision codes with procedural codes (full model). A dataset of 1569 patients with PAD and controls was randomly divided into training (n = 935) and testing (n = 634) subsets. We iteratively refined the NLP algorithm in the training set including narrative note sections, note types, and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP, 91.8%; full model, 81.8%; simple model, 83%; P < .001), positive predictive value (NLP, 92.9%; full model, 74.3%; simple model, 79.9%; P < .001), and specificity (NLP, 92.5%; full model, 64.2%; simple model, 75.9%; P < .001). A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
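
    At its core such a system pairs trigger patterns with negation handling and is then scored against the gold standard. The sketch below shows that skeleton with made-up patterns and a small metrics helper; the published algorithm was iteratively refined over note sections, note types, and service types, none of which is modeled here.

    ```python
    import re

    # hypothetical trigger and negation patterns for PAD mentions
    PAD_PAT = re.compile(r"\b(peripheral arter(y|ial) disease|PAD|claudication)\b", re.I)
    NEG_PAT = re.compile(r"\b(no|denies|without|negative for)\b", re.I)

    def nlp_pad_case(note_sections):
        """A patient is called a PAD case if any eligible note section
        contains a PAD mention that is not negated in the 40 characters
        preceding it."""
        for text in note_sections:
            for m in PAD_PAT.finditer(text):
                window = text[max(0, m.start() - 40):m.start()]
                if not NEG_PAT.search(window):
                    return True
        return False

    def metrics(pred, gold):
        tp = sum(p and g for p, g in zip(pred, gold))
        tn = sum(not p and not g for p, g in zip(pred, gold))
        fp = sum(p and not g for p, g in zip(pred, gold))
        return {"accuracy": (tp + tn) / len(gold),
                "ppv": tp / (tp + fp) if tp + fp else float("nan"),
                "specificity": tn / (tn + fp) if tn + fp else float("nan")}

    notes = [["Impression: severe claudication of the left leg."],
             ["Patient denies claudication; ABI within normal limits."]]
    gold = [True, False]
    pred = [nlp_pad_case(n) for n in notes]
    print(pred, metrics(pred, gold))
    ```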

  16. NASA Structural Analysis Report on the American Airlines Flight 587 Accident - Local Analysis of the Right Rear Lug

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Glaessgen, Edward H.; Mason, Brian H.; Krishnamurthy, Thiagarajan; Davila, Carlos G.

    2005-01-01

    A detailed finite element analysis of the right rear lug of the American Airlines Flight 587 - Airbus A300-600R was performed as part of the National Transportation Safety Board's failure investigation of the accident that occurred on November 12, 2001. The loads experienced by the right rear lug are evaluated using global models of the vertical tail, local models near the right rear lug, and a global-local analysis procedure. The right rear lug was analyzed using two modeling approaches. In the first approach, solid-shell type modeling is used, and in the second approach, layered-shell type modeling is used. The solid-shell and the layered-shell modeling approaches were used in progressive failure analyses (PFA) to determine the load, mode, and location of failure in the right rear lug under loading representative of an Airbus certification test conducted in 1985 (the 1985-certification test). Both analyses were in excellent agreement with each other on the predicted failure loads, failure mode, and location of failure. The solid-shell type modeling was then used to analyze both a subcomponent test conducted by Airbus in 2003 (the 2003-subcomponent test) and the accident condition. Excellent agreement was observed between the analyses and the observed failures in both cases. From the analyses conducted and presented in this paper, the following conclusions were drawn. The moment, Mx (moment about the fuselage longitudinal axis), has a significant effect on the failure load of the lugs: higher absolute values of Mx give lower failure loads. The predicted load, mode, and location of the failure for the 1985-certification test, the 2003-subcomponent test, and the accident condition are in very good agreement. This agreement suggests that the 1985-certification and 2003-subcomponent tests represent the accident condition accurately. The failure mode of the right rear lug for the 1985-certification test, 2003-subcomponent test, and the accident load case is identified as a cleavage-type failure. For the accident case, the predicted failure load for the right rear lug from the PFA is greater than 1.98 times the limit load of the lugs.

  17. Structural Analysis of the Right Rear Lug of American Airlines Flight 587

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Glaessgen, Edward H.; Mason, Brian H.; Krishnamurthy, Thiagarajan; Davila, Carlos G.

    2006-01-01

    A detailed finite element analysis of the right rear lug of the American Airlines Flight 587 - Airbus A300-600R was performed as part of the National Transportation Safety Board's failure investigation of the accident that occurred on November 12, 2001. The loads experienced by the right rear lug are evaluated using global models of the vertical tail, local models near the right rear lug, and a global-local analysis procedure. The right rear lug was analyzed using two modeling approaches. In the first approach, solid-shell type modeling is used, and in the second approach, layered-shell type modeling is used. The solid-shell and the layered-shell modeling approaches were used in progressive failure analyses (PFA) to determine the load, mode, and location of failure in the right rear lug under loading representative of an Airbus certification test conducted in 1985 (the 1985-certification test). Both analyses were in excellent agreement with each other on the predicted failure loads, failure mode, and location of failure. The solid-shell type modeling was then used to analyze both a subcomponent test conducted by Airbus in 2003 (the 2003-subcomponent test) and the accident condition. Excellent agreement was observed between the analyses and the observed failures in both cases. The moment, Mx (moment about the fuselage longitudinal axis), has a significant effect on the failure load of the lugs: higher absolute values of Mx give lower failure loads. The predicted load, mode, and location of the failure for the 1985-certification test, the 2003-subcomponent test, and the accident condition are in very good agreement. This agreement suggests that the 1985-certification and 2003-subcomponent tests represent the accident condition accurately. The failure mode of the right rear lug for the 1985-certification test, 2003-subcomponent test, and the accident load case is identified as a cleavage-type failure. For the accident case, the predicted failure load for the right rear lug from the PFA is greater than 1.98 times the limit load of the lugs.

  18. Prediction model for the return to work of workers with injuries in Hong Kong.

    PubMed

    Xu, Yanwen; Chan, Chetwyn C H; Lo, Karen Hui Yu-Ling; Tang, Dan

    2008-01-01

    This study attempts to formulate a prediction model of return to work for a group of workers in Hong Kong who had been suffering from chronic pain and physical injury while also being out of work. The study used the Case-based Reasoning (CBR) method and compared the result with the statistical method of a logistic regression model. The case database for the CBR algorithm was composed of 67 cases, which were also used in the logistic regression model. The testing cases were 32 participants with a background and characteristics similar to those in the database. The methods of setting constraints and a Euclidean distance metric were used in CBR to search the case matrix for the cases closest to the trial case. The usefulness of the algorithm was tested on the 32 new participants, and the accuracy of predicting return-to-work outcomes was 62.5%, which was no better than the 71.2% accuracy derived from the logistic regression model. The results of the study enable a better understanding of CBR applied in the field of occupational rehabilitation by comparison with conventional regression analysis. The findings also shed light on the development of relevant interventions for the return-to-work process of these workers.
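
    Retrieval in such a CBR system is essentially nearest-neighbor search on standardized features followed by a vote of the retrieved cases. A self-contained sketch (the feature count, data, and majority-vote rule are illustrative assumptions; the study additionally applied constraints before the distance search):

    ```python
    import numpy as np

    def cbr_predict(case, database, outcomes, k=3):
        """Retrieve the k stored cases closest to the trial case (Euclidean
        distance on z-scored features) and vote on the outcome."""
        db = np.asarray(database, float)
        mu, sd = db.mean(axis=0), db.std(axis=0) + 1e-12
        d = np.linalg.norm((db - mu) / sd - (np.asarray(case, float) - mu) / sd, axis=1)
        nearest = np.argsort(d)[:k]
        return int(np.asarray(outcomes)[nearest].mean() >= 0.5)

    rng = np.random.default_rng(8)
    db = rng.normal(size=(67, 5))             # 67 stored cases, 5 features
    out = rng.integers(0, 2, 67)              # 1 = returned to work
    new = rng.normal(size=(32, 5))            # 32 new participants
    pred = [cbr_predict(c, db, out) for c in new]
    print(sum(pred), "of", len(pred), "predicted to return to work")
    ```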

  19. Validation of 2D flood models with insurance claims

    NASA Astrophysics Data System (ADS)

    Zischg, Andreas Paul; Mosimann, Markus; Bernet, Daniel Benjamin; Röthlisberger, Veronika

    2018-02-01

    Flood impact modelling requires reliable models for the simulation of flood processes. In recent years, flood inundation models have been remarkably improved and widely used for flood hazard simulation, flood exposure, and loss analyses. In this study, we validate a 2D inundation model for the purpose of flood exposure analysis at the river reach scale. We validate the BASEMENT simulation model with insurance claims using conventional validation metrics. The flood model is established on the basis of available topographic data at high spatial resolution for four test cases. The validation metrics were calculated with two different datasets: a dataset of event documentation reporting flooded areas and a dataset of insurance claims. The model fit with respect to insurance claims is, in three out of four test cases, slightly lower than the model fit computed on the basis of the observed inundation areas. This comparison between two independent validation datasets suggests that validation metrics using insurance claims are comparable to conventional validation data, such as the flooded area. However, a validation on the basis of insurance claims might be more conservative in cases where model errors are more pronounced in areas with a high density of values at risk.
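
    The conventional validation metrics in question are contingency-table scores over flooded and non-flooded cells. A short sketch with synthetic masks; in the study, one of the two observation layers is derived from insurance claim locations rather than mapped inundation areas.

    ```python
    import numpy as np

    def validation_metrics(model, observed):
        """Hit rate, false alarm ratio, and critical success index for a
        modelled vs. observed flood mask (2D boolean arrays)."""
        hits = np.logical_and(model, observed).sum()
        false_alarms = np.logical_and(model, ~observed).sum()
        misses = np.logical_and(~model, observed).sum()
        return {"hit rate": hits / (hits + misses),
                "false alarm ratio": false_alarms / (hits + false_alarms),
                "critical success index": hits / (hits + false_alarms + misses)}

    rng = np.random.default_rng(2)
    observed = rng.random((200, 200)) < 0.10
    model = observed ^ (rng.random((200, 200)) < 0.02)   # truth plus some errors
    print(validation_metrics(model, observed))
    ```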

  20. An Illustrative Case Study of the Heuristic Practices of a High-Performing Research Department: Toward Building a Model Applicable in the Context of Large Urban Districts

    ERIC Educational Resources Information Center

    Munoz, Marco A.; Rodosky, Robert J.

    2011-01-01

    This case study provides an illustration of the heuristic practices of a high-performing research department, which in turn, will help build much needed models applicable in the context of large urban districts. This case study examines the accountability, planning, evaluation, testing, and research functions of a research department in a large…

  1. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
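
    The monitoring step is independent of how the engine model is built: form residuals between sensed and model-predicted outputs and flag excursions beyond a band learned from nominal data. A bare-bones sketch with a synthetic seeded step fault; the threshold rule and all numbers are ours, and the paper's architecture uses a piecewise linear engine model with curve-fitted trim points rather than the analytic signal below.

    ```python
    import numpy as np

    def anomaly_flags(sensed, predicted, window=20, k=4.0):
        """Flag samples whose residual leaves a +/- k*sigma band estimated
        from the first 'window' (assumed nominal) samples."""
        r = np.asarray(sensed, float) - np.asarray(predicted, float)
        mu, sigma = r[:window].mean(), r[:window].std() + 1e-12
        return np.abs(r - mu) > k * sigma

    t = np.linspace(0.0, 10.0, 400)
    predicted = 50.0 + 5.0 * np.sin(t)                       # model output
    sensed = predicted + np.random.default_rng(4).normal(0, 0.2, t.size)
    sensed[300:] += 2.0                                      # seeded step fault
    print(np.flatnonzero(anomaly_flags(sensed, predicted))[:5])   # flags from ~300
    ```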

  3. Efficient fuzzy Bayesian inference algorithms for incorporating expert knowledge in parameter estimation

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad

    2016-05-01

    Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in the parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert-provided information. To solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features that make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modeled distinguishably, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes prohibitively large. In this paper, a novel approach for accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. The proposed approach is then applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert elicitation methodology is developed and applied to the real-world test case in order to provide a road map for the use of fuzzy Bayesian inference in groundwater modeling applications.
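
    The acceleration idea is generic: train a cheap surrogate on a handful of full-model runs, use it to discard unpromising parameter sets, and spend full simulations only on the survivors. The sketch below shows that screening economics with an inverse-distance surrogate and a stand-in "expensive" function; everything here is hypothetical, and the paper screens with approximate posterior distributions inside the fuzzy Bayesian algorithm rather than with a plain acceptance threshold.

    ```python
    import numpy as np

    def expensive_model(theta):
        """Stand-in for one full groundwater (SWI) simulation run."""
        return float(np.sum((theta - 0.3)**2))

    def surrogate(theta, anchors, values):
        """Cheap inverse-distance interpolator trained on a few full runs."""
        d = np.linalg.norm(anchors - theta, axis=1) + 1e-9
        w = 1.0 / d**2
        return float(np.sum(w * values) / np.sum(w))

    rng = np.random.default_rng(11)
    anchors = rng.random((40, 2))                       # design points (full runs)
    values = np.array([expensive_model(a) for a in anchors])

    full_runs, accepted = 0, 0
    for _ in range(5000):                               # candidate parameter sets
        theta = rng.random(2)
        if surrogate(theta, anchors, values) < 0.05:    # screen with the surrogate
            full_runs += 1
            if expensive_model(theta) < 0.05:           # confirm with the full model
                accepted += 1
    print(full_runs, "full-model runs instead of 5000;", accepted, "accepted")
    ```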

  4. Generalized functional linear models for gene-based case-control association studies.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.

  6. A Longitudinal Test of the Demand–Control Model Using Specific Job Demands and Specific Job Control

    PubMed Central

    van Vegchel, Natasja; Shimazu, Akihito; Schaufeli, Wilmar; Dormann, Christian

    2010-01-01

    Background Supportive studies of the demand–control (DC) model have been more likely to measure specific demands combined with a corresponding aspect of control. Purpose To provide a longitudinal test of Karasek’s (Adm Sci Q. 24:285–308, 1) job strain hypothesis, including specific measures of job demands and job control and both self-report and objectively recorded well-being. Method The job strain hypothesis was tested among 267 health care employees from a two-wave Dutch panel survey with a 2-year time lag. Results Significant demand/control interactions were found for mental and emotional demands, but not for physical demands. The association between job demands and job satisfaction was positive under high job control, whereas it was negative under low job control. In addition, the relation between job demands and psychosomatic health symptoms/sickness absence was negative under high job control and positive under low job control. Conclusion Longitudinal support was found for the core assumption of the DC model with specific measures of job demands and job control as well as self-report and objectively recorded well-being. PMID:20195810

  7. Electronic delay ignition module for single bridgewire Apollo standard initiator

    NASA Technical Reports Server (NTRS)

    Ward, R. D.

    1975-01-01

    An engineering model and a qualification model of the EDIM were constructed and tested to Scout flight qualification criteria. The qualification model incorporated design improvements resulting from the engineering model tests. Compatibility with the single bridgewire Apollo standard initiator (SBASI) was proven by test firing forty-five (45) SBASIs under worst case voltage and temperature conditions. The EDIM was successfully qualified for Scout flight application with no failures during testing of the qualification unit. Included is a method of implementing the EDIM into Scout vehicle hardware and the ground support equipment necessary to check out the system.

  8. On the Relationship between Classical Test Theory and Item Response Theory: From One to the Other and Back

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2016-01-01

    The frequently neglected and often misunderstood relationship between classical test theory and item response theory is discussed for the unidimensional case with binary measures and no guessing. It is pointed out that popular item response models can be directly obtained from classical test theory-based models by accounting for the discrete…

  9. Vorticity-divergence semi-Lagrangian global atmospheric model SL-AV20: dynamical core

    NASA Astrophysics Data System (ADS)

    Tolstykh, Mikhail; Shashkin, Vladimir; Fadeev, Rostislav; Goyman, Gordey

    2017-05-01

    SL-AV (semi-Lagrangian, based on the absolute vorticity equation) is a global hydrostatic atmospheric model. Its latest version, SL-AV20, provides the global operational medium-range weather forecast with 20 km resolution over Russia. The lower-resolution configurations of SL-AV20 are being tested for seasonal prediction and climate modeling. The article presents the model dynamical core. Its main features are a vorticity-divergence formulation on an unstaggered grid, high-order finite-difference approximations, semi-Lagrangian semi-implicit discretization and a reduced latitude-longitude grid with variable resolution in latitude. The accuracy of SL-AV20 numerical solutions using the reduced lat-lon grid and variable resolution in latitude is tested with two idealized test cases. Accuracy and stability of SL-AV20 in the presence of orography forcing are tested using the mountain-induced Rossby wave test case. The results of all three tests are in good agreement with other published model solutions. It is shown that the use of the reduced grid does not significantly affect the accuracy up to a 25 % reduction in the number of grid points with respect to the regular grid. Variable resolution in latitude allows us to improve the accuracy of the solution in the region of interest.
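    As background on the record above, the core of any semi-Lagrangian scheme is tracing each arrival point back along the flow and interpolating the field at the departure point, which keeps the scheme stable at long time steps. A minimal 1D periodic sketch follows; it uses linear interpolation where SL-AV20 uses high-order approximations, and all parameters are illustrative.

```python
import numpy as np

def semi_lagrangian_step(q, u, dt, dx):
    """One semi-Lagrangian advection step on a periodic 1D grid: trace each
    arrival point back along the wind, then interpolate at the departure
    point (linear here; operational models use higher-order interpolation)."""
    n = q.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)      # departure points, periodic wrap
    xp = np.concatenate([x, [n * dx]])   # extend grid for periodic interp
    qp = np.concatenate([q, [q[0]]])
    return np.interp(x_dep, xp, qp)

# advect a Gaussian bump approximately once around a periodic domain
n, dx, u, dt = 200, 1.0, 1.3, 0.5
q = np.exp(-0.01 * (np.arange(n) * dx - 50.0) ** 2)
for _ in range(int(n * dx / (u * dt))):
    q = semi_lagrangian_step(q, u, dt, dx)
```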

  10. Using Response Surface Methods to Correlate the Modal Test of an Inflatable Test Article

    NASA Technical Reports Server (NTRS)

    Gupta, Anju

    2013-01-01

    This paper presents a practical application of response surface methods (RSM) to correlate a finite element model of a structural modal test. The test article is a quasi-cylindrical inflatable structure which primarily consists of a fabric weave, with an internal bladder and metallic bulkheads on either end. To mitigate model size, the fabric weave was simplified by representing it with shell elements. The task at hand is to represent the material behavior of the weave. The success of the model correlation is measured by comparing the four major modal frequencies of the analysis model to the four major modal frequencies of the test article. Given that only individual strap material properties were provided and material properties of the overall weave were not available, defining the material properties of the finite element model became very complex. First it was necessary to determine which material properties (modulus of elasticity in the hoop and longitudinal directions, shear modulus, Poisson's ratio, etc.) affected the modal frequencies. Then a Latin Hypercube of the parameter space was created to form an efficiently distributed finite case set. Each case was then analyzed with the results input into RSM. In the resulting response surface it was possible to see how each material parameter affected the modal frequencies of the analysis model. If the modal frequencies of the analysis model and its corresponding parameters match the test with acceptable accuracy, it can be said that the model correlation is successful.
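    The workflow described above (Latin Hypercube sampling of the material parameters, then a response surface fitted over the results) can be sketched compactly. In the sketch below, `modal_frequency` is a hypothetical stand-in for the finite element modal solve, and the parameter bounds are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical stand-in for a finite element modal solve: maps material
# parameters (E_hoop, E_long, G, nu) to a modal frequency.
def modal_frequency(p):
    E_hoop, E_long, G, nu = p
    return 2e-4 * np.sqrt(E_hoop) + 1.5e-4 * np.sqrt(E_long) + 1e-9 * G - 4.0 * nu

lower = np.array([1e9, 1e9, 1e8, 0.05])   # assumed parameter bounds
upper = np.array([5e9, 5e9, 1e9, 0.45])
sampler = qmc.LatinHypercube(d=4, seed=0)
X = qmc.scale(sampler.random(n=40), lower, upper)   # efficiently spread cases
y = np.array([modal_frequency(p) for p in X])

# Quadratic response surface: intercept, linear, and pure quadratic terms;
# the fitted surface shows how each parameter moves the modal frequencies.
A = np.hstack([np.ones((len(X), 1)), X, X ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```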

  11. Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kania, Adhe; Sidarto, Kuntjoro Adji

    2016-02-01

    Many engineering and practical problems can be modeled as mixed integer nonlinear programs. This paper proposes to solve such problems with a modified version of the spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. The method succeeds in obtaining the optimal result in all test cases.
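    The spiral dynamics method referenced above moves a population of search points along contracting rotations about the current best point. A minimal 2D continuous version is sketched below; it is not the authors' modified algorithm, and one simple (assumed) way to handle the integer variables of a MINLP, rounding before evaluation, is only noted in the docstring.

```python
import numpy as np

def spiral_optimize(f, bounds, n_points=30, n_iter=200, r=0.95, theta=np.pi / 4):
    """Minimal 2D spiral dynamics optimization (after Tamura & Yasuda):
    all search points spiral in toward the current best point under a
    contracting rotation. Integer variables in a MINLP could be handled,
    e.g., by rounding them before evaluating f (one simple option)."""
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    R = r * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])   # contracting rotation
    pts = rng.uniform(lo, hi, size=(n_points, 2))
    best = min(pts, key=f)
    for _ in range(n_iter):
        pts = best + (pts - best) @ R.T        # spiral each point about best
        pts = np.clip(pts, lo, hi)
        cand = min(pts, key=f)
        if f(cand) < f(best):
            best = cand
    return best, f(best)

# example: minimize a simple nonconvex test function on a box
best, val = spiral_optimize(lambda p: (p[0] - 1) ** 2 + 10 * np.sin(p[1]) ** 2 + p[1] ** 2,
                            bounds=([-5, -5], [5, 5]))
```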

  12. An Ethnographic Case Study of the Administrative Organization, Processes, and Behavior in a Model Comprehensive High School.

    ERIC Educational Resources Information Center

    Zimman, Richard N.

    Using ethnographic case study methodology (involving open-ended interviews, participant observation, and document analysis) theories of administrative organization, processes, and behavior were tested during a three-week observation of a model comprehensive (experimental) high school. Although the study is limited in its general application, it…

  13. A Socioecological Model of Rape Survivors' Decisions to Aid in Case Prosecution

    ERIC Educational Resources Information Center

    Anders, Mary C.; Christopher, F. Scott

    2011-01-01

    The purpose of our study was to identify factors underlying rape survivors' post-assault prosecution decisions by testing a decision model that included the complex relations between the multiple social ecological systems within which rape survivors are embedded. We coded 440 police rape cases for characteristics of the assault and characteristics…

  14. Ground vibration tests of a high fidelity truss for verification of on orbit damage location techniques

    NASA Technical Reports Server (NTRS)

    Kashangaki, Thomas A. L.

    1992-01-01

    This paper describes a series of modal tests that were performed on a cantilevered truss structure. The goal of the tests was to assemble a large database of high quality modal test data for use in verification of proposed methods for on orbit model verification and damage detection in flexible truss structures. A description of the hardware is provided along with details of the experimental setup and procedures for 16 damage cases. Results from selected cases are presented and discussed. Differences between ground vibration testing and on orbit modal testing are also described.

  15. Predictive Feedback and Feedforward Control for Systems with Unknown Disturbances

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Eure, Kenneth W.

    1998-01-01

    Predictive feedback control has been successfully used in the regulation of plate vibrations when no reference signal is available for feedforward control. However, if a reference signal is available it may be used to enhance regulation by incorporating a feedforward path in the feedback controller. Such a controller is known as a hybrid controller. This paper presents the theory and implementation of the hybrid controller for general linear systems, in particular for structural vibration induced by acoustic noise. The generalized predictive control is extended to include a feedforward path in the multi-input multi-output case and implemented on a single-input single-output test plant to achieve plate vibration regulation. There are cases in acoustic-induced vibration where the disturbance signal is not available to be used by the hybrid controller, but a disturbance model is available. In this case the disturbance model may be used in the feedback controller to enhance performance. In practice, however, neither the disturbance signal nor the disturbance model is available. This paper presents the theory of identifying and incorporating the noise model into the feedback controller. Implementations are performed on a test plant, and regulation improvements over the case where no noise model is used are demonstrated.
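    The last step described above, identifying a noise model from data and folding it into the feedback controller, commonly starts from a least-squares ARX fit. The sketch below shows that identification step only; it is a generic formulation under assumed model orders, not the paper's exact hybrid-controller derivation.

```python
import numpy as np

def identify_arx(y, u, na=4, nb=4):
    """Least-squares ARX identification of a plant/disturbance model:
    y[k] = sum_i a_i * y[k-i] + sum_j b_j * u[k-j] + e[k].
    Returns the AR and input coefficient estimates; a noise model identified
    this way could then be embedded in a predictive feedback controller."""
    n0 = max(na, nb)
    rows = []
    for k in range(n0, len(y)):
        # regressor: most recent outputs first, then most recent inputs
        rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n0:], rcond=None)
    return theta[:na], theta[na:]
```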

  16. Two-step sensitivity testing of parametrized and regionalized life cycle assessments: methodology and case study.

    PubMed

    Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie

    2013-06-04

    Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
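    The screening step mentioned above, the method of elementary effects, can be illustrated with a simplified one-at-a-time variant (the full Morris method uses randomized trajectories). Everything below is a hedged sketch with invented bounds; `f` stands in for an LCA model evaluation.

```python
import numpy as np

def elementary_effects(f, lower, upper, n_base=20, delta=0.1, seed=0):
    """Screening step: one-at-a-time elementary effects. For each random
    base point, perturb each parameter by delta (in unit-scaled space) and
    record the normalized output change; mu* (the mean of |EE|) ranks the
    parameters, and the top ones go on to the variance-based step."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    d = lower.size
    ee = np.zeros((n_base, d))
    for b in range(n_base):
        z = rng.uniform(0, 1 - delta, size=d)      # unit-cube base point
        x0 = lower + z * (upper - lower)
        f0 = f(x0)
        for i in range(d):
            zi = z.copy(); zi[i] += delta
            xi = lower + zi * (upper - lower)
            ee[b, i] = (f(xi) - f0) / delta
    return np.abs(ee).mean(axis=0)                 # mu* per parameter
```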

  17. Numerical Modelling of Solitary Wave Experiments on Rubble Mound Breakwaters

    NASA Astrophysics Data System (ADS)

    Guler, H. G.; Arikawa, T.; Baykal, C.; Yalciner, A. C.

    2016-12-01

    Performance of a rubble mound breakwater protecting Haydarpasa Port, Turkey, has been tested under tsunami attack by physical model tests conducted at Port and Airport Research Institute (Guler et al, 2015). It is aimed to understand the dynamic force of the tsunami by conducting solitary wave tests (Arikawa, 2015). In this study, the main objective is to perform numerical modelling of the solitary wave tests in order to verify the accuracy of the CFD model IHFOAM, developed in the OpenFOAM environment (Higuera et al, 2013), by comparing the results of the numerical computations with the experimental results. IHFOAM is a numerical modelling tool based on the VARANS equations with a k-ω SST turbulence model, including realistic wave generation and active wave absorption. Experiments are performed using a Froude scale of 1/30, measuring surface elevation and flow velocity at several locations in the wave channel, and wave pressure around the crown wall of the breakwater. Solitary wave tests with wave heights of H=7.5 cm and H=10 cm are selected as representative of the experiments. The first test (H=7.5 cm) is the case that resulted in no damage, whereas the second case (H=10 cm) resulted in total damage due to the sliding of the crown wall. After comparison of the preliminary results of the numerical simulations with the experimental data for both cases, it is observed that the solitary wave experiments could be accurately modeled using IHFOAM, focusing on water surface elevations, flow velocities, and wave pressures on the crown wall of the breakwater (Figure, result of sim. at t=29.6 sec). ACKNOWLEDGEMENTS The authors acknowledge the developers of IHFOAM, and further extend their acknowledgements for the partial support from the research projects MarDiM, ASTARTE, RAPSODI, and TUBITAK 213M534. REFERENCES Arikawa (2015) "Consideration of Characteristics of Pressure on Seawall by Solitary Waves Based on Hydraulic Experiments", Jour. of Japan. Soc. of Civ. Eng. Ser. B2 (Coast. Eng.), Vol 71, p I889-I894; Guler, Arikawa, Oei, Yalciner (2015) "Performance of Rubble Mound Breakwaters under Tsunami Attack, A Case Study: Haydarpasa Port, Istanbul, Turkey", Coast. Eng. 104, 43-53; Higuera, Lara, Losada (2013) "Realistic Wave Generation and Active Wave Absorption for Navier-Stokes Models, Application to OpenFOAM", Coast. Eng. 71, 102-118

  18. Effects of a blended learning module on self-reported learning performances in baccalaureate nursing students.

    PubMed

    Hsu, Li-Ling; Hsieh, Suh-Ing

    2011-11-01

    This article is a report of a quasi-experimental study of the effects of blended modules on nursing students' learning of ethics course content. There is yet to be an empirically supported mix of strategies on which a working blended learning model can be built for nursing education. This was a two-group pretest and post-test quasi-experimental study in 2008 involving a total of 233 students. Two of the five clusters were designated the experimental group to experience a blended learning model, and the rest were designated the control group to be given classroom lectures only. The Case Analysis Attitude Scale, Case Analysis Self-Evaluation Scale, Blended Learning Satisfaction Scale, and Metacognition Scale were used in pretests and post-tests for the students to rate their own performance. In this study, the experimental group did not register significantly higher mean scores on the Case Analysis Attitude Scale at post-test, nor higher mean ranks on the Case Analysis Self-Evaluation Scale, the Blended Learning Satisfaction Scale, and the Metacognition Scale at post-test, than the control group. Moreover, the experimental group registered significant progress in the mean ranks on the Case Analysis Self-Evaluation Scale and the Metacognition Scale from pretest to post-test. No between-subjects effects on the four scales at post-test were found. Newly developed course modules, be they blended learning or a combination of traditional and innovative components, should be tested repeatedly for effectiveness and popularity for the purpose of facilitating the ultimate creation of a most effective course module for nursing education. © 2011 Blackwell Publishing Ltd.

  19. A Case-Series Test of the Interactive Two-Step Model of Lexical Access: Predicting Word Repetition from Picture Naming

    ERIC Educational Resources Information Center

    Dell, Gary S.; Martin, Nadine; Schwartz, Myrna F.

    2007-01-01

    Lexical access in language production, and particularly pathologies of lexical access, are often investigated by examining errors in picture naming and word repetition. In this article, we test a computational approach to lexical access, the two-step interactive model, by examining whether the model can quantitatively predict the repetition-error…

  20. Standardized verification of fuel cycle modeling

    DOE PAGES

    Feng, B.; Dixon, B.; Sunny, E.; ...

    2016-04-05

    A nuclear fuel cycle systems modeling and code-to-code comparison effort was coordinated across multiple national laboratories to verify the tools needed to perform fuel cycle analyses of the transition from a once-through nuclear fuel cycle to a sustainable potential future fuel cycle. For this verification study, a simplified example transition scenario was developed to serve as a test case for the four systems codes involved (DYMOND, VISION, ORION, and MARKAL), each used by a different laboratory participant. In addition, all participants produced spreadsheet solutions for the test case to check all the mass flows and reactor/facility profiles on a year-by-year basis throughout the simulation period. The test case specifications describe a transition from the current US fleet of light water reactors to a future fleet of sodium-cooled fast reactors that continuously recycle transuranic elements as fuel. After several initial coordinated modeling and calculation attempts, it was revealed that most of the differences in code results were not due to different code algorithms or calculation approaches, but due to different interpretations of the input specifications among the analysts. Therefore, the specifications for the test case itself were iteratively updated to remove ambiguity and to help calibrate interpretations. In addition, a few corrections and modifications were made to the codes as well, which led to excellent agreement between all codes and spreadsheets for this test case. Although no fuel cycle transition analysis code matched the spreadsheet results exactly, all remaining differences in the results were due to fundamental differences in code structure and/or were thoroughly explained. As a result, the specifications and example results are provided so that they can be used to verify additional codes in the future for such fuel cycle transition scenarios.

  1. An assessment of some non-gray global radiation models in enclosures

    NASA Astrophysics Data System (ADS)

    Meulemans, J.

    2016-01-01

    The accuracy of several non-gray global gas/soot radiation models, namely the Wide-Band Correlated-K (WBCK) model, the Spectral Line Weighted-sum-of-gray-gases model with one optimized gray gas (SLW-1), the (non-gray) Weighted-Sum-of-Gray-Gases (WSGG) model with different sets of coefficients (Smith et al., Soufiani and Djavdan, Taylor and Foster) was assessed on several test cases from the literature. Non-isothermal (or isothermal) participating media containing non-homogeneous (or homogeneous) mixtures of water vapor, carbon dioxide and soot in one-dimensional planar enclosures and multi-dimensional rectangular enclosures were investigated. For all the considered test cases, a benchmark solution (LBL or SNB) was used in order to compute the relative error of each model on the predicted radiative source term and the wall net radiative heat flux.
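    For context, the WSGG family of models referenced above expresses total emissivity as a weighted sum of a few gray gases, eps(T, pL) = sum_i a_i(T) (1 - exp(-k_i pL)), with published coefficient sets such as those of Smith et al. The sketch below implements that formula with placeholder coefficients only; it does not reproduce any of the assessed coefficient sets.

```python
import numpy as np

def wsgg_emissivity(T, pL, a_poly, k):
    """Weighted-Sum-of-Gray-Gases total emissivity:
    eps = sum_i a_i(T) * (1 - exp(-k_i * pL)), with temperature-dependent
    weights a_i(T) given as polynomial coefficients (highest degree first).
    The k_i and a_poly values below are placeholders, not a published set."""
    a = np.array([np.polyval(c, T) for c in a_poly])     # weights a_i(T)
    return float(np.sum(a * (1.0 - np.exp(-np.asarray(k) * pL))))

# placeholder 3-gray-gas model: pL in atm*m, T in K
a_poly = [[1.0e-4, 0.3], [-5.0e-5, 0.4], [2.0e-5, 0.1]]  # linear a_i(T)
eps = wsgg_emissivity(T=1200.0, pL=0.5, a_poly=a_poly, k=[0.4, 6.0, 120.0])
```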

  2. Modeling of electrodes and implantable pulse generator cases for the analysis of implant tip heating under MR imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acikel, Volkan, E-mail: vacik@ee.bilkent.edu.tr; Atalar, Ergin; Uslubas, Ali

    Purpose: The authors’ purpose is to model the case of an implantable pulse generator (IPG) and the electrode of an active implantable medical device using lumped circuit elements in order to analyze their effect on the radio frequency induced tissue heating problem during a magnetic resonance imaging (MRI) examination. Methods: In this study, the IPG case and electrode are modeled with a voltage source and impedance. Values of these parameters are found using the modified transmission line method (MoTLiM) and method of moments (MoM) simulations. Once the parameter values of an electrode/IPG case model are determined, they can be connected to any lead, and tip heating can be analyzed. To validate these models, both MoM simulations and MR experiments were used. The induced currents on the leads with the IPG case or electrode connections were solved using the proposed models and the MoTLiM. These results were compared with the MoM simulations. In addition, an electrode was connected to a lead via an inductor. The dissipated power on the electrode was calculated using the MoTLiM while changing the inductance, and the results were compared with the specific absorption rate results that were obtained using MoM. Then, MRI experiments were conducted to test the IPG case and the electrode models. To test the IPG case, a bare lead was connected to the case and placed inside a uniform phantom. During a MRI scan, the temperature rise at the lead was measured while changing the lead length. The power at the lead tip for the same scenario was also calculated using the IPG case model and the MoTLiM. Then, an electrode was connected to a lead via an inductor and placed inside a uniform phantom. During a MRI scan, the temperature rise at the electrode was measured while changing the inductance and compared with the dissipated power on the electrode resistance. Results: The induced currents on leads with the IPG case or electrode connection were solved for using the combination of the MoTLiM and the proposed lumped circuit models. These results were compared with those from the MoM simulations. The mean square error was less than 9%. During the MRI experiments, when the IPG case was introduced, the resonance lengths were calculated with an error less than 13%. Also the change in tip temperature rise at resonance lengths was predicted with less than 4% error. For the electrode experiments, the value of the matching impedance was predicted with an error less than 1%. Conclusions: Electrical models for the IPG case and electrode are suggested, and a method is proposed to determine the parameter values. The concept of matching the electrode to the lead is clarified using the defined electrode impedance and the lead Thevenin impedance. The effect of the IPG case and electrode on tip heating can be predicted using the proposed theory. With these models, understanding tissue heating due to implants becomes easier. Also, these models are beneficial for implant safety testers and designers. Using these models, worst case conditions can be determined and the corresponding implant test experiments can be planned.

  3. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates

    PubMed Central

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H. Irene

    2016-01-01

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV-infected person seeks a test for HIV during a particular time interval, given that no previous positive test has been obtained prior to the start of that interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases, stratified by the HIV infections at different years, are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate that takes into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. PMID:26567891
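    A toy, single-parameter analogue of the hierarchy above can convey the sampling mechanics: diagnoses modeled as binomial draws given latent infections, with a random-walk Metropolis step on the logit of the testing rate (the paper itself uses adaptive rejection Metropolis sampling within a richer temporal model). All data below are invented.

```python
import numpy as np
from scipy import stats

def sample_testing_rate(diagnoses, infections, n_samp=5000, step=0.3, seed=0):
    """Toy analogue of the paper's hierarchy: y_t ~ Binomial(N_t, p) with a
    weak normal prior on logit(p), sampled by random-walk Metropolis."""
    rng = np.random.default_rng(seed)
    def log_post(eta):
        p = 1.0 / (1.0 + np.exp(-eta))
        return (stats.binom.logpmf(diagnoses, infections, p).sum()
                + stats.norm.logpdf(eta, 0.0, 2.0))       # weak prior
    eta, draws = 0.0, []
    for _ in range(n_samp):
        prop = eta + step * rng.standard_normal()          # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(eta):
            eta = prop
        draws.append(1.0 / (1.0 + np.exp(-eta)))           # store p, not logit
    return np.array(draws)

# e.g.: rates = sample_testing_rate(np.array([30, 35, 40]), np.array([100, 110, 120]))
```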

  4. A Worst-Case Approach for On-Line Flutter Prediction

    NASA Technical Reports Server (NTRS)

    Lind, Rick C.; Brenner, Martin J.

    1998-01-01

    Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative to the changing dynamics throughout the flight. A simulation clearly demonstrates this method can improve the efficiency of flight testing by accurately predicting the flutter margin to improve safety while reducing the necessary flight time.

  5. Prediction of the backflow and recovery regions in the backward facing step at various Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Michelassi, V.; Durbin, P. A.; Mansour, N. N.

    1996-01-01

    A four-equation model of turbulence is applied to the numerical simulation of flows with massive separation induced by a sudden expansion. The model constants are a function of the flow parameters, and two different formulations for these functions are tested. The results are compared with experimental data for a high Reynolds-number case and with experimental and DNS data for a low Reynolds-number case. The computations show that the recovery region downstream of the massive separation is properly modeled only for the high Re case. The problems in the low Re case stem from the gradient diffusion hypothesis, which underestimates the turbulent diffusion.

  6. How allele frequency and study design affect association test statistics with misrepresentation errors.

    PubMed

    Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael

    2014-04-01

    We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central $\chi^2$ distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct $p$-values.
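    The practical consequence of the result above is easy to compute: if the statistic behaves as a scaled non-central $\chi^2$ under genotyping error, the realized false-positive rate at a nominal level follows from the non-central survival function. The scaling factor and non-centrality come from the paper's formulae; the values below are placeholders.

```python
from scipy import stats

def realized_type1_error(alpha, df, c, nc):
    """If the test statistic T ~ c * chi2_df(nc) under genotyping error,
    the realized false-positive rate is P(T > q), with q the nominal
    central-chi2 critical value: the noncentral survival function at q/c."""
    q = stats.chi2.ppf(1.0 - alpha, df)
    return stats.ncx2.sf(q / c, df, nc)

# hypothetical illustration: mild scaling, noncentrality grows with sample size
print(realized_type1_error(alpha=0.05, df=2, c=1.05, nc=0.5))
```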

  7. Correlation of Wissler Human Thermal Model Blood Flow and Shiver Algorithms

    NASA Technical Reports Server (NTRS)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2010-01-01

    The Wissler Human Thermal Model (WHTM) is a thermal math model of the human body that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. The model has been shown to predict core temperature and skin temperatures higher and lower, respectively, than in tests of subjects in a crew escape suit working in controlled hot environments. Conversely, the model predicts core temperature and skin temperatures lower and higher, respectively, than in tests of lightly clad subjects immersed in cold water conditions. The blood flow algorithms of the model have been investigated to allow for more and less flow, respectively, in the cold and hot cases. These changes in the model have yielded better correlation of skin and core temperatures in the cold and hot cases. The algorithm for onset of shiver did not need to be modified to achieve good agreement in cold immersion simulations.

  8. Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey Dewayne

    2004-01-01

    The optimization tool described herein addresses and emphasizes the use of computer tools to model a system and focuses on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system, but more particularly the development of the optimized system using new techniques. This methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo simulations, and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions are summarized here for the model results. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the design case for minimization of Design, Development, Test and Evaluation cost when compared to the weights determined by the case for minimization of Gross Liftoff Weight. In turn, the Design, Development, Test and Evaluation cost was 53% higher for the optimized Gross Liftoff Weight case when compared to the cost determined by the case for minimization of Design, Development, Test and Evaluation cost. Therefore, a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Secondly, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors and how these parameters differ when cost and weight are optimized separately. A key finding was that, for a Space Shuttle Main Engine thrust level, an oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight, rather than the ratio of 5.2 that maximizes specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume and tank weight. Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch, as compared to 3162 for the Design, Development, Test and Evaluation cost optimization case. This chamber pressure range is close to 3000 pounds per square inch for the Space Shuttle Main Engine.

  9. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise P.

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary.

  10. Aerothermal modeling program, phase 1

    NASA Technical Reports Server (NTRS)

    Sturgess, G. J.

    1983-01-01

    The physical modeling embodied in the computational fluid dynamics codes is discussed. The objectives were to identify shortcomings in the models and to provide a program plan to improve the quantitative accuracy. The physical models studied were for: turbulent mass and momentum transport, heat release, liquid fuel spray, and gaseous radiation. The approach adopted was to test the models against appropriate benchmark-quality test cases from experiments in the literature for the constituent flows that together make up the combustor real flow.

  11. Model-Based Development of Automotive Electronic Climate Control Software

    NASA Astrophysics Data System (ADS)

    Kakade, Rupesh; Murugesan, Mohan; Perugu, Bhupal; Nair, Mohanan

    With the increasing complexity of software in today's products, writing and maintaining thousands of lines of code is a tedious task. Instead, an alternative methodology must be employed. Model-based development is one candidate that offers several benefits and allows engineers to focus on their domain of expertise rather than on writing large amounts of code. In this paper, we discuss the application of model-based development to the electronic climate control software of vehicles. The back-to-back testing approach is presented, which ensures a flawless and smooth transition from legacy designs to model-based development. The Simulink report generator used to create design documents from the models is presented, along with its usage to run the simulation model and capture the results into the test report. Test automation using a model-based development tool that supports the use of a unique set of test cases for several testing levels, and a test procedure that is independent of the software and hardware platform, is also presented.

  12. Case Studies for the Statistical Design of Experiments Applied to Powered Rotor Wind Tunnel Tests

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.; Tanner, Philip E.; Martin, Preston B.; Commo, Sean A.

    2015-01-01

    The application of statistical Design of Experiments (DOE) to helicopter wind tunnel testing was explored during two powered rotor wind tunnel entries during the summers of 2012 and 2013. These tests were performed jointly by the U.S. Army Aviation Development Directorate Joint Research Program Office and the NASA Rotary Wing Project Office, currently the Revolutionary Vertical Lift Project, at NASA Langley Research Center located in Hampton, Virginia. Both entries were conducted in the 14- by 22-Foot Subsonic Tunnel with a small portion of the overall tests devoted to developing case studies of the DOE approach as it applies to powered rotor testing. A 16-47 times reduction in the number of data points required was estimated by comparing the DOE approach to conventional testing methods. The average error for the DOE response surface model for the OH-58F test was 0.95 percent and 4.06 percent for drag and download, respectively. The DOE response surface model of the Active Flow Control test captured the drag within 4.1 percent of measured data. The operational differences between the two testing approaches are identified; they did not prevent the safe operation of the powered rotor model throughout the DOE test matrices.

  13. Equivalent intraperitoneal doses of ibuprofen supplemented in drinking water or in diet: a behavioral and biochemical assay using antinociceptive and thromboxane inhibitory dose–response curves in mice

    PubMed Central

    El Gayar, Nesreen H.; Georgy, Sonia S.

    2016-01-01

    Background. Ibuprofen is used chronically in different animal models of inflammation by administration in drinking water or in diet due to its short half-life. Though this practice has been used for years, ibuprofen doses were never assayed against parenteral dose–response curves. This study aims at identifying the equivalent intraperitoneal (i.p.) doses of ibuprofen when it is administered in drinking water or in diet. Methods. Bioassays were performed using the formalin test and the incisional pain model for antinociceptive efficacy, and serum TXB2 for eicosanoid inhibitory activity. The dose–response curve of i.p. administered ibuprofen was constructed for each test using 50, 75, 100 and 200 mg/kg body weight (b.w.). The dose–response curves were constructed from phase 2a of the formalin test (the phase most sensitive to COX inhibitory agents), the area under the ‘change in mechanical threshold’-time curve in the incisional pain model, and serum TXB2 levels. The assayed ibuprofen concentrations administered in drinking water were 0.2, 0.35 and 0.6 mg/ml, and those administered in diet were 82, 263 and 375 mg/kg diet. Results. The 3 concentrations applied in drinking water lay between 73.6 and 85.5 mg/kg b.w., i.p., in case of the formalin test; between 58.9 and 77.8 mg/kg b.w., i.p., in case of the incisional pain model; and between 71.8 and 125.8 mg/kg b.w., i.p., in case of serum TXB2 levels. The 3 concentrations administered in diet lay between 67.6 and 83.8 mg/kg b.w., i.p., in case of the formalin test; between 52.7 and 68.6 mg/kg b.w., i.p., in case of the incisional pain model; and between 63.6 and 92.5 mg/kg b.w., i.p., in case of serum TXB2 levels. Discussion. The increments in pharmacological effects of different doses of continuously administered ibuprofen in drinking water or diet do not parallel those of i.p. administered ibuprofen. It is therefore difficult to assume the equivalent parenteral daily doses based on mathematical calculations. PMID:27547547

  14. Uncertainty quantification of resonant ultrasound spectroscopy for material property and single crystal orientation estimation on a complex part

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Goodlet, Brent; Mazdiyasni, Siamack

    2018-04-01

    A case study is presented evaluating uncertainty in Resonance Ultrasound Spectroscopy (RUS) inversion for single-crystal (SX) Ni-based superalloy Mar-M247 cylindrical dog-bone specimens. A number of surrogate models were developed from FEM model solutions, using different sampling schemes (regular grid, Monte Carlo sampling, Latin Hypercube sampling) and model approaches, N-dimensional cubic spline interpolation and Kriging. Repeated studies were used to quantify the well-posedness of the inversion problem, and the uncertainty was assessed in material property and crystallographic orientation estimates given typical geometric dimension variability in aerospace components. Surrogate model quality was found to be an important factor in inversion results when the model more closely represents the test data. One important discovery was that, when the model matches the test data well, a Kriging surrogate model using unsorted Latin Hypercube sampled data performed as well as the best results from an N-dimensional interpolation model using sorted data. However, both surrogate model quality and mode sorting were found to be less critical when inverting properties from either experimental data or simulated test cases with uncontrolled geometric variation.
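    The Kriging-surrogate step highlighted above (fit a Gaussian process to Latin Hypercube samples of the forward model, then invert against the cheap surrogate) can be sketched as follows. Here `fem_frequency` is a hypothetical stand-in for the FEM resonance solve, and the parameter ranges are invented.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical stand-in for the FEM resonance solve: elastic constants and
# two crystal-orientation angles -> one resonance frequency.
def fem_frequency(p):
    c11, c44, phi, theta = p
    return 0.02 * c11 + 0.05 * c44 + 3.0 * np.cos(phi) ** 2 + np.sin(theta)

lower = np.array([220.0, 90.0, 0.0, 0.0])          # assumed ranges (GPa, GPa, rad, rad)
upper = np.array([260.0, 130.0, np.pi / 2, np.pi / 2])
X = qmc.scale(qmc.LatinHypercube(d=4, seed=1).random(64), lower, upper)
y = np.array([fem_frequency(p) for p in X])

# Kriging surrogate: GP regression with an anisotropic RBF kernel, giving
# fast evaluations inside the inversion loop instead of repeated FEM solves.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=np.ones(4)),
                              normalize_y=True).fit(X, y)
mean, std = gp.predict(X[:3], return_std=True)     # prediction with uncertainty
```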

  15. SVM-PB-Pred: SVM based protein block prediction method using sequence profiles and secondary structures.

    PubMed

    Suresh, V; Parthasarathy, S

    2014-01-01

    We developed a support vector machine based web server called SVM-PB-Pred, to predict the Protein Block for any given amino acid sequence. The input features of SVM-PB-Pred include (i) sequence profiles (PSSM) and (ii) actual secondary structures (SS) from the DSSP method or predicted secondary structures from the NPS@ and GOR4 methods. Three combined input feature sets, PSSM+SS(DSSP), PSSM+SS(NPS@) and PSSM+SS(GOR4), were used to test and train the SVM models. Similarly, four datasets, RS90, DB433, LI1264 and SP1577, were used to develop the SVM models. The four SVM models developed were tested using three different benchmarking tests, namely (i) self consistency, (ii) seven-fold cross-validation and (iii) independent case tests. The maximum prediction accuracy of ~70% was observed in the self consistency test for the SVM models of both the LI1264 and SP1577 datasets, where the PSSM+SS(DSSP) input features were used for testing. The prediction accuracies were reduced to ~53% for PSSM+SS(NPS@) and ~43% for PSSM+SS(GOR4) in the independent case test for the SVM models of the same two datasets. Using our method, it is possible to predict the protein block letters for any query protein sequence with ~53% accuracy, when the SP1577 dataset and predicted secondary structure from the NPS@ server are used. The SVM-PB-Pred server can be freely accessed through http://bioinfo.bdu.ac.in/~svmpbpred.
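    The training setup described above (sequence profiles concatenated with a secondary-structure code, an SVM classifier, seven-fold cross-validation) maps onto a few lines of scikit-learn. The sketch below uses random placeholder data in place of the PSSM and DSSP inputs; the feature sizes and hyperparameters are assumptions, not the server's actual settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_residues = 500
pssm = rng.normal(size=(n_residues, 20 * 5))    # 5-residue PSSM window (placeholder)
ss = rng.integers(0, 3, size=(n_residues, 1))   # H/E/C secondary-structure code
X = np.hstack([pssm, ss])                       # combined PSSM+SS features
y = rng.integers(0, 16, size=n_residues)        # 16 protein block letters (a..p)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(model, X, y, cv=7)     # seven-fold cross-validation
```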

  16. A comparative study of turbulence models in predicting hypersonic inlet flows

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh

    1993-01-01

    A computational study has been conducted to evaluate the performance of various turbulence models. The NASA P8 inlet, which represents cruise condition of a typical hypersonic air-breathing vehicle, was selected as a test case for the study; the PARC2D code, which solves the full two dimensional Reynolds-averaged Navier-Stokes equations, was used. Results are presented for a total of six versions of zero- and two-equation turbulence models. Zero-equation models tested are the Baldwin-Lomax model, the Thomas model, and a combination of the two. Two-equation models tested are low-Reynolds number models (the Chien model and the Speziale model) and a high-Reynolds number model (the Launder and Spalding model).

  17. A semi-empirical model for the estimation of maximum horizontal displacement due to liquefaction-induced lateral spreading

    USGS Publications Warehouse

    Faris, Allison T.; Seed, Raymond B.; Kayen, Robert E.; Wu, Jiaer

    2006-01-01

    During the 1906 San Francisco Earthquake, liquefaction-induced lateral spreading and the resultant ground displacements damaged bridges, buried utilities and lifelines, conventional structures, and other developed works. This paper presents an improved engineering tool for the prediction of maximum displacement due to liquefaction-induced lateral spreading. A semi-empirical approach is employed, combining mechanistic understanding and data from laboratory testing with data and lessons from full-scale earthquake field case histories. The principle of the strain potential index, based primarily on correlation of cyclic simple shear laboratory testing results with in-situ Standard Penetration Test (SPT) results, is used as an index to characterize the deformation potential of soils after they liquefy. A Bayesian probabilistic approach is adopted for development of the final predictive model, in order to take fullest advantage of the available data and to deal with the inherent uncertainties intrinsic to the back-analyses of field case histories. A case history from the 1906 San Francisco Earthquake is utilized to demonstrate the ability of the resulting semi-empirical model to estimate maximum horizontal displacement due to liquefaction-induced lateral spreading.

  18. Inlet Acoustic Data from a High Bypass Ratio Turbofan Rotor in an Internal Flow Component Test Facility

    NASA Technical Reports Server (NTRS)

    Bozak, Richard F.

    2017-01-01

    In February 2017, aerodynamic and acoustic testing was completed on a scale-model high bypass ratio turbofan rotor, R4, in an internal flow component test facility. The objective of testing was to determine the aerodynamic and acoustic impact of fan casing treatments designed to reduce noise. The baseline configuration consisted of the R4 rotor with a hardwall fan case. Data are presented for a baseline acoustic run with fan exit instrumentation removed to give a clean acoustic configuration.

  19. Predicted thermal response of a cryogenic fuel tank exposed to simulated aerodynamic heating profiles with different cryogens and fill levels

    NASA Technical Reports Server (NTRS)

    Hanna, Gregory J.; Stephens, Craig A.

    1991-01-01

    A two-dimensional finite difference thermal model was developed to predict the effects of heating profile, fill level, and cryogen type prior to experimental testing of the Generic Research Cryogenic Tank (GRCT). These numerical predictions will assist in defining test scenarios, sensor locations, and venting requirements for the GRCT experimental tests. Boiloff rates, tank-wall and fluid temperatures, and wall heat fluxes were determined for 20 computational test cases. The test cases spanned three discrete fill levels and three heating profiles for hydrogen and nitrogen.
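    A minimal version of the kind of model described above is an explicit five-point finite-difference update for 2D conduction with an applied heating flux on one boundary. The sketch below is illustrative only; the boundary treatment, material constants, and grid are assumptions, not the GRCT model's.

```python
import numpy as np

def step_heat_2d(T, alpha, dx, dt, q_top):
    """One explicit finite-difference step of 2D conduction on a wall
    cross-section: interior nodes get the 5-point Laplacian; the top row
    takes an applied heating term (stability needs dt <= dx^2/(4*alpha))."""
    Tn = T.copy()
    Tn[1:-1, 1:-1] += alpha * dt / dx**2 * (
        T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
        - 4.0 * T[1:-1, 1:-1])
    Tn[0, :] += dt * q_top / dx        # simplified flux boundary (assumption)
    return Tn

T = np.full((50, 50), 20.0)            # initial wall temperature, deg C
for _ in range(1000):                  # alpha*dt/dx^2 = 0.2 < 0.25, stable
    T = step_heat_2d(T, alpha=1e-4, dx=0.01, dt=0.2, q_top=5.0)
```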

  20. Testing the effectiveness of family therapeutic assessment: a case study using a time-series design.

    PubMed

    Smith, Justin D; Wolf, Nicole J; Handler, Leonard; Nash, Michael R

    2009-11-01

    We describe a family Therapeutic Assessment (TA) case study employing 2 assessors, 2 assessment rooms, and a video link. In the study, we employed a daily-measures time-series design with a pretreatment baseline and follow-up period to examine the family TA treatment model. Besides adding to a number of clinical reports suggesting the efficacy of family TA, this study is the first to apply a case-based time-series design to test whether family TA leads to clinical improvement, and it also illustrates when that improvement occurs. Results support the trajectory of change proposed by Finn (2007), the TA model's creator, who posits that benefits continue beyond the formal treatment itself.

  1. Implementing secure laptop-based testing in an undergraduate nursing program: a case study.

    PubMed

    Tao, Jinyuan; Lorentz, B Chris; Hawes, Stacey; Rugless, Fely; Preston, Janice

    2012-07-01

    This article presents the implementation of secure laptop-based testing in an undergraduate nursing program. Details on how to design, develop, implement, and secure tests are discussed. The laptop-based testing model is also compared with the computer-laboratory-based testing model. Five elements of the laptop-based testing model are illustrated: (1) it simulates the national board examination, (2) security is achievable, (3) it is convenient for both instructors and students, (4) it provides students with hands-on practice, and (5) continuous technical support is key.

  2. Method of evaluating the impact of ERP implementation critical success factors - a case study in oil and gas industries

    NASA Astrophysics Data System (ADS)

    Gajic, Gordana; Stankovski, Stevan; Ostojic, Gordana; Tesic, Zdravko; Miladinovic, Ljubomir

    2014-01-01

    Enterprise resource planning (ERP) systems implemented so far have in many cases failed to meet requirements for business process control, reduction of business costs, and increase of company profit margins. Therefore, there is a real need to evaluate the influence of ERP on a company's performance indicators. Proposed in this article is an advanced model for evaluating the success of ERP implementation in terms of organisational and operational performance indicators in oil-gas companies. The recommended method establishes a correlation between a process-based method, a scorecard model and ERP critical success factors. The method was verified and tested in two case studies of oil-gas companies using the following procedure: the model was developed, tested and implemented in a pilot gas-oil company, and the results were then implemented and verified in another gas-oil company.

  3. Reliability and Model Fit

    ERIC Educational Resources Information Center

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  4. GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise Paul

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary. 09/2016: Tables 6 and 8 updated. AGR-2 input data added.

  5. Reusable Solid Rocket Motor Nozzle Joint-4 Thermal Analysis

    NASA Technical Reports Server (NTRS)

    Clayton, J. Louie

    2001-01-01

    This study covers the development and test verification of a thermal model used for prediction of joint heating environments, structural temperatures and seal erosions in the Space Shuttle Reusable Solid Rocket Motor (RSRM) Nozzle Joint-4. The heating environments are a result of rapid pressurization of the joint free volume, assuming a leak path has occurred in the filler material used for assembly gap close out. Combustion gases flow along the leak path from the nozzle environment to the joint O-ring gland, resulting in local heating of the metal housing and erosion of seal materials. Analysis of this condition was based on use of the NASA Joint Pressurization Routine (JPR) for environment determination and the Systems Improved Numerical Differencing Analyzer (SINDA) for structural temperature prediction. Model-generated temperatures, pressures and seal erosions are compared to hot fire test data for several different leak path situations. Investigated in the hot fire test program were nozzle joint-4 O-ring erosion sensitivities to leak path width in both open and confined joint geometries. Model predictions were in generally good agreement with the test data for the confined leak path cases. Worst case flight predictions are provided using the test-calibrated model. Analysis issues are discussed based on model calibration procedures.

  6. Development of a Three-Dimensional Spectral Element Model for NWP: Idealized Simulations on the Sphere

    NASA Astrophysics Data System (ADS)

    Viner, K.; Reinecke, P. A.; Gabersek, S.; Flagg, D. D.; Doyle, J. D.; Martini, M.; Ryglicki, D.; Michalakes, J.; Giraldo, F.

    2016-12-01

    NEPTUNE: the Navy Environmental Prediction sysTem Using the NUMA*corE, is a 3D spectral element atmospheric model composed of a full suite of physics parameterizations and pre- and post-processing infrastructure with plans for data assimilation and coupling components to a variety of Earth-system models. This talk will focus on the initial struggles and solutions in adapting NUMA for stable and accurate integration on the sphere using both the deep atmosphere equations and a newly developed shallow-atmosphere approximation, as demonstrated through idealized test cases. In addition, details of the physics-dynamics coupling methodology will be discussed. NEPTUNE results for test cases from the 2016 Dynamical Core Model Intercomparison Project (DCMIP-2016) will be shown and discussed. *NUMA: Nonhydrostatic Unified Model of the Atmosphere; Kelly and Giraldo 2012, JCP

  7. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    PubMed Central

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955

  8. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.

  9. Wall modeled LES of wind turbine wakes with geometrical effects

    NASA Astrophysics Data System (ADS)

    Bricteux, Laurent; Benard, Pierre; Zeoli, Stephanie; Moureau, Vincent; Lartigue, Ghislain; Vire, Axelle

    2017-11-01

    This study focuses on the prediction of wind turbine wakes when geometrical effects, such as the nacelle, tower, and built environment, are taken into account. The aim is to demonstrate the ability of a high-order unstructured solver called YALES2 to perform wall-modeled LES of wind turbine wake turbulence. The wind turbine rotor is modeled using an Actuator Line Model (ALM), while the geometrical details are explicitly meshed thanks to the use of an unstructured grid. As high Reynolds number flows are considered, sub-grid scale models as well as wall modeling are required. The first test case concerns a wind turbine located in a wind tunnel, which allows validation of the proposed methodology against experimental data. The second test case concerns the simulation of a wind turbine wake in a complex environment (e.g., a building) using realistic turbulent inflow conditions.

  10. Modeling of the UAE Wind Turbine for Refinement of FAST{_}AD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonkman, J. M.

    The Unsteady Aerodynamics Experiment (UAE) research wind turbine was modeled both aerodynamically and structurally in the FAST{_}AD wind turbine design code, and its response to wind inflows was simulated for a sample of test cases. A study was conducted to determine why wind turbine load magnitude discrepancies (inconsistencies in aerodynamic force coefficients, rotor shaft torque, and out-of-plane bending moments at the blade root across a range of operating conditions) exist between load predictions made by FAST{_}AD and other modeling tools and measured loads taken from the actual UAE wind turbine during the NASA-Ames wind tunnel tests. The acquired experimental test data represent the finest, most accurate set of wind turbine aerodynamic and induced flow field data available today. A sample of the FAST{_}AD model input parameters most critical to the aerodynamics computations was also systematically perturbed to determine their effect on load and performance predictions. Attention was focused on the simpler upwind rotor configuration, zero yaw error test cases. Inconsistencies in input file parameters, such as aerodynamic performance characteristics, explain a noteworthy fraction of the load prediction discrepancies of the various modeling tools.

  11. Second-Generation Large Civil Tiltrotor 7- by 10-Foot Wind Tunnel Test Data Report

    NASA Technical Reports Server (NTRS)

    Theodore, Colin R.; Russell, Carl R.; Willink, Gina C.; Pete, Ashley E.; Adibi, Sierra A.; Ewert, Adam; Theuns, Lieselotte; Beierle, Connor

    2016-01-01

    An approximately 6-percent scale model of the NASA Second-Generation Large Civil Tiltrotor (LCTR2) Aircraft was tested in the U.S. Army 7- by 10-Foot Wind Tunnel at NASA Ames Research Center January 4 to April 19, 2012, and September 18 to November 1, 2013. The full model was tested, along with modified versions in order to determine the effects of the wing tip extensions and nacelles; the wing was also tested separately in the various configurations. In both cases, the wing and nacelles used were adopted from the U.S. Army High Efficiency Tilt Rotor (HETR) aircraft, in order to limit the cost of the experiment. The full airframe was tested in high-speed cruise and low-speed hover flight conditions, while the wing was tested only in cruise conditions, with Reynolds numbers ranging from 0 to 1.4 million. In all cases, the external scale system of the wind tunnel was used to collect data. Both models were mounted to the scale using two support struts attached underneath the wing; the full airframe model also used a third strut attached at the tail. The collected data provides insight into the performance of the preliminary design of the LCTR2 and will be used for computational fluid dynamics (CFD) validation and the development of flight dynamics simulation models.

  12. Climate variations and salmonellosis transmission in Adelaide, South Australia: a comparison between regression models

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Bi, Peng; Hiller, Janet

    2008-01-01

    This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data in Adelaide, South Australia. By using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis. Rainfall was also inversely related to the number of cases. The comparison of the goodness-of-fit and forecasting ability suggest that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could, thus, be adopted to quantify the relationship between climate variations and salmonellosis transmission.
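
    For readers unfamiliar with the SARIMA approach this study favors, the following is a minimal, hedged sketch of fitting a seasonal ARIMA model with 2-week-lagged temperature and rainfall as exogenous regressors, using statsmodels' SARIMAX on synthetic weekly counts; the model order, lag structure, and all data are illustrative, not the paper's fitted model.

    ```python
    # Hedged sketch of a SARIMA model with 2-week-lagged climate covariates,
    # in the spirit of the comparison above; the (1,0,1)(1,0,1)_52 order and
    # all data are illustrative, not the paper's fitted model.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(1)
    n = 10 * 52  # ten years of weekly observations
    t = np.arange(n)
    temp = 20 + 8 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 2, n)
    rain = np.clip(rng.normal(10, 5, n), 0, None)
    cases = np.clip(5 + 0.4 * np.roll(temp, 2) - 0.1 * np.roll(rain, 2)
                    + rng.normal(0, 2, n), 0, None)

    exog = pd.DataFrame({"temp_lag2": np.roll(temp, 2),
                         "rain_lag2": np.roll(rain, 2)})
    fit = SARIMAX(cases[:-52], exog=exog.iloc[:-52],
                  order=(1, 0, 1), seasonal_order=(1, 0, 1, 52)).fit(disp=False)
    pred = fit.forecast(steps=52, exog=exog.iloc[-52:])  # final-year forecast
    print(np.abs(pred - cases[-52:]).mean())             # mean absolute error
    ```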

  13. Effectiveness of screening hospital admissions to detect asymptomatic carriers of Clostridium difficile: a modeling evaluation.

    PubMed

    Lanzas, Cristina; Dubberke, Erik R

    2014-08-01

    Both asymptomatic and symptomatic Clostridium difficile carriers contribute to new colonizations and infections within a hospital, but current control strategies focus only on preventing transmission from symptomatic carriers. Our objective was to evaluate the potential effectiveness of methods targeting asymptomatic carriers to control C. difficile colonization and infection (CDI) rates in a hospital ward: screening patients at admission to detect asymptomatic C. difficile carriers and placing positive patients into contact precautions. We developed an agent-based transmission model for C. difficile that incorporates screening and contact precautions for asymptomatic carriers in a hospital ward. We simulated scenarios that vary according to screening test characteristics, colonization prevalence, and type of strain present at admission. In our baseline scenario, on average, 42% of CDI cases were community-onset cases. Within the hospital-onset (HO) cases, approximately half were patients admitted as asymptomatic carriers who became symptomatic in the ward. On average, testing for asymptomatic carriers reduced the number of new colonizations and HO-CDI cases by 40%-50% and 10%-25%, respectively, compared with the baseline scenario. Test sensitivity, turnaround time, colonization prevalence at admission, and strain type had significant effects on testing efficacy. Testing for asymptomatic carriers at admission may reduce both the number of new colonizations and HO-CDI cases. Additional reductions could be achieved by preventing disease in patients who are admitted as asymptomatic carriers and developed CDI during the hospital stay.
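
    The screening mechanism evaluated here can be caricatured in a few lines. The sketch below is a crude Monte Carlo stand-in for the authors' agent-based ward model: admissions carry colonization with some prevalence, an admission test detects carriers with some sensitivity, and detected carriers transmit less under contact precautions. Every parameter value is assumed for illustration, not taken from the paper.

    ```python
    # Minimal Monte Carlo sketch of admission screening for asymptomatic
    # carriers; a stand-in for the paper's agent-based ward model, with
    # hypothetical parameter values throughout.
    import numpy as np

    rng = np.random.default_rng(2)
    n_admissions = 100_000
    prev = 0.10           # colonization prevalence at admission (assumed)
    sensitivity = 0.85    # screening test sensitivity (assumed)
    p_transmit = 0.05     # per-stay transmissions seeded by an undetected carrier
    precaution_eff = 0.8  # reduction in transmission under contact precautions

    carrier = rng.random(n_admissions) < prev
    detected = carrier & (rng.random(n_admissions) < sensitivity)

    def expected_transmissions(screening: bool) -> float:
        # Detected carriers transmit at a reduced rate under precautions.
        if not screening:
            return carrier.sum() * p_transmit
        undetected = carrier & ~detected
        return (undetected.sum() * p_transmit
                + detected.sum() * p_transmit * (1 - precaution_eff))

    base = expected_transmissions(False)
    scr = expected_transmissions(True)
    print(f"relative reduction in new colonizations: {1 - scr / base:.0%}")
    ```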

  14. Verification of CFD model of plane jet used for smoke free zone separation in case of fire

    NASA Astrophysics Data System (ADS)

    Krajewski, Grzegorz; Suchy, Przemysław

    2018-01-01

    This paper presents basic information about the use of air curtains in fire safety as a barrier for heat and smoke. The mathematical model of an air curtain presented here allows estimation of the air velocity at various points in space, including the velocity of air from an angled air curtain. The presented equations show how various parameters influence the performance of the air curtain. The authors also present the results of their own air curtain performance tests, conducted on a real-scale model. The test results were used to verify the chosen turbulence model and boundary conditions. Results of the new studies are presented with regard to the performance of an air curtain in case of fire, and final remarks on its design are given.

  15. Model-based analysis of costs and outcomes of non-invasive prenatal testing for Down's syndrome using cell free fetal DNA in the UK National Health Service.

    PubMed

    Morris, Stephen; Karlsen, Saffron; Chung, Nancy; Hill, Melissa; Chitty, Lyn S

    2014-01-01

    Non-invasive prenatal testing (NIPT) for Down's syndrome (DS) using cell free fetal DNA in maternal blood has the potential to dramatically alter the way prenatal screening and diagnosis is delivered. Before NIPT can be implemented into routine practice, information is required on its costs and benefits. We investigated the costs and outcomes of NIPT for DS as contingent testing and as first-line testing compared with the current DS screening programme in the UK National Health Service. We used a pre-existing model to evaluate the costs and outcomes associated with NIPT compared with the current DS screening programme. The analysis was based on a hypothetical screening population of 10,000 pregnant women. Model inputs were taken from published sources. The main outcome measures were number of DS cases detected, number of procedure-related miscarriages and total cost. At a screening risk cut-off of 1:150 NIPT as contingent testing detects slightly fewer DS cases, has fewer procedure-related miscarriages, and costs the same as current DS screening (around UK£280,000) at a cost of £500 per NIPT. As first-line testing NIPT detects more DS cases, has fewer procedure-related miscarriages, and is more expensive than current screening at a cost of £50 per NIPT. When NIPT uptake increases, NIPT detects more DS cases with a small increase in procedure-related miscarriages and costs. NIPT is currently available in the private sector in the UK at a price of £400-£900. If the NHS cost was at the lower end of this range then at a screening risk cut-off of 1:150 NIPT as contingent testing would be cost neutral or cost saving compared with current DS screening. As first-line testing NIPT is likely to produce more favourable outcomes but at greater cost. Further research is needed to evaluate NIPT under real world conditions.
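
    The contingent-testing arithmetic can be sketched in a few lines. Below is a back-of-envelope version: only women above the screening risk cut-off receive NIPT, and only NIPT-positives proceed to invasive testing. The £500 NIPT cost and the 10,000-woman population come from the abstract; the remaining rates and the invasive-test cost are assumptions, not the paper's model inputs.

    ```python
    # Back-of-envelope sketch of contingent NIPT costing, in the spirit of
    # the paper's model; detection/uptake parameters are illustrative only.
    pop = 10_000             # screening population (as in the abstract)
    screen_pos_rate = 0.03   # fraction above the 1:150 risk cut-off (assumed)
    nipt_cost = 500.0        # GBP per NIPT (contingent scenario in the abstract)
    invasive_cost = 650.0    # GBP per invasive diagnostic test (assumed)
    nipt_pos_rate = 0.02     # fraction of NIPT-tested women referred on (assumed)

    n_nipt = pop * screen_pos_rate
    n_invasive = n_nipt * nipt_pos_rate  # only NIPT-positives get invasive testing
    total = n_nipt * nipt_cost + n_invasive * invasive_cost
    print(f"NIPT tests: {n_nipt:.0f}, invasive tests: {n_invasive:.0f}, "
          f"cost: £{total:,.0f}")
    ```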

  16. Summary of EASM Turbulence Models in CFL3D With Validation Test Cases

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.

    2003-01-01

    This paper summarizes the Explicit Algebraic Stress Model in k-omega form (EASM-ko) and in k-epsilon form (EASM-ke) in the Reynolds-averaged Navier-Stokes code CFL3D. These models have been actively used over the last several years in CFL3D, and have undergone some minor modifications during that time. Details of the equations and method for coding the latest versions of the models are given, and numerous validation cases are presented. This paper serves as a validation archive for these models.

  17. Testing density-dependent groundwater models: Two-dimensional steady state unstable convection in infinite, finite and inclined porous layers

    USGS Publications Warehouse

    Weatherill, D.; Simmons, C.T.; Voss, C.I.; Robinson, N.I.

    2004-01-01

    This study proposes the use of several problems of unstable steady state convection with variable fluid density in a porous layer of infinite horizontal extent as two-dimensional (2-D) test cases for density-dependent groundwater flow and solute transport simulators. Unlike existing density-dependent model benchmarks, these problems have well-defined stability criteria that are determined analytically. These analytical stability indicators can be compared with numerical model results to test the ability of a code to accurately simulate buoyancy driven flow and diffusion. The basic analytical solution is for a horizontally infinite fluid-filled porous layer in which fluid density decreases with depth. The proposed test problems include unstable convection in an infinite horizontal box, in a finite horizontal box, and in an infinite inclined box. A dimensionless Rayleigh number incorporating properties of the fluid and the porous media determines the stability of the layer in each case. Testing the ability of numerical codes to match both the critical Rayleigh number at which convection occurs and the wavelength of convection cells is an addition to the benchmark problems currently in use. The proposed test problems are modelled in 2-D using the SUTRA [SUTRA-A model for saturated-unsaturated variable-density ground-water flow with solute or energy transport. US Geological Survey Water-Resources Investigations Report, 02-4231, 2002. 250 p] density-dependent groundwater flow and solute transport code. For the case of an infinite horizontal box, SUTRA results show a distinct change from stable to unstable behaviour around the theoretical critical Rayleigh number of 4π² and the simulated wavelength of unstable convection agrees with that predicted by the analytical solution. The effects of finite layer aspect ratio and inclination on stability indicators are also tested and numerical results are in excellent agreement with theoretical stability criteria and with numerical results previously reported in traditional fluid mechanics literature. © 2004 Elsevier Ltd. All rights reserved.
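
    The stability criterion used by these benchmarks is simple to apply. The sketch below compares a porous-layer Rayleigh number against the classical critical value 4π² for an infinite horizontal layer; the Rayleigh-number definition shown is one common solute form (conventions vary between texts, and some omit porosity), and all property values are illustrative.

    ```python
    # Sketch: stability check against the classical critical Rayleigh number
    # 4*pi^2 for a horizontal porous layer. One common solute Rayleigh number
    # is Ra = g*k*H*drho / (mu*phi*D); conventions differ between texts, and
    # all property values below are illustrative only.
    import math

    g = 9.81     # gravity [m/s^2]
    k = 1e-11    # permeability [m^2]
    H = 10.0     # layer thickness [m]
    drho = 5.0   # destabilizing density difference [kg/m^3]
    mu = 1e-3    # dynamic viscosity [Pa s]
    phi = 0.3    # porosity [-]
    D = 1e-9     # effective solute diffusivity [m^2/s]

    Ra = g * k * H * drho / (mu * phi * D)
    Ra_crit = 4 * math.pi ** 2  # ~39.48 for an infinite horizontal layer
    print(f"Ra = {Ra:.1f}, critical = {Ra_crit:.2f} -> "
          f"{'unstable (convection)' if Ra > Ra_crit else 'stable (diffusion)'}")
    ```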

  18. Remote control missile model test

    NASA Technical Reports Server (NTRS)

    Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.

    1989-01-01

    An extremely large, systematic, axisymmetric body/tail fin data base was gathered through tests of an innovative missile model design which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but can also be used as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analysis of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Comparisons between these data and calculations from the SWINT Euler code are also presented.

  19. Assessment of impact damage of composite rocket motor cases

    NASA Technical Reports Server (NTRS)

    Paris, Henry G.

    1994-01-01

    This contract reviewed the available literature on mechanisms of low velocity impact damage in filament wound rocket motor cases, NDE methods to quantify damage, critical coupon level test methods, manufacturing and material process variables, and empirical and analytical modeling of impact damage. The critical design properties for rocket motor cases are biaxial hoop and axial tensile strength. Low velocity impact damage is insidious because it can create serious nonvisible damage at very low impact velocities. In thick rocket motor cases the prevalent low velocity impact damage is fiber fracture and matrix cracking adjacent to the front face. In contrast, low velocity loading of thin wall cylinders induces flexure, depending on span length, and this flexure causes delamination and tensile cracking on the back face wall opposite the impact site due to flexural stresses imposed by impact loading. Important NDE methods for rocket motor cases are non-contacting methods that allow inspection from one side. Among these are vibrothermography and pulse-echo methods based on acoustic-ultrasonic methods. High resolution techniques such as x-ray computed tomography appear to have merit for accurate geometrical characterization of local damage to support development of analytical models of micromechanics. The challenge of coupon level testing is to reproduce the biaxial stress state that the full scale article experiences, and to determine how to scale the composite structure to model full-size behavior. Biaxial tensile testing has been performed by uniaxially tensile loading internally pressurized cylinders. This is experimentally difficult due to gripping problems and pressure containment. Much prior work focused on uniaxial tensile testing of model filament wound cylinders. Interpretation of the results of some studies is complicated by the fact that the fabrication process did not duplicate full scale manufacturing. It is difficult to scale results from testing subscale cylinders since there are significant differences in out time of the resins relative to full scale cylinder fabrication, differences in hoop fiber tensioning, and unsatisfactory coupon configurations. It appears that development of a new test method for subscale cylinders is merited. Damage tolerance may be improved by material optimization that uses fiber treatments and matrix modifications to control the fiber-matrix interface bonding. It is difficult to develop process optimization in subscale cylinders without also modeling the longer out times resins experience in full scale testing. A major breakthrough in characterizing the effect of impact damage on residual strength, and in understanding how to scale results of subscale evaluations, will be a sound micromechanical model that describes progressive failure of the composite. Such models will utilize a three dimensional stress analysis due to the complex nature of low velocity impact stresses in thick composites. When these models are coupled with non-contact NDE methods that geometrically characterize the damage and acoustic methods that characterize the effective local elastic properties, accurate assessment of residual strength from impact damage may be possible. Directions for further development are suggested.

  20. Assessment of impact damage of composite rocket motor cases

    NASA Astrophysics Data System (ADS)

    Paris, Henry G.

    1994-02-01

    This contract reviewed the available literature on mechanisms of low velocity impact damage in filament wound rocket motor cases, NDE methods to quantify damage, critical coupon level test methods, manufacturing and material process variables, and empirical and analytical modeling of impact damage. The critical design properties for rocket motor cases are biaxial hoop and axial tensile strength. Low velocity impact damage is insidious because it can create serious nonvisible damage at very low impact velocities. In thick rocket motor cases the prevalent low velocity impact damage is fiber fracture and matrix cracking adjacent to the front face. In contrast, low velocity loading of thin wall cylinders induces flexure, depending on span length, and this flexure causes delamination and tensile cracking on the back face wall opposite the impact site due to flexural stresses imposed by impact loading. Important NDE methods for rocket motor cases are non-contacting methods that allow inspection from one side. Among these are vibrothermography and pulse-echo methods based on acoustic-ultrasonic methods. High resolution techniques such as x-ray computed tomography appear to have merit for accurate geometrical characterization of local damage to support development of analytical models of micromechanics. The challenge of coupon level testing is to reproduce the biaxial stress state that the full scale article experiences, and to determine how to scale the composite structure to model full-size behavior. Biaxial tensile testing has been performed by uniaxially tensile loading internally pressurized cylinders. This is experimentally difficult due to gripping problems and pressure containment. Much prior work focused on uniaxial tensile testing of model filament wound cylinders. Interpretation of the results of some studies is complicated by the fact that the fabrication process did not duplicate full scale manufacturing. It is difficult to scale results from testing subscale cylinders since there are significant differences in out time of the resins relative to full scale cylinder fabrication, differences in hoop fiber tensioning, and unsatisfactory coupon configurations. It appears that development of a new test method for subscale cylinders is merited. Damage tolerance may be improved by material optimization that uses fiber treatments and matrix modifications to control the fiber-matrix interface bonding. It is difficult to develop process optimization in subscale cylinders without also modeling the longer out times resins experience in full scale testing. A major breakthrough in characterizing the effect of impact damage on residual strength, and in understanding how to scale results of subscale evaluations, will be a sound micromechanical model that describes progressive failure of the composite.

  1. Using generalized additive (mixed) models to analyze single case designs.

    PubMed

    Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J

    2014-04-01

    This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
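
    The paper provides annotated R syntax; as a language-neutral illustration of just the overdispersion bookkeeping it describes, the Python sketch below fits a binomial GLM (standing in for a GAM, for brevity) to synthetic single-case counts, estimates the dispersion from Pearson residuals, and computes a quasi-AIC. Data, phase structure, and parameter values are all synthetic.

    ```python
    # Sketch of the overdispersion check described above: fit a binomial GLM
    # to single-case counts, estimate the dispersion from Pearson residuals,
    # and compute a quasi-AIC. A GLM stands in for a GAM here for brevity.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n_sessions, n_trials = 40, 20
    phase = (np.arange(n_sessions) >= 20).astype(float)  # baseline vs treatment
    p = 0.3 + 0.25 * phase
    successes = rng.binomial(n_trials, p)                # per-session counts

    X = sm.add_constant(phase)
    res = sm.GLM(np.column_stack([successes, n_trials - successes]),
                 X, family=sm.families.Binomial()).fit()

    c_hat = res.pearson_chi2 / res.df_resid  # dispersion; >1 => overdispersed
    k = res.df_model + 1
    qaic = -2 * res.llf / c_hat + 2 * k
    print(f"dispersion c_hat = {c_hat:.2f}, QAIC = {qaic:.1f}")
    ```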

  2. Modal Survey of ETM-3, A 5-Segment Derivative of the Space Shuttle Solid Rocket Booster

    NASA Technical Reports Server (NTRS)

    Nielsen, D.; Townsend, J.; Kappus, K.; Driskill, T.; Torres, I.; Parks, R.

    2005-01-01

    The complex interactions between internal motor generated pressure oscillations and motor structural vibration modes associated with the static test configuration of a Reusable Solid Rocket Motor have the potential to generate significant dynamic thrust loads in the 5-segment configuration (Engineering Test Motor 3). Finite element model load predictions for worst-case conditions were generated based on extrapolation of a previously correlated 4-segment motor model. A modal survey was performed on the largest rocket motor to date, Engineering Test Motor #3 (ETM-3), to provide data for finite element model correlation and validation of model generated design loads. The modal survey preparation included pretest analyses to determine an efficient analysis set selection using the Effective Independence Method and test simulations to assure critical test stand component loads did not exceed design limits. Historical Reusable Solid Rocket Motor modal testing, ETM-3 test analysis model development and pre-test loads analyses, as well as test execution, and a comparison of results to pre-test predictions are discussed.

  3. Demonstration of risk based, goal driven framework for hydrological field campaigns and inverse modeling with case studies

    NASA Astrophysics Data System (ADS)

    Harken, B.; Geiges, A.; Rubin, Y.

    2013-12-01

    There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and forward modeling and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration, plume travel time, or aquifer recharge rate. These predictions often have significant bearing on some decision that must be made. Examples include: how to allocate limited remediation resources between multiple contaminated groundwater sites, where to place a waste repository site, and what extraction rates can be considered sustainable in an aquifer. Providing an answer to these questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in model parameters, such as hydraulic conductivity, leads to uncertainty in EPM predictions. Often, field campaigns and inverse modeling efforts are planned and undertaken with reduction of parametric uncertainty as the objective. The tool of hypothesis testing allows this to be taken one step further by considering uncertainty reduction in the ultimate prediction of the EPM as the objective and gives a rational basis for weighing costs and benefits at each stage. When using the tool of statistical hypothesis testing, the EPM is cast into a binary outcome. This is formulated as null and alternative hypotheses, which can be accepted and rejected with statistical formality. When accounting for all sources of uncertainty at each stage, the level of significance of this test provides a rational basis for planning, optimization, and evaluation of the entire campaign. Case-specific information, such as the consequences of prediction error and site-specific costs, can be used in establishing selection criteria based on what level of risk is deemed acceptable. This framework is demonstrated and discussed using various synthetic case studies. The case studies involve contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a given location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical value of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. Different field campaigns are analyzed based on effectiveness in reducing the probability of selecting the wrong hypothesis, which in this case corresponds to reducing uncertainty in the prediction of plume arrival time. To examine the role of inverse modeling in this framework, case studies involving both Maximum Likelihood parameter estimation and Bayesian inversion are used.

  4. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.

    PubMed

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene

    2016-04-30

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given that no previous positive test has been obtained prior to the start of that interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection Metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.
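
    The two-level generative structure described here is easy to simulate forward. The sketch below draws yearly infection counts from a Poisson distribution (level one) and allocates each cohort's diagnosis years from a constant annual testing rate (level two, a geometric-delay stand-in for the multinomial formulation). It is the forward model only, not the authors' Bayesian estimation procedure, and all rates are assumed.

    ```python
    # Generative sketch of the two-level structure: yearly HIV infections are
    # Poisson, and each infected person's diagnosis year follows from a
    # constant testing rate. Rates are hypothetical; this is the forward
    # model, not the paper's estimation algorithm.
    import numpy as np

    rng = np.random.default_rng(4)
    years = np.arange(2000, 2010)
    incidence = rng.poisson(1000, size=years.size)  # level 1: infections/year
    p_test = 0.25                                   # annual testing rate (assumed)

    diagnosed = np.zeros(years.size, dtype=int)
    undiagnosed = 0
    for i, n_inf in enumerate(incidence):
        # level 2: allocate each cohort's diagnoses over subsequent years
        delay = rng.geometric(p_test, size=n_inf) - 1  # years until first test
        diag_year = i + delay
        inside = diag_year < years.size
        np.add.at(diagnosed, diag_year[inside], 1)
        undiagnosed += (~inside).sum()

    for y, d in zip(years, diagnosed):
        print(y, d)
    print("still undiagnosed by", years[-1] + 1, ":", undiagnosed)
    ```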

  5. Overview of MSFC AMSD Integrated Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Cummings, Ramona; Russell, Kevin (Technical Monitor)

    2002-01-01

    Structural, thermal, dynamic, and optical models of the NGST AMSD mirror assemblies are being finalized and integrated for predicting cryogenic vacuum test performance of the developing designs. The analyzers in use by the MSFC Modeling and Analysis Team are identified, with an overview of the approach to integrating simulated effects. Guidelines for verifying the individual models, and calibration cases for comparison with the vendors' analyses, are presented. In addition, baseline and proposed additional scenarios for the cryogenic vacuum testing are briefly described.

  6. A broad scope knowledge based model for optimization of VMAT in esophageal cancer: validation and assessment of plan quality among different treatment centers.

    PubMed

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca

    2015-10-31

    To evaluate the performance of a broad scope model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 previously treated patients from two different institutions was selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients from the same institution and from another clinic not providing patients for the training phase. Comparison of the automated plans was done against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3 %) the reference plans failed to respect the constraints while the model-based plans succeeded. Only in 3 cases (<0.5 %) the reference plans passed the criteria while the model-based failed. In 5.3 % of the cases both groups of plans failed and in the remaining cases both passed the tests. Plans were optimised using a broad scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements when compared to the benchmark data. In particular, the plans optimised for patients from the third centre, which did not participate in the training, were of superior quality. The data suggest that the new engine is reliable and could encourage its application to clinical practice.

  7. Pretest Plan for a Quarter Scale AFT Segment of the SRB Filament Wound Case in the NSWC Hydroballistics Facility. [space shuttle boosters

    NASA Technical Reports Server (NTRS)

    Adoue, J. A.

    1984-01-01

    In support of preflight design loads definition, preliminary water impact scale model tests are being conducted for the space shuttle rocket boosters. The model to be used, as well as the instrumentation, test facilities, and test procedures, are described for water impact tests conducted at conditions simulating full-scale initial impact at vertical velocities from 65 to 85 ft/sec, zero horizontal velocity, and impact angles of 0, 5, and 10 degrees.

  8. Finite-sample and asymptotic sign-based tests for parameters of non-linear quantile regression with Markov noise

    NASA Astrophysics Data System (ADS)

    Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.

    2017-01-01

    One of the most noticeable features of sign-based statistical procedures is the opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we expanded the sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about regression parameters, but about noise parameters as well.
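
    The key property exploited by sign-based procedures is distribution-free exactness: at the true parameter value, residual signs at the median are i.i.d. Bernoulli(1/2), so the count of positive residuals is exactly binomial whatever the noise law. The sketch below shows this simplest i.i.d. case for a hypothesized parameter of a nonlinear median regression (the paper's contribution extends such tests to dependent, Markov noise); the model and data are hypothetical.

    ```python
    # Core idea behind exact sign-based testing: under H0 (true parameter
    # value), residual signs at the median are i.i.d. Bernoulli(1/2), so the
    # number of positive residuals is exactly Binomial(n, 1/2). Minimal
    # sketch for a hypothesized nonlinear median-regression parameter.
    import numpy as np
    from scipy.stats import binomtest

    rng = np.random.default_rng(5)
    x = rng.uniform(0, 5, 200)
    y = np.exp(0.5 * x) + rng.standard_t(3, size=200)  # heavy-tailed, median-0

    theta0 = 0.5                 # H0: theta = 0.5 in y = exp(theta * x) + e
    signs_pos = int(np.sum(y - np.exp(theta0 * x) > 0))
    res = binomtest(signs_pos, n=200, p=0.5)
    print(f"positive residuals: {signs_pos}/200, exact p-value = {res.pvalue:.3f}")
    ```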

  9. Finite element modeling of ROPS in static testing and rear overturns.

    PubMed

    Harris, J R; Mucino, V H; Etherton, J R; Snyder, K A; Means, K H

    2000-08-01

    Even with the technological advances of the last several decades, agricultural production remains one of the most hazardous occupations in the United States. Death due to tractor rollover is a prime contributor to this hazard. Standards for rollover protective structures (ROPS) performance and certification have been developed by groups such as the Society of Automotive Engineers (SAE) and the American Society of Agricultural Engineers (ASAE) to combat these problems. The current ROPS certification standard, SAE J2194, requires either a dynamic or static testing sequence or both. Although some ROPS manufacturers perform both the dynamic and static phases of SAE J2194 testing, it is possible for a ROPS to be certified for field operation using static testing alone. This research compared ROPS deformation response from a simulated SAE J2194 static loading sequence to ROPS deformation response as a result of a simulated rearward tractor rollover. Finite element analysis techniques for plastic deformation were used to simulate both the static and dynamic rear rollover scenarios. Stress results from the rear rollover model were compared to results from simulated static testing per SAE J2194. Maximum stress values from simulated rear rollovers exceeded maximum stress values recorded during simulated static testing for half of the elements comprising the uprights. In the worst case, the static model underpredicts dynamic model results by approximately 7%. In the best case, the static model overpredicts dynamic model results by approximately 32%. These results suggest the need for additional experimental work to characterize ROPS stress levels during staged overturns and during testing according to the SAE standard.

  10. Preliminary dynamic tests of a flight-type ejector

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1992-01-01

    A thrust augmenting ejector was tested to provide experimental data to assist in the assessment of theoretical models used to predict duct and ejector fluid-dynamic characteristics. Eleven full-scale thrust augmenting ejector tests were conducted in which a rapid increase in the ejector nozzle pressure ratio was effected through a unique facility bypass/burst-disk subsystem. The present work examines two cases representative of the test performance window. In the first case, the primary nozzle pressure ratio (NPR) increased 36 percent from one unchoked (NPR = 1.29) primary flow condition to another (NPR = 1.75) over a 0.15 second interval. The second case involves choked primary flow conditions, where a 17 percent increase in primary nozzle flowrate (from NPR = 2.35 to NPR = 2.77) occurred over approximately 0.1 seconds. Although the real-time signal measurements support qualitative remarks on ejector performance, extracting quantitative ejector dynamic response was impeded by excessive aerodynamic noise and thrust stand dynamic (resonance) characteristics. It does appear, however, that a quasi-steady performance assumption is valid for this model with primary nozzle pressure increases on the order of 50 lb(sub f)/s. Transient signal treatment of the present dataset is discussed and initial interpretations of the results are compared with theoretical predictions for a similar Short Takeoff and Vertical Landing (STOVL) ejector model.

  11. Mars Science Laboratory Rover System Thermal Test

    NASA Technical Reports Server (NTRS)

    Novak, Keith S.; Kempenaar, Joshua E.; Liu, Yuanming; Bhandari, Pradeep; Dudik, Brenda A.

    2012-01-01

    On November 26, 2011, NASA launched a large (900 kg) rover as part of the Mars Science Laboratory (MSL) mission to Mars. The MSL rover is scheduled to land on Mars on August 5, 2012. Prior to launch, the Rover was successfully operated in simulated mission extreme environments during a 16-day long Rover System Thermal Test (STT). This paper describes the MSL Rover STT, test planning, test execution, test results, thermal model correlation and flight predictions. The rover was tested in the JPL 25-Foot Diameter Space Simulator Facility at the Jet Propulsion Laboratory (JPL). The Rover operated in simulated Cruise (vacuum) and Mars Surface environments (8 Torr nitrogen gas) with mission extreme hot and cold boundary conditions. A Xenon lamp solar simulator was used to impose simulated solar loads on the rover during a bounding hot case and during a simulated Mars diurnal test case. All thermal hardware was exercised and performed nominally. The Rover Heat Rejection System, a liquid-phase fluid loop used to transport heat in and out of the electronics boxes inside the rover chassis, performed better than predicted. Steady state and transient data were collected to allow correlation of analytical thermal models. These thermal models were subsequently used to predict rover thermal performance for the MSL Gale Crater landing site. Models predict that critical hardware temperatures will be maintained within allowable flight limits over the entire 669 Sol surface mission.

  12. A Novel Wind Speed Forecasting Model for Wind Farms of Northwest China

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Zhou; Wang, Yun

    2017-01-01

    Wind resources are becoming increasingly significant due to their clean and renewable characteristics, and the integration of wind power into existing electricity systems is imminent. To maintain a stable power supply system that takes into account the stochastic nature of wind speed, accurate wind speed forecasting is pivotal. However, no single model can be applied to all cases. Recent studies show that wind speed forecasting errors are approximately 25% to 40% in Chinese wind farms. Presently, hybrid wind speed forecasting models are widely used and have been verified to perform better than conventional single forecasting models, not only in short-term wind speed forecasting but also in long-term forecasting. In this paper, a hybrid forecasting model is developed: the Similar Coefficient Sum (SCS) and Hermite Interpolation are exploited to process the original wind speed data, and an SVM model, whose parameters are tuned by an artificial intelligence model, is built to make forecasts. The results of case studies show that the MAPE value of the hybrid model varies from 22.96% to 28.87%, and the MAE value varies from 0.47 m/s to 1.30 m/s. Generally, the Sign test, Wilcoxon's Signed-Rank test, and Morgan-Granger-Newbold test indicate that the proposed model is different from the compared models.
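
    As a hedged illustration of the SVM forecasting step alone, the sketch below predicts wind speed from its recent lags with scikit-learn's SVR, using a grid search over C and gamma as a stand-in for the paper's artificial-intelligence parameter tuner; the SCS and Hermite-interpolation preprocessing are omitted, and all data are synthetic.

    ```python
    # Sketch of the SVM forecasting step only: predict wind speed from its
    # recent lags, tuning C and gamma by grid search as a stand-in for the
    # paper's artificial-intelligence tuner. Preprocessing is omitted and
    # all data are synthetic.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

    rng = np.random.default_rng(6)
    t = np.arange(2000)
    speed = 8 + 3 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 1, t.size)

    lags = 6
    X = np.column_stack([speed[i:i - lags] for i in range(lags)])  # lag matrix
    y = speed[lags:]

    grid = GridSearchCV(SVR(kernel="rbf"),
                        {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
                        cv=TimeSeriesSplit(n_splits=5))
    grid.fit(X[:-200], y[:-200])
    pred = grid.predict(X[-200:])
    mae = np.mean(np.abs(pred - y[-200:]))
    print(f"best params: {grid.best_params_}, hold-out MAE = {mae:.2f} m/s")
    ```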

  13. How TK-TD and population models for aquatic macrophytes could support the risk assessment for plant protection products.

    PubMed

    Hommen, Udo; Schmitt, Walter; Heine, Simon; Brock, Theo Cm; Duquesne, Sabine; Manson, Phil; Meregalli, Giovanna; Ochoa-Acuña, Hugo; van Vliet, Peter; Arts, Gertie

    2016-01-01

    This case study of the Society of Environmental Toxicology and Chemistry (SETAC) workshop MODELINK demonstrates the potential use of mechanistic effects models for macrophytes to extrapolate from effects of a plant protection product observed in laboratory tests to effects resulting from dynamic exposure on macrophyte populations in edge-of-field water bodies. A standard European Union (EU) risk assessment for an example herbicide based on macrophyte laboratory tests indicated risks for several exposure scenarios. Three of these scenarios are further analyzed using effect models for 2 aquatic macrophytes, the free-floating standard test species Lemna sp., and the sediment-rooted submerged additional standard test species Myriophyllum spicatum. Both models include a toxicokinetic (TK) part, describing uptake and elimination of the toxicant, a toxicodynamic (TD) part, describing the internal concentration-response function for growth inhibition, and a description of biomass growth as a function of environmental factors to allow simulating seasonal dynamics. The TK-TD models are calibrated and tested using laboratory tests, whereas the growth models were assumed to be fit for purpose based on comparisons of predictions with typical growth patterns observed in the field. For the risk assessment, biomass dynamics are predicted for the control situation and for several exposure levels. Based on specific protection goals for macrophytes, preliminary example decision criteria are suggested for evaluating the model outputs. The models refined the risk indicated by lower tier testing for 2 exposure scenarios, while confirming the risk associated with the third. Uncertainties related to the experimental and the modeling approaches and their application in the risk assessment are discussed. Based on this case study and the assumption that the models prove suitable for risk assessment once fully evaluated, we recommend that 1) ecological scenarios be developed that are also linked to the exposure scenarios, and 2) quantitative protection goals be set to facilitate the interpretation of model results for risk assessment. © 2015 SETAC.
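
    The TK-TD structure described above can be written down compactly. The sketch below couples a one-compartment toxicokinetic equation (uptake and elimination of the toxicant) to growth inhibited through a log-logistic concentration-response, under a pulsed exposure. It is not the calibrated Lemna or Myriophyllum model; every parameter value is illustrative.

    ```python
    # Minimal one-compartment TK-TD sketch in the spirit described above:
    # toxicant uptake/elimination plus growth inhibition via a log-logistic
    # concentration-response. Not the calibrated Lemna or Myriophyllum
    # models; every parameter value is illustrative.
    import numpy as np
    from scipy.integrate import solve_ivp

    k_in, k_out = 0.5, 0.2   # uptake / elimination rate constants [1/d]
    r_max = 0.3              # unstressed relative growth rate [1/d]
    ec50, slope = 5.0, 2.0   # TD parameters of the log-logistic response

    def c_ext(t):
        # dynamic edge-of-field exposure: a pulse between day 5 and day 15
        return 10.0 if 5.0 <= t <= 15.0 else 0.0

    def rhs(t, y):
        c_int, biomass = y
        inhibition = 1.0 / (1.0 + (c_int / ec50) ** slope)
        return [k_in * c_ext(t) - k_out * c_int,  # TK: internal concentration
                r_max * inhibition * biomass]     # TD: inhibited growth

    sol = solve_ivp(rhs, (0, 40), [0.0, 1.0], max_step=0.1)
    print("biomass at day 40:", sol.y[1, -1])
    ```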

  14. The price of performance: a cost and performance analysis of the implementation of cell-free fetal DNA testing for Down syndrome in Ontario, Canada.

    PubMed

    Okun, N; Teitelbaum, M; Huang, T; Dewa, C S; Hoch, J S

    2014-04-01

    To examine the cost and performance implications of introducing cell-free fetal DNA (cffDNA) testing within modeled scenarios in a publicly funded Canadian provincial Down syndrome (DS) prenatal screening program. Two clinical algorithms were created: the first to represent the current screening program and the second to represent one that incorporates cffDNA testing. From these algorithms, eight distinct scenarios were modeled to examine: (1) the current program (no cffDNA), (2) the current program with first trimester screening (FTS) as the nuchal translucency-based primary screen (no cffDNA), (3) a program substituting current screening with primary cffDNA, (4) contingent cffDNA with current FTS performance, (5) contingent cffDNA at a fixed price to result in overall cost neutrality, (6) contingent cffDNA with an improved detection rate (DR) of FTS, (7) contingent cffDNA with higher uptake of FTS, and (8) contingent cffDNA with optimized FTS (higher uptake and improved DR). This modeling study demonstrates that introducing contingent cffDNA testing improves performance by increasing the number of cases of DS detected prenatally, and reducing the number of amniocenteses performed and concomitant iatrogenic pregnancy loss of pregnancies not affected by DS. Costs are modestly increased, although the cost per case of DS detected is decreased with contingent cffDNA testing. Contingent models of cffDNA testing can improve overall screening performance while maintaining the provision of an 11- to 13-week scan. Costs are modestly increased, but cost per prenatally detected case of DS is decreased. © 2013 John Wiley & Sons, Ltd.

  15. Estimating the cost-effectiveness of detecting cases of chronic hepatitis C infection on reception into prison

    PubMed Central

    Sutton, Andrew J; Edmunds, W John; Gill, O Noel

    2006-01-01

    Background: In England and Wales, where less than 1% of the population are injecting drug users (IDUs), 97% of HCV reports are attributed to injecting drug use. As over 60% of the IDU population will have been imprisoned by the age of 30 years, prison may provide a good location in which to offer HCV screening and treatment. The aim of this work is to examine the cost effectiveness of a number of alternative HCV case-finding strategies on prison reception. Methods: A decision analysis model embedded in a model of the flow of IDUs through prison was used to estimate the cost effectiveness of a number of alternative case-finding strategies. The model estimates the average cost of identifying a new case of HCV from the perspective of the health care provider and how these estimates may evolve over time. Results: The results suggest that administering verbal screening for a past positive HCV test and for ever having engaged in illicit drug use, prior to the administering of ELISA and PCR tests, can have a significant impact on the cost effectiveness of HCV case-finding strategies on prison reception; the discounted cost in 2017 being £2,102 per new HCV case detected, compared to £3,107 when no verbal screening is employed. Conclusion: The work here demonstrates the importance of targeting those individuals who have ever engaged in illicit drug use for HCV testing in prisons; these individuals can then be targeted for future intervention measures such as treatment, or monitored to prevent future transmission. PMID:16803622

  16. Corrected goodness-of-fit test in covariance structure analysis.

    PubMed

    Hayakawa, Kazuhiko

    2018-05-17

    Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Artificial neural networks for modeling ammonia emissions released from sewage sludge composting

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Dach, J.; Pilarski, K.; Piekarska-Boniecka, H.

    2012-09-01

    The project was designed to develop, test and validate an original Neural Model describing ammonia emissions generated in composting sewage sludge. The composting mix was to include the addition of such selected structural ingredients as cereal straw, sawdust and tree bark. All created neural models contain 7 input variables (chemical and physical parameters of composting) and 1 output (ammonia emission). The data file was subdivided into three subfiles: the learning file (ZU) containing 330 cases, the validation file (ZW) containing 110 cases and the test file (ZT) containing 110 cases. The standard deviation ratios (for all 4 created networks) ranged from 0.193 to 0.218. For all of the selected models, the correlation coefficient reached the high values of 0.972-0.981. The results show that the predictive neural model describing ammonia emissions from composted sewage sludge is well suited for assessing such emissions. The sensitivity analysis of the model for the input variables of the process in question has shown that the key parameters describing ammonia emissions released in composting sewage sludge are pH and the carbon to nitrogen ratio (C:N).
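
    The reported setup (7 inputs, 1 output, a 330/110/110 learning/validation/test split) is straightforward to reproduce in outline. In the sketch below, scikit-learn's MLPRegressor stands in for the original neural tool, and the data are synthetic, with two dominant inputs loosely mirroring the paper's pH and C:N sensitivity finding.

    ```python
    # Sketch of the model structure described above: 7 process inputs, one
    # ammonia-emission output, and a 330/110/110 learn/validate/test split.
    # MLPRegressor stands in for the original neural tool; data are synthetic.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(7)
    X = rng.normal(size=(550, 7))                 # 7 physicochemical inputs
    y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.2, 550)  # two dominate

    X_zu, y_zu = X[:330], y[:330]                 # ZU: learning file
    X_zw, y_zw = X[330:440], y[330:440]           # ZW: validation file
    X_zt, y_zt = X[440:], y[440:]                 # ZT: test file

    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    net.fit(X_zu, y_zu)
    print("validation r:", np.corrcoef(y_zw, net.predict(X_zw))[0, 1])
    print("test r:      ", np.corrcoef(y_zt, net.predict(X_zt))[0, 1])
    ```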

  18. A prevalence-based association test for case-control studies.

    PubMed

    Ryckman, Kelli K; Jiang, Lan; Li, Chun; Bartlett, Jacquelaine; Haines, Jonathan L; Williams, Scott M

    2008-11-01

    Genetic association is often determined in case-control studies by the differential distribution of alleles or genotypes. Recent work has demonstrated that association can also be assessed by deviations from the expected distributions of alleles or genotypes. Specifically, multiple methods motivated by the principles of Hardy-Weinberg equilibrium (HWE) have been developed. However, these methods do not take into account many of the assumptions of HWE. Therefore, we have developed a prevalence-based association test (PRAT) as an alternative method for detecting association in case-control studies. This method, also motivated by the principles of HWE, uses an estimated population allele frequency to generate expected genotype frequencies instead of using the case and control frequencies separately. Our method often has greater power, under a wide variety of genetic models, to detect association than genotypic, allelic or Cochran-Armitage trend association tests. Therefore, we propose PRAT as a powerful alternative method of testing for association.
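
    As an illustration of the underlying idea only (not the published PRAT statistic), the sketch below estimates the allele frequency from the pooled sample, forms Hardy-Weinberg-expected genotype counts, and measures each group's deviation with a chi-square statistic; the counts are hypothetical and the degrees of freedom heuristic.

    ```python
    # Illustration of the underlying idea only (NOT the published PRAT
    # statistic): estimate the allele frequency from the pooled sample, form
    # HWE-expected genotype counts, and measure each group's deviation.
    import numpy as np
    from scipy.stats import chi2

    def hwe_deviation(genotype_counts, p):
        n = genotype_counts.sum()
        expected = n * np.array([p ** 2, 2 * p * (1 - p), (1 - p) ** 2])
        return ((genotype_counts - expected) ** 2 / expected).sum()

    cases = np.array([60, 120, 120])     # counts of AA / Aa / aa (hypothetical)
    controls = np.array([40, 160, 200])

    pooled = cases + controls
    p_hat = (2 * pooled[0] + pooled[1]) / (2 * pooled.sum())  # pooled freq.

    stat = hwe_deviation(cases, p_hat) + hwe_deviation(controls, p_hat)
    print(f"p_hat = {p_hat:.3f}, combined deviation = {stat:.2f}, "
          f"approx. p = {chi2.sf(stat, df=2):.4f}")  # df is heuristic here
    ```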

  19. Finite element simulation and analysis of local stress concentration in polymers with a nonlinear viscoelastic constitutive model

    NASA Astrophysics Data System (ADS)

    Chea, Limdara O.

    Given a nonlinear viscoelastic (NLVE) constitutive model for a polymer, this numerical study aims at simulating local stress concentrations in a boundary value problem with a corner stress singularity. A rectangular sample of Polyvinyl Acetate (PVAc)-like cross-linked polymer clamped by two metallic rigid grips and subjected to a compression and tension load is numerically simulated. A modified version of the finite element code FEAP, which incorporated an NLVE model based on the free volume theory, was used. First, the program was validated by comparing numerical and analytical results. Two simple mechanical tests (a uniaxial and a simple shear test) were performed on a Standard Linear Solid material model, using a linear viscoelastic (LVE) constitutive model. The LVE model was obtained by setting the proportionality coefficient [...] to zero in the free volume theory equations. Second, the LVE model was used on the corner singularity boundary value problem for three material models with different bulk relaxation functions K(t). The time-dependent stress field distribution was investigated using two sets of plots: the stress distribution contour plots and the stress time curves. Third, using the NLVE constitutive model, compression and tension cases were compared using the stress results (normal stress [...] and shear stress [...]). These two cases assessed the effect of the creep retardation-creep acceleration phenomena. The shift between the beginning of the relaxation moduli was shown to play an important role. This parameter strongly affects the fluctuation pattern of the stress curves. For two different shift values, in one case, the stress response presents a 'double peak' and 'stress inversion' characteristic whereas, in the other case, it presents a 'single peak' and no 'inversion'. Another important factor was the material's compressibility. In the case of a nearly-incompressible material, the LVE and NLVE models yielded identical results; thus, the simpler LVE model is preferable. However, in the case of sufficient volume dilatation (or contraction), the NLVE model predicted correct characteristic responses, whereas LVE results were erroneous. This proves the necessity of using the NLVE model over the LVE model.

  20. Cost-effectiveness of one-time genetic testing to minimize lifetime adverse drug reactions.

    PubMed

    Alagoz, O; Durham, D; Kasirajan, K

    2016-04-01

    We evaluated the cost-effectiveness of one-time pharmacogenomic testing for preventing adverse drug reactions (ADRs) over a patient's lifetime. We developed a Markov-based Monte Carlo microsimulation model to represent the ADR events in the lifetime of each patient. The base-case considered a 40-year-old patient. We measured health outcomes in life years (LYs) and quality-adjusted LYs (QALYs) and estimated costs using 2013 US$. In the base-case, one-time genetic testing had an incremental cost-effectiveness ratio (ICER) of $43,165 (95% confidence interval (CI) is ($42,769,$43,561)) per additional LY and $53,680 per additional QALY (95% CI is ($53,182,$54,179)), hence under the base-case one-time genetic testing is cost-effective. The ICER values were most sensitive to the average probability of death due to ADR, reduction in ADR rate due to genetic testing, mean ADR rate and cost of genetic testing.
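
    The ICER arithmetic behind figures like these is a one-liner: incremental cost divided by incremental effectiveness between the two strategies, compared against a willingness-to-pay threshold. The inputs below are illustrative placeholders, not the paper's estimates.

    ```python
    # The ICER arithmetic behind figures like those above: incremental cost
    # divided by incremental effectiveness between testing and no-testing
    # strategies. All inputs are illustrative, not the paper's estimates.
    cost_test, qaly_test = 182_000.0, 21.30   # lifetime cost / QALYs with testing
    cost_none, qaly_none = 175_000.0, 21.17   # without one-time genetic testing

    icer = (cost_test - cost_none) / (qaly_test - qaly_none)
    wtp = 100_000.0                           # willingness-to-pay ($/QALY)
    print(f"ICER = ${icer:,.0f} per QALY -> "
          f"{'cost-effective' if icer < wtp else 'not cost-effective'}")
    ```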

  1. Modified Multiple Model Adaptive Estimation (M3AE) for Simultaneous Parameter and State Estimation

    DTIC Science & Technology

    1998-03-01

    Front matter and table-of-contents fragments only (no abstract is available in the record). The recovered table of contents lists: Chapter 1, Introduction (1.1 Overview; 1.2 Background, including 1.2.1 The Chi-Square Test, 1.2.2 Generalized Likelihood Ratio (GLR) Testing, and 1.2.3 Multiple ...); 4.1.2 M3AE Covariance Analysis; and 4.1.3 Simulations and Performance Analysis, with 4.1.3.1 Test Case 1 (aT = 32.0) and 4.1.3.2 Test Case 2 (aT = 37.89, ...).

  2. Proteomic Approach for Diagnostic Applications in Head and Neck Cancer — EDRN Public Portal

    Cancer.gov

    To evaluate the test characteristics of a panel of biomarkers for identifying patients with early stage head and neck squamous cell carcinoma (HNSCC). The primary endpoints are sensitivity, specificity and accuracy of the marker panel. This study of the test characteristics of a modeling strategy for diagnosing HNSCC uses a case-control design, with several types of cases and several types of controls.

  3. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    USGS Publications Warehouse

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
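
    The core DELSA computation, as described here, evaluates first-order local sensitivity indices at many sampled parameter sets. The sketch below does this with central finite differences on a toy two-parameter nonlinear reservoir (echoing the paper's first test case), scaling squared derivatives by uniform-prior parameter variances; it is a schematic reading of the method, not the USGS implementation.

    ```python
    # Sketch of the DELSA idea: at many sampled parameter sets, compute
    # first-order local sensitivity indices from finite-difference
    # derivatives scaled by prior parameter variances. A toy nonlinear
    # reservoir stands in for the hydrologic models; not the USGS code.
    import numpy as np

    def model(theta):
        k, n = theta               # toy 2-parameter nonlinear reservoir
        storage = 10.0
        return (storage ** n) / k  # outflow, the quantity of interest

    rng = np.random.default_rng(8)
    bounds = np.array([[1.0, 20.0], [0.5, 2.0]])       # prior ranges for k, n
    var = ((bounds[:, 1] - bounds[:, 0]) ** 2) / 12.0  # uniform-prior variances

    samples = rng.uniform(bounds[:, 0], bounds[:, 1], size=(1000, 2))
    eps = 1e-4
    indices = np.empty_like(samples)
    for i, theta in enumerate(samples):
        grad = np.array([(model(theta + eps * e) - model(theta - eps * e))
                         / (2 * eps) for e in np.eye(2)])
        contrib = grad ** 2 * var   # first-order variance contributions
        indices[i] = contrib / contrib.sum()

    print("median DELSA-style index for k, n:", np.median(indices, axis=0))
    ```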

  4. Planning a Study for Testing the Rasch Model given Missing Values due to the use of Test-booklets.

    PubMed

    Yanagida, Takuya; Kubinger, Klaus D; Rasch, Dieter

    2015-01-01

    Although calibration of an achievement test within a psychological and educational context is very often carried out with the Rasch model, data sampling is rarely designed according to statistical foundations. However, Kubinger, Rasch, and Yanagida (2009, 2011) suggested an approach for determining sample size according to given Type-I and Type-II risks and a certain effect of model contradiction when testing the Rasch model. The approach uses a three-way analysis of variance design with mixed classification. Until now, their simulation studies have dealt with complete data, meaning every examinee is administered all of the items of an item pool. The simulation study presented in this paper deals with the practically relevant case, in particular for large-scale assessments, in which item presentation uses several test-booklets. As a consequence, there are missing values by design. The question, therefore, is whether the approach works in this case as well. Besides the fact that the data are not normally distributed but dichotomous (an examinee either solves an item or fails to solve it), only a single entry exists for each cell of the given three-way analysis of variance design, if at all, due to missing values. Hence, the required distribution of the test statistic may not be retained, in contrast to the case of no missing values. The result of our simulation study, though it applies only to a very special scenario, is that the approach does work: whether test-booklets are used or every examinee is administered all of the items changes nothing with respect to the actual Type-I risk or the power of the test, given almost the same amount of information from examinees per item. However, as the results are limited to a special scenario, we currently recommend that interested researchers simulate the appropriate scenario for their own study in advance.
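
    A minimal sketch of the data-generating step of such a simulation study: dichotomous Rasch responses with missingness imposed by a hypothetical three-booklet design. Sample sizes and difficulty ranges are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)
n_persons, n_items, n_booklets = 600, 30, 3     # illustrative sizes

theta = rng.normal(0.0, 1.0, n_persons)         # person abilities
b = rng.uniform(-2.0, 2.0, n_items)             # item difficulties

# Rasch model: P(X = 1) = exp(theta - b) / (1 + exp(theta - b))
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
X = (rng.random((n_persons, n_items)) < p).astype(float)

# Booklet design: each examinee skips one of three item blocks,
# producing missing values by design.
blocks = np.array_split(np.arange(n_items), n_booklets)
skipped = rng.integers(0, n_booklets, n_persons)
for person, block_id in enumerate(skipped):
    X[person, blocks[block_id]] = np.nan

print("observed responses per item:", np.sum(~np.isnan(X), axis=0))
```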

  5. Detection of antibiotic resistance is essential for gonorrhoea point-of-care testing: a mathematical modelling study.

    PubMed

    Fingerhuth, Stephanie M; Low, Nicola; Bonhoeffer, Sebastian; Althaus, Christian L

    2017-07-26

    Antibiotic resistance is threatening to make gonorrhoea untreatable. Point-of-care (POC) tests that detect resistance promise individually tailored treatment, but might lead to more treatment and higher levels of resistance. We investigate the impact of POC tests on antibiotic-resistant gonorrhoea. We used data about the prevalence and incidence of gonorrhoea in men who have sex with men (MSM) and heterosexual men and women (HMW) to calibrate a mathematical gonorrhoea transmission model. With this model, we simulated four clinical pathways for the diagnosis and treatment of gonorrhoea: POC test with (POC+R) and without (POC-R) resistance detection, culture and nucleic acid amplification tests (NAATs). We calculated the proportion of resistant infections and cases averted after 5 years, and compared how fast resistant infections spread in the populations. The proportion of resistant infections after 30 years is lowest for POC+R (median MSM: 0.18%, HMW: 0.12%), and increases for culture (MSM: 1.19%, HMW: 0.13%), NAAT (MSM: 100%, HMW: 99.27%), and POC-R (MSM: 100%, HMW: 99.73%). Per 100 000 persons, NAAT leads to 36 366 (median MSM) and 1228 (median HMW) observed cases after 5 years. Compared with NAAT, POC+R averts more cases after 5 years (median MSM: 3353, HMW: 118). POC tests that detect resistance with intermediate sensitivity slow down resistance spread more than NAAT. POC tests with very high sensitivity for the detection of resistance are needed to slow down resistance spread more than by using culture. POC with high sensitivity to detect antibiotic resistance can keep gonorrhoea treatable longer than culture or NAAT. POC tests without reliable resistance detection should not be introduced because they can accelerate the spread of antibiotic-resistant gonorrhoea.
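
    The qualitative mechanism can be sketched with a deliberately crude two-strain SIS model: if resistant infections are detected and treated effectively, both strains are suppressed; if not, the resistant strain spreads unchecked. All rates below are invented, and the structure is far simpler than the paper's calibrated transmission model.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 1.5, 0.5   # per year: transmission and natural clearance (invented)

def rhs(t, y, detect_resistance):
    """Two-strain SIS: susceptibles, sensitive and resistant infections."""
    s, i_sens, i_res = y
    treat_sens = 2.0                               # treatment clears the sensitive strain
    treat_res = 2.0 if detect_resistance else 0.0  # clears resistant strain only if detected
    new_sens, new_res = beta * s * i_sens, beta * s * i_res
    rec_sens = (gamma + treat_sens) * i_sens
    rec_res = (gamma + treat_res) * i_res
    return [-new_sens - new_res + rec_sens + rec_res,
            new_sens - rec_sens,
            new_res - rec_res]

y0 = [0.97, 0.029, 0.001]   # initially ~3.3% of infections are resistant
for detect in (True, False):
    sol = solve_ivp(rhs, (0, 5), y0, args=(detect,))
    s, i_sens, i_res = sol.y[:, -1]
    print(f"resistance detection={detect}: "
          f"resistant fraction after 5 y = {i_res / (i_sens + i_res):.1%}")
```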

  6. Probabilistic Flexural Fatigue in Plain and Fiber-Reinforced Concrete.

    PubMed

    Ríos, José D; Cifuentes, Héctor; Yu, Rena C; Ruiz, Gonzalo

    2017-07-07

    The objective of this work is two-fold. First, we attempt to fit the experimental data on the flexural fatigue of plain and fiber-reinforced concrete with a probabilistic model (Saucedo, Yu, Medeiros, Zhang and Ruiz, Int. J. Fatigue, 2013, 48, 308-318). This model was validated for compressive fatigue at various loading frequencies, but not for flexural fatigue. Since the model is probabilistic, it is not necessarily related to the specific mechanism of fatigue damage, but rather generically explains the fatigue distribution in concrete (plain or reinforced with fibers) for damage under compression, tension or flexion. In this work, more than 100 series of flexural fatigue tests in the literature are fit with excellent results. Since the distribution of monotonic tests was not available in the majority of cases, a two-step procedure is established to estimate the model parameters based solely on fatigue tests. The coefficient of regression was more than 0.90 except for particular cases where not all tests were strictly performed under the same loading conditions, which confirms the applicability of the model to flexural fatigue data analysis. Moreover, the model parameters are closely related to fatigue performance, which demonstrates the predictive capacity of the model. For instance, the scale parameter is related to flexural strength, which improves with the addition of fibers. Similarly, fiber increases the scattering of fatigue life, which is reflected by the decreasing shape parameter.
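
    A minimal sketch of the kind of two-parameter Weibull fit that underlies such a probabilistic fatigue analysis, using synthetic fatigue lives rather than the paper's data, and scipy's built-in fitter rather than the authors' two-step procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic fatigue lives (cycles to failure) at one stress level -- invented.
plain = stats.weibull_min.rvs(c=1.8, scale=2.0e5, size=15, random_state=rng)
fiber = stats.weibull_min.rvs(c=1.1, scale=5.0e5, size=15, random_state=rng)

for name, lives in [("plain", plain), ("fiber-reinforced", fiber)]:
    shape, _, scale = stats.weibull_min.fit(lives, floc=0.0)
    print(f"{name}: shape = {shape:.2f}, scale = {scale:.3g} cycles")
# Pattern reported in the paper: fibers raise the scale parameter (longer
# characteristic life) while lowering the shape parameter (more scatter).
```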

  8. A model for family-based case-control studies of genetic imprinting and epistasis.

    PubMed

    Li, Xin; Sui, Yihan; Liu, Tian; Wang, Jianxin; Li, Yongci; Lin, Zhenwu; Hegarty, John; Koltun, Walter A; Wang, Zuoheng; Wu, Rongling

    2014-11-01

    Genetic imprinting, also called the parent-of-origin effect, has been recognized to play an important role in the formation and pathogenesis of human diseases. Although the epigenetic mechanisms that establish genetic imprinting have been a focus of many genetic studies, our knowledge about the number of imprinted genes and their chromosomal locations and interactions with other genes is still scarce, limiting precise inference of the genetic architecture of complex diseases. In this article, we present a statistical model for testing and estimating the effects of genetic imprinting on complex diseases using a commonly used case-control design with family structure. For each subject sampled from the case and control populations, we not only genotype its own single nucleotide polymorphisms (SNPs) but also collect its parents' genotypes. By tracing the transmission pattern of SNP alleles from the parental to the offspring generation, the model allows the characterization of genetic imprinting effects based on Pearson tests of a 2 × 2 contingency table. The model is expanded to test the interactions between imprinting effects and additive, dominant and epistatic effects in a complex web of genetic interactions. Statistical properties of the model are investigated, and its practical usefulness is validated by a real data analysis. The model will provide a useful tool for genome-wide association studies aimed at elucidating the picture of genetic control over complex human diseases.
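
    The core inferential step, a Pearson test on a 2 × 2 table of parental origin of the transmitted allele versus disease status, can be sketched as follows; the counts are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented 2x2 table: parental origin of the transmitted risk allele (rows)
# against case/control status (columns).
#                  cases  controls
table = np.array([[120,    80],     # allele inherited from the mother
                  [ 70,   110]])    # allele inherited from the father

chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"Pearson chi-square = {chi2:.2f}, df = {dof}, p = {p:.2g}")
# Asymmetric transmission by parental origin between cases and controls
# would point to a parent-of-origin (imprinting) effect.
```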

  9. An artificial neural network prediction model of congenital heart disease based on risk factors: A hospital-based case-control study.

    PubMed

    Li, Huixia; Luo, Miyang; Zheng, Jianfei; Luo, Jiayou; Zeng, Rong; Feng, Na; Du, Qiyun; Fang, Junqun

    2017-02-01

    An artificial neural network (ANN) model was developed to predict the risks of congenital heart disease (CHD) in pregnant women. This hospital-based case-control study involved 119 CHD cases and 239 controls, all recruited from birth defect surveillance hospitals in Hunan Province between July 2013 and June 2014. All subjects were interviewed face-to-face to fill in a questionnaire that covered 36 CHD-related variables. The 358 subjects were randomly divided into a training set and a testing set at a ratio of 85:15. The training set was used to identify the significant predictors of CHD by univariate logistic regression analyses and to develop a standard feed-forward back-propagation neural network (BPNN) model for the prediction of CHD. The testing set was used to test and evaluate the performance of the ANN model. Univariate logistic regression analyses were performed in SPSS 18.0; the ANN models were developed in Matlab 7.1. The univariate logistic regression identified 15 predictors that were significantly associated with CHD, including education level (odds ratio = 0.55), gravidity (1.95), parity (2.01), history of abnormal reproduction (2.49), family history of CHD (5.23), maternal chronic disease (4.19), maternal upper respiratory tract infection (2.08), environmental pollution around the maternal dwelling place (3.63), maternal exposure to occupational hazards (3.53), maternal mental stress (2.48), paternal chronic disease (4.87), paternal exposure to occupational hazards (2.51), intake of vegetables/fruit (0.45), intake of fish/shrimp/meat/egg (0.59), and intake of milk/soymilk (0.55). After many trials, we selected a 3-layer BPNN model with 15, 12, and 1 neurons in the input, hidden, and output layers, respectively, as the best prediction model. The prediction model has accuracies of 0.91 and 0.86 on the training and testing sets, respectively. The sensitivity, specificity, and Youden index on the testing set (training set) are 0.78 (0.83), 0.90 (0.95), and 0.68 (0.78), respectively. The areas under the receiver operating characteristic curve on the testing and training sets are 0.87 and 0.97, respectively. This study suggests that the BPNN model could be used to predict the risk of CHD in individuals; the model should be further improved by larger-sample-size research.
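
    A sketch of the modeling pipeline with scikit-learn in place of Matlab: a 15-12-1 feed-forward network trained on an 85:15 split. The data are synthetic stand-ins generated by make_classification, not the study's patient records.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 15 significant predictors (no real patient data).
X, y = make_classification(n_samples=358, n_features=15, n_informative=8,
                           weights=[0.67, 0.33], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15,   # 85:15 split
                                          stratify=y, random_state=0)

# 15-12-1 feed-forward back-propagation network, mirroring the paper's topology.
bpnn = MLPClassifier(hidden_layer_sizes=(12,), activation="logistic",
                     max_iter=2000, random_state=0)
bpnn.fit(X_tr, y_tr)

print("test accuracy:", round(bpnn.score(X_te, y_te), 2))
print("test AUC:", round(roc_auc_score(y_te, bpnn.predict_proba(X_te)[:, 1]), 2))
```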

  10. Validation and Simulation of Ares I Scale Model Acoustic Test - 3 - Modeling and Evaluating the Effect of Rainbird Water Deluge Inclusion

    NASA Technical Reports Server (NTRS)

    Strutzenberg, Louise L.; Putman, Gabriel C.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well-documented set of high-fidelity measurements useful for validation, including data taken over a range of test conditions and containing phenomena like ignition over-pressure and water suppression of acoustics. Building on dry simulations of the ASMAT tests with the vehicle at 5 ft elevation (100 ft real vehicle elevation), wet simulations of the ASMAT test setup have been performed using the Loci/CHEM computational fluid dynamics software to explore the effect of rainbird water suppression inclusion on the launch platform deck. Two-phase water simulation has been performed using an energy- and mass-coupled Lagrangian particle system module in which liquid-phase emissions are segregated into clouds of virtual particles and gas-phase mass transfer is accomplished through simple Weber-number-controlled breakup and boiling models. Comparisons have been performed to the dry 5 ft elevation cases, using configurations with and without launch mounts. These cases have been used to explore the interaction between rainbird spray patterns and launch mount geometry and to evaluate the acoustic sound pressure level knockdown achieved through above-deck rainbird deluge inclusion. This comparison has been anchored with validation from live-fire test data, which showed a reduction in rainbird effectiveness with the presence of a launch mount.

  11. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor; Chanzy, Quentin; Coon, Ethan; Collier, Nathaniel; Costard, François; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Gonçalvès, Julio; Holmén, Johann; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Mouche, Emmanuel; Orgogozo, Laurent; Pannetier, Romain; Rivière, Agnès; Roux, Nicolas; Rühaak, Wolfram; Scheidegger, Johanna; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik; Voss, Clifford

    2018-04-01

    In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. This issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.

  12. Private traits and attributes are predictable from digital records of human behavior.

    PubMed

    Kosinski, Michal; Stillwell, David; Graepel, Thore

    2013-04-09

    We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. The proposed model uses dimensionality reduction for preprocessing the Likes data, which are then entered into logistic/linear regression to predict individual psychodemographic profiles from Likes. The model correctly discriminates between homosexual and heterosexual men in 88% of cases, African Americans and Caucasian Americans in 95% of cases, and between Democrat and Republican in 85% of cases. For the personality trait "Openness," prediction accuracy is close to the test-retest accuracy of a standard personality test. We give examples of associations between attributes and Likes and discuss implications for online personalization and privacy.
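
    The modeling pipeline, dimensionality reduction of a sparse user-by-Like matrix followed by logistic regression, can be sketched as below. The matrix and the binary trait are random placeholders, so the held-out accuracy will hover around chance; with real, informative Likes the same pipeline yields the reported accuracies.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Random user-by-Like matrix standing in for the study's ~58,000 users.
likes = sparse_random(2000, 5000, density=0.01, random_state=0,
                      data_rvs=lambda size: np.ones(size)).tocsr()
trait = rng.integers(0, 2, 2000)    # placeholder binary attribute

components = TruncatedSVD(n_components=100, random_state=0).fit_transform(likes)
X_tr, X_te, y_tr, y_te = train_test_split(components, trait, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))   # ~0.5: labels are random here
```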

  13. A Random Variable Approach to Nuclear Targeting and Survivability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Undem, Halvor A.

    We demonstrate a common mathematical formalism for analyzing problems in nuclear survivability and targeting. This formalism, beginning with a random variable approach, can be used to interpret past efforts in nuclear-effects analysis, including targeting analysis. It can also be used to analyze new problems brought about by the post-Cold War era, such as the potential effects of yield degradation in a permanently untested nuclear stockpile. In particular, we illustrate the formalism through four natural case studies or illustrative problems, linking these to actual past data, modeling, and simulation, and suggesting future uses. In the first problem, we illustrate the case of a deterministically modeled weapon used against a deterministically responding target. Classic "Cookie Cutter" damage functions result. In the second problem, we illustrate, with actual target test data, the case of a deterministically modeled weapon used against a statistically responding target. This case matches many of the results of current nuclear targeting modeling and simulation tools, including the result of distance damage functions as complementary cumulative lognormal functions in the range variable. In the third problem, we illustrate the case of a statistically behaving weapon used against a deterministically responding target. In particular, we show the dependence of target damage on weapon yield for an untested nuclear stockpile experiencing yield degradation. Finally, and using actual unclassified weapon test data, we illustrate in the fourth problem the case of a statistically behaving weapon used against a statistically responding target.
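
    The second problem's headline result, a distance damage function that is complementary cumulative lognormal in range, is easy to sketch; the median range and shape value below are illustrative, not derived from any weapons-effects data.

```python
from scipy import stats

# Hypothetical parameters: range at which damage probability is 50%, and the
# lognormal shape describing scatter in the target's response.
median_range, beta = 1500.0, 0.4

def p_damage(r):
    """Complementary cumulative lognormal distance-damage function."""
    return stats.lognorm.sf(r, s=beta, scale=median_range)

for r in (500.0, 1000.0, 1500.0, 2500.0):
    print(f"range {r:6.0f} m -> P(damage) = {p_damage(r):.3f}")
```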

  14. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical substructures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical substructure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable, highly nonlinear behavior including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by coupling a nonlinear computational model of a moment frame to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.

  15. Risk-Based, Hypothesis-Driven Framework for Hydrological Field Campaigns with Case Studies

    NASA Astrophysics Data System (ADS)

    Harken, B.; Rubin, Y.

    2014-12-01

    There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration or plume travel time. These predictions often have significant bearing on a decision that must be made. Examples include: how to allocate limited remediation resources between contaminated groundwater sites or where to place a waste repository site. Answering such questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in EPM predictions stems from uncertainty in model parameters, which can be reduced by measurements taken in field campaigns. The costly nature of field measurements motivates a rational basis for determining a measurement strategy that is optimal with respect to the uncertainty in the EPM prediction. The tool of hypothesis testing allows this uncertainty to be quantified by computing the significance of the test resulting from a proposed field campaign. The significance of the test gives a rational basis for determining the optimality of a proposed field campaign. This hypothesis testing framework is demonstrated and discussed using various synthetic case studies. This study involves contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a specified location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical amount of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. The optimality of different field campaigns is assessed by computing the significance of the test resulting from each one. Evaluating the level of significance caused by a field campaign involves steps including likelihood-based inverse modeling and semi-analytical conditional particle tracking.
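
    A minimal sketch of the decision criterion: given posterior predictive ensembles of travel time produced under two candidate campaigns, compare how strongly each campaign resolves the null hypothesis that the plume arrives before a critical time. The lognormal ensembles below are synthetic placeholders for the inversion-plus-particle-tracking output.

```python
import numpy as np

rng = np.random.default_rng(4)
t_critical = 100.0   # hypothetical regulatory horizon (years)

# Synthetic posterior predictive ensembles of plume travel time under two
# candidate campaigns; campaign B reduces parameter uncertainty more.
campaign_a = rng.lognormal(mean=np.log(150.0), sigma=0.60, size=5000)
campaign_b = rng.lognormal(mean=np.log(150.0), sigma=0.25, size=5000)

# H0: the plume arrives before t_critical. A campaign is preferred if it can
# reject (or confirm) H0 at higher significance for the same cost.
for name, ens in (("A", campaign_a), ("B", campaign_b)):
    print(f"campaign {name}: P(T < t_critical) = {np.mean(ens < t_critical):.3f}")
```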

  16. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
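
    The paper's point is easy to reproduce numerically: a response built from three well-separated group means fails a normality test badly, while the residuals pass. The sketch below uses a Shapiro-Wilk test purely as a quick numerical check, whereas the authors recommend graphical display.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# One-way ANOVA layout: three well-separated group means, unit-normal errors.
groups = np.repeat([0, 1, 2], 200)
y = np.array([0.0, 10.0, 20.0])[groups] + rng.normal(0.0, 1.0, groups.size)

fitted = np.array([y[groups == g].mean() for g in (0, 1, 2)])[groups]
residuals = y - fitted

print("Shapiro-Wilk p, raw response:", stats.shapiro(y).pvalue)          # ~0
print("Shapiro-Wilk p, residuals:   ", stats.shapiro(residuals).pvalue)  # large
```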

  17. Photometric Uncertainties

    NASA Astrophysics Data System (ADS)

    Zou, Xiao-Duan; Li, Jian-Yang; Clark, Beth Ellen; Golish, Dathon

    2018-01-01

    The OSIRIS-REx spacecraft, launched in September 2016, will study the asteroid Bennu and return a sample from its surface to Earth in 2023. Bennu is a near-Earth carbonaceous asteroid which will provide insight into the formation and evolution of the solar system. OSIRIS-REx will first approach Bennu in August 2018 and will study the asteroid for approximately two years before sampling. OSIRIS-REx will develop its photometric models of Bennu (including Lommel-Seeliger, ROLO, McEwen, Minnaert and Akimov) with OCAMS and OVIRS during the Detailed Survey mission phase; the models developed during this phase will be used to photometrically correct the OCAMS and OVIRS data. Here we present an analysis of the errors in the photometric corrections. Based on our test data sets, we find: 1. The model uncertainties are correct only when calculated with the covariance matrix, because the parameters are highly correlated. 2. There is no evidence that any single parameter dominates in any model. 3. Model error and data error contribute comparably to the final correction error. 4. Tests of the uncertainty module on synthetic and real data sets show that model performance depends on data coverage and data quality; these tests gave us a better understanding of how the different models behave in different cases. 5. The Lommel-Seeliger model is more reliable than the others, perhaps because the simulated data are based on it, although the test on real data (SPDIF) also shows a slight advantage for Lommel-Seeliger. ROLO is not reliable for calculating Bond albedo, the uncertainty of the McEwen model is large in most cases, and Akimov behaves unphysically on the SOPIE 1 data. 6. Lommel-Seeliger is therefore our default choice, a conclusion based mainly on our tests on the SOPIE data and IPDIF.

  18. Test Cases for a Rectangular Supercritical Wing Undergoing Pitching Oscillations

    NASA Technical Reports Server (NTRS)

    Bennett, Robert M.

    2000-01-01

    Steady and unsteady measured pressures for a Rectangular Supercritical Wing (RSW) undergoing pitching oscillations have been presented. From the several hundred compiled data points, 27 static and 36 pitching oscillation cases have been proposed for computational Test Cases to illustrate the trends with Mach number, reduced frequency, and angle of attack. The wing was designed to be a simple configuration for Computational Fluid Dynamics (CFD) comparisons. The wing had an unswept rectangular planform plus a tip of revolution, a panel aspect ratio of 2.0, a twelve per cent thick supercritical airfoil section, and no twist. The model was tested over a wide range of Mach numbers, from 0.27 to 0.90, corresponding to low subsonic flows up to strong transonic flows. The higher Mach numbers are well beyond the design Mach number, such as might be required for flutter verification beyond cruise conditions. The pitching oscillations covered a broad range of reduced frequencies. Some early calculations for this wing are given for lifting pressure as calculated from a linear lifting surface program and from a transonic small perturbation program. The unsteady results were given primarily for a mild transonic condition at M = 0.70. For these cases the agreement with the data was only fair, possibly resulting from the omission of viscous effects. Supercritical airfoil sections are known to be sensitive to viscous effects (for example, one case cited). Calculations using a higher level code with the full potential equations have been presented for one of the same cases, and with the Euler equations. The agreement around the leading edge was improved, but overall the agreement was not completely satisfactory. Typically for low-aspect-ratio rectangular wings, transonic shock waves on the wing tend to sweep forward from root to tip such that there are strong three-dimensional effects. It might also be noted that for most of the test, the model was tested with free transition, but a few points were taken with an added transition strip for comparison. Some unpublished results of a rigid wing of the same airfoil and planform that was tested on the pitch and plunge apparatus mount system (PAPA) showed effects of the lower surface transition strip on flutter at the lower subsonic Mach numbers. Significant effects of a transition strip were also obtained on a wing with a thicker supercritical section on the PAPA mount system. Both of these flutter tests on the PAPA resulted in very low reduced frequencies that may be a factor in this influence of the transition strip. However, these results indicate that correlation studies for RSW may require some attention to the estimation of transition location to accurately treat viscous effects. In this report several Test Cases are selected to illustrate trends for a variety of different conditions with emphasis on transonic flow effects. An overview of the model and tests is given and the standard formulary for these data is listed. Sample data points are presented in both tabular and graphical form. A complete tabulation and plotting of all the Test Cases is given. Only the static pressures and the real and imaginary parts of the first harmonic of the unsteady pressures are available. All the data for the test are available in electronic file form. The Test Cases are also available as separate electronic files.

  19. A comparison of turbulence models in computing multi-element airfoil flows

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Menter, Florian; Durbin, Paul A.; Mansour, Nagi N.

    1994-01-01

    Four different turbulence models are used to compute the flow over a three-element airfoil configuration. These models are the one-equation Baldwin-Barth model, the one-equation Spalart-Allmaras model, a two-equation k-omega model, and a new one-equation Durbin-Mansour model. The flow is computed using the INS2D two-dimensional incompressible Navier-Stokes solver. An overset Chimera grid approach is utilized. Grid resolution tests are presented, and manual solution-adaptation of the grid was performed. The performance of each of the models is evaluated for test cases involving different angles-of-attack, Reynolds numbers, and flap riggings. The resulting surface pressure coefficients, skin friction, velocity profiles, and lift, drag, and moment coefficients are compared with experimental data. The models produce very similar results in most cases. Excellent agreement between computational and experimental surface pressures was observed, but only moderately good agreement was seen in the velocity profile data. In general, the difference between the predictions of the different models was less than the difference between the computational and experimental data.

  20. Modeling and Characterization of a Graphite Nanoplatelet/Epoxy Composite

    NASA Technical Reports Server (NTRS)

    Odegard, Gregory M.; Chasiotis, I.; Chen, Q.; Gates, T. S.

    2004-01-01

    A micromechanical modeling procedure is developed to predict the viscoelastic properties of a graphite nanoplatelet/epoxy composite as a function of volume fraction and nanoplatelet diameter. The predicted storage and loss moduli from the model are compared to measured values from the same material using dynamic mechanical analysis, nanoindentation, and tensile tests. In most cases, the model and experiments indicate that for increasing volume fractions of nanoplatelets, both the storage and loss moduli increase. Also, in most cases, the model and experiments indicate that as the nanoplatelet diameter is increased, the storage and loss moduli decrease and increase, respectively.

  1. The Information a Test Provides on an Ability Parameter. Research Report. ETS RR-07-18

    ERIC Educational Resources Information Center

    Haberman, Shelby J.

    2007-01-01

    In item-response theory, if a latent-structure model has an ability variable, then elementary information theory may be employed to provide a criterion for evaluation of the information the test provides concerning ability. This criterion may be considered even in cases in which the latent-structure model is not valid, although interpretation of…

  2. Free Fall Misconceptions: Results of a Graph Based Pre-Test of Sophomore Civil Engineering Students

    ERIC Educational Resources Information Center

    Montecinos, Alicia M.

    2014-01-01

    A partially unusual behaviour was found among 14 sophomore students of civil engineering who took a pre-test for a free fall laboratory session, in the context of a general mechanics course. An analysis of the consistency of the students' mathematical and physical models was made. In all cases, the students presented evidence favoring a correct free…

  3. A New Method for Incremental Testing of Finite State Machines

    NASA Technical Reports Server (NTRS)

    Pedrosa, Lehilton Lelis Chaves; Moura, Arnaldo Vieira

    2010-01-01

    The automatic generation of test cases is an important issue for conformance testing of several critical systems. We present a new method for the derivation of test suites when the specification is modeled as a combined Finite State Machine (FSM). A combined FSM is obtained by conjoining previously tested submachines with newly added states. This new concept is used to describe a fault model suitable for incremental testing of new systems, or for retesting modified implementations. For this fault model, only the newly added or modified states need to be tested, thereby considerably reducing the size of the test suites. The new method is a generalization of the well-known W-method and the G-method, but is scalable, and so it can be used to test FSMs with an arbitrarily large number of states.
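
    For context, a minimal sketch of classical W-method-style generation on a toy Mealy machine: concatenate a transition cover P with a characterization set W (assuming no extra implementation states). The machine and both sets below are invented for illustration; the incremental method of the paper would restrict testing to the newly added states.

```python
from itertools import product

# Toy Mealy machine: trans[state][input] = (next_state, output). Invented spec.
trans = {
    "s0": {"a": ("s1", 0), "b": ("s0", 1)},
    "s1": {"a": ("s2", 0), "b": ("s0", 0)},
    "s2": {"a": ("s2", 1), "b": ("s1", 0)},
}

def run(seq, state="s0"):
    """Oracle: the output sequence the specification produces for `seq`."""
    out = []
    for sym in seq:
        state, o = trans[state][sym]
        out.append(o)
    return tuple(out)

# Transition cover P: access sequences for each state, alone and extended by
# every input symbol (access sequences chosen by hand for this toy machine).
access = [(), ("a",), ("a", "a")]
P = access + [seq + (sym,) for seq in access for sym in ("a", "b")]

# Characterization set W: ("a",) separates s2; ("a", "a") separates s0 from s1.
W = [("a",), ("a", "a")]

suite = sorted({p + w for p, w in product(P, W)})
expected = {seq: run(seq) for seq in suite}   # spec outputs, used as the oracle
print(f"{len(suite)} test sequences; e.g. {suite[0]} -> {expected[suite[0]]}")
```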

  4. The Complementary Use of Audience Response Systems and Online Tests to Implement Repeat Testing: A Case Study

    ERIC Educational Resources Information Center

    Stratling, Rebecca

    2017-01-01

    Although learning theories suggest that repeat testing can be highly beneficial for students' retention and understanding of material, there is, so far, little guidance on how to implement repeat testing in higher education. This paper introduces one method for implementing a three-stage model of repeat testing via computer-aided formative…

  5. Interfacing the Generalized Fluid System Simulation Program with the SINDA/G Thermal Program

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Palmiter, Christopher; Farmer, Jeffery; Lycans, Randall; Tiller, Bruce

    2000-01-01

    A general-purpose, one-dimensional fluid flow code has been interfaced with the thermal analysis program SINDA/G. The flow code, GFSSP, is capable of analyzing steady state and transient flow in a complex network. The flow code is capable of modeling several physical phenomena, including compressibility effects, phase changes, body forces (such as gravity and centrifugal forces) and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development was conducted in two phases; this paper describes the first, which allows for steady and quasi-steady (unsteady solid, steady fluid) conjugate heat transfer modeling. The second phase of the interface development, full transient conjugate heat transfer modeling, will be addressed in a later paper. Phase 1 development has been benchmarked against an analytical solution with excellent agreement. Additional test cases for each development phase demonstrate desired features of the interface. The results of the benchmark case, three additional test cases, and a practical application are presented herein.

  6. Fractional flow in fractured chalk; a flow and tracer test revisited.

    PubMed

    Odling, N E; West, L J; Hartmann, S; Kilpatrick, A

    2013-04-01

    A multi-borehole pumping and tracer test in fractured chalk is revisited and reinterpreted in the light of fractional flow. Pumping test data analyzed using a fractional flow model give sub-spherical flow dimensions of 2.2-2.4, which are interpreted as due to the partially penetrating nature of the pumped borehole. The fractional flow model offers greater versatility than classical methods for interpreting pumping tests in fractured aquifers, but its use has been hampered because the hydraulic parameters derived are hard to interpret. A method is developed to convert apparent transmissivity and storativity (with dimensions L^(4-n)/T and L^(2-n), respectively) to conventional transmissivity and storativity (L^2/T and dimensionless) for the case where the flow dimension n lies between 2 and 3.

  7. A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test

    NASA Technical Reports Server (NTRS)

    Messer, Bradley

    2007-01-01

    Propulsion ground test facilities face the daily challenge of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Over the last decade NASA's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and exceeded the capabilities of numerous test facility and test article components. A logistic regression mathematical modeling technique has been developed to predict the probability of successfully completing a rocket propulsion test. A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X_1, X_2, ..., X_k to a binary or dichotomous dependent variable Y, where Y can take only one of two possible outcomes, in this case success or failure in accomplishing a full-duration test. The use of logistic regression modeling is not new; however, modeling propulsion ground test facilities using logistic regression is both a new and unique application of the statistical technique. Results from this type of model provide project managers with insight into and confidence in the effectiveness of rocket propulsion ground testing.
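
    A minimal sketch of such a logistic regression with statsmodels, on synthetic data. The predictor variables (planned duration, prior tests on the article, pressure margin) and the coefficients are hypothetical illustrations, not the facility's actual model inputs.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200

# Hypothetical predictors: planned duration (s), prior tests on the article,
# facility pressure margin (%). Synthetic data, invented coefficients.
X = np.column_stack([rng.uniform(10, 600, n),
                     rng.integers(0, 20, n),
                     rng.uniform(0, 30, n)])
true_logit = -1.0 - 0.004 * X[:, 0] + 0.08 * X[:, 1] + 0.05 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)  # 1 = success

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
new_test = sm.add_constant(np.array([[300.0, 5.0, 15.0]]), has_constant="add")
print("fitted coefficients:", np.round(model.params, 4))
print("P(full-duration success):", model.predict(new_test)[0])
```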

  8. An Investigation Into the Effects of Frequency Response Function Estimators on Model Updating

    NASA Astrophysics Data System (ADS)

    Ratcliffe, M. J.; Lieven, N. A. J.

    1999-03-01

    Model updating is a very active research field, in which significant effort has been invested in recent years. Model updating methodologies are invariably successful when used on noise-free simulated data, but tend to be unpredictable when presented with real experimental data that are unavoidably corrupted with uncorrelated noise. In the development and validation of model-updating strategies, a random zero-mean Gaussian variable is added to simulated test data to tax the updating routines more fully. This paper proposes a more sophisticated model for experimental measurement noise, which is used in conjunction with several different frequency response function (FRF) estimators, from the classical H1 and H2 to more refined estimators that purport to be unbiased. Finite-element model case studies, in conjunction with a genuine experimental test, suggest that the proposed noise model is a more realistic representation of experimental noise phenomena. The choice of estimator is shown to have a significant influence on the viability of the FRF sensitivity method. These test cases find that the use of the H2 estimator for model updating purposes is contraindicated, and that there is no advantage to be gained by using the sophisticated estimators over the classical H1 estimator.
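
    The two classical estimators are one-line ratios of spectral densities, sketched here for a simulated single-degree-of-freedom system with output noise; the system parameters and noise level are illustrative.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, n = 1024.0, 2**16
x = rng.standard_normal(n)                         # broadband excitation

# "Structure": a single-degree-of-freedom oscillator (illustrative values).
wn, zeta = 2 * np.pi * 60.0, 0.02
num_d, den_d, _ = signal.cont2discrete(([1.0], [1.0, 2 * zeta * wn, wn**2]),
                                       1 / fs)
y = signal.lfilter(np.squeeze(num_d), den_d, x)
y += 0.05 * y.std() * rng.standard_normal(n)       # uncorrelated output noise

f, Sxy = signal.csd(x, y, fs=fs, nperseg=2048)
_, Sxx = signal.welch(x, fs=fs, nperseg=2048)
_, Syy = signal.welch(y, fs=fs, nperseg=2048)

H1 = Sxy / Sxx            # classical H1: robust to output noise
H2 = Syy / np.conj(Sxy)   # classical H2: biased upward by output noise
peak = np.argmax(np.abs(H1))
print(f"resonance ~{f[peak]:.0f} Hz, |H1| = {abs(H1[peak]):.3g}, "
      f"|H2| = {abs(H2[peak]):.3g}")
```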

  9. A benchmark initiative on mantle convection with melting and melt segregation

    NASA Astrophysics Data System (ADS)

    Schmeling, Harro; Dohmen, Janik; Wallner, Herbert; Noack, Lena; Tosi, Nicola; Plesa, Ana-Catalina; Maurice, Maxime

    2015-04-01

    In recent years a number of mantle convection models have been developed which include partial melting within the asthenosphere, estimation of melt volumes, as well as melt extraction with and without redistribution at the surface or within the lithosphere. All these approaches use various simplifying modelling assumptions whose effects on the dynamics of convection, including the feedback on melting, have not been explored in sufficient detail. To better assess the significance of such assumptions and to provide test cases for the modelling community we initiate a benchmark comparison. In the initial phase of this endeavor we focus on the usefulness of the definitions of the test cases, keeping the physics as sound as possible. The reference model is taken from the mantle convection benchmark, case 1b (Blankenbach et al., 1989), assuming a square box with free-slip boundary conditions, the Boussinesq approximation, constant viscosity and a Rayleigh number of 1e5. Melting is modelled assuming a simplified binary solid solution with linearly depth-dependent solidus and liquidus temperatures, as well as a solidus temperature depending linearly on depletion. Starting from a plume-free initial temperature condition (to avoid melting at the onset time) three cases are investigated: Case 1 includes melting, but without thermal or dynamic feedback on the convection flow. This case provides a total melt generation rate (qm) in a steady state. Case 2 includes batch melting, melt buoyancy (melt Rayleigh number Rm), depletion buoyancy and latent heat, but no melt percolation. Output quantities are the Nusselt number (Nu), root mean square velocity (vrms) and qm approaching a statistical steady state. Case 3 includes two-phase flow, i.e. melt percolation, assuming a constant shear and bulk viscosity of the matrix and various melt retention numbers (Rt). These cases should be carried out using the Compaction Boussinesq Approximation (Schmeling, 2000) or the full compaction formulation. Variations of cases 1-3 may be tested, particularly studying the effect of melt extraction. The motivation of this presentation is to summarize first experiences, suggest possible modifications of the case definitions, and call interested modelers to join this benchmark exercise. References: Blankenbach, B., Busse, F., Christensen, U., Cserepes, L., Gunkel, D., Hansen, U., Harder, H., Jarvis, G., Koch, M., Marquart, G., Moore, D., Olson, P., and Schmeling, H., 1989: A benchmark comparison for mantle convection codes, Geophys. J., 98, 23-38. Schmeling, H., 2000: Partial melting and melt segregation in a convecting mantle. In: Physics and Chemistry of Partially Molten Rocks, eds. N. Bagdassarov, D. Laporte, and A.B. Thompson, Kluwer Academic Publ., Dordrecht, pp. 141-178.

  10. Modelling of dissolved oxygen content using artificial neural networks: Danube River, North Serbia, case study.

    PubMed

    Antanasijević, Davor; Pocajt, Viktor; Povrenović, Dragan; Perić-Grujić, Aleksandra; Ristić, Mirjana

    2013-12-01

    The aims of this study are to create an artificial neural network (ANN) model using non-specific water quality parameters and to examine the accuracy of three different ANN architectures: General Regression Neural Network (GRNN), Backpropagation Neural Network (BPNN) and Recurrent Neural Network (RNN), for the prediction of dissolved oxygen (DO) concentration in the Danube River. The neural network model has been developed using measured data collected from the Bezdan monitoring station on the Danube River. The input variables used for the ANN model are water flow, temperature, pH and electrical conductivity. The model was trained and validated using available data from 2004 to 2008 and tested using the data from 2009. The order of performance of the created architectures, based on their comparison with the test data, is RNN > GRNN > BPNN. The ANN results are compared with a multiple linear regression (MLR) model using multiple statistical indicators. The comparison of the RNN model with the MLR model indicates that the RNN model performs much better, since all predictions of the RNN model for the test data were within an error of less than ±10%. In the case of the MLR, only 55% of predictions were within an error of less than ±10%. The developed RNN model can be used as a tool for the prediction of DO in river waters.
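
    As a baseline illustration, the sketch below fits the MLR comparator on synthetic stand-ins for the four inputs and evaluates the paper's ±10% criterion; the data-generating relationship is invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 400

# Synthetic stand-ins for the four inputs: flow, temperature, pH, conductivity.
X = np.column_stack([rng.uniform(1000, 8000, n),
                     rng.uniform(0, 25, n),
                     rng.uniform(7.5, 8.5, n),
                     rng.uniform(300, 500, n)])
# Invented relationship: DO falls with temperature, rises slightly with flow.
do = 12.0 - 0.25 * X[:, 1] + 1e-4 * X[:, 0] + rng.normal(0, 0.4, n)

train, test = slice(0, 320), slice(320, None)
pred = LinearRegression().fit(X[train], do[train]).predict(X[test])

within = np.abs(pred - do[test]) / do[test] < 0.10   # the +/-10% criterion
print(f"{within.mean():.0%} of test predictions within 10% of observed DO")
```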

  11. Model-Selection Theory: The Need for a More Nuanced Picture of Use-Novelty and Double-Counting.

    PubMed

    Steele, Katie; Werndl, Charlotte

    2018-06-01

    This article argues that common intuitions regarding (a) the specialness of 'use-novel' data for confirmation and (b) the claim that this specialness implies the 'no-double-counting rule', which says that data used in 'constructing' (calibrating) a model cannot also play a role in confirming the model's predictions, are too crude. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that on the Bayesian account of confirmation, and also on the standard classical hypothesis-testing account, claims (a) and (b) are not generally true; but for some select cases, it is possible to distinguish data used for calibration from use-novel data, where only the latter confirm. The more specialized classical model-selection methods, on the other hand, uphold a nuanced version of claim (a), but this comes apart from (b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised.

  12. Individualization of Instruction: High School Chemistry - A Case Study.

    ERIC Educational Resources Information Center

    Altieri, Donald; Becht, Paul

    This publication contains information on the individualization of instruction in high school chemistry in the form of a case study. The subject of the case study is the P. K. Yonge Laboratory School of the University of Florida, Gainesville. The instructional model, however, was also field-tested in 18 schools during 1971-72 and 1972-73. The…

  13. [Automated detection of estrus and mastitis in dairy cows].

    PubMed

    de Mol, R M

    2001-02-15

    The development and testing of detection models for oestrus and mastitis in dairy cows is described in a PhD thesis that was defended in Wageningen on June 5, 2000. These models were based on sensors for milk yield, milk temperature, electrical conductivity of milk, cow activity and concentrate intake, and on combined processing of the sensor data. The models alert farmers to cows that need attention because of possible oestrus or mastitis. A first detection model, for cows milked twice a day, was based on time series models for the sensor variables. A time series model describes the dependence between successive observations. The parameters of the time series models were fitted on-line for each cow after each milking by means of a Kalman filter, a mathematical method for estimating the state of a system on-line; the Kalman filter gives the best estimate of the current state of a system based on all preceding observations. This model was tested for 2 years on two experimental farms, and under field conditions on four farms over several years. A second detection model, for cows milked in an automatic milking system (AMS), was based on a generalization of the first model. Two data sets (one small, one large) were used for testing. The results for oestrus detection were good for both models. The results for mastitis detection varied (in some cases good, in other cases moderate). Fuzzy logic was used to classify mastitis and oestrus alerts from both detection models, to reduce the number of false positive alerts. Fuzzy logic makes approximate reasoning possible, where statements can be partly true or false. Input for the fuzzy logic model were alerts from the detection models and additional information. The number of false positive alerts decreased considerably, while the number of detected cases remained at the same level. These models make automated detection possible in practice.
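
    A minimal sketch of the detection idea: a one-dimensional Kalman filter tracks a cow's milk-yield level and raises an alert when the innovation (observation minus prediction) is improbably large. The random-walk state model and all parameter values are simplifications of the thesis's time-series models.

```python
import numpy as np

def kalman_alerts(y, q=0.01, r=1.0, threshold=3.0):
    """One-dimensional Kalman filter on a random-walk level model.

    Flags milkings whose innovation exceeds `threshold` standard deviations.
    Process noise q, measurement noise r and the threshold are illustrative.
    """
    x, p = y[0], 1.0            # state estimate and its variance
    alerts = []
    for t, obs in enumerate(y[1:], start=1):
        p += q                  # predict: random walk adds process noise
        s = p + r               # innovation variance
        innovation = obs - x
        if abs(innovation) / np.sqrt(s) > threshold:
            alerts.append(t)
        k = p / s               # Kalman gain
        x += k * innovation     # update state with the new observation
        p *= (1 - k)
    return alerts

rng = np.random.default_rng(5)
milk_yield = 12 + 0.3 * rng.standard_normal(200)  # kg per milking (synthetic)
milk_yield[150] -= 4.0                            # sudden drop, e.g. mastitis
print("alerts at milkings:", kalman_alerts(milk_yield, r=0.3**2))
```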

  14. An Explanatory Item Response Theory Approach for a Computer-Based Case Simulation Test

    ERIC Educational Resources Information Center

    Kahraman, Nilüfer

    2014-01-01

    Problem: Practitioners working with multiple-choice tests have long utilized Item Response Theory (IRT) models to evaluate the performance of test items for quality assurance. The use of similar applications for performance tests, however, is often encumbered due to the challenges encountered in working with complicated data sets in which local…

  15. Multiphase Modeling of Secondary Atomization in a Shock Environment

    NASA Astrophysics Data System (ADS)

    St. Clair, Jeffrey; McGrath, Thomas; Balachandar, Sivaramakrishnan

    2017-06-01

    Understanding and developing accurate modeling strategies for shock-particulate interaction remains a challenging and important topic, with application to energetic materials development, volcanic eruptions, and safety/risk assessment. This work presents computational modeling of compressible multiphase flows with shock-induced droplet atomization. Droplet size has a strong influence on the interphase momentum and heat transfer. A test case is presented that is sensitive to this, requiring the dynamic modeling of the secondary atomization process occurring when the shock impacts the droplets. An Eulerian-Eulerian computational model that treats all phases as compressible, is hyperbolic and satisfies the 2nd Law of Thermodynamics is applied. Four different breakup models are applied to the test case in which a planar shock wave encounters a cloud of water droplets. The numerical results are compared with both experimental and previously-generated modeling results. The effect of the drag relation used is also investigated. The computed results indicate the necessity of using a droplet breakup model for this application, and the relative accuracy of results obtained with the different droplet breakup and drag models is discussed.

  16. Revised Reynolds Stress and Triple Product Models

    NASA Technical Reports Server (NTRS)

    Olsen, Michael E.; Lillard, Randolph P.

    2017-01-01

    Revised versions of Lag methodology Reynolds-stress and triple product models are applied to accepted test cases to assess the improvement, or lack thereof, in the prediction capability of the models. The Bachalo-Johnson bump flow is shown as an example for this abstract submission.

  17. Comparison of results of an obstacle resolving microscale model with wind tunnel data

    NASA Astrophysics Data System (ADS)

    Grawe, David; Schlünzen, K. Heinke; Pascheke, Frauke

    2013-11-01

    The microscale transport and stream model MITRAS has been improved and a new technique has been implemented to improve numerical stability for complex obstacle configurations. Results of the updated version have been compared with wind tunnel data using an evaluation method that has been established for simple obstacle configurations. MITRAS is a part of the M-SYS model system for the assessment of ambient air quality. A comparison of model results for the flow field against quality ensured wind tunnel data has been carried out for both idealised and realistic test cases. Results of the comparison show a very good agreement of the wind field for most test cases and identify areas of possible improvement of the model. The evaluated MITRAS results can be used as input data for the M-SYS microscale chemistry model MICTM. This paper describes how such a comparison can be carried out for simple as well as realistic obstacle configurations and what difficulties arise.

  18. Meta-analysis for the comparison of two diagnostic tests to a common gold standard: A generalized linear mixed model approach.

    PubMed

    Hoyer, Annika; Kuss, Oliver

    2018-05-01

    Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.

  19. Pressure and temperature fields associated with aero-optics tests. [transonic wind tunnel tests

    NASA Technical Reports Server (NTRS)

    Raman, K. R.

    1980-01-01

    The experimental investigations carried out in a 6 x 6 ft wind tunnel on four model configurations in the aero-optics series of tests are described. The data obtained on the random pressures (static and total pressures) and total temperatures are presented. In addition, the data for static pressure fluctuations on the Coelostat turret model are presented. The measurements indicate that the random pressures and temperatures are negligible compared to their mean (or steady state) values for the four models considered, thus allowing considerable simplification in the calculations to obtain the statistical properties of the density field. In the case of the Coelostat model tests these simplifications cannot be assumed a priori and require further investigation.

  20. Active earth pressure model tests versus finite element analysis

    NASA Astrophysics Data System (ADS)

    Pietrzak, Magdalena

    2017-06-01

    The purpose of the paper is to compare failure mechanisms observed in small-scale model tests on a granular sample in the active state with those simulated by the finite element method (FEM) using Plaxis 2D software. Small-scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from simple wall translation in the direction "from the soil" (the active earth pressure state). The simple Mohr-Coulomb model for soil can be helpful in interpreting experimental findings in the case of granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may belong to macro-scale features and be dominated by the test boundary conditions rather than the nature of the granular sample.

  1. A robust, finite element model for hydrostatic surface water flows

    USGS Publications Warehouse

    Walters, R.A.; Casulli, V.

    1998-01-01

    A finite element scheme is introduced for the 2-dimensional shallow water equations using semi-implicit methods in time. A semi-Lagrangian method is used to approximate the effects of advection. A wave equation is formed at the discrete level such that the equations decouple into an equation for surface elevation and a momentum equation for the horizontal velocity. The convergence rates and relative computational efficiency are examined with the use of three test cases representing various degrees of difficulty. A test with a polar-quadrant grid investigates the response to local grid-scale forcing and the presence of spurious modes, a channel test case establishes convergence rates, and a field-scale test case examines problems with highly irregular grids.

  2. A probability model for evaluating the bias and precision of influenza vaccine effectiveness estimates from case-control studies.

    PubMed

    Haber, M; An, Q; Foppa, I M; Shay, D K; Ferdinands, J M; Orenstein, W A

    2015-05-01

    As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies of patients seeking medical care for acute respiratory illnesses (ARIs) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing the bias and precision of estimates of VE against symptomatic influenza from two commonly used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI, then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care for ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that, in general, estimates from the test-negative design have smaller bias than estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs.
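
    For illustration, in a test-negative design VE is typically estimated as one minus the odds ratio of vaccination among influenza-positive versus influenza-negative patients. A minimal sketch with hypothetical counts (not data from this study):

      import numpy as np

      # Hypothetical 2x2 counts from a test-negative study (illustrative only):
      # vaccinated, unvaccinated among influenza-positive and influenza-negative patients
      flu_pos = np.array([40, 160])
      flu_neg = np.array([300, 500])

      odds_ratio = (flu_pos[0] / flu_pos[1]) / (flu_neg[0] / flu_neg[1])
      ve = 1.0 - odds_ratio            # VE = 1 - OR under the design's assumptions

      # Wald 95% CI on the log-odds-ratio scale
      se = np.sqrt((1.0 / np.concatenate([flu_pos, flu_neg])).sum())
      lo = 1 - np.exp(np.log(odds_ratio) + 1.96 * se)   # lower VE bound
      hi = 1 - np.exp(np.log(odds_ratio) - 1.96 * se)   # upper VE bound
      print(f"VE = {ve:.2f} (95% CI {lo:.2f} to {hi:.2f})")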

  3. Application of the Athlete's Performance Passport for Doping Control: A Case Report.

    PubMed

    Iljukov, Sergei; Bermon, Stephane; Schumacher, Yorck O

    2018-01-01

    The efficient use of testing resources is a key issue in the fight against doping. The longitudinal tracking of sporting performances to identify unusual improvements possibly caused by doping, the so-called "athlete's performance passport" (APP), is a new concept to improve targeted anti-doping testing. Unusual performances by an athlete would trigger a more thorough testing program. In the present case report, performance data are modeled using the critical power concept for a group of athletes based on their past performances. By these means, an athlete with unusual deviations from his predicted performances was identified. Subsequent target testing using blood testing and the athlete biological passport resulted in an anti-doping rule violation procedure and suspension of the athlete. This case demonstrates the feasibility of the APP approach, in which an athlete's performance is monitored, and might serve as an example for the practical implementation of the method.

  4. Application of the Athlete's Performance Passport for Doping Control: A Case Report

    PubMed Central

    Iljukov, Sergei; Bermon, Stephane; Schumacher, Yorck O.

    2018-01-01

    The efficient use of testing resources is a key issue in the fight against doping. The longitudinal tracking of sporting performances to identify unusual improvements possibly caused by doping, the so-called “athlete's performance passport” (APP), is a new concept to improve targeted anti-doping testing. Unusual performances by an athlete would trigger a more thorough testing program. In the present case report, performance data are modeled using the critical power concept for a group of athletes based on their past performances. By these means, an athlete with unusual deviations from his predicted performances was identified. Subsequent target testing using blood testing and the athlete biological passport resulted in an anti-doping rule violation procedure and suspension of the athlete. This case demonstrates the feasibility of the APP approach, in which an athlete's performance is monitored, and might serve as an example for the practical implementation of the method. PMID:29651247

  5. Partially Observed Mixtures of IRT Models: An Extension of the Generalized Partial-Credit Model

    ERIC Educational Resources Information Center

    Von Davier, Matthias; Yamamoto, Kentaro

    2004-01-01

    The generalized partial-credit model (GPCM) is used frequently in educational testing and in large-scale assessments for analyzing polytomous data. Special cases of the generalized partial-credit model are the partial-credit model--or Rasch model for ordinal data--and the two parameter logistic (2PL) model. This article extends the GPCM to the…

  6. Implementing antiretroviral resistance testing in a primary health care HIV treatment programme in rural KwaZulu-Natal, South Africa: early experiences, achievements and challenges.

    PubMed

    Lessells, Richard J; Stott, Katharine E; Manasa, Justen; Naidu, Kevindra K; Skingsley, Andrew; Rossouw, Theresa; de Oliveira, Tulio

    2014-03-07

    Antiretroviral drug resistance is becoming increasingly common with the expansion of human immunodeficiency virus (HIV) treatment programmes in high prevalence settings. Genotypic resistance testing could have benefit in guiding individual-level treatment decisions but successful models for delivering resistance testing in low- and middle-income countries have not been reported. An HIV Treatment Failure Clinic model was implemented within a large primary health care HIV treatment programme in northern KwaZulu-Natal, South Africa. Genotypic resistance testing was offered to adults (≥16 years) with virological failure on first-line antiretroviral therapy (one viral load >1000 copies/ml after at least 12 months on a standard first-line regimen). A genotypic resistance test report was generated with treatment recommendations from a specialist HIV clinician and sent to medical officers at the clinics who were responsible for patient management. A quantitative process evaluation was conducted to determine how the model was implemented and to provide feedback regarding barriers and challenges to delivery. A total of 508 specimens were submitted for genotyping between 8 April 2011 and 31 January 2013; in 438 cases (86.2%) a complete genotype report with recommendations from the specialist clinician was sent to the medical officer. The median turnaround time from specimen collection to receipt of final report was 18 days (interquartile range (IQR) 13-29). In 114 (26.0%) cases the recommended treatment differed from what would be given in the absence of drug resistance testing. In the majority of cases (n = 315, 71.9%), the subsequent treatment prescribed was in line with the recommendations of the report. Genotypic resistance testing was successfully implemented in this large primary health care HIV programme and the system functioned well enough for the results to influence clinical management decisions in real time. Further research will explore the impact and cost-effectiveness of different implementation models in different settings.

  7. FY17 Status Report on Testing Supporting the Inclusion of Grade 91 Steel as an Acceptable Material for Application of the EPP Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messner, Mark C.; Sham, Sam; Wang, Yanli

    This report summarizes the experiments performed in FY17 on Gr. 91 steels. The testing of Gr. 91 has technical significance because, currently, it is the only approved material for Class A construction that is strongly cyclic softening. Specific FY17 testing includes the following activities for Gr. 91 steel. First, two types of key feature testing have been initiated, including two-bar thermal ratcheting and Simplified Model Testing (SMT). The goal is to qualify the Elastic-Perfectly Plastic (EPP) design methodologies and to support incorporation of these rules for Gr. 91 into the ASME Division 5 Code. The preliminary SMT results show that Gr. 91 is most damaged when tested with a compression hold under the SMT creep-fatigue testing condition. Two-bar thermal ratcheting test results over a temperature range of 350 to 650 °C were compared with the EPP strain limits code case evaluation, and the results show that the EPP strain limits code case is conservative. The material information obtained from these key feature tests can also be used to verify the material model. Second, to provide experimental data in support of the viscoplastic material model development at Argonne National Laboratory, selected tests were performed to evaluate the effect of cyclic softening on strain rate sensitivity and creep rates. The results show that prior cyclic loading history decreases the strain rate sensitivity and increases creep rates. In addition, isothermal cyclic stress-strain curves were generated at six different temperatures, and nonisothermal thermomechanical testing was also performed to provide data to calibrate the viscoplastic material model.

  8. A significance test for the lasso

    PubMed Central

    Lockhart, Richard; Taylor, Jonathan; Tibshirani, Ryan J.; Tibshirani, Robert

    2014-01-01

    In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ₁² distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ₁² under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the ℓ₁ penalty. Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties, adaptivity and shrinkage, and its null distribution is tractable and asymptotically Exp(1). PMID:25574062
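
    As a minimal illustration of the special case above (first predictor to enter under the global null), the covariance statistic reduces to λ₁(λ₁ − λ₂)/σ², where λ₁ ≥ λ₂ are the first two knots of the lasso path. The sketch below simulates its null distribution, assuming unit-norm predictor columns and known σ²; note that scikit-learn's lars_path reports knots divided by the sample size, hence the rescaling:

      import numpy as np
      from sklearn.linear_model import lars_path

      rng = np.random.default_rng(0)
      n, p, sigma2, reps = 100, 10, 1.0, 2000
      T = np.empty(reps)
      for i in range(reps):
          X = rng.standard_normal((n, p))
          X /= np.linalg.norm(X, axis=0)             # unit-norm columns
          y = rng.standard_normal(n)                 # global null: beta = 0
          alphas, _, _ = lars_path(X, y, method="lasso")
          lam1, lam2 = alphas[0] * n, alphas[1] * n  # knots lambda_1 >= lambda_2
          T[i] = lam1 * (lam1 - lam2) / sigma2       # covariance statistic, first step
      # Should be close to Exp(1) for large p: mean ~1, 95th percentile ~3.0
      print(T.mean(), np.quantile(T, 0.95))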

  9. Modelling accelerated degradation data using Wiener diffusion with a time scale transformation.

    PubMed

    Whitmore, G A; Schenkelberg, F

    1997-01-01

    Engineering degradation tests allow industry to assess the potential life span of long-life products that do not fail readily under accelerated conditions in life tests. A general statistical model is presented here for performance degradation of an item of equipment. The degradation process in the model is taken to be a Wiener diffusion process with a time scale transformation. The model incorporates Arrhenius extrapolation for high stress testing. The lifetime of an item is defined as the time until performance deteriorates to a specified failure threshold. The model can be used to predict the lifetime of an item or the extent of degradation of an item at a specified future time. Inference methods for the model parameters, based on accelerated degradation test data, are presented. The model and inference methods are illustrated with a case application involving self-regulating heating cables. The paper also discusses a number of practical issues encountered in applications.
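
    A minimal sketch of the model's core ingredients, with hypothetical parameter values: degradation follows a Wiener process with drift in a transformed time τ(t) = t^β, and the lifetime is the first passage of the path to the failure threshold. (The Arrhenius stress dependence and the inference steps are omitted here.)

      import numpy as np

      rng = np.random.default_rng(1)
      eta, delta, beta = 0.8, 0.3, 0.6     # drift, diffusion, time-transform power (hypothetical)
      threshold, t_max, dt = 5.0, 50.0, 0.01

      t = np.arange(0.0, t_max + dt, dt)
      tau = t ** beta                      # transformed time scale
      n_paths = 2000
      lifetimes = np.full(n_paths, np.inf)
      for i in range(n_paths):
          # Wiener increments in transformed time: eta*dtau + delta*sqrt(dtau)*Z
          dW = rng.standard_normal(len(t) - 1) * np.sqrt(np.diff(tau))
          W = np.concatenate([[0.0], np.cumsum(eta * np.diff(tau) + delta * dW)])
          crossed = np.nonzero(W >= threshold)[0]
          if crossed.size:
              lifetimes[i] = t[crossed[0]]  # first passage in original time
      print("median lifetime estimate:", np.median(lifetimes))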

  10. The Clinical and Economic Benefits of Co-Testing Versus Primary HPV Testing for Cervical Cancer Screening: A Modeling Analysis.

    PubMed

    Felix, Juan C; Lacey, Michael J; Miller, Jeffrey D; Lenhart, Gregory M; Spitzer, Mark; Kulkarni, Rucha

    2016-06-01

    Consensus United States cervical cancer screening guidelines recommend use of combination Pap plus human papillomavirus (HPV) testing for women aged 30 to 65 years. An HPV test was approved by the Food and Drug Administration in 2014 for primary cervical cancer screening in women age 25 years and older. Here, we present the results of clinical-economic comparisons of Pap plus HPV mRNA testing including genotyping for HPV 16/18 (co-testing) versus DNA-based primary HPV testing with HPV 16/18 genotyping and reflex cytology (HPV primary) for cervical cancer screening. A health state transition (Markov) model with 1-year cycling was developed using epidemiologic, clinical, and economic data from healthcare databases and published literature. A hypothetical cohort of one million women receiving triennial cervical cancer screening was simulated from ages 30 to 70 years. Screening strategies compared HPV primary to co-testing. Outcomes included total and incremental differences in costs, invasive cervical cancer (ICC) cases, ICC deaths, number of colposcopies, and quality-adjusted life years for cost-effectiveness calculations. Comprehensive sensitivity analyses were performed. In a simulation cohort of one million 30-year-old women modeled up to age 70 years, the model predicted that screening with HPV primary testing instead of co-testing could lead to as many as 2,141 more ICC cases and 2,041 more ICC deaths. In the simulation, co-testing demonstrated a greater number of lifetime quality-adjusted life years (22,334) and yielded $39.0 million in savings compared with HPV primary, thereby conferring greater effectiveness at lower cost. Model results demonstrate that co-testing has the potential to provide improved clinical and economic outcomes when compared with HPV primary. While actual cost and outcome data are evaluated, these findings are relevant to U.S. healthcare payers and women's health policy advocates seeking cost-effective cervical cancer screening technologies.
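
    Structurally, a health state transition (Markov) model of this kind advances a cohort through annual cycles with fixed transition probabilities. A toy three-state sketch with invented probabilities, illustrating the mechanics only (not the study's model or inputs):

      import numpy as np

      # Hypothetical annual transition matrix (rows sum to 1); states:
      # 0 = well, 1 = invasive cervical cancer (ICC), 2 = dead. Illustrative only.
      P = np.array([[0.9985, 0.0005, 0.0010],
                    [0.0000, 0.8500, 0.1500],
                    [0.0000, 0.0000, 1.0000]])

      cohort = np.array([1_000_000.0, 0.0, 0.0])   # one million women at age 30
      icc_cases = 0.0
      for year in range(40):                        # simulate ages 30 to 70
          icc_cases += cohort[0] * P[0, 1]          # incident ICC cases this cycle
          cohort = cohort @ P                       # advance the cohort one year
      print(f"cumulative ICC cases: {icc_cases:,.0f}; deaths: {cohort[2]:,.0f}")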

  11. Numerical Modeling of River Ice Processes on the Lower Nelson River

    NASA Astrophysics Data System (ADS)

    Malenchak, Jarrod Joseph

    Water resource infrastructure in cold regions of the world can be significantly impacted by the existence of river ice. Major engineering concerns related to river ice include ice jam flooding, the design and operation of hydropower facilities and other hydraulic structures, water supplies, as well as ecological, environmental, and morphological effects. The use of numerical simulation models has been identified as one of the most efficient means by which river ice processes can be studied and the effects of river ice be evaluated. The continued advancement of these simulation models will help to develop new theories and evaluate potential mitigation alternatives for these ice issues. In this thesis, a literature review of existing river ice numerical models, of anchor ice formation and modeling studies, and of aufeis formation and modeling studies is conducted. A high level summary of the two-dimensional CRISSP numerical model is presented as well as the developed freeze-up model with a focus specifically on the anchor ice and aufeis growth processes. This model includes development in the detailed heat transfer calculations, an improved surface ice mass exchange model which includes the rapids entrainment process, and an improved dry bed treatment model along with the expanded anchor ice and aufeis growth model. The developed sub-models are tested in an ideal channel setting as somewhat of a model confirmation. A case study of significant anchor ice and aufeis growth on the Nelson River in northern Manitoba, Canada, will be the primary field test case for the anchor ice and aufeis model. A second case study on the same river will be used to evaluate the surface ice components of the model in a field setting. The results from these case studies will be used to highlight the capabilities and deficiencies in the numerical model and to identify areas of further research and model development.

  12. Solid Propellant Test Article (SPTA) Test Stand

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This photograph shows the Solid Propellant Test Article (SPTA) test stand with the Modified NASA Motor (M-NASA) test article at the Marshall Space Flight Center (MSFC). The SPTA test stand, 12 feet wide by 12 feet long by 24 feet high, was built in 1989 to provide comparative performance data on nozzle and case insulation material and to verify thermostructural analysis models. A modified NASA 48-inch solid motor (M-NASA motor) with a 12-foot blast tube and 10-inch throat makes up the SPTA. The M-NASA motor is being used to evaluate solid rocket motor internal non-asbestos insulation materials, nozzle designs, materials, and new inspection techniques. New internal motor case instrumentation techniques are also being evaluated.

  13. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    DOE PAGES

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    2016-10-18

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
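
    The key idea of the multiplicative layer is that the predicted anisotropy is a coefficient-weighted sum of invariant basis tensors, b = Σᵢ gᵢ(λ) T⁽ⁱ⁾, so the prediction inherits the invariance of the basis. A forward-pass sketch in plain NumPy with random placeholder weights and only two basis tensors (the published basis has ten):

      import numpy as np

      def tbnn_forward(invariants, basis_tensors, W1, W2):
          # The network maps the scalar invariants to one coefficient g_i per
          # basis tensor; the anisotropy prediction is the g_i-weighted sum of
          # the basis tensors, which preserves Galilean invariance.
          h = np.tanh(invariants @ W1)          # hidden layer on the invariants
          g = h @ W2                            # coefficients g_1..g_m
          return np.einsum('i,ijk->jk', g, basis_tensors)

      rng = np.random.default_rng(0)
      invariants = rng.standard_normal(5)                # e.g. lambda_1..lambda_5
      basis_tensors = rng.standard_normal((2, 3, 3))     # T^(1), T^(2) (placeholder values)
      W1, W2 = rng.standard_normal((5, 16)), rng.standard_normal((16, 2))
      b_pred = tbnn_forward(invariants, basis_tensors, W1, W2)
      print(b_pred.shape)   # (3, 3) predicted anisotropy tensor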

  14. Injector Design Tool Improvements: User's manual for FDNS V.4.5

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Wei, Hong; Liu, Jiwen

    1998-01-01

    The major emphasis of the current effort is the development and validation of an efficient parallel machine computational model, based on the FDNS code, to analyze the fluid dynamics of a wide variety of liquid jet configurations for general liquid rocket engine injection system applications. This model includes physical models for droplet atomization, breakup/coalescence, evaporation, turbulence mixing and gas-phase combustion. Benchmark validation cases for liquid rocket engine chamber combustion conditions will be performed for model validation purposes. Test cases may include shear coaxial, swirl coaxial and impinging injection systems with combinations of LOX/H2 or LOX/RP-1 propellant injector elements used in rocket engine designs. As a final goal of this project, a well-tested parallel CFD performance methodology, together with a user's operation description, will be documented in a final technical report at the end of the proposed research effort.

  15. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.

  16. Reduction in symptomatic malaria prevalence through proactive community treatment in rural Senegal.

    PubMed

    Linn, Annē M; Ndiaye, Youssoupha; Hennessee, Ian; Gaye, Seynabou; Linn, Patrick; Nordstrom, Karin; McLaughlin, Matt

    2015-11-01

    We piloted a community-based proactive malaria case detection model in rural Senegal to evaluate whether this model can increase testing and treatment and reduce prevalence of symptomatic malaria in target communities. Home care providers conducted weekly sweeps of every household in their village throughout the transmission season to identify patients with symptoms of malaria, perform rapid diagnostic tests (RDT) on symptomatic patients and provide treatment for positive cases. The model was implemented in 15 villages from July to November 2013, the high transmission season. Fifteen comparison villages were chosen from those implementing Senegal's original, passive model of community case management of malaria. Three sweeps were conducted in the comparison villages to compare prevalence of symptomatic malaria using difference in differences analysis. At baseline, prevalence of symptomatic malaria confirmed by RDT for all symptomatic individuals found during sweeps was similar in both sets of villages (P = 0.79). At end line, prevalence was 16 times higher in the comparison villages than in the intervention villages (P = 0.003). Adjusting for potential confounders, the intervention was associated with a 30-fold reduction in odds of symptomatic malaria in the intervention villages (AOR = 0.033; 95% CI: 0.017, 0.065). Treatment seeking also increased in the intervention villages, with 57% of consultations by home care providers conducted between sweeps through routine community case management. This pilot study suggests that community-based proactive case detection reduces symptomatic malaria prevalence, likely through more timely case management and improved care seeking behaviour. A randomised controlled trial is needed to further evaluate the impact of this model. © 2015 John Wiley & Sons Ltd.
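
    The difference-in-differences estimate contrasts the change in prevalence in the intervention villages with the change in the comparison villages. A worked sketch with illustrative prevalences (not the study's data):

      # Difference-in-differences on symptomatic malaria prevalence (illustrative
      # numbers only): intervention vs comparison, baseline vs end line.
      prev = {("intervention", "baseline"): 0.060, ("intervention", "endline"): 0.002,
              ("comparison",   "baseline"): 0.058, ("comparison",   "endline"): 0.032}

      did = ((prev[("intervention", "endline")] - prev[("intervention", "baseline")])
             - (prev[("comparison", "endline")] - prev[("comparison", "baseline")]))
      print(f"difference-in-differences estimate: {did:+.3f}")  # negative favours the intervention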

  17. Use of chaos theory and complex systems modeling to study alcohol effects on fetal condition.

    PubMed

    Mehl, L E; Manchanda, S

    1993-10-01

    A systems dynamics computer model to predict birth complications for individual pregnant woman was developed from prospectively conducted data on a database of 125 pregnant women. The model is based upon nonlinear mathematics derived from the study of chaos and complex systems. The model was then tested prospectively on 27 additional pregnant women, making predictions on their level of obstetrical risk. The model was refined until it correctly predicted the outcomes of all 125 cases in the development database. Prediction was made with an accuracy of 25/27 cases for the prospective test cases. Predictions were made for fetal condition at birth, presence or absence of operative delivery, and presence or absence of uterine dysfunction. Then the model was used to explore alcohol use during pregnancy. A reasonable spread of alcohol use existed among subjects, allowing consideration of alcohol effects. Alcohol was found to have differential effects on fetal condition at birth depending upon the presence or absence of high levels of psychosocial stress and the use of other substances. In all cases, the effect of alcohol was only evident after the 10 drinks per week level was reached. For the high-stress/one other substance group, there could be an 18-fold effect on fetal condition at birth. For the low-stress/one other substance group, the effect was only 3-fold, and for the alcohol alone group, the effect was negligible.

  18. A model of scientific attitudes assessment by observation in physics learning based scientific approach: case study of dynamic fluid topic in high school

    NASA Astrophysics Data System (ADS)

    Yusliana Ekawati, Elvin

    2017-01-01

    This study aimed to produce a model for assessing scientific attitudes by observation in physics learning based on a scientific approach (a case study of the dynamic fluid topic in high school). Development of the instruments in this study adapted the Plomp model; the procedure includes initial investigation, design, construction, testing, evaluation and revision. Testing was done in Surakarta, and the data obtained were analyzed using the Aiken formula to determine the content validity of the instrument, Cronbach's alpha to determine the reliability of the instrument, and construct validity using confirmatory factor analysis with the LISREL 8.50 program. The results of this research were conceptual models, instruments and guidelines for assessing scientific attitudes by observation. The constructs assessed by the instruments include curiosity, objectivity, suspended judgment, open-mindedness, honesty and perseverance. The construct validity of the instruments was satisfactory (factor loadings > 0.3). The reliability of the model is good, with an alpha value of 0.899 (> 0.7). The test showed that the model fits: the theoretical model is supported by the empirical data, with a p-value of 0.315 (≥ 0.05) and RMSEA of 0.027 (≤ 0.08).
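
    For reference, the reliability figure quoted above can be reproduced with the standard Cronbach's alpha formula, α = k/(k−1) · (1 − Σσ²ᵢ/σ²_total). A sketch on simulated observation scores (the data here are invented):

      import numpy as np

      def cronbach_alpha(scores):
          # scores: (n_respondents, k_items); alpha = k/(k-1) * (1 - sum(item var)/var(total))
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1)
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars.sum() / total_var)

      rng = np.random.default_rng(0)
      ability = rng.standard_normal((200, 1))
      items = ability + 0.5 * rng.standard_normal((200, 6))  # six correlated observation items
      print(f"alpha = {cronbach_alpha(items):.3f}")           # ~0.95 for this simulation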

  19. Costs and health consequences of chlamydia management strategies among pregnant women in sub-Saharan Africa.

    PubMed

    Romoren, M; Hussein, F; Steen, T W; Velauthapillai, M; Sundby, J; Hjortdahl, P; Kristiansen, I S

    2007-12-01

    Chlamydia is the most common bacterial sexually transmitted infection worldwide and a major cause of morbidity, particularly among women and neonates. We compared costs and health consequences of using point-of-care (POC) tests with current syndromic management among antenatal care attendees in sub-Saharan Africa. We also compared erythromycin with azithromycin treatment, and universal with age-based chlamydia management. A decision analytical model was developed to compare diagnostic and treatment strategies, using Botswana as a case. Model input was based upon (1) a study of pregnant women in Botswana, (2) literature reviews and (3) expert opinion. We expressed the study outcome in terms of costs (US$), cases cured, magnitude of overtreatment and successful partner treatment. Azithromycin was less costly and more effective than erythromycin. Compared with syndromic management, testing all attendees on their first visit with a 75% sensitive POC test increased the number of cases cured from 1500 to 3500 in a population of 100,000 women, at a cost of US$38 per additional case cured. This cost was lower in high-prevalence populations or if testing was restricted to teenagers. The specific POC tests provided the advantage of substantial reductions in overtreatment with antibiotics and improved partner management. Using POC tests to diagnose chlamydia during antenatal care in sub-Saharan Africa entails greater health benefits than syndromic management does, at acceptable costs, especially when restricted to younger women. Changes in diagnostic strategy and treatment regimens may improve people's health and even reduce healthcare budgets.
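
    The headline figures above imply a simple incremental cost-effectiveness calculation. A sketch using the quoted case counts and cost per additional case cured:

      # Incremental cost-effectiveness of POC testing vs syndromic management,
      # using the case numbers quoted above and the quoted cost per case cured.
      cases_cured_syndromic = 1500
      cases_cured_poc = 3500
      cost_per_additional_case = 38.0        # US$ per additional case cured

      extra_cases = cases_cured_poc - cases_cured_syndromic
      incremental_cost = extra_cases * cost_per_additional_case
      print(f"{extra_cases} additional cases cured at an incremental cost of US${incremental_cost:,.0f}")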

  20. A Model Based Security Testing Method for Protocol Implementation

    PubMed Central

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementation is important and hard to be verified. Since the penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them together to generate the suitable test cases to verify the security of protocol implementation. PMID:25105163

  1. A Descriptive Evaluation of Automated Software Cost-Estimation Models,

    DTIC Science & Technology

    1986-10-01

    (Version 1.03D) • PCOC (Version 7.01) • PRICE S • SLIM (Version 1.1) • SoftCost (Version 5.1) • SPQR/20 (Version 1.1) • WICOMO (Version 1.3). These...produce detailed GANTT and PERT charts. SPQR/20 is based on a cost model developed at ITT. In addition to cost, schedule, and staffing estimates, it...cases and test runs required, and the effectiveness of pre-test and test activities. SPQR/20 also predicts enhancement and maintenance activities.

  2. A model based security testing method for protocol implementation.

    PubMed

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementation is important and hard to be verified. Since the penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them together to generate the suitable test cases to verify the security of protocol implementation.
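
    IOLTS specifics aside, the heart of model-based security test generation is enumerating input/output sequences from a labeled transition system. A toy sketch with a hypothetical three-state protocol model (not the paper's model):

      from itertools import count

      # Toy labeled transition system for a request/response protocol (hypothetical):
      # state -> list of (label, next_state); '?' marks inputs, '!' marks outputs.
      lts = {"idle":    [("?connect", "waiting")],
             "waiting": [("!accept", "open"), ("!reject", "idle")],
             "open":    [("?data", "open"), ("?close", "idle")]}

      def test_cases(state, depth):
          # Enumerate label sequences up to a given depth as abstract test cases.
          if depth == 0:
              yield []
              return
          for label, nxt in lts.get(state, []):
              for tail in test_cases(nxt, depth - 1):
                  yield [label] + tail

      for i, case in zip(count(1), test_cases("idle", 3)):
          print(i, " -> ".join(case))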

  3. Bulgarian fuel models developed for implementation in FARSITE simulations for test cases in Zlatograd area

    Treesearch

    Nina Dobrinkova; LaWen Hollingsworth; Faith Ann Heinsch; Greg Dillon; Georgi Dobrinkov

    2014-01-01

    As a key component of the cross-border project between Bulgaria and Greece known as OUTLAND, a team from the Bulgarian Academy of Sciences and Rocky Mountain Research Station started a collaborative project to identify and describe various fuel types for a test area in Bulgaria in order to model fire behavior for recent wildfires. Although there have been various...

  4. Dropouts and Budgets: A Test of a Dropout Reduction Model among Students in Israeli Higher Education

    ERIC Educational Resources Information Center

    Bar-Am, Ran; Arar, Osama

    2017-01-01

    This article deals with the problem of student dropout during the first year in a higher education institution. To date, no budget-constrained model has been developed and tested to prevent dropout among engineering students. This case study was conducted among first-year students taking evening classes in two practical engineering colleges in Israel.…

  5. Using open hole and cased-hole resistivity logs to monitor gas hydrate dissociation during a thermal test in the mallik 5L-38 research well, Mackenzie Delta, Canada

    USGS Publications Warehouse

    Anderson, B.I.; Collett, T.S.; Lewis, R.E.; Dubourg, I.

    2008-01-01

    Gas hydrates, which are naturally occurring ice-like combinations of gas and water, have the potential to provide vast amounts of natural gas from the world's oceans and polar regions. However, producing gas economically from hydrates entails major technical challenges. Proposed recovery methods such as dissociating or melting gas hydrates by heating or depressurization are currently being tested. One such test was conducted in northern Canada by the partners in the Mallik 2002 Gas Hydrate Production Research Well Program. This paper describes how resistivity logs were used to determine the size of the annular region of gas hydrate dissociation that occurred around the wellbore during the thermal test in the Mallik 5L-38 well. An open-hole logging suite, run prior to the thermal test, included array induction, array laterolog, nuclear magnetic resonance and 1.1-GHz electromagnetic propagation logs. The reservoir saturation tool was run both before and after the thermal test to monitor formation changes. A cased-hole formation resistivity log was run after the test. Baseline resistivity values in each formation layer (Rt) were established from the deep laterolog data. The resistivity in the region of gas hydrate dissociation near the wellbore (Rxo) was determined from electromagnetic propagation and reservoir saturation tool measurements. The radius of hydrate dissociation as a function of depth was then determined by means of iterative forward modeling of the cased-hole formation resistivity tool response. The solution was obtained by varying the modeled dissociation radius until the modeled log overlaid the field log. Pretest gas hydrate production computer simulations had predicted that dissociation would take place at a uniform radius over the 13-ft test interval. However, the post-test resistivity modeling showed that this was not the case. The resistivity-derived dissociation radius was greatest near the outlet of the pipe that circulated hot water in the wellbore, where the highest temperatures were recorded. The radius was smallest near the center of the test interval, where a conglomerate section with low values of porosity and permeability inhibited dissociation. The free gas volume calculated from the resistivity-derived dissociation radii yielded a value within 20 per cent of surface gauge measurements. These results show that the inversion of resistivity measurements holds promise for use in future gas hydrate monitoring. © 2008 Society of Petrophysicists and Well Log Analysts. All rights reserved.
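
    The iterative forward modeling described above is, in essence, a one-dimensional root-finding problem: adjust the dissociation radius until the modeled tool response matches the field log. A sketch with a stand-in forward model (the blending function and values are placeholders, not the actual tool physics):

      from scipy.optimize import brentq

      def forward_resistivity(radius_m, rxo=2.0, rt=50.0, r_tool=1.0):
          # Stand-in forward model: apparent resistivity as a smooth blend of the
          # dissociated-zone and undisturbed resistivities. Placeholder only; a
          # real workflow would call the cased-hole tool-response code here.
          w = radius_m / (radius_m + r_tool)   # weight grows with dissociation radius
          return rxo * w + rt * (1 - w)

      field_log_value = 10.0                   # hypothetical measured ohm-m at one depth
      radius = brentq(lambda r: forward_resistivity(r) - field_log_value, 1e-3, 10.0)
      print(f"estimated dissociation radius: {radius:.2f} m")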

  6. On testing two major cumulus parameterization schemes using the CSU Regional Atmospheric Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.

    1993-10-01

    One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection in an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved by the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes in reproducing the growth and life cycle of a convective system can be evaluated. These 2-D tests form the basis for further 3-D realistic simulations, which have a model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).

  7. Nuclear code case development of printed-circuit heat exchangers with thermal and mechanical performance testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aakre, Shaun R.; Jentz, Ian W.; Anderson, Mark H.

    The U.S. Department of Energy has agreed to fund a three-year integrated research project to close technical gaps involved with compact heat exchangers to be used in nuclear applications. This paper introduces the goals of the project and the research institutions and industrial partners working in collaboration to develop a draft Boiler and Pressure Vessel Code Case for this technology. Heat exchanger testing, as well as non-destructive and destructive evaluation, will be performed by researchers across the country to understand the performance of compact heat exchangers. Testing will be performed using coolants and conditions proposed for Gen IV reactor designs. Preliminary observations of the mechanical failure mechanisms of the heat exchangers using destructive and non-destructive methods are presented. Unit-cell finite element models assembled to help predict the mechanical behavior of these high-temperature components are discussed as well. Performance testing methodology is laid out in this paper along with preliminary modeling results, an introduction to x-ray and neutron inspection techniques, and results from a recent pressurization test of a printed-circuit heat exchanger. The operational and quality assurance knowledge gained from these models and validation tests will be useful to developers of supercritical CO2 systems, which commonly employ printed-circuit heat exchangers.

  8. The Dynamic Model and Inherent Variability: The Case of Northern France.

    ERIC Educational Resources Information Center

    Hornsby, David

    1999-01-01

    Explores the claims of the "dynamic" model of variation by testing it against data recorded in Avion, Northern France. Parallels are drawn between "langue d'oïl" areas of France and decreolization situations in which proponents of the dynamic model have generally worked. (Author/VWL)

  9. Testing contamination source identification methods for water distribution networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seth, Arpan; Klise, Katherine A.; Siirola, John D.

    In the event of contamination in a water distribution network (WDN), source identification (SI) methods that analyze sensor data can be used to identify the source location(s). Knowledge of the source location and characteristics is important to inform contamination control and cleanup operations. Various SI strategies that have been developed by researchers differ in their underlying assumptions and solution techniques. The following manuscript presents a systematic procedure for testing and evaluating SI methods. The performance of these SI methods is affected by various factors including the size of the WDN model, measurement error, modeling error, time and number of contaminant injections, and time and number of measurements. This paper includes test cases that vary these factors and evaluates three SI methods on the basis of accuracy and specificity. The tests are used to review and compare these different SI methods, highlighting their strengths in handling various identification scenarios. These SI methods and a testing framework that includes the test cases and analysis tools presented in this paper have been integrated into EPA's Water Security Toolkit (WST), a suite of software tools to help researchers and others in the water industry evaluate and plan various response strategies in case of a contamination incident. Lastly, a set of recommendations is made for users to consider when working with different categories of SI methods.

  10. Occupational asthma caused by samba (Triplochiton scleroxylon) wood dust in a professional maker of wooden models of airplanes: a case study.

    PubMed

    Krawczyk-Szulc, Patrycja; Wiszniewska, Marta; Pałczyński, Cezary; Nowakowska-Świrta, Ewa; Kozak, Anna; Walusiak-Skorupa, Jolanta

    2014-06-01

    Wood dust is a known occupational allergen that may induce, in exposed workers, respiratory diseases including asthma and allergic rhinitis. Samba (obeche, Triplochiton scleroxylon) is a tropical tree which grows in West Africa; Polish workers are therefore rarely exposed to it. This paper describes a case of occupational asthma caused by samba wood dust. The patient, with suspected occupational asthma due to wood dust, was examined at the Department of Occupational Diseases and Clinical Toxicology in the Nofer Institute of Occupational Medicine. Clinical evaluation included: analysis of occupational history, skin prick tests (SPT) to common and occupational allergens, determination of serum specific IgE to occupational allergens, serial spirometry measurements, a methacholine challenge test and a specific inhalation challenge test with samba dust. SPT and specific serum IgE assessment revealed sensitization to common and occupational allergens including samba. Spirometry measurements showed mild obstruction. The methacholine challenge test revealed a high level of bronchial hyperreactivity. The specific inhalation challenge test was positive, and cellular changes in nasal lavage and induced sputum confirmed an allergic reaction to samba. IgE-mediated allergy to samba wood dust was confirmed. This case report presents the first documented case of occupational asthma and rhinitis due to samba wood dust in a maker of wooden airplane models in Poland.

  11. Comparing a single case to a control group - Applying linear mixed effects models to repeated measures data.

    PubMed

    Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus

    2015-10-01

    In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension of this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference between two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate when using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
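
    For context, the modified t-test of Crawford and colleagues that the LMM approach generalizes compares a single case x* to a control sample of size n via t = (x* − m)/(s·√(1 + 1/n)) on n − 1 degrees of freedom. A minimal sketch:

      import numpy as np
      from scipy import stats

      def crawford_howell_t(case_score, controls):
          # Modified t-test of a single case against a small control sample.
          n = len(controls)
          m, s = np.mean(controls), np.std(controls, ddof=1)
          t = (case_score - m) / (s * np.sqrt(1 + 1 / n))
          p = 2 * stats.t.sf(abs(t), df=n - 1)   # two-sided p-value
          return t, p

      rng = np.random.default_rng(0)
      controls = rng.normal(100, 15, size=20)    # simulated control sample
      t, p = crawford_howell_t(62.0, controls)
      print(f"t = {t:.2f}, p = {p:.4f}")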

  12. Application of the finite element groundwater model FEWA to the engineered test facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craig, P.M.; Davis, E.C.

    1985-09-01

    A finite element model for water transport through porous media (FEWA) has been applied to the unconfined aquifer at the Oak Ridge National Laboratory Solid Waste Storage Area 6 Engineered Test Facility (ETF). The model was developed in 1983 as part of the Shallow Land Burial Technology - Humid Task (ONL-WL14) and was previously verified using several general hydrologic problems for which an analytic solution exists. Model application and calibration, as described in this report, consisted of modeling the ETF water table for three specialized cases: a one-dimensional steady-state simulation, a one-dimensional transient simulation, and a two-dimensional transient simulation. In the one-dimensional steady-state simulation, the FEWA output accurately predicted the water table during a long period in which there were no man-induced or natural perturbations to the system. The input parameters of most importance for this case were hydraulic conductivity and aquifer bottom elevation. In the two transient cases, the FEWA output has matched observed water table responses to a single rainfall event occurring in February 1983, yielding a calibrated finite element model that is useful for further study of additional precipitation events as well as contaminant transport at the experimental site.

  13. Estimating parameter values of a socio-hydrological flood model

    NASA Astrophysics Data System (ADS)

    Holkje Barendrecht, Marlies; Viglione, Alberto; Kreibich, Heidi; Vorogushyn, Sergiy; Merz, Bruno; Blöschl, Günter

    2018-06-01

    Socio-hydrological modelling studies published so far show that dynamic coupled human-flood models are a promising tool for representing the phenomena and feedbacks in human-flood systems. So far these models are mostly generic and have not been developed and calibrated to represent specific case studies. We believe that applying and calibrating these types of models to real-world case studies can help us further develop our understanding of the phenomena that occur in these systems. In this paper we propose a method to estimate the parameter values of a socio-hydrological model and test it by applying it to an artificial case study. We postulate a model that describes the feedbacks between floods, awareness and preparedness. After simulating hypothetical time series with a given combination of parameters, we sample a few data points for our variables and estimate the parameters given these data points using Bayesian inference. The results show that, if we are able to collect data for our case study, we would, in theory, be able to estimate the parameter values of our socio-hydrological flood model.
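
    A minimal sketch of the estimation step: random-walk Metropolis sampling of one parameter of a drastically simplified stand-in model (exponential decay of awareness after a flood), fit to a few synthetic data points. All values are hypothetical; the paper's model and inference are richer:

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy stand-in for one model component: awareness decays as exp(-mu * t)
      # after a flood. True mu = 0.5; we observe a few noisy points, echoing the
      # paper's artificial case study (values here are illustrative).
      t_obs = np.array([1.0, 3.0, 6.0, 10.0])
      y_obs = np.exp(-0.5 * t_obs) + rng.normal(0, 0.05, size=4)

      def log_posterior(mu, sigma=0.05):
          if mu <= 0:
              return -np.inf                     # flat prior on mu > 0
          resid = y_obs - np.exp(-mu * t_obs)
          return -0.5 * np.sum((resid / sigma) ** 2)

      samples, mu = [], 1.0
      for _ in range(20000):                     # random-walk Metropolis
          prop = mu + rng.normal(0, 0.1)
          if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(mu):
              mu = prop
          samples.append(mu)
      post = np.array(samples[5000:])            # drop burn-in
      print(f"posterior mean mu = {post.mean():.3f} +/- {post.std():.3f}")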

  14. Mixed Phase Modeling in GlennICE with Application to Engine Icing

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Jorgenson, Philip C. E.; Veres, Joseph P.

    2011-01-01

    A capability for modeling ice crystals and mixed phase icing has been added to GlennICE. Modifications have been made to the particle trajectory algorithm and energy balance to model this behavior. This capability has been added as part of a larger effort to model ice crystal ingestion in aircraft engines. Comparisons have been made to four mixed phase ice accretions performed in the Cox icing tunnel in order to calibrate an ice erosion model. A sample ice ingestion case was performed using the Energy Efficient Engine (E3) model in order to illustrate current capabilities. Engine performance characteristics were supplied using the Numerical Propulsion System Simulation (NPSS) model for this test case.

  15. Model Analysis and Model Creation: Capturing the Task-Model Structure of Quantitative Item Domains. Research Report. ETS RR-06-11

    ERIC Educational Resources Information Center

    Deane, Paul; Graf, Edith Aurora; Higgins, Derrick; Futagi, Yoko; Lawless, René

    2006-01-01

    This study focuses on the relationship between item modeling and evidence-centered design (ECD); it considers how an appropriately generalized item modeling software tool can support systematic identification and exploitation of task-model variables, and then examines the feasibility of this goal, using linear-equation items as a test case. The…

  16. Strain Gage Load Calibration of the Wing Interface Fittings for the Adaptive Compliant Trailing Edge Flap Flight Test

    NASA Technical Reports Server (NTRS)

    Miller, Eric J.; Holguin, Andrew C.; Cruz, Josue; Lokos, William A.

    2014-01-01

    The safety-of-flight parameters for the Adaptive Compliant Trailing Edge (ACTE) flap experiment require that flap-to-wing interface loads be sensed and monitored in real time to ensure that the structural load limits of the wing are not exceeded. This paper discusses the strain gage load calibration testing and load equation derivation methodology for the ACTE interface fittings. Both the left and right wing flap interfaces were monitored; each contained four uniquely designed and instrumented flap interface fittings. The interface hardware design and instrumentation layout are discussed. Twenty-one applied test load cases were developed using the predicted in-flight loads. Pre-test predictions of strain gage responses were produced using finite element method models of the interface fittings. Predicted and measured test strains are presented. A load testing rig and three hydraulic jacks were used to apply combinations of shear, bending, and axial loads to the interface fittings. Hardware deflections under load were measured using photogrammetry and transducers. Due to deflections in the interface fitting hardware and test rig, finite element model techniques were used to calculate the reaction loads throughout the applied load range, taking into account the elastically-deformed geometry. The primary load equations were selected based on multiple calibration metrics. An independent set of validation cases was used to validate each derived equation. The 2-sigma residual errors for the shear loads were less than eight percent of the full-scale calibration load; the 2-sigma residual errors for the bending moment loads were less than three percent of the full-scale calibration load. The derived load equations for shear, bending, and axial loads are presented, with the calculated errors for both the calibration cases and the independent validation load cases.
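
    Load equations of this kind are commonly linear combinations of strain gage outputs, with coefficients fit by least squares over the applied calibration load cases. A sketch on synthetic data (four bridge outputs are assumed for illustration; all numbers are invented):

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic calibration: 21 applied load cases, 4 strain gage bridges.
      # true_coeffs maps bridge outputs to shear load; values are illustrative.
      true_coeffs = np.array([120.0, -45.0, 80.0, 10.0])
      strains = rng.standard_normal((21, 4))             # measured bridge outputs
      applied_shear = strains @ true_coeffs + rng.normal(0, 5.0, size=21)

      coeffs, *_ = np.linalg.lstsq(strains, applied_shear, rcond=None)
      resid = applied_shear - strains @ coeffs
      print("fitted coefficients:", np.round(coeffs, 1))
      print(f"2-sigma residual: {2 * resid.std(ddof=4):.1f} (same units as load)")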

  17. Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing

    NASA Technical Reports Server (NTRS)

    Fricker, David M.

    1997-01-01

    The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. Also, the tests explored the effects of scaling the simulation with the number of processors in addition to distributing a constant size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with ethernet and ATM networking plus a shared memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared memory machine is fastest and ATM networking provides acceptable performance. The limitations of ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.
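
    Parallel efficiency here is the usual E = T₁/(p·Tₚ), i.e. speedup divided by processor count. A worked example with hypothetical timings:

      # Parallel speedup and efficiency from wall-clock times (hypothetical values).
      t_serial = 1200.0                          # seconds on 1 processor
      timings = {2: 640.0, 4: 350.0, 8: 210.0}   # seconds on p processors

      for p, t_p in timings.items():
          speedup = t_serial / t_p
          efficiency = speedup / p
          print(f"p={p}: speedup={speedup:.2f}, efficiency={efficiency:.0%}")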

  18. Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2006-01-01

    This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…

  19. Testing the Intervention Effect in Single-Case Experiments: A Monte Carlo Simulation Study

    ERIC Educational Resources Information Center

    Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick

    2017-01-01

    This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…
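
    The additive method (RTcombiP) refers the sum of k independent randomization-test p values to the distribution of a sum of k Uniform(0,1) variables (the Irwin-Hall distribution). A sketch of the combined p value, assuming independence:

      from math import comb, factorial, floor

      def additive_combined_p(p_values):
          # Edgington's additive method: combined p is the Irwin-Hall CDF
          # evaluated at s = sum of the observed p-values.
          k, s = len(p_values), sum(p_values)
          return sum((-1) ** j * comb(k, j) * (s - j) ** k
                     for j in range(floor(s) + 1)) / factorial(k)

      print(additive_combined_p([0.04, 0.10, 0.07]))  # combined p ~ 0.0015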

  20. Experiences Using Lightweight Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1997-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  1. Toward fidelity between specification and implementation

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.; Morrison, Jeff; Wu, Yunqing

    1994-01-01

    This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.

  2. Verification and validation of a reliable multicast protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.

  3. Recovery Act. Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Marte

    The research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: 1) Develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation. 2) Perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator. 3) Perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport. 4) Test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production. 5) Develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: 1) A true-3D hydro-thermal fracturing computer code that is particularly suited to EGS, 2) Documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock, 3) Documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications, and 4) Database of monitoring data, with focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.

  4. Recovery Act. Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Marte

    2013-12-31

    This research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation; perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator; perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport; test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production; and develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: a true-3D hydro-thermal fracturing computer code that is particularly suited to EGS; documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock; documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications; and a database of monitoring data, with a focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.

  5. Testing contamination source identification methods for water distribution networks

    DOE PAGES

    Seth, Arpan; Klise, Katherine A.; Siirola, John D.; ...

    2016-04-01

    In the event of contamination in a water distribution network (WDN), source identification (SI) methods that analyze sensor data can be used to identify the source location(s). Knowledge of the source location and characteristics is important to inform contamination control and cleanup operations. The various SI strategies that have been developed by researchers differ in their underlying assumptions and solution techniques. This manuscript presents a systematic procedure for testing and evaluating SI methods. The performance of these SI methods is affected by various factors, including the size of the WDN model, measurement error, modeling error, the time and number of contaminant injections, and the time and number of measurements. This paper includes test cases that vary these factors and evaluates three SI methods on the basis of accuracy and specificity. The tests are used to review and compare these different SI methods, highlighting their strengths in handling various identification scenarios. These SI methods, and a testing framework that includes the test cases and analysis tools presented in this paper, have been integrated into EPA’s Water Security Toolkit (WST), a suite of software tools to help researchers and others in the water industry evaluate and plan various response strategies in case of a contamination incident. Lastly, a set of recommendations is made for users to consider when working with different categories of SI methods.
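
    As a minimal illustration of the evaluation metrics named above, the sketch below scores one hypothetical SI run by accuracy (was the true source found?) and specificity (how few non-source nodes were flagged?). The node names and sets are invented; WST's actual analysis tools are more elaborate.

    ```python
    def evaluate_si_method(true_sources, identified_sources, all_nodes):
        """Score one source-identification run: accuracy here is the fraction
        of true sources recovered; specificity reflects how few non-source
        nodes were incorrectly flagged."""
        tp = len(true_sources & identified_sources)
        fp = len(identified_sources - true_sources)
        tn = len(all_nodes - true_sources - identified_sources)
        accuracy = tp / len(true_sources)
        specificity = tn / (tn + fp)
        return accuracy, specificity

    nodes = {f"J{i}" for i in range(1, 101)}   # hypothetical WDN junctions
    truth = {"J42"}                            # true injection node
    candidate = {"J42", "J43", "J57"}          # output of one SI method
    print(evaluate_si_method(truth, candidate, nodes))
    ```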

  6. A general nonlinear magnetomechanical model for ferromagnetic materials under a constant weak magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Pengpeng; Zheng, Xiaojing, E-mail: xjzheng@xidian.edu.cn; Jin, Ke

    2016-04-14

    Weak magnetic nondestructive testing (e.g., the metal magnetic memory method) concerns the magnetization variation of ferromagnetic materials due to the applied load and the weak magnetic field surrounding them. One key issue in these nondestructive technologies is the magnetomechanical effect, i.e., the quantitative evaluation of the magnetization state from the stress–strain condition. A representative phenomenological model to explain the magnetomechanical effect was proposed by Jiles in 1995. However, Jiles' model has some deficiencies in quantification; for instance, there is a visible difference between theoretical predictions and experimental measurements on the stress–magnetization curve, especially in the compression case. Based on the thermodynamic relations and the approach law of irreversible magnetization, a nonlinear coupled model is proposed to improve the quantitative evaluation of the magnetomechanical effect. Excellent agreement has been achieved between the predictions from the present model and previous experimental results. In comparison with Jiles' model, the prediction accuracy is improved greatly by the present model, particularly for the compression case. A detailed study has also been performed to reveal the effects of initial magnetization status, cyclic loading, and demagnetization factor on the magnetomechanical effect. Our theoretical model reveals that the stable weak magnetic signals of nondestructive testing after multiple cyclic loads are attributed to the first few cycles eliminating most of the irreversible magnetization. Remarkably, the existence of a demagnetization field can weaken the magnetomechanical effect and therefore significantly reduce the testing capability. This theoretical model can be adopted to quantitatively analyze magnetic memory signals and can then be applied in weak magnetic nondestructive testing.
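
    A minimal numerical sketch of the kind of approach law referred to above, assuming a Jiles-style law of approach toward a Langevin anhysteretic curve; the parameter values are illustrative and not taken from the paper.

    ```python
    import numpy as np

    Ms, a, xi = 1.7e6, 1000.0, 5000.0  # illustrative parameters (A/m, A/m, J/m^3)

    def m_anhysteretic(h):
        """Langevin anhysteretic magnetization."""
        x = h / a
        return Ms * (1.0 / np.tanh(x) - 1.0 / x)

    def law_of_approach(h, w_elastic, m0, steps=1000):
        """Integrate dM/dW = (M_an - M)/xi over elastic energy W:
        stress drives M irreversibly toward the anhysteretic value."""
        m, dw = m0, w_elastic / steps
        man = m_anhysteretic(h)
        for _ in range(steps):
            m += (man - m) / xi * dw
        return m

    print(law_of_approach(h=200.0, w_elastic=2e4, m0=0.0))
    ```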

  7. Educating resident physicians using virtual case-based simulation improves diabetes management: a randomized controlled trial.

    PubMed

    Sperl-Hillen, JoAnn; O'Connor, Patrick J; Ekstrom, Heidi L; Rush, William A; Asche, Stephen E; Fernandes, Omar D; Appana, Deepika; Amundson, Gerald H; Johnson, Paul E; Curran, Debra M

    2014-12-01

    To test a virtual case-based Simulated Diabetes Education intervention (SimDE) developed to teach primary care residents how to manage diabetes. Nineteen primary care residency programs, with 341 volunteer residents in all postgraduate years (PGY), were randomly assigned to a SimDE intervention group or control group (CG). The Web-based interactive educational intervention used computerized virtual patients who responded to provider actions through programmed simulation models. Eighteen distinct learning cases (L-cases) were assigned to SimDE residents over six months from 2010 to 2011. Impact was assessed using performance on four virtual assessment cases (A-cases), an objective knowledge test, and pre-post changes in self-assessed diabetes knowledge and confidence. Group comparisons were analyzed using generalized linear mixed models, controlling for clustering of residents within residency programs and differences in baseline knowledge. The percentages of residents appropriately achieving A-case composite clinical goals for glucose, blood pressure, and lipids were as follows: A-case 1: SimDE = 21.2%, CG = 1.8%, P = .002; A-case 2: SimDE = 15.7%, CG = 4.7%, P = .02; A-case 3: SimDE = 48.0%, CG = 10.4%, P < .001; and A-case 4: SimDE = 42.1%, CG = 18.7%, P = .004. The mean knowledge score and pre-post changes in self-assessed knowledge and confidence were significantly better for the SimDE group than for CG participants. A virtual case-based simulated diabetes education intervention improved diabetes management skills, knowledge, and confidence for primary care residents.

  8. Model-Selection Theory: The Need for a More Nuanced Picture of Use-Novelty and Double-Counting

    PubMed Central

    Steele, Katie; Werndl, Charlotte

    2018-01-01

    This article argues that common intuitions regarding (a) the specialness of ‘use-novel’ data for confirmation and (b) that this specialness implies the ‘no-double-counting rule’, which says that data used in ‘constructing’ (calibrating) a model cannot also play a role in confirming the model’s predictions, are too crude. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that on the Bayesian account of confirmation, and also on the standard classical hypothesis-testing account, claims (a) and (b) are not generally true; but for some select cases, it is possible to distinguish data used for calibration from use-novel data, where only the latter confirm. The more specialized classical model-selection methods, on the other hand, uphold a nuanced version of claim (a), but this comes apart from (b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised. Contents: 1 Introduction; 2 A Climate Case Study; 3 The Bayesian Method vis-à-vis Intuitions; 4 Classical Tests vis-à-vis Intuitions; 5 Classical Model-Selection Methods vis-à-vis Intuitions (5.1 Introducing classical model-selection methods; 5.2 Two cases); 6 Re-examining Our Case Study; 7 Conclusion. PMID:29780170

  9. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

    Ant colony optimization (ACO) for software test case generation is a popular topic in software testing engineering. However, traditional ACO has flaws: early-search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO methods for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three of the above methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
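
    A minimal, generic ACO sketch for path-coverage test generation. This is not the paper's ACIACO; the control-flow graph, evaporation rate, and reward are invented for illustration.

    ```python
    import random

    # Toy control-flow graph: node -> successors (hypothetical program paths).
    CFG = {"start": ["a", "b"], "a": ["c", "d"], "b": ["d"], "c": [], "d": []}
    EVAPORATION, Q = 0.5, 1.0
    pheromone = {(u, v): 1.0 for u, vs in CFG.items() for v in vs}

    def build_path():
        """One ant walks the CFG, choosing edges proportionally to pheromone."""
        node, path = "start", ["start"]
        while CFG[node]:
            succs = CFG[node]
            weights = [pheromone[(node, s)] for s in succs]
            node = random.choices(succs, weights=weights)[0]
            path.append(node)
        return path

    covered = set()
    for _ in range(20):  # 20 ants
        path = build_path()
        edges = set(zip(path, path[1:]))
        new_edges = edges - covered   # reward only newly covered edges
        covered |= edges
        for edge in pheromone:        # global update: evaporate first
            pheromone[edge] *= (1 - EVAPORATION)
        for edge in new_edges:
            pheromone[edge] += Q

    print("covered edges:", covered)
    ```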

  10. Analysis of household refrigerators for different testing standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bansal, P.K.; McGill, I.

    This study highlights the salient differences among various testing standards for household refrigerator-freezers and proposes a methodology for predicting the performance of a single evaporator-based vapor-compression refrigeration system (either refrigerator or freezer) from one test standard (where the test data are available, the reference case) to another (the alternative case). The standards studied during this investigation include the Australian-New Zealand Standard (ANZS), the International Standard (ISO), the American National Standard (ANSI), the Japanese Industrial Standard (JIS), and the Chinese National Standard (CNS). A simple analysis in conjunction with the BICYCLE model (Bansal and Rice 1993) is used to calculate the energy consumption of two refrigerator cabinets from the reference case to the alternative cases. The proposed analysis includes the effect of door openings (as required by the JIS) as well as defrost heaters. The analytical results are found to agree reasonably well with the experimental observations for translating energy consumption information from one standard to another.

  11. Identification of Low Order Equivalent System Models From Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    Identification of low order equivalent system dynamic models from flight test data was studied. Inputs were pilot control deflections, and outputs were aircraft responses, so the models characterized the total aircraft response including bare airframe and flight control system. Theoretical investigations were conducted and related to results found in the literature. Low order equivalent system modeling techniques using output error and equation error parameter estimation in the frequency domain were developed and validated on simulation data. It was found that some common difficulties encountered in identifying closed loop low order equivalent system models from flight test data could be overcome using the developed techniques. Implications for data requirements and experiment design were discussed. The developed methods were demonstrated using realistic simulation cases, then applied to closed loop flight test data from the NASA F-18 High Alpha Research Vehicle.
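
    A minimal sketch of frequency-domain equation-error estimation for a second-order low order equivalent system, fitted by linear least squares; the frequency-response data here are synthetic, not flight test data.

    ```python
    import numpy as np

    # Synthetic "truth": H(s) = b0 / (s^2 + a1*s + a0), a short-period-like LOES.
    a0_t, a1_t, b0_t = 9.0, 1.2, 9.0
    w = np.linspace(0.1, 10, 200)
    s = 1j * w
    H = b0_t / (s**2 + a1_t * s + a0_t)

    # Equation error: H*s^2 = -a1*H*s - a0*H + b0  (linear in a1, a0, b0).
    A = np.column_stack([-H * s, -H, np.ones_like(s)])
    y = H * s**2
    # Stack real and imaginary parts so the least-squares problem is real.
    A_ri = np.vstack([A.real, A.imag])
    y_ri = np.concatenate([y.real, y.imag])
    a1_hat, a0_hat, b0_hat = np.linalg.lstsq(A_ri, y_ri, rcond=None)[0]
    print(a1_hat, a0_hat, b0_hat)  # should recover 1.2, 9.0, 9.0
    ```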

  12. Orbit Estimation of Non-Cooperative Maneuvering Spacecraft

    DTIC Science & Technology

    2015-06-01

    only take on values that generate real sigma points; therefore, λ > −n. The additional weighting scheme is outlined in the following equations: κ = α² ... orbit shapes resulted in a similar model weighting. Additional cases of this orbit type also resulted in heavily weighting smaller η value models. It is ... determined using both the symmetric and additional parameters UTs. The best values for the weighting parameters are then compared for each test case

  13. Training artificial neural networks directly on the concordance index for censored data using genetic algorithms.

    PubMed

    Kalderstam, Jonas; Edén, Patrik; Bendahl, Pär-Ola; Strand, Carina; Fernö, Mårten; Ohlsson, Mattias

    2013-06-01

    The concordance index (c-index) is the standard way of evaluating the performance of prognostic models in the presence of censored data. Constructing prognostic models using artificial neural networks (ANNs) is commonly done by training on error functions which are modified versions of the c-index. Our objective was to demonstrate the capability of training directly on the c-index and to evaluate our approach compared to the Cox proportional hazards model. We constructed a prognostic model using an ensemble of ANNs which were trained using a genetic algorithm. The individual networks were trained on a non-linear artificial data set divided into a training and test set both of size 2000, where 50% of the data was censored. The ANNs were also trained on a data set consisting of 4042 patients treated for breast cancer spread over five different medical studies, 2/3 used for training and 1/3 used as a test set. A Cox model was also constructed on the same data in both cases. The two models' c-indices on the test sets were then compared. The ranking performance of the models is additionally presented visually using modified scatter plots. Cross validation on the cancer training set did not indicate any non-linear effects between the covariates. An ensemble of 30 ANNs with one hidden neuron was therefore used. The ANN model had almost the same c-index score as the Cox model (c-index=0.70 and 0.71, respectively) on the cancer test set. Both models identified similarly sized low risk groups with at most 10% false positives, 49 for the ANN model and 60 for the Cox model, but repeated bootstrap runs indicate that the difference was not significant. A significant difference could however be seen when applied on the non-linear synthetic data set. In that case the ANN ensemble managed to achieve a c-index score of 0.90 whereas the Cox model failed to distinguish itself from the random case (c-index=0.49). We have found empirical evidence that ensembles of ANN models can be optimized directly on the c-index. Comparison with a Cox model indicates that near identical performance is achieved on a real cancer data set while on a non-linear data set the ANN model is clearly superior. Copyright © 2013 Elsevier B.V. All rights reserved.
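
    A minimal sketch of the core idea, optimizing a simple linear risk model directly on the c-index with a toy genetic algorithm. The survival data are synthetic, and the paper's actual method uses an ensemble of ANNs rather than a linear model.

    ```python
    import random

    def c_index(times, events, scores):
        """Concordance index for right-censored data: over comparable pairs
        (the earlier time is an observed event), count pairs the model ranks
        correctly (higher risk score for the earlier failure)."""
        num, den = 0.0, 0
        n = len(times)
        for i in range(n):
            for j in range(n):
                if events[i] and times[i] < times[j]:
                    den += 1
                    if scores[i] > scores[j]:
                        num += 1
                    elif scores[i] == scores[j]:
                        num += 0.5
        return num / den

    random.seed(0)
    X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(80)]
    t = [abs(sum(x)) + random.random() for x in X]   # synthetic survival times
    e = [random.random() > 0.5 for _ in X]           # ~50% censoring

    # Toy GA: evolve linear weights whose fitness IS the c-index.
    pop = [[random.gauss(0, 1) for _ in range(3)] for _ in range(30)]
    for _ in range(25):
        scored = sorted(pop, key=lambda w: -c_index(
            t, e, [sum(wi * xi for wi, xi in zip(w, x)) for x in X]))
        parents = scored[:10]
        pop = parents + [[wi + random.gauss(0, 0.1)
                          for wi in random.choice(parents)]
                         for _ in range(20)]
    print("best weights:", scored[0])
    ```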

  14. Modelling of piezoelectric actuator dynamics for active structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.; Chung, Walter H.; Von Flotow, Andreas

    1990-01-01

    The paper models the effects of dynamic coupling between a structure and an electrical network through the piezoelectric effect. The coupled equations of motion of an arbitrary elastic structure with piezoelectric elements and passive electronics are derived. State space models are developed for three important cases: direct voltage driven electrodes, direct charge driven electrodes, and an indirect drive case where the piezoelectric electrodes are connected to an arbitrary electrical circuit with embedded voltage and current sources. The equations are applied to the case of a cantilevered beam with surface mounted piezoceramics and indirect voltage and current drive. The theoretical derivations are validated experimentally on an actively controlled cantilevered beam test article with indirect voltage drive.

  15. Preliminary Two-Phase Terry Turbine Nozzle Models for RCIC Off-Design Operation Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; O'Brien, James

    This report presents the effort to extend the single-phase analytical Terry turbine model to cover two-phase off-design conditions. The work includes: (1) adding well-established two-phase choking models, the Isentropic Homogeneous Equilibrium Model (IHEM) and Moody's model, and (2) theoretical development and implementation of a two-phase nozzle expansion model. The two choking models provide bounding cases for the two-phase choking mass flow rate. The new two-phase Terry turbine model uses the choking models to calculate the mass flow rate, the critical pressure at the nozzle throat, and the steam quality. In the divergent stage, we only consider the vapor phase, with a model similar to the single-phase case, by assuming that the liquid phase slips along the wall at a much slower speed and does not contribute to the impulse on the rotor. We also modify the stagnation conditions according to the two-phase choking conditions at the throat and the cross-section areas for steam flow at the nozzle throat and at the nozzle exit. The new two-phase Terry turbine model was benchmarked with the same steam nozzle test as for the single-phase model. Better agreement with the experimental data is observed than from the single-phase model. We also repeated the Terry turbine nozzle benchmark work against the Sandia CFD simulation results with the two-phase model for the pure steam inlet nozzle case. The RCIC start-up tests were simulated and compared with the single-phase model, and similar results are obtained. Finally, we designed a new RCIC system test case to simulate the self-regulated Terry turbine behavior observed in the Fukushima accidents. In this test, a periodic inlet condition with the steam quality varying from 1 to 0 is applied. For the high-quality inlet period, the RCIC system behaves just like the normal operation condition, with a high pump injection flow rate and a nominal steam release rate through the turbine, with the net addition of water to the primary system; for the low-quality inlet period, the RCIC turbine shaft work dramatically decreases and results in a much reduced pump injection flow rate, and the mixture flow rate through the turbine increases due to the high liquid phase flow rate. The net effect for this period is net removal of coolant from the primary loop. With the periodic addition and removal of coolant to the primary loop, the self-regulation mode of the RCIC system can be maintained for quite a long time. Both the IHEM and Moody's models generate similar phenomena; however, noticeable differences can be observed.
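
    A minimal sketch of the IHEM choking calculation described above, assuming the CoolProp property library is available; the inlet conditions are illustrative, not the RCIC operating conditions.

    ```python
    from CoolProp.CoolProp import PropsSI
    import numpy as np

    def ihem_choked_flux(p0, h0, fluid="Water"):
        """Isentropic Homogeneous Equilibrium Model: expand isentropically
        from stagnation (p0, h0) and maximize G = rho*sqrt(2*(h0 - h)) over
        throat pressure; the maximum is the choked mass flux."""
        s0 = PropsSI("S", "P", p0, "H", h0, fluid)
        best = 0.0
        for p in np.linspace(0.999 * p0, 0.2 * p0, 400):
            h = PropsSI("H", "P", p, "S", s0, fluid)
            rho = PropsSI("D", "P", p, "S", s0, fluid)
            g = rho * np.sqrt(max(2.0 * (h0 - h), 0.0))
            best = max(best, g)
        return best  # kg/(m^2 s)

    p0 = 70e5                                      # 7 MPa stagnation pressure
    h0 = PropsSI("H", "P", p0, "Q", 1.0, "Water")  # saturated steam inlet
    print(ihem_choked_flux(p0, h0))
    ```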

  16. The Case of Effort Variables in Student Performance.

    ERIC Educational Resources Information Center

    Borg, Mary O.; And Others

    1989-01-01

    Tests the existence of a structural shift between above- and below-average students in the econometric models that explain students' grades in principles of economics classes. Identifies a structural shift and estimates separate models for above- and below-average students. Concludes that separate models as well as educational policies are…

  17. Modeling of unit operating considerations in generating-capacity reliability evaluation. Volume 2. Computer-program documentation. Final report. [GENESIS, OPCON and OPPLAN]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Singh, C.

    1982-07-01

    This report describes the structure and operation of prototype computer programs developed for a Monte Carlo simulation model, GENESIS, and for two analytical models, OPCON and OPPLAN. It includes input data requirements and sample test cases.

  18. Evaluation of the channelized Hotelling observer with an internal-noise model in a train-test paradigm for cardiac SPECT defect detection.

    PubMed

    Brankov, Jovan G

    2013-10-21

    The channelized Hotelling observer (CHO) has become a widely used approach for evaluating medical image quality, acting as a surrogate for human observers in early-stage research on assessment and optimization of imaging devices and algorithms. The CHO is typically used to measure lesion detectability. Its popularity stems from experiments showing that the CHO's detection performance can correlate well with that of human observers. In some cases, CHO performance overestimates human performance; to counteract this effect, an internal-noise model is introduced, which allows the CHO to be tuned to match human-observer performance. Typically, this tuning is achieved using example data obtained from human observers. We argue that this internal-noise tuning step is essentially a model training exercise; therefore, just as in supervised learning, it is essential to test the CHO with an internal-noise model on a set of data that is distinct from that used to tune (train) the model. Furthermore, we argue that, if the CHO is to provide useful insights about new imaging algorithms or devices, the test data should reflect such potential differences from the training data; it is not sufficient simply to use new noise realizations of the same imaging method. Motivated by these considerations, the novelty of this paper is the use of new model selection criteria to evaluate ten established internal-noise models, utilizing four different channel models, in a train-test approach. Though not the focus of the paper, a new internal-noise model is also proposed that outperformed the ten established models in the cases tested. The results, using cardiac perfusion SPECT data, show that the proposed train-test approach is necessary, as judged by the newly proposed model selection criteria, to avoid spurious conclusions. The results also demonstrate that, in some models, the optimal internal-noise parameter is very sensitive to the choice of training data; therefore, these models are prone to overfitting, and will not likely generalize well to new data. In addition, we present an alternative interpretation of the CHO as a penalized linear regression wherein the penalization term is defined by the internal-noise model.
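
    A minimal sketch of a CHO with a simple additive internal-noise term. The channels here are random stand-ins (real studies use structured channel models such as Gabor or difference-of-Gaussians channels), and the images are synthetic.

    ```python
    import numpy as np

    def cho_detectability(signal_imgs, noise_imgs, channels, internal_var=0.0):
        """Channelized Hotelling observer: project images onto channels,
        form the Hotelling template from the channel covariance (optionally
        inflated by a diagonal internal-noise term), return detectability."""
        vs = signal_imgs @ channels              # (n_images, n_channels)
        vn = noise_imgs @ channels
        dv = vs.mean(axis=0) - vn.mean(axis=0)
        S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))
        S += internal_var * np.eye(S.shape[0])   # simple internal-noise model
        w = np.linalg.solve(S, dv)               # Hotelling template
        return np.sqrt(dv @ w)                   # detectability d_a

    rng = np.random.default_rng(0)
    npix, nch, nimg = 256, 4, 200
    channels = rng.standard_normal((npix, nch))  # stand-in channel profiles
    noise = rng.standard_normal((nimg, npix))
    signal = noise + 0.3                         # weak uniform "lesion"
    print(cho_detectability(signal, noise, channels, internal_var=0.5))
    ```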

  19. Test Cases for Modeling and Validation of Structures with Piezoelectric Actuators

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.

    2001-01-01

    A set of benchmark test articles was developed to validate techniques for modeling structures containing piezoelectric actuators using commercially available finite element analysis packages. The paper presents the development, modeling, and testing of two structures: an aluminum plate with surface-mounted patch actuators and a composite box beam with surface-mounted actuators. Three approaches for modeling structures containing piezoelectric actuators using the commercially available packages MSC/NASTRAN and ANSYS are presented. The approaches, applications, and limitations are discussed. Data for both test articles are compared in terms of frequency response functions from deflection and strain data to input voltage to the actuator. Frequency response function results using the three different analysis approaches provided comparable test/analysis results. It is shown that global versus local behavior of the analytical model and test article must be considered when comparing different approaches. Also, improper bonding of actuators greatly reduces the electrical-to-mechanical effectiveness of the actuators, producing anti-resonance errors.

  20. Impact and cost-effectiveness of chlamydia testing in Scotland: a mathematical modelling study.

    PubMed

    Looker, Katharine J; Wallace, Lesley A; Turner, Katherine M E

    2015-01-15

    Chlamydia is the most common sexually transmitted bacterial infection in Scotland, and is associated with potentially serious reproductive outcomes, including pelvic inflammatory disease (PID) and tubal factor infertility (TFI) in women. Chlamydia testing in Scotland is currently targeted towards symptomatic individuals, individuals at high risk of existing undetected infection, and young people. The cost-effectiveness of testing and treatment to prevent PID and TFI in Scotland is uncertain. A compartmental deterministic dynamic model of chlamydia infection in 15-24 year olds in Scotland was developed. The model was used to estimate the impact of a change in testing strategy from baseline (16.8% overall testing coverage; 0.4 partners notified and tested/treated per treated positive index) on PID and TFI cases. Cost-effectiveness calculations informed by best-available estimates of the quality-adjusted life years (QALYs) lost due to PID and TFI were also performed. Increasing overall testing coverage by 50% from baseline to 25.2% is estimated to result in 21% fewer cases in young women each year (PID: 703 fewer; TFI: 88 fewer). A 50% decrease to 8.4% would result in 20% more PID (669 additional) and TFI (84 additional) cases occurring annually. The cost per QALY gained of current testing activities compared to no testing is £40,034, which is above the £20,000-£30,000 cost-effectiveness threshold. However, calculations are hampered by lack of reliable data. Any increase in partner notification from baseline would be cost-effective (incremental cost per QALY gained for a partner notification efficacy of 1 compared to baseline: £5,119), and would increase the cost-effectiveness of current testing strategy compared to no testing, with threshold cost-effectiveness reached at a partner notification efficacy of 1.5. However, there is uncertainty in the extent to which partner notification is currently done, and hence the amount by which it could potentially be increased. Current chlamydia testing strategy in Scotland is not cost-effective under the conservative model assumptions applied. However, with better data enabling some of these assumptions to be relaxed, current coverage could be cost-effective. Meanwhile, increasing partner notification efficacy on its own would be a cost-effective way of preventing PID and TFI from current strategy.
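
    As a minimal illustration of the cost-effectiveness arithmetic used above: the incremental cost-effectiveness ratio (ICER) is the extra cost divided by the extra QALYs gained. The numbers below are invented to reproduce a 40,000-per-QALY figure and are not the study's underlying totals.

    ```python
    def icer(cost_new, qaly_new, cost_base, qaly_base):
        """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
        return (cost_new - cost_base) / (qaly_new - qaly_base)

    # Illustrative only: a strategy costing 1.2M more and gaining 30 QALYs
    # yields 40,000 per QALY, above a 20,000-30,000 threshold, hence
    # "not cost-effective" under that threshold.
    print(icer(3_200_000, 130, 2_000_000, 100))  # -> 40000.0
    ```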

  1. The Power of Exclusion using Automated Osteometric Sorting: Pair-Matching.

    PubMed

    Lynch, Jeffrey James; Byrd, John; LeGarde, Carrie B

    2018-03-01

    This study compares the original pair-matching osteometric sorting model (J Forensic Sci 2003;48:717) against two new models, providing validation and performance testing across three samples. The samples include the Forensic Data Bank, the USS Oklahoma, and the osteometric sorting reference used within the Defense POW/MIA Accounting Agency. A computer science solution to generating dynamic statistical models across a commingled assemblage is presented. The issue of normality is investigated, showing the relative robustness of the approach against non-normality, along with a data transformation to control for normality. A case study is provided showing the relative exclusion power of all three models on an active commingled case within the Defense POW/MIA Accounting Agency. In total, 14,357,220 osteometric t-tests were conducted. The results indicate that osteometric sorting performs as expected despite reference samples deviating from normality. The two new models outperform the original, and one of them is recommended to supersede the original for future osteometric sorting work. © 2017 American Academy of Forensic Sciences.
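
    A minimal sketch of the t-test-based exclusion logic behind osteometric pair-matching, using a synthetic reference sample of left-right measurement differences; the published models add measurement-specific detail on top of this idea.

    ```python
    import numpy as np
    from scipy import stats

    def pair_match_pvalue(ref_diffs, candidate_diff):
        """Osteometric-sorting-style exclusion test: compare a candidate
        left-right size difference against a reference sample of differences
        from correctly paired elements (one-sample t-style prediction test)."""
        n = len(ref_diffs)
        m, s = np.mean(ref_diffs), np.std(ref_diffs, ddof=1)
        t = (candidate_diff - m) / (s * np.sqrt(1 + 1 / n))
        return 2 * stats.t.sf(abs(t), df=n - 1)

    rng = np.random.default_rng(1)
    ref = rng.normal(0.0, 1.5, size=60)   # reference L-R differences (mm)
    print(pair_match_pvalue(ref, 0.8))    # plausible pair: large p-value
    print(pair_match_pvalue(ref, 7.0))    # exclude: tiny p-value
    ```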

  2. Cost-effectiveness analysis of carrier and prenatal genetic testing for X-linked hemophilia.

    PubMed

    Tsai, Meng-Che; Cheng, Chao-Neng; Wang, Ru-Jay; Chen, Kow-Tong; Kuo, Mei-Chin; Lin, Shio-Jean

    2015-08-01

    Hemophilia involves a lifelong burden from the perspective of the patient and the entire healthcare system. Advances in genetic testing provide valuable information to hemophilia-affected families for family planning. The aim of this study was to analyze the cost-effectiveness of carrier and prenatal genetic testing in the health-economic framework in Taiwan. A questionnaire was developed to assess the attitudes towards genetic testing for hemophilia. We modeled clinical outcomes of the proposed testing scheme by using the decision tree method. Incremental cost-effectiveness analysis was conducted, based on data from the National Health Insurance (NHI) database and a questionnaire survey. From the NHI database, 1111 hemophilic patients were identified and required an average medical expenditure of approximately New Taiwan (NT) $2.1 million per patient-year in 2009. By using the decision tree model, we estimated that 26 potential carriers need to be tested to prevent one case of hemophilia. At a screening rate of 79%, carrier and prenatal genetic testing would cost NT $85.9 million, which would be offset by an incremental saving of NT $203 million per year by preventing 96 cases of hemophilia. Assuming that the life expectancy for hemophilic patients is 70 years, genetic testing could further save NT $14.2 billion. Higher screening rates would increase the savings for healthcare resources. Carrier and prenatal genetic testing for hemophilia is a cost-effective investment in healthcare allocation. A case management system should be integrated in the current practice to facilitate patient care (e.g., collecting family pedigrees and providing genetic counseling). Copyright © 2013. Published by Elsevier B.V.

  3. Development of a severe local storm prediction system: A 60-day test of a mesoscale primitive equation model

    NASA Technical Reports Server (NTRS)

    Paine, D. A.; Zack, J. W.; Kaplan, M. L.

    1979-01-01

    The progress and problems associated with the dynamical forecast system which was developed to predict severe storms are examined. The meteorological problem of severe convective storm forecasting is reviewed. The cascade hypothesis which forms the theoretical core of the nested grid dynamical numerical modelling system is described. The dynamical and numerical structure of the model used during the 1978 test period is presented and a preliminary description of a proposed multigrid system for future experiments and tests is provided. Six cases from the spring of 1978 are discussed to illustrate the model's performance and its problems. Potential solutions to the problems are examined.

  4. Passenger car crippling end-load test and analyses

    DOT National Transportation Integrated Search

    2017-09-01

    The Transportation Technology Center, Inc. (TTCI) performed a series of full-scale tests and a finite element analysis (FEA) in a case study that may become a model for manufacturers seeking to use the waiver process of Tier I crashworthiness and occ...

  5. TREAT (TREe-based Association Test)

    Cancer.gov

    TREAT is an R package for detecting complex joint effects in case-control studies. The test statistic is derived from a tree-structure model by recursively partitioning the data. An ultra-fast algorithm is designed to evaluate the significance of the association between a candidate gene and the disease outcome.

  6. Simulations of coupled, Antarctic ice-ocean evolution using POP2x and BISICLES (Invited)

    NASA Astrophysics Data System (ADS)

    Price, S. F.; Asay-Davis, X.; Martin, D. F.; Maltrud, M. E.; Hoffman, M. J.

    2013-12-01

    We present initial results from Antarctic, ice-ocean coupled simulations using large-scale ocean circulation and land ice evolution models. The ocean model, POP2x, is a modified version of POP, a fully eddying, global-scale ocean model (Smith and Gent, 2002). POP2x allows for circulation beneath ice shelf cavities using the method of partial top cells (Losch, 2008). Boundary layer physics, which control fresh water and salt exchange at the ice-ocean interface, are implemented following Holland and Jenkins (1999), Jenkins (1999), and Jenkins et al. (2010). Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP; Losch, 2008; Kimura et al., 2013) and with results from other idealized ice-ocean coupling test cases (e.g., Goldberg et al., 2012). The land ice model, BISICLES (Cornford et al., 2012), includes a 1st-order accurate momentum balance (L1L2) and uses block structured, adaptive-mesh refinement to more accurately model regions of dynamic complexity, such as ice streams, outlet glaciers, and grounding lines. For idealized test cases focused on marine-ice sheet dynamics, BISICLES output compares very favorably with simulations based on the full, nonlinear Stokes momentum balance (MISMIP-3d; Pattyn et al., 2013). Here, we present large-scale (southern ocean) simulations using POP2x with fixed ice shelf geometries, which are used to obtain and validate modeled submarine melt rates against observations. These melt rates are, in turn, used to force evolution of the BISICLES model. An offline-coupling scheme, which we compare with the ice-ocean coupling work of Goldberg et al. (2012), is then used to sequentially update the sub-shelf cavity geometry seen by POP2x.

  7. SedFoam-2.0: a 3-D two-phase flow numerical model for sediment transport

    NASA Astrophysics Data System (ADS)

    Chauchat, Julien; Cheng, Zhen; Nagel, Tim; Bonamy, Cyrille; Hsu, Tian-Jian

    2017-11-01

    In this paper, a three-dimensional two-phase flow solver, SedFoam-2.0, is presented for sediment transport applications. The solver is extended from twoPhaseEulerFoam available in the 2.1.0 release of the open-source CFD (computational fluid dynamics) toolbox OpenFOAM. In this approach the sediment phase is modeled as a continuum, and constitutive laws have to be prescribed for the sediment stresses. In the proposed solver, two different intergranular stress models are implemented: the kinetic theory of granular flows and the dense granular flow rheology μ(I). For the fluid stress, laminar or turbulent flow regimes can be simulated and three different turbulence models are available for sediment transport: a simple mixing length model (one-dimensional configuration only), a k-ε model, and a k-ω model. The numerical implementation is demonstrated on four test cases: sedimentation of suspended particles, laminar bed load, sheet flow, and scour at an apron. These test cases illustrate the capabilities of SedFoam-2.0 to deal with complex turbulent sediment transport problems with different combinations of intergranular stress and turbulence models.

  8. Cleanroom certification model

    NASA Technical Reports Server (NTRS)

    Currit, P. A.

    1983-01-01

    The Cleanroom software development methodology is designed to take the gamble out of product releases for both suppliers and receivers of the software. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (Mean Time To Failure) of the product at the time of its release. A statistical approach to software product testing using randomly selected samples of test cases is considered. A statistical model is defined for the certification process which uses the timing data recorded during test. A reasonableness argument for this model is provided that uses previously published data on software product execution. Also included is a derivation of the certification model estimators and a comparison of the proposed least squares technique with the more commonly used maximum likelihood estimators.
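
    A minimal sketch of the flavor of such a certification estimator, assuming a log-linear reliability-growth model MTTF_k = A * B^k fitted to recorded interfailure times by ordinary least squares; the timing data and constants are invented, and the published model's exact estimators differ in detail.

    ```python
    import numpy as np

    # Interfailure times (hours) recorded during statistical testing of
    # successive product increments; illustrative numbers only.
    t = np.array([2.0, 3.1, 4.8, 7.4, 11.0, 17.5])
    k = np.arange(1, len(t) + 1)

    # Assume MTTF_k = A * B**k, i.e. log t_k = log A + k*log B + noise;
    # fit the log-times by ordinary least squares.
    slope, intercept = np.polyfit(k, np.log(t), 1)
    B, A = np.exp(slope), np.exp(intercept)
    mttf_next = A * B ** (len(t) + 1)
    print(f"A={A:.2f}, B={B:.2f}, predicted next MTTF={mttf_next:.1f} h")
    ```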

  9. Design of an Adaptive Power Regulation Mechanism and a Nozzle for a Hydroelectric Power Plant Turbine Test Rig

    NASA Astrophysics Data System (ADS)

    Mert, Burak; Aytac, Zeynep; Tascioglu, Yigit; Celebioglu, Kutay; Aradag, Selin; ETU Hydro Research Center Team

    2014-11-01

    This study deals with the design of a power regulation mechanism for a Hydroelectric Power Plant (HEPP) model turbine test system which is designed to test Francis type hydroturbines up to 2 MW power with varying head and flow(discharge) values. Unlike the tailor made regulation mechanisms of full-sized, functional HEPPs; the design for the test system must be easily adapted to various turbines that are to be tested. In order to achieve this adaptability, a dynamic simulation model is constructed in MATLAB/Simulink SimMechanics. This model acquires geometric data and hydraulic loading data of the regulation system from Autodesk Inventor CAD models and Computational Fluid Dynamics (CFD) analysis respectively. The dynamic model is explained and case studies of two different HEPPs are performed for validation. CFD aided design of the turbine guide vanes, which is used as input for the dynamic model, is also presented. This research is financially supported by Turkish Ministry of Development.

  10. Spectral Invariance Principles Observed in Spectral Radiation Measurements of the Transition Zone

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander

    2011-01-01

    The main theme for our research is the understanding and closure of the surface spectral shortwave radiation problem in fully 3D cloud situations by combining the new ARM scanning radars, shortwave spectrometers, and microwave radiometers with the arsenal of radiative transfer tools developed by our group. In particular, we define first a large number of cloudy test cases spanning all 3D possibilities not just the customary uniform-overcast ones. Second, for each case, we define a "Best Estimate of Clouds That Affect Shortwave Radiation" using all relevant ARM instruments, notably the new scanning radars, and contribute this to the ARM Archive. Third, we test the ASR-signature radiative transfer model RRTMG_SW for those cases, focusing on the near-IR because of long-standing problems in this spectral region, and work with the developers to improve RRTMG_SW in order to increase its penetration into the modeling community.

  11. Coupled incompressible Smoothed Particle Hydrodynamics model for continuum-based modelling sediment transport

    NASA Astrophysics Data System (ADS)

    Pahar, Gourabananda; Dhar, Anirban

    2017-04-01

    A coupled solenoidal Incompressible Smoothed Particle Hydrodynamics (ISPH) model is presented for simulation of sediment displacement in an erodible bed. The coupled framework consists of two separate incompressible modules: (a) a granular module and (b) a fluid module. The granular module considers a friction-based rheology model to calculate deviatoric stress components from pressure. The module is validated against the Bagnold flow profile and two standardized test cases of sediment avalanching. The fluid module resolves fluid flow inside and outside the porous domain. An interaction force pair containing fluid pressure, a viscous term, and drag force acts as a bridge between the two flow modules. The coupled model is validated against three dam-break flow cases with different initial conditions of the movable bed. The simulated results are in good agreement with experimental data. A demonstrative case considering the effect of granular column failure under full/partial submergence highlights the capability of the coupled model for application in generalized scenarios.

  12. Integration of system identification and finite element modelling of nonlinear vibrating structures

    NASA Astrophysics Data System (ADS)

    Cooper, Samson B.; DiMaio, Dario; Ewins, David J.

    2018-03-01

    The Finite Element Method (FEM), experimental modal analysis (EMA), and other linear analysis techniques have been established as reliable tools for the dynamic analysis of engineering structures. They are often used to provide solutions for small and large structures and a variety of other cases in structural dynamics, even those exhibiting a certain degree of nonlinearity. Unfortunately, when the nonlinear effects are substantial or the accuracy of the predicted response is of vital importance, a linear finite element model will generally prove to be unsatisfactory. As a result, the validated linear FE model requires further enhancement so that it can represent and predict the nonlinear behaviour exhibited by the structure. In this paper, a pragmatic approach to integrating test-based system identification and FE modelling of a nonlinear structure is presented. This integration is based on three different phases: the first phase involves the derivation of an Underlying Linear Model (ULM) of the structure, the second phase includes experiment-based nonlinear identification using measured time series, and the third phase covers augmenting the linear FE model and experimental validation of the nonlinear FE model. The proposed case study is demonstrated on a twin cantilever beam assembly coupled with a flexible arch-shaped beam. In this case, polynomial-type nonlinearities are identified and validated with force-controlled stepped-sine test data at several excitation levels.

  13. An Empirical Comparison of Selected Two-Sample Hypothesis Testing Procedures Which Are Locally Most Powerful Under Certain Conditions.

    ERIC Educational Resources Information Center

    Hoover, H. D.; Plake, Barbara

    The relative power of the Mann-Whitney statistic, the t-statistic, the median test, a test based on exceedances (A,B), and two special cases of (A,B) the Tukey quick test and the revised Tukey quick test, was investigated via a Monte Carlo experiment. These procedures were compared across four population probability models: uniform, beta, normal,…

  14. Bayes Factor based on the Trend Test Incorporating Hardy-Weinberg Disequilibrium: More Powerful to Detect Genetic Association

    PubMed Central

    Xu, Jinfeng; Yuan, Ao; Zheng, Gang

    2012-01-01

    In the analysis of case-control genetic association, the trend test and Pearson’s test are the two most commonly used tests. In genome-wide association studies (GWAS), the Bayes factor is a useful tool to support significant p-values, and a better measure than the p-value when results are compared across studies with different sample sizes. When reporting the p-value of the trend test, we propose a Bayes factor directly based on the trend test. To improve the power to detect association under recessive or dominant genetic models, we propose a Bayes factor based on the trend test that incorporates Hardy-Weinberg disequilibrium in cases. When the true model is unknown, or both the trend test and Pearson’s test or other robust tests are applied in genome-wide scans, we propose a joint Bayes factor combining the previous two Bayes factors. All three Bayes factors studied in this paper have closed forms and are easy to compute without integration, so they can be reported along with p-values, especially in GWAS. We discuss how to use each of them and how to specify priors. Simulation studies and applications to three GWAS are provided to illustrate their usefulness for detecting non-additive gene susceptibility in practice. PMID:22607017
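
    A minimal sketch of the Cochran-Armitage trend test that these Bayes factors build on; the genotype counts are hypothetical, and the paper's closed-form Bayes factors add prior specifications on top of this statistic.

    ```python
    import numpy as np
    from scipy import stats

    def trend_test(cases, controls, scores=(0, 1, 2)):
        """Cochran-Armitage trend test for 2 x k case-control genotype
        counts; scores (0, 1, 2) encode an additive genetic model."""
        r = np.asarray(cases, dtype=float)
        s = np.asarray(controls, dtype=float)
        w = np.asarray(scores, dtype=float)
        n = r + s
        R, S = r.sum(), s.sum()
        N = R + S
        u = (w * (S * r - R * s)).sum() / N
        var = R * S / (N**2 * (N - 1)) * (N * (w**2 * n).sum()
                                          - (w * n).sum()**2)
        z = u / np.sqrt(var)
        return z, 2 * stats.norm.sf(abs(z))

    # Hypothetical genotype counts (AA, Aa, aa) in cases and controls.
    print(trend_test(cases=[200, 250, 50], controls=[250, 220, 30]))
    ```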

  15. Developments and Validations of Fully Coupled CFD and Practical Vortex Transport Method for High-Fidelity Wake Modeling in Fixed and Rotary Wing Applications

    NASA Technical Reports Server (NTRS)

    Anusonti-Inthra, Phuriwat

    2010-01-01

    A novel Computational Fluid Dynamics (CFD) coupling framework using a conventional Reynolds-Averaged Navier-Stokes (RANS) solver to resolve the near-body flow field and a Particle-based Vorticity Transport Method (PVTM) to predict the evolution of the far-field wake is developed, refined, and evaluated for fixed and rotary wing cases. For the rotary wing case, the RANS/PVTM modules are loosely coupled to a Computational Structural Dynamics (CSD) module that provides blade motion and vehicle trim information. The PVTM module is refined by the addition of vortex diffusion, stretching, and reorientation models as well as an efficient memory model. Results from the coupled framework are compared with several experimental data sets (a fixed-wing wind tunnel test and a rotary-wing hover test).

  16. Second-Moment RANS Model Verification and Validation Using the Turbulence Modeling Resource Website (Invited)

    NASA Technical Reports Server (NTRS)

    Eisfeld, Bernhard; Rumsey, Chris; Togiti, Vamshi

    2015-01-01

    The implementation of the SSG/LRR-omega differential Reynolds stress model into the NASA flow solvers CFL3D and FUN3D and the DLR flow solver TAU is verified by studying the grid convergence of the solution of three different test cases from the Turbulence Modeling Resource Website. The model's predictive capabilities are assessed based on four basic and four extended validation cases also provided on this website, involving attached and separated boundary layer flows, effects of streamline curvature and secondary flow. Simulation results are compared against experimental data and predictions by the eddy-viscosity models of Spalart-Allmaras (SA) and Menter's Shear Stress Transport (SST).

  17. Robust Flutter Analysis for Aeroservoelastic Systems

    NASA Astrophysics Data System (ADS)

    Kotikalpudi, Aditya

    The dynamics of a flexible air vehicle are typically described using an aeroservoelastic model which accounts for interaction between aerodynamics, structural dynamics, rigid body dynamics and control laws. These subsystems can be individually modeled using a theoretical approach and experimental data from various ground tests can be combined into them. For instance, a combination of linear finite element modeling and data from ground vibration tests may be used to obtain a validated structural model. Similarly, an aerodynamic model can be obtained using computational fluid dynamics or simple panel methods and partially updated using limited data from wind tunnel tests. In all cases, the models obtained for these subsystems have a degree of uncertainty owing to inherent assumptions in the theory and errors in experimental data. Suitable uncertain models that account for these uncertainties can be built to study the impact of these modeling errors on the ability to predict dynamic instabilities known as flutter. This thesis addresses the methods used for modeling rigid body dynamics, structural dynamics and unsteady aerodynamics of a blended wing design called the Body Freedom Flutter vehicle. It discusses the procedure used to incorporate data from a wide range of ground based experiments in the form of model uncertainties within these subsystems. Finally, it provides the mathematical tools for carrying out flutter analysis and sensitivity analysis which account for these model uncertainties. These analyses are carried out for both open loop and controller in the loop (closed loop) cases.

  18. Cavitating Propeller Performance in Inclined Shaft Conditions with OpenFOAM: PPTC 2015 Test Case

    NASA Astrophysics Data System (ADS)

    Gaggero, Stefano; Villa, Diego

    2018-05-01

    In this paper, we present our analysis of the non-cavitating and cavitating unsteady performances of the Potsdam Propeller Test Case (PPTC) in oblique flow. For our calculations, we used the Reynolds-averaged Navier-Stokes equation (RANSE) solver from the open-source OpenFOAM libraries. We selected the homogeneous mixture approach to solve for multiphase flow with phase change, using the volume of fluid (VoF) approach to solve the multiphase flow and modeling the mass transfer between vapor and water with the Schnerr-Sauer model. Comparing the model results with the experimental measurements collected during the Second Workshop on Cavitation and Propeller Performance - SMP'15 enabled our assessment of the reliability of the open-source calculations. Comparisons with the numerical data collected during the workshop enabled further analysis of the reliability of different flow solvers from which we produced an overview of recommended guidelines (mesh arrangements and solver setups) for accurate numerical prediction even in off-design conditions. Lastly, we propose a number of calculations using the boundary element method developed at the University of Genoa for assessing the reliability of this dated but still widely adopted approach for design and optimization in the preliminary stages of very demanding test cases.

  19. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    DOE PAGES

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor; ...

    2018-02-26

    In high-elevation, boreal, and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. Here, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs, resulting from differences in the governing equations, discretization issues, or the freezing curve used by some codes.

  20. International Energy Agency Ocean Energy Systems Task 10 Wave Energy Converter Modeling Verification and Validation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Yu, Yi-Hsiang; Nielsen, Kim

    This is the first joint reference paper for the Ocean Energy Systems (OES) Task 10 Wave Energy Converter modeling verification and validation group. The group is established under the OES Energy Technology Network program under the International Energy Agency. OES was founded in 2001; Task 10 was proposed by Bob Thresher (National Renewable Energy Laboratory) in 2015 and approved by the OES Executive Committee (EXCO) in 2016. The kickoff workshop took place in September 2016, wherein the initial baseline task was defined. Experience from similar offshore wind validation/verification projects (OC3-OC5, conducted within the International Energy Agency Wind Task 30) [1], [2] showed that a simple test case would help the initial cooperation to present results in a comparable way. A heaving sphere was chosen as the first test case. The team of project participants simulated different numerical experiments, such as heave decay tests and regular and irregular wave cases. The simulation results are presented and discussed in this paper.
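
    A minimal sketch of the heave decay verification exercise, modeling the sphere as a single-degree-of-freedom oscillator with assumed added-mass and damping coefficients (illustrative values, not the Task 10 reference data):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Heave decay of a floating sphere: (m + A33)*z'' + B33*z' + C33*z = 0.
    # Illustrative coefficients for a 5 m radius, half-submerged sphere.
    rho, g, r = 1025.0, 9.81, 5.0
    m = rho * (2.0 / 3.0) * np.pi * r**3   # mass of displaced half-sphere
    A33 = 0.5 * m                          # assumed heave added mass
    B33 = 5e4                              # assumed radiation damping (N s/m)
    C33 = rho * g * np.pi * r**2           # hydrostatic restoring stiffness

    def rhs(t, y):
        z, zdot = y
        return [zdot, -(B33 * zdot + C33 * z) / (m + A33)]

    # 1 m initial heave offset released from rest; the decaying peaks give a
    # log-decrement that each participant's simulation can be checked against.
    sol = solve_ivp(rhs, (0, 40), [1.0, 0.0], max_step=0.01)
    print(sol.y[0, -1])
    ```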

  1. Characterisation and calculation of nonlinear vibrations in gas foil bearing systems-An experimental and numerical investigation

    NASA Astrophysics Data System (ADS)

    Hoffmann, Robert; Liebich, Robert

    2018-01-01

    This paper states a unique classification for understanding the source of the subharmonic vibrations of gas foil bearing (GFB) systems, which is tested experimentally and numerically. The classification is based on two cases, where an isolated system is assumed: Case 1 considers a poorly balanced rotor, which results in increased displacement during operation and interacts with the nonlinear progressive structure; it is comparable to a Duffing oscillator. In contrast, for case 2 a well/perfectly balanced rotor is assumed; hence, the only source of nonlinear subharmonic whirling is the fluid film self-excitation. Experimental tests with different unbalance levels and GFB modifications confirm these assumptions. Furthermore, simulations are able to predict the self-excitations and the synchronous and subharmonic resonances of the experimental test. The numerical model is based on a linearised eigenvalue problem. The GFB model uses linearised stiffness and damping parameters obtained by applying a perturbation method to the Reynolds equation. The nonlinear bump structure is simplified by a link-spring model. It includes Coulomb friction effects inside the elastic corrugated structure and captures the interaction between single bumps.
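
    A minimal sketch of the Duffing-oscillator analogy invoked for case 1; the parameters are illustrative and chosen only to exhibit the hardening-stiffness response, not fitted to any GFB rotor.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Duffing oscillator with progressive (hardening) stiffness:
    # x'' + c*x' + k*x + k3*x^3 = F*cos(w*t).
    c, k, k3, F, w = 0.05, 1.0, 1.0, 2.5, 2.0   # illustrative parameters

    def rhs(t, y):
        x, v = y
        return [v, -c * v - k * x - k3 * x**3 + F * np.cos(w * t)]

    sol = solve_ivp(rhs, (0, 200), [0.1, 0.0], max_step=0.01)
    x = sol.y[0][sol.t > 150]   # discard the transient
    # A spectrum of x would show response at the drive frequency w and, for
    # suitable parameters, subharmonic content below w, mirroring the
    # unbalance-driven subharmonics described in the abstract.
    print(x.min(), x.max())
    ```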

  2. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor

    In high-elevation, boreal, and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. Here, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs, resulting from differences in the governing equations, discretization issues, or the freezing curve used by some codes.

  3. Improving Groundwater Data Interoperability: Results of the Second OGC Groundwater Interoperability Experiment

    NASA Astrophysics Data System (ADS)

    Lucido, J. M.; Booth, N.

    2014-12-01

    Interoperable sharing of groundwater data across international borders is essential for the proper management of global water resources. However, storage and management of groundwater data are often distributed across many agencies or organizations. Furthermore, these data may be represented in disparate proprietary formats, posing a significant challenge for integration. For this reason, standard data models are required to achieve interoperability across geographical and political boundaries. The GroundWater Markup Language 1.0 (GWML1) was developed in 2010 as an extension of the Geography Markup Language (GML) in order to support groundwater data exchange within Spatial Data Infrastructures (SDI). In 2013, development of GWML2 was initiated under the sponsorship of the Open Geospatial Consortium (OGC) for intended adoption by the international community as the authoritative standard for the transfer of groundwater feature data, including data about water wells, aquifers, and related entities. GWML2 harmonizes GWML1 and the EU's INSPIRE models related to geology and hydrogeology. Additionally, an interoperability experiment was initiated to test the model for commercial, technical, scientific, and policy use cases. The scientific use case focuses on the delivery of data required as input to computational flow modeling software used to determine the flow of groundwater within a particular aquifer system. It involves the delivery of properties associated with hydrogeologic units, observations related to those units, and information about the related aquifers. To test this use case, web services are being implemented using GWML2 and WaterML2, the authoritative standard for water time series observations, in order to serve USGS water well and hydrogeologic data via standard OGC protocols. Furthermore, integration of these data into a computational groundwater flow model will be tested. This submission will present the GWML2 information model and the results of the interoperability experiment, with a particular emphasis on the scientific use case.

  4. A depth integrated model for dry geophysical granular flows

    NASA Astrophysics Data System (ADS)

    Rossi, Giulia; Armanini, Aronne

    2017-04-01

    Granular flows are rapid to very rapid flows made up of dry sediment (rock and snow avalanches) or mixtures of water and sediment (debris flows). They are among the most dangerous and destructive natural phenomena, and the definition of run-out scenarios for risk assessment has received wide interest in recent decades. Nowadays many urbanized mountain areas are affected by these phenomena, which cause extensive property damage and loss of life. Numerical simulation is a fundamental step in analyzing these phenomena and defining run-out scenarios. For this reason, a depth-integrated model is developed to analyze the case of dry granular flows, representative of snow or rock avalanches. The model consists of a two-phase mathematical description of the flow motion: it is similar to the solid transport equations but substantially different, since there is no water in this case. A set of partial differential equations is obtained and written in the form of a hyperbolic system. The numerical solution is computed through a path-conservative SPH (Smoothed Particle Hydrodynamics) scheme in the two-dimensional case. Appropriate closure relations are necessary for the concentration C and the bed shear stress τ0. As a first approximation, it is possible to derive formulations for the two closure relations from appropriate rheological models (Bagnold theory and the dense-gas analogy). The model parameters are determined by means of laboratory tests on dry granular material, and the effectiveness of the closure relations is verified through comparison with the experimental results. In particular, the experimental investigation aims to reproduce two configurations for dry granular material: the dam-break test problem and stationary motion with changes in planimetry. The experiments are carried out in the Hydraulic Laboratory of the University of Trento, using channels with variable slope and variable shape. The mathematical model will be tested by comparing the numerical results with the experimental data.
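
    For orientation, the hyperbolic structure involved can be sketched by a generic one-dimensional depth-averaged system of the Savage-Hutter type; this is a textbook form under simplifying assumptions, not the paper's exact two-phase equations or closures.

    ```latex
    % Depth-averaged mass and momentum balances for a dry granular layer:
    % h = flow depth, u = depth-averaged velocity, \theta = bed slope angle,
    % \tau_0 = bed shear stress (closure), \rho = bulk density.
    \begin{align}
      \partial_t h + \partial_x (h u) &= 0, \\
      \partial_t (h u) + \partial_x\!\left( h u^2 + \tfrac{1}{2} g h^2 \cos\theta \right)
        &= g h \sin\theta - \frac{\tau_0}{\rho}.
    \end{align}
    ```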

  5. Models in Physics, Models for Physics Learning, and Why the Distinction May Matter in the Case of Electric Circuits

    ERIC Educational Resources Information Center

    Hart, Christina

    2008-01-01

    Models are important both in the development of physics itself and in teaching physics. Historically, the consensus models of physics have come to embody particular ontological assumptions and epistemological commitments. Educators have generally assumed that the consensus models of physics, which have stood the test of time, will also work well…

  6. A Multiple Group Measurement Model of Children's Reports of Parental Socioeconomic Status. Discussion Papers No. 531-78.

    ERIC Educational Resources Information Center

    Mare, Robert D.; Mason, William M.

    An important class of applications of measurement error or constrained factor analytic models consists of comparing models for several populations. In such cases, it is appropriate to make explicit statistical tests of model similarity across groups and to constrain some parameters of the models to be equal across groups using a priori substantive…

  7. The Log-Linear Cognitive Diagnostic Model (LCDM) as a Special Case of The General Diagnostic Model (GDM). Research Report. ETS RR-14-40

    ERIC Educational Resources Information Center

    von Davier, Matthias

    2014-01-01

    Diagnostic models combine multiple binary latent variables in an attempt to produce a latent structure that provides more information about test takers' performance than do unidimensional latent variable models. Recent developments in diagnostic modeling emphasize the possibility that multiple skills may interact in a conjunctive way within the…

  8. Validation of a numerical method for interface-resolving simulation of multicomponent gas-liquid mass transfer and evaluation of multicomponent diffusion models

    NASA Astrophysics Data System (ADS)

    Woo, Mino; Wörner, Martin; Tischer, Steffen; Deutschmann, Olaf

    2018-03-01

    The multicomponent model and the effective diffusivity model are well-established diffusion models for numerical simulation of single-phase flows consisting of several components, but have so far seldom been used for two-phase flows. In this paper, a specific numerical model for interfacial mass transfer by means of a continuous single-field concentration formulation is combined with the multicomponent model and the effective diffusivity model and is validated for multicomponent mass transfer. For this purpose, several test cases for one-dimensional physical or reactive mass transfer of ternary mixtures are considered. The numerical results are compared with analytical or numerical solutions of the Maxwell-Stefan equations and/or experimental data. The composition-dependent elements of the diffusivity matrix of the multicomponent and effective diffusivity models are found to differ substantially for non-dilute conditions. The species mole fraction or concentration profiles computed with both diffusion models are, however, very similar for all test cases and in good agreement with the analytical/numerical solutions or measurements. For practical computations, the effective diffusivity model is recommended due to its simplicity and lower computational cost.
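
    As a minimal sketch of the effective diffusivity idea, the snippet below applies a Wilke-type mixing rule to illustrative binary diffusivities for a ternary mixture; the values, and the choice of this particular mixing rule, are assumptions for illustration rather than the paper's specific formulation.

    ```python
    import numpy as np

    # Hypothetical binary Maxwell-Stefan diffusivities D_ij (m^2/s) for a
    # ternary mixture; the values are illustrative only.
    D = np.array([[0.0,    1.5e-5, 2.0e-5],
                  [1.5e-5, 0.0,    1.0e-5],
                  [2.0e-5, 1.0e-5, 0.0]])

    def effective_diffusivity(x, D):
        """Wilke-type effective diffusivity of species i in the mixture:
        D_i,eff = (1 - x_i) / sum_{j != i} (x_j / D_ij).
        Composition-dependent, as noted in the abstract."""
        n = len(x)
        D_eff = np.empty(n)
        for i in range(n):
            s = sum(x[j] / D[i, j] for j in range(n) if j != i)
            D_eff[i] = (1.0 - x[i]) / s
        return D_eff

    x = np.array([0.2, 0.3, 0.5])  # mole fractions (non-dilute mixture)
    print(effective_diffusivity(x, D))
    ```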

  9. Some aspects of the analysis of geodetic strain observations in kinematic models

    NASA Astrophysics Data System (ADS)

    Welsch, W. M.

    1986-11-01

    Frequently, deformation processes are analyzed in static models. In many cases this procedure is justified, in particular if the deformation is a singular event. If, however, the deformation is a continuous process, as is the case, for instance, with recent crustal movements, analysis in kinematic models is more commensurate with the problem, because the factor "time" is considered an essential part of the model. Some special aspects have to be considered when analyzing geodetic strain observations in kinematic models; they are dealt with in this paper. After a brief derivation of the basic kinematic model and the kinematic strain model, the following subjects are treated: the adjustment of the pointwise velocity field and the derivation of strain-rate parameters; the fixing of the kinematic reference system as part of the geodetic datum; statistical tests of models by testing linear hypotheses; the invariance of kinematic strain-rate parameters with respect to transformations of the coordinate system and the geodetic datum; and the interpolation of strain rates by finite-element methods. After the presentation of some advanced models for the description of secular and episodic kinematic processes, data analysis in dynamic models is regarded as a further generalization of deformation analysis.
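
    For reference, with a locally linear (pointwise adjusted) velocity field, the strain-rate parameters mentioned above follow from the symmetric part of the velocity gradient; this is the standard textbook definition rather than the paper's specific notation.

    ```latex
    % Locally linearized velocity field and kinematic strain rates:
    % v(x) is the adjusted velocity field and L its gradient; the
    % symmetric part of L gives the strain-rate tensor, while the
    % antisymmetric part describes the rigid rotation rate.
    \begin{align}
      v(x) &\approx v_0 + L\,x, \qquad L = \nabla v, \\
      \dot{\varepsilon}_{ij} &= \tfrac{1}{2}\left(
          \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}
      \right).
    \end{align}
    ```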

  10. A kernel regression approach to gene-gene interaction detection for case-control studies.

    PubMed

    Larson, Nicholas B; Schaid, Daniel J

    2013-11-01

    Gene-gene interactions are increasingly being addressed as a potentially important contributor to the variability of complex traits. Consequently, attention has moved beyond single-locus analysis of association to more complex genetic models. Although several single-marker approaches toward interaction analysis have been developed, such methods suffer from very high testing dimensionality and do not take advantage of existing information, notably the definition of genes as functional units. Here, we propose a comprehensive family of gene-level score tests for identifying genetic elements of disease risk, in particular pairwise gene-gene interactions. Using kernel machine methods, we devise score-based variance component tests under a generalized linear mixed model framework. We conducted simulations based upon coalescent genetic models to evaluate the performance of our approach under a variety of disease models. These simulations indicate that our methods are generally higher powered than alternative gene-level approaches and at worst competitive with exhaustive SNP-level (where SNP is single-nucleotide polymorphism) analyses. Furthermore, we observe that simulated epistatic effects resulted in significant marginal testing results for the involved genes regardless of whether or not true main effects were present. We detail the benefits of our methods and discuss potential genome-wide analysis strategies for gene-gene interaction analysis in a case-control study design. © 2013 WILEY PERIODICALS, INC.
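
    A toy sketch of the kernel machine ingredients follows, assuming linear kernels per gene and a Hadamard-product interaction kernel (a common construction for kernel-based interaction tests); the simulated data, intercept-only null model, and omitted p-value machinery are simplifying assumptions, not the authors' full GLMM procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy case-control data: genotype dosages (0/1/2) at the SNPs of two genes.
    n, p_a, p_b = 500, 5, 4
    ga = rng.binomial(2, 0.3, (n, p_a)).astype(float)
    gb = rng.binomial(2, 0.3, (n, p_b)).astype(float)
    y = rng.binomial(1, 0.5, n).astype(float)  # case/control labels

    # Linear kernel per gene; the gene-gene interaction kernel is taken as
    # the element-wise (Hadamard) product of the two gene kernels.
    k_a, k_b = ga @ ga.T, gb @ gb.T
    k_int = k_a * k_b

    # Variance-component score statistic Q = r' K r with residuals r taken
    # from the null model (here intercept-only, for illustration).
    r = y - y.mean()
    q = r @ k_int @ r
    print(q)  # in practice, compared against a mixture-of-chi-square null
    ```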

  11. The development and testing of a skin tear risk assessment tool.

    PubMed

    Newall, Nelly; Lewin, Gill F; Bulsara, Max K; Carville, Keryln J; Leslie, Gavin D; Roberts, Pam A

    2017-02-01

    The aim of the present study was to develop a reliable and valid skin tear risk assessment tool. The six characteristics identified in a previous case-control study as constituting the best risk model for skin tear development were used to construct a risk assessment tool. The ability of the tool to predict skin tear development was then tested in a prospective study. Between August 2012 and September 2013, 1466 tertiary hospital patients were assessed at admission and followed up for 10 days to see if they developed a skin tear. The predictive validity of the tool was assessed using receiver operating characteristic (ROC) analysis. When the tool was found not to have performed as well as hoped, secondary analyses were performed to determine whether a potentially better performing risk model could be identified. The tool was found to have high sensitivity but low specificity, and therefore inadequate predictive validity. Secondary analysis of the combined data from this and the previous case-control study identified an alternative, better performing risk model. The tool developed and tested in this study was found to have inadequate predictive validity. The predictive validity of an alternative, more parsimonious model now needs to be tested. © 2015 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
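
    A minimal sketch of this kind of ROC-based evaluation, assuming a synthetic cohort and a hypothetical additive risk score (none of the study's actual predictors or data are used):

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(7)

    # Synthetic admission data: a hypothetical 0-6 point risk score (one
    # point per risk characteristic) for 1466 simulated patients, where a
    # small fraction go on to develop a skin tear.
    tear = rng.binomial(1, 0.05, size=1466)
    score = rng.poisson(2 + 2 * tear)  # cases tend to score higher

    print("AUC:", roc_auc_score(tear, score))

    # Sensitivity and specificity at each candidate cut-point: the trade-off
    # the abstract describes (high sensitivity but low specificity).
    fpr, tpr, thresholds = roc_curve(tear, score)
    for t, se, sp in zip(thresholds, tpr, 1 - fpr):
        print(f"cut-point {t}: sensitivity {se:.2f}, specificity {sp:.2f}")
    ```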

  12. Integration of the predictions of two models with dose measurements in a case study of children exposed to the emissions of a lead smelter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonnard, R.; McKone, T.E.

    2009-03-01

    The predictions of two source-to-dose models are systematically evaluated against observed data collected in a village polluted by a currently operating secondary lead smelter. Both models were built up from several sub-models linked together and run using Monte Carlo simulation to calculate the distribution of children's blood lead levels attributable to the emissions from the facility. The first model system is composed of the CalTOX model linked to a recoded version of the IEUBK model. This system provides the distribution of the media-specific lead concentrations (air, soil, fruit, vegetables and blood) in the whole area investigated. The second model consists of a statistical model to estimate the lead deposition on the ground, a modified version of the HHRAP model and the same recoded version of the IEUBK model. This system provides an estimate of the exposure concentration for specific individuals living in the study area. The predictions of the first model system were improved in terms of accuracy and precision by performing a sensitivity analysis and using field data to correct the default value provided for the leaf wet density. However, in this case study, the first model system tends to overestimate the exposure due to exposed vegetables. The second model was tested for nine children with contrasting exposure conditions. It managed to capture the blood levels for eight of them. In the remaining case, exposure of the child through pathways not considered in the model may explain the failure. The advantage of the integrated model is that it provides outputs with lower variance than the first model system, but further tests are necessary before conclusions can be drawn about its accuracy.

  13. ILS Glide Slope Performance Prediction Multipath Scattering

    DOT National Transportation Integrated Search

    1976-12-01

    A mathematical model has been developed which predicts the performance of ILS glide slope systems subject to multipath scattering and the effects of irregular terrain contours. The model is discussed in detail and then applied to a test case for purp...

  14. Performance analysis and dynamic modeling of a single-spool turbojet engine

    NASA Astrophysics Data System (ADS)

    Andrei, Irina-Carmen; Toader, Adrian; Stroe, Gabriela; Frunzulica, Florin

    2017-01-01

    The purposes of modeling and simulating a turbojet engine are steady-state analysis and transient analysis. The steady-state analysis investigates the operating (equilibrium) regimes, based on appropriate modeling of the engine at design and off-design conditions; it yields the performance analysis, summarized by the engine's operational maps (i.e., the altitude map, velocity map and speed map) and the engine's universal map. The mathematical model that allows calculation of the design and off-design performance of a single-spool turbojet is detailed. An in-house code was developed and calibrated against the J85 turbojet engine as the test case. The dynamic model of the turbojet engine is obtained from the energy balance equations for the compressor, combustor and turbine, as the engine's main parts. The transient analysis, based on appropriate modeling of the engine and its main parts, expresses the dynamic behavior of the turbojet engine and, further, provides details regarding the engine's control. The aim of the dynamic analysis is to determine a control program for the turbojet, based on the results provided by the performance analysis. In the case of a single-spool turbojet engine with fixed nozzle geometry, the thrust is controlled by one parameter, the fuel flow rate. The design and management of aircraft engine controls are based on the results of the transient analysis. The construction of the design model is complex, since it is based on both steady-state and transient analysis, further allowing flight-path cycle analysis and optimization. This paper presents numerical simulations for a single-spool turbojet engine (J85 as the test case), with appropriate modeling for steady-state and dynamic analysis.
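
    A much-reduced sketch of the transient idea: the spool accelerates while turbine power exceeds compressor power, per the energy balance J·ω·dω/dt = P_turbine − P_compressor, with fuel flow as the single control input. The power "maps" and constants below are crude placeholders, not J85 data or the paper's model.

    ```python
    # Single-spool transient sketch: J * w * dw/dt = P_turbine - P_compressor.
    # All constants are illustrative placeholders, not J85 engine data.
    J = 0.05                # spool moment of inertia, kg*m^2
    w = 2000.0              # initial spool speed, rad/s
    k_t, k_c = 20.0, 2e-6   # crude turbine / compressor map constants
    fuel = 1.2              # fuel flow rate: the single control parameter
    dt = 1e-3
    for _ in range(5000):   # 5 s of simulated transient after a fuel step
        p_t = k_t * fuel * w  # turbine power grows with speed and fuel flow
        p_c = k_c * w**3      # compressor power follows a cube law in speed
        w += (p_t - p_c) / (J * w) * dt
    print(w)  # settles near the equilibrium sqrt(k_t * fuel / k_c)
    ```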

  15. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can erroneously pass the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem and provide a practical step-by-step procedure for applying it to test the sufficiency of neural population models. Using several simple analytically tractable models as well as more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
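
    The univariate test at the core of this approach is easy to sketch: integrate the model's conditional intensity between successive spikes; under a correct model the rescaled intervals are unit-exponential, so their transform is uniform and can be checked with a KS test. The intensity function and simulation below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import kstest

    rng = np.random.default_rng(1)

    # Simulate an inhomogeneous Poisson spike train by thinning, with a
    # hypothetical intensity lambda(t) = 5 + 4*sin(2*pi*t) on [0, 100] s.
    lam = lambda t: 5.0 + 4.0 * np.sin(2.0 * np.pi * t)
    lam_max, t_end = 9.0, 100.0
    cand = np.cumsum(rng.exponential(1.0 / lam_max, size=2000))
    cand = cand[cand < t_end]
    spikes = cand[rng.random(cand.size) < lam(cand) / lam_max]

    # Time-rescaling: tau_k, the integral of lambda over each inter-spike
    # interval, is Exp(1) under the true model, so z_k = 1 - exp(-tau_k)
    # should be Uniform(0, 1).
    taus = np.array([quad(lam, a, b)[0]
                     for a, b in zip(spikes[:-1], spikes[1:])])
    z = 1.0 - np.exp(-taus)
    print(kstest(z, "uniform"))  # a large p-value -> model not rejected
    ```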

  16. Field Testing and Modeling of Supermarket Refrigeration Systems as a Demand Response Resource

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deru, Michael; Hirsch, Adam; Clark, Jordan

    Supermarkets offer a substantial demand response (DR) resource because of their high energy intensity and use patterns; however, refrigeration, the largest load, has been challenging to access. Previous work has analyzed supermarket DR using heating, ventilating, and air conditioning; lighting; and anti-sweat heaters. This project evaluated and quantified the DR potential inherent in supermarket refrigeration systems in the Bonneville Power Administration (BPA) service territory. DR events were carried out and results measured in an operational 45,590-ft² supermarket located in Hillsboro, Oregon. Key results from the project include the rate of temperature increase in freezer reach-in cases and walk-ins when refrigeration is suspended, the load shed amount for DR tests, and the development of calibrated models to quantify available DR resources. Simulations showed that demand savings of 15 to 20 kilowatts (kW) are available for 1.5 hours for a typical store without precooling, and for about 2.5 hours with precooling, using only the low-temperature, non-ice cream cases. This represents an aggregated potential of 20 megawatts within BPA's service territory. Inability to shed loads for medium-temperature (MT) products because of the tighter temperature requirements is a significant barrier to realizing larger DR for supermarkets. Store owners are reluctant to allow MT case set point changes, and laboratory tests of MT case DR strategies are needed so that owners become comfortable testing, and implementing, MT case DR. The next-largest barrier is the lack of proper controls over ancillary equipment, such as anti-sweat heaters, lights, and fans, in most supermarket displays.

  17. An Overview of Unsteady Pressure Measurements in the Transonic Dynamics Tunnel

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Edwards, John W.; Bennett, Robert M.

    2000-01-01

    The NASA Langley Transonic Dynamics Tunnel has served as a unique national facility for aeroelastic testing for over forty years. A significant portion of this testing has been to measure unsteady pressures on models undergoing flutter, forced oscillations, or buffet. These tests have ranged from early launch vehicle buffet to flutter of a generic high-speed transport. This paper will highlight some of the test techniques, model design approaches, and the many unsteady pressure tests conducted in the TDT. The objectives and results of the data acquired during these tests will be summarized for each case and a brief discussion of ongoing research involving unsteady pressure measurements and new TDT capabilities will be presented.

  18. Development and pilot testing of an online case-based approach to shared decision making skills training for clinicians.

    PubMed

    Volk, Robert J; Shokar, Navkiran K; Leal, Viola B; Bulik, Robert J; Linder, Suzanne K; Mullen, Patricia Dolan; Wexler, Richard M; Shokar, Gurjeet S

    2014-11-01

    Although research suggests that patients prefer a shared decision making (SDM) experience when making healthcare decisions, clinicians do not routinely implement SDM into their practice and training programs are needed. Using a novel case-based strategy, we developed and pilot tested an online educational program to promote shared decision making (SDM) by primary care clinicians. A three-phased approach was used: 1) development of a conceptual model of the SDM process; 2) development of an online teaching case utilizing the Design A Case (DAC) authoring template, a well-tested process used to create peer-reviewed web-based clinical cases across all levels of healthcare training; and 3) pilot testing of the case. Participants were clinician members affiliated with several primary care research networks across the United States who answered an invitation email. The case used prostate cancer screening as the clinical context and was delivered online. Post-intervention ratings of clinicians' general knowledge of SDM, knowledge of specific SDM steps, confidence in and intention to perform SDM steps were also collected online. Seventy-nine clinicians initially volunteered to participate in the study, of which 49 completed the case and provided evaluations. Forty-three clinicians (87.8%) reported the case met all the learning objectives, and 47 (95.9%) indicated the case was relevant for other equipoise decisions. Thirty-one clinicians (63.3%) accessed supplementary information via links provided in the case. After viewing the case, knowledge of SDM was high (over 90% correctly identified the steps in a SDM process). Determining a patient's preferred role in making the decision (62.5% very confident) and exploring a patient's values (65.3% very confident) about the decisions were areas where clinician confidence was lowest. More than 70% of the clinicians intended to perform SDM in the future. A comprehensive model of the SDM process was used to design a case-based approach to teaching SDM skills to primary care clinicians. The case was favorably rated in this pilot study. Clinician skills training for helping patients clarify their values and for assessing patients' desire for involvement in decision making remain significant challenges and should be a focus of future comparative studies.

  19. Modeling annual extreme temperature using generalized extreme value distribution: A case study in Malaysia

    NASA Astrophysics Data System (ADS)

    Hasan, Husna; Salam, Norfatin; Kassim, Suraiya

    2013-04-01

    Extreme temperatures at several stations in Malaysia are modeled by fitting the annual maxima to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are used to detect stochastic trends among the stations. The Mann-Kendall (MK) test suggests a non-stationary model. Three models are considered for stations with a trend, and the likelihood ratio test is used to determine the best-fitting model. The results show that the Subang and Bayan Lepas stations favour a model with a linear trend in the location parameter, while the Kota Kinabalu and Sibu stations are best fitted by a model with a trend in the logarithm of the scale parameter. The return level, i.e. the level of maximum temperature expected to be exceeded once, on average, in a given number of years, is also obtained.
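
    A minimal sketch of the GEV fit and return-level calculation with SciPy, using synthetic annual maxima in place of the Malaysian station records (the generating parameters are illustrative assumptions; note SciPy's shape convention c = −ξ):

    ```python
    import numpy as np
    from scipy.stats import genextreme

    # Synthetic annual maximum temperatures (deg C) standing in for a
    # station record; the generating parameters are illustrative only.
    annual_max = genextreme.rvs(c=0.1, loc=35.0, scale=1.2, size=40,
                                random_state=np.random.default_rng(0))

    # Fit the stationary GEV by maximum likelihood.
    c, loc, scale = genextreme.fit(annual_max)

    # The T-year return level is the (1 - 1/T) quantile of the fitted GEV:
    # the level exceeded once, on average, every T years.
    for T in (10, 50, 100):
        print(T, "yr return level:", genextreme.isf(1.0 / T, c, loc, scale))
    ```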

  20. A 3D unstructured grid nearshore hydrodynamic model based on the vortex force formalism

    NASA Astrophysics Data System (ADS)

    Zheng, Peng; Li, Ming; van der A, Dominic A.; van der Zanden, Joep; Wolf, Judith; Chen, Xueen; Wang, Caixia

    2017-08-01

    A new three-dimensional nearshore hydrodynamic model system is developed based on the unstructured-grid version of the third-generation spectral wave model SWAN (Un-SWAN) coupled with the three-dimensional ocean circulation model FVCOM, to enable the full representation of wave-current interaction in the nearshore region. A new wave-current coupling scheme is developed by adopting the vortex-force (VF) formalism to represent the wave-current interaction. The GLS turbulence model is also modified to better reproduce wave-breaking-enhanced turbulence, together with a roller transport model to account for the effect of the surface wave roller. This new model system is validated first against a theoretical case of obliquely incident waves on a planar beach, and then applied to three test cases: a laboratory-scale experiment of normal waves on a beach with a fixed breaker bar, a field experiment of obliquely incident waves on a natural, sandy barred beach (the Duck'94 experiment), and a laboratory study of normally incident waves propagating around a shore-parallel breakwater. Overall, the model predictions agree well with the available measurements in these tests, illustrating the robustness and efficiency of the present model for very different spatial scales and hydrodynamic conditions. Sensitivity tests indicate the importance of roller effects and wave energy dissipation on the mean flow (undertow) profile over the depth. These tests further suggest adopting a spatially varying value for roller effects across the beach. In addition, the parameter values in the GLS turbulence model should be spatially inhomogeneous, which leads to better prediction of the turbulent kinetic energy and an improved prediction of the undertow velocity profile.

  1. Life cycle assessment based environmental impact estimation model for pre-stressed concrete beam bridge in the early design phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyong Ju, E-mail: kjkim@cau.ac.kr; Yun, Won Gun, E-mail: ogun78@naver.com; Cho, Namho, E-mail: nhc51@cau.ac.kr

    The recent rise in global concern over environmental issues such as global warming and air pollution is accentuating the need for environmental assessment in the construction industry. Promptly evaluating the environmental loads of the various design alternatives during the early stages of a construction project, and adopting the most environmentally sustainable candidate, is therefore of great importance. Yet research on the early evaluation of a construction project's environmental load to aid the decision-making process has hitherto been lacking. In light of this, this study proposes a model for estimating the environmental load by employing only the most basic information accessible during the early design phases of a project for the pre-stressed concrete (PSC) beam bridge, the most common bridge structure. First, a life cycle assessment (LCA) was conducted on data from 99 bridges by integrating the bills of quantities (BOQ) with a life cycle inventory (LCI) database. The processed data were then utilized to construct a case-based reasoning (CBR) model for estimating the environmental load. The accuracy of the estimation model was validated using five test cases; the model's mean absolute error rate (MAER) for the total environmental load was calculated as 7.09%. These test results were superior to those obtained from a multiple-regression-based model and a slab-area base-unit analysis model. Application of this model during the early stages of a project is therefore expected to support environmentally friendly design and construction by facilitating the swift evaluation of the environmental load from multiple standpoints. - Highlights: • This study develops a model for assessing environmental impacts based on LCA. • Bills of quantities from completed PSC beam designs were linked with the LCI DB. • Previous cases were used to estimate the environmental load of a new case via a CBR model. • The CBR model produces more accurate estimates (MAER 7.09%) than other conventional models. • The study supports the decision-making process in the early stage of a new construction project.
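
    A toy sketch of the case-based reasoning step: retrieve the most similar past bridges in a normalized attribute space and return a similarity-weighted estimate of the environmental load. The attributes, case base, and weighting scheme are hypothetical placeholders, not the study's model.

    ```python
    import numpy as np

    # Hypothetical case base: early-design attributes of past PSC beam
    # bridges (span m, width m, pier height m) and their LCA environmental
    # loads (kg CO2-eq); all values are illustrative.
    X = np.array([[30, 12, 10], [35, 12, 14], [25, 10, 8],
                  [40, 14, 18], [32, 11, 12]], dtype=float)
    load = np.array([2.1e6, 2.6e6, 1.6e6, 3.3e6, 2.3e6])

    def cbr_estimate(query, X, y, k=3):
        """Retrieve the k nearest past cases (normalized Euclidean
        distance) and return a distance-weighted mean of their loads."""
        mu, sd = X.mean(0), X.std(0)
        d = np.linalg.norm((X - mu) / sd - (query - mu) / sd, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-9)
        return np.sum(w * y[idx]) / np.sum(w)

    est = cbr_estimate(np.array([33, 12, 11]), X, load)
    actual = 2.4e6  # hypothetical as-built load for the new case
    print(est, f"error rate: {abs(est - actual) / actual:.2%}")
    ```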

  2. Space Shuttle Projects

    NASA Image and Video Library

    1991-07-01

    This photograph shows the Solid Propellant Test Article (SPTA) test stand with the Modified NASA Motor (M-NASA) test article at the Marshall Space Flight Center (MSFC). The SPTA test stand, 12 feet wide by 12 feet long by 24 feet high, was built in 1989 to provide comparative performance data on nozzle and case insulation material and to verify thermostructural analysis models. A modified NASA 48-inch solid motor (M-NASA motor) with a 12-foot blast tube and 10-inch throat makes up the SPTA. The M-NASA motor is being used to evaluate solid rocket motor internal non-asbestos insulation materials, nozzle designs, materials, and new inspection techniques. New internal motor case instrumentation techniques are also being evaluated.

  3. Utility of a novel error-stepping method to improve gradient-based parameter identification by increasing the smoothness of the local objective surface: a case-study of pulmonary mechanics.

    PubMed

    Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut

    2014-05-01

    Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001), despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating that a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved on or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. Evaluation of nickel and cobalt release from mobile phone devices used in Brazil.

    PubMed

    Hafner, Mariana de Figueiredo Silva; Chen, Jessica Chia Sin; Lazzarini, Rosana

    2018-01-01

    Nickel and cobalt are often responsible for metal-induced allergic contact dermatitis. With the increasing use of cell phones, we have observed an increase in case reports of phone-related allergic contact dermatitis. The present study evaluated nickel and cobalt release from mobile phones used in Brazil. We evaluated devices of 6 brands and 20 different models using nickel and cobalt allergy spot tests. Of the 20 models, 64.7% tested positive for nickel, with 41.1% positive results for the charger input and 23.5% for other tested areas. None of them was positive for cobalt. Nickel release was more common in older models.

  5. Mouse models of neurodegenerative diseases: criteria and general methodology.

    PubMed

    Janus, Christopher; Welzl, Hans

    2010-01-01

    The major symptom of Alzheimer's disease is rapidly progressing dementia, coinciding with the formation of amyloid and tau deposits in the central nervous system and neuronal death. At present, familial cases of dementia provide the most promising foundation for modelling neurodegeneration. We describe the mnemonic and other major behavioural symptoms of tauopathies, briefly outline the genetics underlying familial cases, and discuss the arising implications for modelling the disease in mostly transgenic mouse lines. We then depict to what degree the most recent mouse models replicate pathological and cognitive characteristics observed in patients. There is no universally valid behavioural test battery to evaluate mouse models. The selection of individual tests depends on the behavioural and/or memory system in focus, the type of model and how well it replicates the pathology of a disease, and the amount of control over the genetic background of the mouse model. However, it is possible to provide guidelines and criteria for modelling the neurodegeneration, setting up the experiments and choosing relevant tests. One should not adopt a "one (trans)gene, one disease" interpretation, but should try to understand how the mouse genome copes with the protein expression of the transgene in question. Further, it is not possible to recommend some mouse models over others, since each model is valuable within its own constraints, and the way experiments are performed often reflects the idiosyncratic reality of specific laboratories. Our purpose is to improve the bridging of molecular and behavioural approaches in translational research.

  6. Wall interference correction improvements for the ONERA main wind tunnels

    NASA Technical Reports Server (NTRS)

    Vaucheret, X.

    1982-01-01

    This paper describes improved methods of calculating wall interference corrections for the ONERA large wind tunnels. The mathematical description of the model and its sting support has become more sophisticated. An increasing number of singularities is used until agreement between the theoretical and experimental signatures of the model and sting on the walls of the closed test section is obtained. The singularity decentering effects are calculated when the model reaches large angles of attack. The porosity factor cartography on the perforated walls, deduced from the measured signatures, now replaces the reference tests previously carried out in larger tunnels. The porosity factors obtained from the blockage terms (signatures at zero lift) and from the lift terms are in good agreement. In each case (model + sting + test section), wall corrections are now determined, before the tests, as functions of the fundamental parameters M, CS, CZ. During the wind tunnel tests, the corrections are quickly computed from these functions.

  7. A novel Bayesian approach to predicting reductions in HIV incidence following increased testing interventions among gay, bisexual and other men who have sex with men in Vancouver, Canada.

    PubMed

    Irvine, Michael A; Konrad, Bernhard P; Michelow, Warren; Balshaw, Robert; Gilbert, Mark; Coombs, Daniel

    2018-03-01

    Increasing HIV testing rates among high-risk groups should lead to increased numbers of cases being detected. Coupled with effective treatment and behavioural change among individuals with detected infection, increased testing should also reduce onward incidence of HIV in the population. However, it can be difficult to predict the strengths of these effects and thus the overall impact of testing. We construct a mathematical model of an ongoing HIV epidemic in a population of gay, bisexual and other men who have sex with men. The model incorporates different levels of infection risk, testing habits and awareness of HIV status among members of the population. We introduce a novel Bayesian analysis that is able to incorporate potentially unreliable sexual health survey data along with firm clinical diagnosis data. We parameterize the model using survey and diagnostic data drawn from a population of men in Vancouver, Canada. We predict that increasing testing frequency will yield a small-scale but long-term impact on the epidemic in terms of new infections averted, as well as a large short-term impact on numbers of detected cases. These effects are predicted to occur even when a testing intervention is short-lived. We show that a short-lived but intensive testing campaign can potentially produce many of the same benefits as a campaign that is less intensive but of longer duration. © 2018 The Author(s).
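
    A drastically simplified compartmental sketch of the core mechanism: testing moves undiagnosed infections to the diagnosed pool, whose onward transmission is reduced. All parameters are illustrative assumptions, not the Vancouver estimates, and the paper's model additionally stratifies risk and testing habits.

    ```python
    # S: susceptible, U: undiagnosed infected, D: diagnosed (transmits less,
    # by factor c). Forward-Euler integration of the population proportions.
    beta, c, test_rate, years = 0.5, 0.3, 0.8, 10.0  # illustrative values
    S, U, D = 0.95, 0.04, 0.01
    dt, new_infections = 1e-3, 0.0
    for _ in range(int(years / dt)):
        inc = beta * S * (U + c * D)  # undiagnosed drive most transmission
        det = test_rate * U           # detections occur at the testing rate
        S -= inc * dt
        U += (inc - det) * dt
        D += det * dt
        new_infections += inc * dt
    print(new_infections)  # rerun with a higher test_rate: infections averted
    ```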

  8. Unconditional or Conditional Logistic Regression Model for Age-Matched Case–Control Data?

    PubMed Central

    Kuo, Chia-Ling; Duan, Yinghui; Grady, James

    2018-01-01

    Matching on demographic variables is commonly used in case–control studies to adjust for confounding at the design stage. There is a presumption that matched data need to be analyzed by matched methods. Conditional logistic regression has become a standard for matched case–control data to tackle the sparse data problem. The sparse data problem, however, may not be a concern for loose-matching data, when the matching between cases and controls is not unique and one case can be matched to other controls without substantially changing the association. Data matched on a few demographic variables are clearly loose-matching data, and we hypothesize that unconditional logistic regression is a proper method to perform. To address the hypothesis, we compare unconditional and conditional logistic regression models in terms of precision of estimates and hypothesis testing, using simulated matched case–control data. Our results support our hypothesis; however, the unconditional model is not as robust as the conditional model to the matching distortion whereby the matching process makes cases and controls similar not only on the matching variables but also on exposure status. When the study design involves other complex features or the computational burden is high, matching in loose-matching data can be ignored, for negligible loss in testing and estimation, if the distributions of the matching variables are not extremely different between cases and controls. PMID:29552553
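
    A simulation sketch comparing the two estimators on 1:1 matched data, assuming statsmodels' ConditionalLogit is available (statsmodels >= 0.10); the data-generating scheme and effect size are illustrative assumptions.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.discrete.conditional_models import ConditionalLogit

    rng = np.random.default_rng(42)

    # Simulated 1:1 age-matched case-control data: 200 strata; within each
    # stratum the case has higher exposure odds (log-OR 0.7). Illustrative.
    n = 200
    strata = np.repeat(np.arange(n), 2)
    case = np.tile([1, 0], n)
    stratum_eff = rng.normal(0.0, 1.0, n)[strata]  # shared within a pair
    p = 1.0 / (1.0 + np.exp(-(0.7 * case + stratum_eff)))
    df = pd.DataFrame({"case": case, "stratum": strata,
                       "exposure": rng.binomial(1, p)})

    # Conditional model: conditions the stratum (matching) effects away.
    clogit = ConditionalLogit(df["case"], df[["exposure"]],
                              groups=df["stratum"]).fit(disp=0)
    # Unconditional model: ordinary logistic regression ignoring matching.
    logit = sm.Logit(df["case"],
                     sm.add_constant(df[["exposure"]])).fit(disp=0)

    print(clogit.params["exposure"], logit.params["exposure"])
    ```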

  9. Some practical turbulence modeling options for Reynolds-averaged full Navier-Stokes calculations of three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1993-01-01

    New turbulence modeling options recently implemented for the 3-D version of Proteus, a Reynolds-averaged compressible Navier-Stokes code, are described. The implemented turbulence models include the Baldwin-Lomax algebraic model, the Baldwin-Barth one-equation model, the Chien k-epsilon model, and the Launder-Sharma k-epsilon model. Features of this turbulence modeling package include: well documented and easy to use turbulence modeling options, uniform integration of turbulence models from different classes, automatic initialization of turbulence variables for calculations using one- or two-equation turbulence models, treatment of multiple solid boundaries, and a fully vectorized L-U solver for the one- and two-equation models. Validation test cases include the incompressible and compressible flat plate turbulent boundary layers, turbulent developing S-duct flow, and glancing shock wave/turbulent boundary layer interaction. Good agreement is obtained between the computational results and experimental data. The sensitivity of the compressible turbulent solutions to the method of y+ computation, the turbulent length scale correction, and some compressibility corrections is examined in detail. The test cases show that the highly optimized one- and two-equation turbulence models can be used in routine 3-D Navier-Stokes computations with no significant increase in CPU time as compared with the Baldwin-Lomax algebraic model.

  10. Cystic Fibrosis Patents: A Case Study of Successful Licensing

    PubMed Central

    Minear, Mollie A.; Kapustij, Cristina; Boden, Kaeleen; Chandrasekharan, Subhashini; Cook-Deegan, Robert

    2013-01-01

    From 2006 to 2010, Duke University's Center for Public Genomics prepared eight case studies examining the effects of gene patent licensing practices on clinical access to genetic testing for ten clinical conditions. One of these case studies focused on the successful licensing practices employed by the University of Michigan and the Hospital for Sick Children in Toronto for patents covering the CFTR gene and its ΔF508 mutation, which causes a majority of cystic fibrosis cases. Since the licensing of these patents has not impeded clinical access to genetic testing, we sought to understand how this successful licensing model was developed and whether it might be applicable to other gene patents. We interviewed four key players who either were involved in the initial discussions regarding the structure of licensing or have recently managed the licenses, and we collected related documents. Important features of the licensing planning process included thoughtful consideration of potential uses of the patent; anticipation of future scientific discoveries and technological advances; engagement of relevant stakeholders, including the Cystic Fibrosis Foundation; and the use of separate licenses for in-house diagnostics versus kit manufacture. These features led to the development of a licensing model that has not only allowed the patent holders to avoid the controversy that has plagued other gene patents, but has also enabled research, development of new therapeutics, and widespread dissemination of genetic testing for cystic fibrosis. Although this licensing model may not be applicable to all gene patents, it demonstrates how gene patent licensing can successfully enable innovation and investment in therapeutics research, and protect intellectual property, while respecting the needs of patients, scientists, and public health. PMID:24231943

  11. [Cost effectiveness of mass orthoptic screening in kindergarten for early detection of developmental vision disorders].

    PubMed

    König, H H; Barry, J C; Leidl, R; Zrenner, E

    2000-04-01

    Orthoptic screening in kindergarten is one option to improve the early detection of amblyopia in children aged 3 years. The purpose of this study was to analyse the cost-effectiveness of such a screening programme in Germany. Based on data from the literature and our own experience gained from orthoptic screening in kindergarten, a decision-analytic model was developed. According to the model, all children in kindergarten aged 3 years who had not been treated for amblyopia before were subjected to an orthoptic examination. Non-cooperative children were re-examined in kindergarten after one year. Children with positive test results were examined by an ophthalmologist for diagnosis. Effects were measured by the number of newly diagnosed cases of amblyopia, non-obvious strabismus and amblyogenic refractive errors. Direct costs were estimated from a third-party payer perspective. The influence of uncertain model parameters was tested by sensitivity analysis. In the base analysis the cost per orthoptic screening test was DM 15.39. Examination by an ophthalmologist cost DM 71.20. The total cost of the screening programme in all German kindergartens was DM 6.1 million. With a 1.5% age-specific prevalence of undiagnosed cases, a sensitivity of 95% and a specificity of 98%, a total of 4,261 new cases would be detected. The cost-effectiveness ratio was DM 1,421 per case detected. Sensitivity analysis showed a considerable influence of prevalence and specificity on the cost-effectiveness ratio. It was more cost-effective to re-screen non-cooperative children in kindergarten than to have them examined by an ophthalmologist straight away. The decision-analytic model showed stable results, which may serve as a basis for discussion on the implementation of orthoptic screening and for planning a field study.

  12. Cost-utility analysis of searching electronic health records and cascade testing to identify and diagnose familial hypercholesterolaemia in England and Wales.

    PubMed

    Crosland, Paul; Maconachie, Ross; Buckner, Sara; McGuire, Hugh; Humphries, Steve E; Qureshi, Nadeem

    2018-05-17

    The cost effectiveness of cascade testing for familial hypercholesterolaemia (FH) is well recognised. Less clear is the cost effectiveness of FH screening when it includes case identification strategies that incorporate routinely available data from primary and secondary care electronic health records. Nine strategies were compared, all using cascade testing in combination with different index case approaches (primary care identification, secondary care identification, and clinical assessment using the Simon Broome (SB) or Dutch Lipid Clinic Network (DLCN) criteria). A decision-analytic model was informed by three systematic literature reviews and expert advice provided by a NICE Guideline Committee. The model found that the addition of primary care case identification by database search for patients with recorded total cholesterol >9.3 mmol/L was more cost effective than cascade testing alone. The incremental cost-effectiveness ratio (ICER) of clinical assessment using the DLCN criteria was £3254 per quality-adjusted life year (QALY) compared with case-finding with no genetic testing. The ICER of clinical assessment using the SB criteria was £13,365 per QALY (compared with primary care identification using the DLCN criteria), indicating that the SB criteria were preferred because they achieved additional health benefits at an acceptable cost. Secondary care identification, with either the SB or DLCN criteria, was not cost effective, whether alone (both strategies dominated) or combined with primary care identification (£63,514 per QALY and £82,388 per QALY, respectively). Searching primary care databases for people at high risk of FH followed by cascade testing is likely to be cost effective. Copyright © 2018 Elsevier B.V. All rights reserved.
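
    The decision rule in the abstract rests on a simple quantity; a minimal sketch with purely illustrative numbers (not the study's inputs):

    ```python
    def icer(cost_new, qaly_new, cost_ref, qaly_ref):
        """Incremental cost-effectiveness ratio: extra cost per extra QALY
        of a strategy relative to its comparator. A strategy that is more
        costly and no more effective is dominated."""
        d_cost = cost_new - cost_ref
        d_qaly = qaly_new - qaly_ref
        if d_qaly <= 0 and d_cost >= 0:
            return float("inf")  # dominated
        return d_cost / d_qaly

    # Hypothetical strategy vs. comparator (illustrative numbers only):
    print(icer(cost_new=1_250_000, qaly_new=400.0,
               cost_ref=900_000, qaly_ref=380.0))  # 17500.0 per QALY
    ```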

  13. Individual and combined in vitro (anti)androgenic effects of certain food additives and cosmetic preservatives.

    PubMed

    Pop, Anca; Drugan, Tudor; Gutleb, Arno C; Lupu, Diana; Cherfan, Julien; Loghin, Felicia; Kiss, Béla

    2016-04-01

    The individual and combined (binary mixture) (anti)androgenic effects of butylparaben (BuPB), butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and propyl gallate (PG) were evaluated using the MDA-kb2 cell line. Exposing these cells to AR agonists results in expression of the reporter gene (encoding luciferase), and luminescence can be measured in order to monitor the activity of the reporter protein. For the evaluation of anti-androgenic effects, the individual test compounds or binary mixtures were tested in the presence of a fixed concentration of a strong AR agonist (1000 pM 5-alpha-dihydrotestosterone; DHT). Cell viability was assessed using a resazurin-based assay. For PG, this is the first report in the literature concerning its (anti)androgenic activity. In both individual and mixture testing, none of the compounds or binary combinations showed androgenic activity. When tested in the presence of DHT, BuPB, BHA and BHT proved to be weak anti-androgens, and this was confirmed during the evaluation of binary mixtures (BuPB+BHA, BuPB+BHT and BHA+BHT). Besides the in vitro testing of the binary combinations, two mathematical models (dose addition and response addition) were evaluated in terms of the accuracy of their predictions of the anti-androgenic effect of the selected binary mixtures. The dose addition model yielded a good correlation between the experimental and predicted data. However, no estimation was possible for mixtures containing PG, owing to the compound's lack of effect in individual testing. Copyright © 2016 Elsevier Ltd. All rights reserved.
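
    The two mixture models evaluated in the paper have standard textbook forms, sketched below; the EC50 values and effect levels are hypothetical placeholders, not measurements from the study.

    ```python
    import numpy as np

    def dose_addition_ec(fractions, ec_x):
        """Dose (concentration) addition: the mixture concentration EC_mix
        producing effect level x satisfies sum_i(p_i * EC_mix / EC_x,i) = 1,
        so EC_mix = 1 / sum_i(p_i / EC_x,i)."""
        return 1.0 / np.sum(np.asarray(fractions) / np.asarray(ec_x))

    def response_addition(effects):
        """Response addition (independent action): the combined fractional
        effect is E_mix = 1 - prod_i(1 - E_i)."""
        return 1.0 - np.prod(1.0 - np.asarray(effects))

    # Hypothetical binary mixture (e.g., BuPB + BHA) at equal fractions,
    # with illustrative single-compound EC50 values (micromolar):
    print(dose_addition_ec([0.5, 0.5], [12.0, 30.0]))  # predicted mixture EC50
    # Illustrative single-compound fractional effects at the tested doses:
    print(response_addition([0.20, 0.15]))             # predicted joint effect
    ```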

  14. Design and Qualification of the AMS-02 Flight Cryocoolers

    NASA Technical Reports Server (NTRS)

    Shirey, Kimberly; Banks, Stuart; Boyle, Rob; Unger, Reuven

    2005-01-01

    Four commercial Sunpower M87N Stirling-cycle cryocoolers will be used to extend the lifetime of the Alpha Magnetic Spectrometer-02 (AMS-02) experiment. The cryocoolers will be mounted to the AMS-02 vacuum case using a structure that will thermally and mechanically decouple the cryocooler from the vacuum case. This paper discusses modifications of the Sunpower M87N cryocooler to make it acceptable for space flight applications and suitable for use on AMS-02. Details of the flight model qualification test program are presented. AMS-02 is a state-of-the-art particle physics detector containing a large superfluid helium-cooled superconducting magnet. Highly sensitive detector plates inside the magnet measure a particle's speed, mass, charge, and direction. The AMS-02 experiment, which will be flown as an attached payload on the International Space Station, will study the properties and origin of cosmic particles and nuclei, including antimatter and dark matter. Two engineering model cryocoolers have been under test at NASA Goddard since November 2001. Qualification testing of the engineering model cryocooler bracket assembly, including random vibration and thermal vacuum testing, was completed at the end of April 2005. The flight cryocoolers were received in December 2003. Acceptance testing of the flight cryocooler bracket assemblies began in May 2005.

  15. Study of the integration of wind tunnel and computational methods for aerodynamic configurations

    NASA Technical Reports Server (NTRS)

    Browne, Lindsey E.; Ashby, Dale L.

    1989-01-01

    A study was conducted to determine the effectiveness of using a low-order panel code to estimate wind tunnel wall corrections. The corrections were found by two computations. The first computation included the test model and the surrounding wind tunnel walls, while in the second computation the wind tunnel walls were removed. The difference between the force and moment coefficients obtained by comparing these two cases allowed the determination of the wall corrections. The technique was verified by matching the test-section, wall-pressure signature from a wind tunnel test with the signature predicted by the panel code. To prove the viability of the technique, two cases were considered. The first was a two-dimensional high-lift wing with a flap that was tested in the 7- by 10-foot wind tunnel at NASA Ames Research Center. The second was a 1/32-scale model of the F/A-18 aircraft which was tested in the low-speed wind tunnel at San Diego State University. The panel code used was PMARC (Panel Method Ames Research Center). Results of this study indicate that the proposed wind tunnel wall correction method is comparable to other methods and that it also inherently includes the corrections due to model blockage and wing lift.

  16. Environmental Niche Modelling of Phlebotomine Sand Flies and Cutaneous Leishmaniasis Identifies Lutzomyia intermedia as the Main Vector Species in Southeastern Brazil

    PubMed Central

    Meneguzzi, Viviane Coutinho; dos Santos, Claudiney Biral; Leite, Gustavo Rocha; Fux, Blima; Falqueto, Aloísio

    2016-01-01

    Cutaneous leishmaniasis (CL) is caused by a protozoan of the genus Leishmania and is transmitted by sand flies. The state of Espírito Santo (ES), an endemic area in southeast Brazil, has shown a considerably high prevalence in recent decades. Environmental niche modelling (ENM) is a useful tool for predicting potential disease risk. In this study, ENM was applied to sand fly species and CL cases in ES to identify the principal vector and risk areas of the disease. Sand flies were collected in 466 rural localities between 1997 and 2013 using active and passive capture. Insects were identified to the species level, and the localities were georeferenced. Twenty-one bioclimatic variables were selected from WorldClim. Maxent was used to construct models projecting the potential distribution for five Lutzomyia species and CL cases. ENMTools was used to overlap the species and the CL case models. The Kruskal–Wallis test was performed, adopting a 5% significance level. Approximately 250,000 specimens were captured, belonging to 43 species. The area under the curve (AUC) was considered acceptable for all models. The slope was considered relevant to the construction of the models for all the species identified. The overlay test identified Lutzomyia intermedia as the main vector of CL in southeast Brazil. ENM tools enable an analysis of the association among environmental variables, vector distributions and CL cases, which can be used to support epidemiologic and entomological vigilance actions to control the expansion of CL in vulnerable areas. PMID:27783641

  17. A study on the predictability of acute lymphoblastic leukaemia response to treatment using a hybrid oncosimulator.

    PubMed

    Ouzounoglou, Eleftherios; Kolokotroni, Eleni; Stanulla, Martin; Stamatakos, Georgios S

    2018-02-06

    Efficient use of Virtual Physiological Human (VPH)-type models for personalized treatment response prediction requires precise model parameterization. In cases where the available personalized data are not sufficient to fully determine the parameter values, an appropriate prediction task may be followed. In this study, a hybrid combination of computational optimization and machine learning methods with an already developed mechanistic model, the acute lymphoblastic leukaemia (ALL) Oncosimulator, which simulates ALL progression and treatment response, is presented. These methods are used so that the parameters of the model can be estimated for retrospective cases and predicted for prospective ones. The parameter value prediction is based on a regression model trained on retrospective cases. The proposed Hybrid ALL Oncosimulator system has been evaluated in predicting the pre-phase treatment outcome in ALL. This was correctly achieved for a significant percentage of the patient cases tested (approx. 70% of patients). Moreover, the system is capable of declining to classify cases for which the results are not deemed trustworthy enough. In that case, potentially misleading predictions for a number of patients are avoided, while the classification accuracy for the remaining patient cases further increases. The results obtained are particularly encouraging regarding the soundness of the proposed methodologies and their relevance to the process of achieving clinical applicability of the proposed Hybrid ALL Oncosimulator system and of VPH models in general.

  18. AMS-02 Cryocooler Baseline Configuration and Engineering Model Qualification Test Results

    NASA Technical Reports Server (NTRS)

    Banks, Stuart; Breon, Susan; Shirey, Kimberly

    2003-01-01

    Four Sunpower M87N Stirling-cycle cryocoolers will be used to extend the lifetime of the Alpha Magnetic Spectrometer-02 (AMS-02) experiment. The cryocoolers will be mounted to the AMS-02 vacuum case using a structure that will thermally and mechanically decouple the cryocooler from the vacuum case while providing compliance to allow force attenuation using a passive balancer system. The cryocooler drive is implemented using a 60 Hz pulse-duration-modulated square wave. Details of the testing program, mounting assembly and drive scheme will be presented. AMS-02 is a state-of-the-art particle physics detector containing a large superfluid helium-cooled superconducting magnet. Highly sensitive detector plates inside the magnet measure a particle's speed, momentum, charge, and path. The AMS-02 experiment, which will be flown as an attached payload on the International Space Station, will study the properties and origin of cosmic particles and nuclei, including antimatter and dark matter. Two engineering model cryocoolers have been under test at NASA Goddard since November 2001. Qualification testing of the engineering model cryocooler bracket assembly is near completion. Delivery of the flight cryocoolers to Goddard is scheduled for September 2003.

  19. Improving Parolees' Participation in Drug Treatment and Other Services through Strengths Case Management.

    PubMed

    Prendergast, Michael; Cartier, Jerome J

    2008-01-01

    In an effort to increase participation in community aftercare treatment for substance-abusing parolees, an intervention based on a transitional case management (TCM) model that focuses mainly on offenders' strengths has been developed and is under testing. This model consists of completion, by the inmate, of a self-assessment of strengths that informs the development of the continuing care plan, a case conference call shortly before release, and strengths case management for three months post-release to promote retention in substance abuse treatment and support the participant's access to designated services in the community. The post-release component consists of a minimum of one weekly client/case manager meeting (in person or by telephone) for 12 weeks. The intervention is intended to improve the transition process from prison to community at both the individual and systems level. Specifically, the intervention is designed to improve outcomes in parolee admission to, and retention in, community-based substance-abuse treatment, parolee access to other needed services, and recidivism rates during the first year of parole. On the systems level, the intervention is intended to improve the communication and collaboration between criminal justice agencies, community-based treatment organizations, and other social and governmental service providers. The TCM model is being tested in a multisite study through the Criminal Justice Drug Abuse Treatment Studies (CJ-DATS) research cooperative funded by the National Institute of Drug Abuse.

  20. Modeling the dynamic crush of impact mitigating materials

    NASA Astrophysics Data System (ADS)

    Logan, R. W.; McMichael, L. D.

    1995-05-01

    Crushable materials are commonly utilized in the design of structural components to absorb energy and mitigate shock during the dynamic impact of a complex structure, such as an automobile chassis or drum-type shipping container. The development and application of several finite-element material models that have been developed at various times at LLNL for DYNA3D are discussed. Between them, these models account for several of the predominant mechanisms that typically influence the dynamic mechanical behavior of crushable materials. One issue we addressed was that no single existing model would account for the entire gamut of constitutive features important for crushable materials. Thus, we describe the implementation and use of an additional material model that attempts to provide a more comprehensive description of the mechanics of crushable material behavior. This model combines features of the pre-existing DYNA models and incorporates some new features as well, in an invariant large-strain formulation. In addition to examining the behavior of a unit cell in uniaxial compression, two cases were chosen to evaluate the capabilities and accuracy of the various material models in DYNA. In the first case, a model of foam-filled box beams was developed and compared to test data from a four-point bend test. The model was subsequently used to study the effectiveness of the foam for energy absorption in an aluminum-extrusion spaceframe vehicle chassis. The second case examined the response of the AT-400A shipping container and the performance of the overpack material during accident environments selected from 10CFR71 and IAEA regulations.

  1. Bayesian Learning and the Psychology of Rule Induction

    ERIC Educational Resources Information Center

    Endress, Ansgar D.

    2013-01-01

    In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to…

  2. Bayes Nets in Educational Assessment: Where Do the Numbers Come from? CSE Technical Report.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.; Almond, Russell G.; Yan, Duanli; Steinberg, Linda S.

    Educational assessments that exploit advances in technology and cognitive psychology can produce observations and pose student models that outstrip familiar test-theoretic models and analytic methods. Bayesian inference networks (BINs), which include familiar models and techniques as special cases, can be used to manage belief about students'…

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quaglioni, S.; Beck, B. R.

    The Monte Carlo All Particle Method generator and collision physics library features two models that allow a particle to either up- or down-scatter due to collisions with material at finite temperature. The two models are presented and compared. Neutron interaction with matter through elastic collisions is used as the test case.
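
    For illustration, here is a hedged one-dimensional sketch of the physics being modeled: a neutron colliding elastically with a target nucleus whose velocity is drawn from a Maxwellian at the material temperature can gain (up-scatter) or lose (down-scatter) energy. This is a toy illustration, not either of the library's two actual models; the target nuclide and temperature are assumed values.

    ```python
    # 1-D illustration of thermal up/down-scattering: a neutron elastically
    # collides with a target whose velocity is Maxwellian (Gaussian per component).
    import numpy as np

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    m_n = 1.674927e-27      # neutron mass, kg
    A = 12                  # target mass number (carbon-like), illustrative
    M = A * m_n
    T = 600.0               # material temperature, K

    rng = np.random.default_rng(1)
    v = 2.0e3               # incident neutron speed, m/s

    def collide_1d(v, n_samples=5):
        V = rng.normal(0.0, np.sqrt(k_B * T / M), size=n_samples)  # target velocities
        # exact 1-D elastic collision: v' = ((m - M) v + 2 M V) / (m + M)
        return ((m_n - M) * v + 2 * M * V) / (m_n + M)

    for v_out in collide_1d(v):
        change = "up" if abs(v_out) > v else "down"
        print(f"v' = {v_out:9.1f} m/s ({change}-scatter)")
    ```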

  4. A Case-Series Test of the Interactive Two-step Model of Lexical Access: Predicting Word Repetition from Picture Naming

    PubMed Central

    Dell, Gary S.; Martin, Nadine; Schwartz, Myrna F.

    2010-01-01

    Lexical access in language production, and particularly pathologies of lexical access, are often investigated by examining errors in picture naming and word repetition. In this article, we test a computational approach to lexical access, the two-step interactive model, by examining whether the model can quantitatively predict the repetition-error patterns of 65 aphasic subjects from their naming errors. The model's characterizations of the subjects' naming errors were taken from the companion paper to this one (Schwartz, Dell, N. Martin, Gahl & Sobel, 2006), and their repetition was predicted from the model on the assumption that naming involves two error-prone steps, word and phonological retrieval, whereas repetition only creates errors in the second of these steps. A version of the model in which lexical-semantic and lexical-phonological connections could be independently lesioned was generally successful in predicting repetition for the aphasics. An analysis of the few cases in which model predictions were inaccurate revealed the role of input phonology in the repetition task. PMID:21085621
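
    The core assumption described above, that naming requires both retrieval steps while repetition exercises only the second, reduces to a simple probability identity. A minimal sketch with hypothetical per-patient success probabilities (a deliberate simplification of the interactive model):

    ```python
    # Naming succeeds only if both word retrieval and phonological retrieval
    # succeed; repetition depends on phonological retrieval alone. The two
    # probabilities are hypothetical stand-ins for lesions to the model's
    # lexical-semantic and lexical-phonological connection weights.
    def correct_rates(p_word, p_phon):
        return {"naming": p_word * p_phon, "repetition": p_phon}

    # A patient whose naming errors imply weak word retrieval but intact
    # phonology should repeat words much better than they name pictures:
    print(correct_rates(p_word=0.55, p_phon=0.95))
    # {'naming': 0.5225, 'repetition': 0.95}
    ```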

  5. Data Sufficiency Assessment and Pumping Test Design for Groundwater Prediction Using Decision Theory and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    McPhee, J.; William, Y. W.

    2005-12-01

    This work presents a methodology for pumping test design based on the reliability requirements of a groundwater model. Reliability requirements take into consideration the application of the model results in groundwater management, expressed in this case as a multiobjective management model. The pumping test design is formulated as a mixed-integer nonlinear programming (MINLP) problem and solved using a combination of genetic algorithm (GA) and gradient-based optimization. Bayesian decision theory provides a formal framework for assessing the influence of parameter uncertainty on the reliability of the proposed pumping test. The proposed methodology is useful for selecting a robust design that will outperform all other candidate designs under most potential 'true' states of the system.

  6. Comparison of risk factors for seropositivity to feline immunodeficiency virus and feline leukemia virus among cats: a case-case study.

    PubMed

    Chhetri, Bimal K; Berke, Olaf; Pearl, David L; Bienzle, Dorothee

    2015-02-10

    Feline immunodeficiency virus (FIV) and feline leukemia virus (FeLV) are reported to have similar risk factors, and similar recommendations apply to managing infected cats. However, some contrasting evidence exists in the literature with regard to commonly reported risk factors. In this study, we investigated whether the known risk factors for FIV and FeLV infections have a stronger effect for either infection. This retrospective study included samples from 696 cats seropositive for FIV and 593 cats seropositive for FeLV from the United States and Canada. Data were collected during two cross-sectional studies, where cats were tested using IDEXX FIV/FeLV ELISA kits. To compare the effects of known risk factors for FIV and FeLV infection, using a case-case study design, random intercept logistic regression models were fit including cats' age, sex, neuter status, outdoor exposure, health status and type of testing facility as independent variables. A random intercept for testing facility was included to account for clustering expected in testing practices at the individual clinics and shelters. In the multivariable random intercept model, the odds of FIV compared to FeLV positive ELISA results were greater for adults (OR = 2.09, CI: 1.50-2.92), intact males (OR = 3.14, CI: 1.85-3.76), neutered males (OR = 2.68, CI: 1.44-3.14), cats with outdoor access (OR = 2.58, CI: 1.85-3.76) and lower for cats with clinical illness (OR = 0.60, 95% CI: 0.52-0.90). The variance components obtained from the model indicated clustering at the testing facility level. Risk factors that have a greater effect on FIV seropositivity include adulthood, being male (neutered or not) and having access to outdoors, while clinical illness was a stronger predictor for FeLV seropositivity. Further studies are warranted to assess the implications of these results for the management and control of these infections.

  7. [Training of resident physicians in the recognition and treatment of an anaphylaxis case in pediatrics with simulation models].

    PubMed

    Enríquez, Diego; Lamborizio, María J; Firenze, Lorena; Jaureguizar, María de la P; Díaz Pumará, Estanislao; Szyld, Edgardo

    2017-08-01

    To evaluate the performance of resident physicians in diagnosing and treating a case of anaphylaxis six months after participating in simulation training exercises. Initially, a group of pediatric residents was trained using simulation techniques in the management of critical pediatric cases. Based on their performance in this exercise, participants were assigned to one of three groups. At six months post-training, four residents were randomly chosen from each group to be re-tested, using the same performance measure as previously used. During the initial training session, 56 of 72 participants (78%) correctly identified and treated the case. Six months after the initial training, all 12 (100%) resident physicians who were re-tested successfully diagnosed and treated the simulated anaphylaxis case. Simulation-based training allowed resident physicians to correct or optimize their treatment of simulated anaphylaxis cases when evaluated six months after the initial training.

  8. Thermal Vacuum Test Correlation of A Zero Propellant Load Case Thermal Capacitance Propellant Gauging Analytics Model

    NASA Technical Reports Server (NTRS)

    McKim, Stephen A.

    2016-01-01

    This thesis describes the development and test-data validation of the thermal model that is the foundation of a thermal capacitance spacecraft propellant load estimator. Specific details of creating the thermal model for the diaphragm propellant tank used on NASA's Magnetospheric Multiscale (MMS) spacecraft using ANSYS, and the correlation process implemented to validate the model, are presented. The thermal model was correlated to within plus or minus 3 degrees Centigrade of the thermal vacuum test data and was found to be relatively insensitive to uncertainties in applied heat flux and mass knowledge of the tank. More work is needed, however, to refine the thermal model to further improve temperature predictions in the upper hemisphere of the propellant tank. Temperature predictions in this portion of the tank were 2-2.5 degrees Centigrade lower than the test data. A road map to apply the model to predict propellant loads on the actual MMS spacecraft toward its end of life in 2017-2018 is also presented.
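
    The thermal-capacitance gauging principle underlying such an estimator can be sketched in a few lines: a known heater power and the observed temperature rise rate give the total heat capacity, from which the propellant mass is backed out once the dry-tank contribution is removed. All numbers below are illustrative assumptions, not MMS values.

    ```python
    # Hedged sketch of thermal-capacitance gauging: heater power divided by
    # the temperature rise rate gives the total heat capacity of tank + propellant.
    Q_heater = 40.0          # applied heater power, W (assumed)
    dT_dt = 1.2e-4           # observed temperature rise rate, K/s (assumed)
    C_tank = 9.0e3           # dry tank heat capacity, J/K (assumed known)
    c_prop = 1.7e3           # propellant specific heat, J/(kg*K), hydrazine-like

    C_total = Q_heater / dT_dt            # total thermal capacitance, J/K
    m_prop = (C_total - C_tank) / c_prop  # inferred propellant mass, kg
    print(f"Estimated propellant load: {m_prop:.1f} kg")
    ```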

  9. Design Of Computer Based Test Using The Unified Modeling Language

    NASA Astrophysics Data System (ADS)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    Admission selection at Politeknik Negeri Bengkalis through the interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN) and the independent track (UM-Polbeng) was conducted using paper-based tests (PBT). The paper-based test model has several weaknesses: it wastes paper, questions can leak to the public, and test results can be manipulated. This research aimed to create a computer-based test (CBT) model using the Unified Modeling Language (UML), consisting of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, attention was paid to protecting the test questions with passwords before they were shown, through an encryption and decryption process based on the RSA cryptography algorithm. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle method. The network architecture used in the computer-based test application was a client-server model over a Local Area Network (LAN). The result of the design was a computer-based test application for admission selection at Politeknik Negeri Bengkalis.
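
    The abstract names the Fisher-Yates shuffle for randomizing questions drawn from the bank. A standard in-place implementation looks like this (the question labels are placeholders):

    ```python
    # Fisher-Yates shuffle: walk the list backwards, swapping each element
    # with a uniformly chosen element at or before it. Uniform over permutations.
    import random

    def fisher_yates_shuffle(items, rng=random):
        for i in range(len(items) - 1, 0, -1):
            j = rng.randint(0, i)          # uniform in [0, i], inclusive
            items[i], items[j] = items[j], items[i]
        return items

    questions = [f"Q{n}" for n in range(1, 11)]
    print(fisher_yates_shuffle(questions))
    ```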

  10. Identification and calibration of the structural model of historical masonry building damaged during the 2016 Italian earthquakes: The case study of Palazzo del Podestà in Montelupone

    NASA Astrophysics Data System (ADS)

    Catinari, Federico; Pierdicca, Alessio; Clementi, Francesco; Lenci, Stefano

    2017-11-01

    The results of an ambient-vibration-based investigation conducted on the "Palazzo del Podestà" in Montelupone (Italy) are presented. The case study was damaged during the 2016 Italian earthquakes that struck the central part of Italy. The assessment procedure includes full-scale ambient vibration testing, modal identification from ambient vibration responses, finite element modeling, and dynamic-based identification of the uncertain structural parameters of the model. A very good match between theoretical and experimental modal parameters was reached, and the model updating was performed, identifying some structural parameters.

  11. Regression approaches in the test-negative study design for assessment of influenza vaccine effectiveness.

    PubMed

    Bond, H S; Sullivan, S G; Cowling, B J

    2016-06-01

    Influenza vaccination is the most practical means available for preventing influenza virus infection and is widely used in many countries. Because vaccine components and circulating strains frequently change, it is important to continually monitor vaccine effectiveness (VE). The test-negative design is frequently used to estimate VE. In this design, patients meeting the same clinical case definition are recruited and tested for influenza; those who test positive are the cases and those who test negative form the comparison group. When determining VE in these studies, the typical approach has been to use logistic regression, adjusting for potential confounders. Because vaccine coverage and influenza incidence change throughout the season, time is included among these confounders. While most studies use unconditional logistic regression, adjusting for time, an alternative approach is to use conditional logistic regression, matching on time. Here, we used simulated data to examine the potential for both regression approaches to permit accurate and robust estimates of VE. In situations where vaccine coverage changed during the influenza season, the conditional model and the unconditional models that adjusted for week as a categorical variable or with a spline function provided more accurate estimates. We illustrated the two approaches on data from a test-negative study of influenza VE against hospitalization in children in Hong Kong, for which the conditional logistic regression model provided the best fit to the data.
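
    The unconditional approach described above can be sketched as follows: logistic regression of test status on vaccination with calendar week as a categorical confounder, and VE = (1 - OR) x 100. The data are simulated here purely for illustration; the conditional alternative would instead match cases and test-negative controls on week.

    ```python
    # Sketch of the unconditional test-negative analysis with statsmodels.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 2000
    df = pd.DataFrame({
        "week": rng.integers(1, 21, size=n),        # week of presentation
        "vaccinated": rng.integers(0, 2, size=n),
    })
    # Simulate influenza-positive status with a protective vaccine effect (OR ~ 0.5).
    logit_p = -0.5 - 0.7 * df["vaccinated"] + 0.03 * df["week"]
    df["flu_positive"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    fit = smf.logit("flu_positive ~ vaccinated + C(week)", data=df).fit(disp=0)
    ve = (1 - np.exp(fit.params["vaccinated"])) * 100
    print(f"Estimated VE: {ve:.1f}%")   # a conditional fit would match on week instead
    ```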

  12. Numerical and analytical modeling of the end-loaded split (ELS) test specimens made of multi-directional coupled composite laminates

    NASA Astrophysics Data System (ADS)

    Samborski, Sylwester; Valvo, Paolo S.

    2018-01-01

    The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.

  13. Parameters estimation for reactive transport: A way to test the validity of a reactive model

    NASA Astrophysics Data System (ADS)

    Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme

    The chemical parameters used in reactive transport models are not known accurately due to the complexity and heterogeneous conditions of real domains. We present an efficient algorithm for estimating the chemical parameters using a Monte Carlo method. Monte Carlo methods are very robust for optimising the highly non-linear mathematical models describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm is used to estimate the chemical parameters of TBT sorption onto the natural quartz sand. By testing and comparing three surface complexation models, we show that the proposed adsorption model cannot explain the experimental data.
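
    The Monte Carlo estimation idea can be sketched as a simple random search: draw candidate chemical parameters, run the forward model, and keep the set that minimizes the misfit to the observations. The forward model below is a toy Freundlich isotherm standing in for the actual TBT surface-complexation chemistry, and the data are synthetic.

    ```python
    # Monte Carlo random-search parameter estimation against synthetic data.
    import numpy as np

    rng = np.random.default_rng(3)
    c_obs = np.array([0.1, 0.5, 1.0, 2.0, 5.0])        # aqueous concentrations
    s_obs = np.array([0.32, 0.71, 0.98, 1.33, 1.99])   # sorbed amounts (synthetic)

    def forward(c, k, n):                              # Freundlich isotherm s = k c^n
        return k * c**n

    best = (np.inf, None)
    for _ in range(20000):
        k, n = rng.uniform(0.1, 5.0), rng.uniform(0.1, 1.5)   # random candidate draw
        rmse = np.sqrt(np.mean((forward(c_obs, k, n) - s_obs) ** 2))
        if rmse < best[0]:
            best = (rmse, (k, n))                      # keep the best-fitting set

    print(f"Best RMSE {best[0]:.3f} at (k, n) = {best[1]}")
    ```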

  14. Interactive Model-Centric Systems Engineering (IMCSE) Phase 1

    DTIC Science & Technology

    2014-09-30

    ... and supporting infrastructure ... testing. 4. Supporting MPTs. During Phase 1, the opportunity to develop several MPTs to support IMCSE arose, including supporting infrastructure ... Analysis will be completed and tested with a case application, along with preliminary supporting infrastructure, which will then be used to inform the ...

  15. IPRT polarized radiative transfer model intercomparison project - Three-dimensional test cases (phase B)

    NASA Astrophysics Data System (ADS)

    Emde, Claudia; Barlakas, Vasileios; Cornet, Céline; Evans, Frank; Wang, Zhen; Labonotte, Laurent C.; Macke, Andreas; Mayer, Bernhard; Wendisch, Manfred

    2018-04-01

    Initially unpolarized solar radiation becomes polarized by scattering in the Earth's atmosphere. Molecular (Rayleigh) scattering in particular polarizes electromagnetic radiation, but scattering by aerosols, cloud droplets (Mie scattering) and ice crystals does so as well. Each atmospheric constituent produces a characteristic polarization signal, so spectro-polarimetric measurements are frequently employed for remote sensing of aerosol and cloud properties. Retrieval algorithms require efficient radiative transfer models. Usually, these apply the plane-parallel approximation (PPA), assuming that the atmosphere consists of horizontally homogeneous layers, which allows the vector radiative transfer equation (VRTE) to be solved efficiently. For remote sensing applications, the radiance is considered constant over the instantaneous field-of-view of the instrument, and each sensor element is treated independently in the plane-parallel approximation, neglecting horizontal radiation transport between adjacent pixels (Independent Pixel Approximation, IPA). In order to estimate the errors due to the IPA, three-dimensional (3D) vector radiative transfer models are required. So far, only a few such models exist. Therefore, the International Polarized Radiative Transfer (IPRT) working group of the International Radiation Commission (IRC) has initiated a model intercomparison project to provide benchmark results for polarized radiative transfer. The group has already performed an intercomparison for one-dimensional (1D) multi-layer test cases [phase A, 1]. This paper presents the continuation of the intercomparison project (phase B) for 2D and 3D test cases: a step cloud, a cubic cloud, and a more realistic scenario including a 3D cloud field generated by a Large Eddy Simulation (LES) model and typical background aerosols. The commonly established benchmark results for 3D polarized radiative transfer are available at the IPRT website (http://www.meteo.physik.uni-muenchen.de/iprt).

  16. Strategic Factors in the Choice of a Model of Public Relations. Case Study: Seventh-day Adventist Church World Headquarters.

    ERIC Educational Resources Information Center

    Denton, Holly M.

    A study tested a model of organizational variables that earlier research had identified as important in influencing what model(s) of public relations an organization selects. Models of public relations (as outlined by J. Grunig and Hunt in 1984) are defined as either press agentry, public information, two-way asymmetrical, or two-way symmetrical.…

  17. Using a bayesian latent class model to evaluate the utility of investigating persons with negative polymerase chain reaction results for pertussis.

    PubMed

    Tarr, Gillian A M; Eickhoff, Jens C; Koepke, Ruth; Hopfensperger, Daniel J; Davis, Jeffrey P; Conway, James H

    2013-07-15

    Pertussis remains difficult to control. Imperfect sensitivity of diagnostic tests and a lack of specific guidance regarding interpretation of negative test results among patients with compatible symptoms may contribute to its spread. In this study, we examined whether additional pertussis cases could be identified if persons with negative pertussis test results were routinely investigated. We conducted interviews among 250 subjects aged ≤18 years with pertussis polymerase chain reaction (PCR) results reported from 2 reference laboratories in Wisconsin during July-September 2010 to determine whether their illnesses met the Centers for Disease Control and Prevention's clinical case definition (CCD) for pertussis. PCR validity measures were calculated using the CCD as the standard for pertussis disease. Two Bayesian latent class models were used to adjust the validity measures for pertussis detectable by 1) culture alone and 2) culture and/or more sensitive measures such as serology. Among 190 PCR-negative subjects, 54 (28%) had illnesses meeting the CCD. In adjusted analyses, PCR sensitivity and negative predictive value were 94% and 99% in the first model and 43% and 87% in the second. The models suggested that public health follow-up of reported pertussis patients with PCR-negative results leads to the detection of more true pertussis cases than follow-up of PCR-positive persons alone. The results also suggest a need for a more specific pertussis CCD.
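
    The adjusted sensitivity and negative predictive value reported above follow directly from Bayes' rule. A sketch of the arithmetic, with illustrative inputs rather than the paper's model estimates, shows why PCR-negative patients can still include many true cases when sensitivity against a broad clinical standard is low:

    ```python
    # Bayes'-rule arithmetic behind negative predictive value (NPV): with
    # imperfect sensitivity, some test-negative patients are true cases.
    def npv(sens, spec, prev):
        p_neg_and_case = (1 - sens) * prev           # false negatives
        p_neg_and_noncase = spec * (1 - prev)        # true negatives
        return p_neg_and_noncase / (p_neg_and_noncase + p_neg_and_case)

    sens, spec, prev = 0.43, 0.98, 0.28   # illustrative: low sensitivity vs. a broad CCD
    print(f"NPV = {npv(sens, spec, prev):.2f}")
    print(f"Share of test-negatives who are true cases: {1 - npv(sens, spec, prev):.2f}")
    ```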

  18. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2013-09-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on; the latter is inevitably computing-platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.
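
    The conventional error norms referred to above are typically defined with area-weighted global integrals I[.]: l2 = sqrt(I[(q - q_t)^2] / I[q_t^2]) and linf = max|q - q_t| / max|q_t|. A sketch of these normalized diagnostics on a toy tracer field (the definitions follow the test-case literature; the fields below are illustrative):

    ```python
    # Standard normalized l2 and linf error norms for a transported tracer.
    import numpy as np

    def error_norms(q, q_true, weights):
        I = lambda f: np.sum(weights * f)                 # area-weighted integral
        l2 = np.sqrt(I((q - q_true) ** 2) / I(q_true ** 2))
        linf = np.max(np.abs(q - q_true)) / np.max(np.abs(q_true))
        return l2, linf

    # Toy example: numerical tracer field vs. the known analytic solution.
    w = np.full(100, 1.0 / 100)                           # uniform cell areas
    q_true = np.sin(np.linspace(0, np.pi, 100))
    q_num = q_true + 0.01 * np.cos(np.linspace(0, 7, 100))
    print("l2 = %.4f, linf = %.4f" % error_norms(q_num, q_true, w))
    ```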

  19. A standard test case suite for two-dimensional linear transport on the sphere: results from a collection of state-of-the-art schemes

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.

    2014-01-01

    Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.

  20. Implementing and sustaining a mobile medical clinic for prenatal care and sexually transmitted infection prevention in rural Mysore, India.

    PubMed

    Kojima, Noah; Krupp, Karl; Ravi, Kavitha; Gowda, Savitha; Jaykrishna, Poornima; Leonardson-Placek, Caitlyn; Siddhaiah, Anand; Bristow, Claire C; Arun, Anjali; Klausner, Jeffrey D; Madhivanan, Purnima

    2017-03-06

    In rural India, mobile medical clinics are useful models for delivering health promotion, education, and care. Mobile medical clinics use fewer providers for larger catchment areas compared to traditional clinic models in resource limited settings, which is especially useful in areas with shortages of healthcare providers and a wide geographical distribution of patients. From 2008 to 2011, we built infrastructure to implement a mobile clinic system to educate rural communities about maternal child health, train community health workers in common safe birthing procedures, and provide comprehensive antenatal care, prevention of mother-to-child transmission (PMTCT) of human immunodeficiency virus (HIV), and testing for specific infections in a large rural catchment area of pregnant women in rural Mysore. This was done using two mobile clinics and one walk-in clinic. Women were tested for HIV, hepatitis B, syphilis, and bacterial vaginosis along with random blood sugar, urine albumin, and anemia. Sociodemographic information, medical, and obstetric history were collected using interviewer-administered questionnaires in the local language, Kannada. Data were entered in Microsoft Excel and analyzed using Stata SE 14.1. During the program period, nearly 700 community workers and 100 health care providers were trained; educational sessions were delivered to over 15,000 men and women and integrated antenatal care and HIV/sexually transmitted infection testing was offered to 3545 pregnant women. There were 22 (0.6%) cases of HIV, 19 (0.5%) cases of hepatitis B, 2 (0.1%) cases of syphilis, and 250 (7.1%) cases of BV, which were identified and treated. Additionally, 1755 (49.5%) cases of moderate to severe anemia and 154 (4.3%) cases of hypertension were identified and treated among the pregnant women tested. Patient-centered mobile medical clinics are feasible, successful, and acceptable models that can be used to provide quality healthcare to pregnant women in rural and hard-to-reach settings. The high numbers of pregnant women attending mobile medical clinics show that integrated antenatal care with PMTCT services were acceptable and utilized. The program also developed and trained health professionals who continue to remain in those communities.

  1. Finite Element Vibration Modeling and Experimental Validation for an Aircraft Engine Casing

    NASA Astrophysics Data System (ADS)

    Rabbitt, Christopher

    This thesis presents a procedure for the development and validation of a theoretical vibration model, applies this procedure to a pair of aircraft engine casings, and compares select parameters from experimental testing of those casings to those from a theoretical model using the Modal Assurance Criterion (MAC) and linear regression coefficients. A novel method of determining the optimal MAC between axisymmetric results is developed and employed. It is concluded that the dynamic finite element models developed as part of this research are fully capable of modelling the modal parameters within the frequency range of interest. Confidence intervals calculated in this research for correlation coefficients provide important information regarding the reliability of predictions, and it is recommended that these intervals be calculated for all comparable coefficients. The procedure outlined for aligning mode shapes around an axis of symmetry proved useful, and the results are promising for the development of further optimization techniques.
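
    The Modal Assurance Criterion used for these comparisons is a standard quantity: MAC(a, x) = |a^T x|^2 / ((a^T a)(x^T x)), equal to 1 for perfectly correlated mode shapes. A short sketch with made-up mode shape vectors:

    ```python
    # Modal Assurance Criterion between an FE-predicted and a measured mode shape.
    import numpy as np

    def mac(phi_a, phi_x):
        num = np.abs(phi_a.conj() @ phi_x) ** 2
        return num / ((phi_a.conj() @ phi_a).real * (phi_x.conj() @ phi_x).real)

    fe_mode = np.array([0.0, 0.31, 0.59, 0.81, 0.95, 1.0])     # FE prediction (illustrative)
    test_mode = np.array([0.0, 0.29, 0.62, 0.79, 0.97, 1.02])  # measured shape (illustrative)
    print(f"MAC = {mac(fe_mode, test_mode):.4f}")              # close to 1.0
    ```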

  2. A systems approach to healthcare: agent-based modeling, community mental health, and population well-being.

    PubMed

    Silverman, Barry G; Hanrahan, Nancy; Bharathy, Gnana; Gordon, Kim; Johnson, Dan

    2015-02-01

    Explore whether agent-based modeling and simulation can help healthcare administrators discover interventions that increase population wellness and quality of care while, simultaneously, decreasing costs. Since important dynamics often lie in the social determinants outside the health facilities that provide services, this study models the problem at three levels (individuals, organizations, and society). The study explores the utility of translating an existing (prize-winning) software package for modeling complex societal systems and agents' daily life activities (a Sim City style of software) into a desired decision support system. A case study tests whether the three-level system modeling approach is feasible, valid, and useful. The case study involves an urban population with serious mental illness, in particular Philadelphia's Medicaid population (n=527,056). Section 3 explains the models using data from the case study and thereby establishes the feasibility of the approach for modeling a real system. The models were trained and tuned using national epidemiologic datasets and various domain expert inputs. To avoid co-mingling of training and testing data, the simulations were then run and compared (Section 4.1) to an analysis of 250,000 Philadelphia patient hospital admissions for the year 2010 in terms of re-hospitalization rate, number of doctor visits, and days in hospital. Based on the Student t-test, deviations between simulated and real-world outcomes are not statistically significant. Validity is thus established for the 2008-2010 timeframe. We computed models of various types of interventions that were ineffective, as well as 4 categories of interventions (e.g., reduced per-nurse caseload, increased check-ins and stays, etc.) that result in improvement in well-being and cost. The three-level approach appears useful in helping health administrators sort through system complexities to find effective interventions at lower costs. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Exploring different strategies for imbalanced ADME data problem: case study on Caco-2 permeability modeling.

    PubMed

    Pham-The, Hai; Casañola-Martin, Gerardo; Garrigues, Teresa; Bermejo, Marival; González-Álvarez, Isabel; Nguyen-Hai, Nam; Cabrera-Pérez, Miguel Ángel; Le-Thi-Thu, Huong

    2016-02-01

    In many absorption, distribution, metabolism, and excretion (ADME) modeling problems, imbalanced data can negatively affect the classification performance of machine learning algorithms. Solutions for handling imbalanced datasets have been proposed, but their application to ADME modeling tasks is underexplored. In this paper, various strategies, including cost-sensitive learning and resampling methods, were studied to tackle the moderate imbalance problem of a large Caco-2 cell permeability database. Simple physicochemical molecular descriptors were utilized for data modeling. Support vector machine classifiers were constructed and compared using multiple comparison tests. Results showed that the models developed on the basis of resampling strategies displayed better performance than the cost-sensitive classification models, especially in the case of oversampled data, where misclassification rates for the minority class were 0.11 and 0.14 for the training and test sets, respectively. A consensus model with an enhanced applicability domain was subsequently constructed and showed improved performance. This model was used to predict a set of randomly selected high-permeability reference drugs according to the biopharmaceutics classification system. Overall, this study provides a comparison of numerous rebalancing strategies and demonstrates the effectiveness of oversampling methods in dealing with imbalanced permeability data problems.
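
    One common oversampling workflow of the kind studied here can be sketched with SMOTE from the imbalanced-learn package followed by an SVM fit (SMOTE is a representative choice, not necessarily the paper's exact method). The synthetic descriptors stand in for real physicochemical features, and resampling is applied to the training split only.

    ```python
    # Rebalance the minority class with SMOTE, then fit an RBF-kernel SVM.
    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(4)
    X = rng.normal(size=(1000, 6))                       # stand-ins for logP, MW, PSA, ...
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 1000) > 1.2).astype(int)  # minority = 1

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample training only

    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_bal, y_bal)
    print(classification_report(y_te, clf.predict(X_te)))
    ```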

  4. Simulation of solute transport across low-permeability barrier walls

    USGS Publications Warehouse

    Harte, P.T.; Konikow, Leonard F.; Hornberger, G.Z.

    2006-01-01

    Low-permeability, non-reactive barrier walls are often used to contain contaminants in an aquifer. Rates of solute transport through such barriers are typically many orders of magnitude slower than rates through the aquifer. Nevertheless, the success of remedial actions may be sensitive to these low rates of transport. Two numerical simulation methods for representing low-permeability barriers in a finite-difference groundwater-flow and transport model were tested. In the first method, the hydraulic properties of the barrier were represented directly on grid cells; in the second method, the intercell hydraulic-conductance values were adjusted to approximate the reduction in horizontal flow, allowing use of a coarser and computationally efficient grid. The alternative methods were tested and evaluated on the basis of hypothetical test problems and a field case involving tetrachloroethylene (PCE) contamination at a Superfund site in New Hampshire. For all cases, advective transport across the barrier was negligible, but preexisting numerical approaches to calculating dispersion yielded dispersive fluxes that were greater than expected. A transport model (MODFLOW-GWT) was modified to (1) allow different dispersive and diffusive properties to be assigned to the barrier than to the adjacent aquifer and (2) more accurately calculate dispersion from concentration gradients and solute fluxes near barriers. The new approach yields reasonable and accurate concentrations for the test cases.
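
    The second method, adjusting intercell conductance values, amounts to treating the aquifer-wall-aquifer path between adjacent cell centers as Darcy resistances in series. A minimal sketch with assumed, illustrative properties:

    ```python
    # Series combination of segment resistances L_i/K_i between two cell centers;
    # the effective conductance is area / total resistance (Darcy flow).
    def intercell_conductance(lengths, conductivities, area=1.0):
        resistance = sum(L / K for L, K in zip(lengths, conductivities))
        return area / resistance

    # Two half-cells of aquifer (K = 30 m/d) separated by a 1 m slurry wall (K = 1e-4 m/d):
    C = intercell_conductance([49.5, 1.0, 49.5], [30.0, 1e-4, 30.0])
    print(f"Effective conductance: {C:.3e}  (vs {1.0/(100/30):.3e} without the wall)")
    # The thin wall dominates the resistance, reducing conductance by ~3 orders
    # of magnitude, which is why the coarse-grid adjustment works.
    ```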

  5. Systems engineering principles for the design of biomedical signal processing systems.

    PubMed

    Faust, Oliver; Acharya U, Rajendra; Sputh, Bernhard H C; Min, Lim Choo

    2011-06-01

    Systems engineering aims to produce reliable systems which function according to specification. In this paper we follow a systems engineering approach to design a biomedical signal processing system. We discuss requirements capturing, specification definition, implementation and testing of a classification system. These steps are executed as formally as possible. The requirements, which motivate the system design, are based on diabetes research. The main requirement for the classification system is to be a reliable component of a machine which controls diabetes. Reliability is very important, because uncontrolled diabetes may lead to hyperglycaemia (raised blood sugar) and over a period of time may cause serious damage to many of the body's systems, especially the nerves and blood vessels. In a second step, these requirements are refined into a formal CSP‖B model. The formal model expresses the system functionality in a clear and semantically strong way. Subsequently, the proven system model was translated into an implementation. This implementation was tested with use cases and failure cases. Formal modeling and automated model checking gave us deep insight into the system functionality. This insight enabled us to create a reliable and trustworthy implementation. With extensive tests we established trust in the reliability of the implementation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  6. Geophysical techniques applied to urban planning in complex near surface environments. Examples of Zaragoza, NE Spain

    NASA Astrophysics Data System (ADS)

    Pueyo-Anchuela, Ó.; Casas-Sainz, A. M.; Soriano, M. A.; Pocoví-Juan, A.

    Complex geological shallow subsurface environments represent an important handicap for urban and building projects. The geological features of the Central Ebro Basin, with sharp lateral changes in Quaternary deposits, alluvial karst phenomena and anthropic activity, can preclude the characterization of future urban areas from isolated geomechanical tests or from incorrectly dimensioned geophysical surveys alone. This complexity is analyzed here in two different test fields: (i) one linked to flat-bottomed valleys with an irregular distribution of Quaternary deposits related to sharp lateral facies changes and an irregular position of the preconsolidated substratum, and (ii) a second with similar complexities in the alluvial deposits and karst activity linked to solution of the underlying evaporite substratum. The results show that different geophysical techniques yield similar geological models in the first case (flat-bottomed valleys), whereas only the combined application of several geophysical techniques permits a correct evaluation of the geological model complexities in the second case (alluvial karst). In this second case, the geological and surface information refines the sensitivity of the applied geophysical techniques to different indicators of karst activity. In both cases 3D models are needed to correctly distinguish lateral sedimentary changes in the alluvial deposits from superimposed karst activity.

  7. Ecology and geography of avian influenza (HPAI H5N1) transmission in the Middle East and northeastern Africa

    PubMed Central

    Williams, Richard AJ; Peterson, A Townsend

    2009-01-01

    Background: The emerging highly pathogenic avian influenza strain H5N1 ("HPAI-H5N1") has spread broadly in the past decade and is now the focus of considerable concern. We tested the hypothesis that spatial distributions of HPAI-H5N1 cases are related consistently and predictably to coarse-scale environmental features in the Middle East and northeastern Africa. We used ecological niche models to relate virus occurrences to 8 km resolution digital data layers summarizing parameters of monthly surface reflectance and landform. Predictive challenges included a variety of spatial stratification schemes in which models were challenged to predict case distributions in broadly unsampled areas. Results: In almost all tests, HPAI-H5N1 cases were indeed occurring under predictable sets of environmental conditions: cases were generally predicted to be absent from areas with low NDVI values and minimal seasonal variation, and present in areas with a broad range of NDVI values and appreciable seasonal variation. Although we documented significant predictive ability of our models, even between our study region and West Africa, case occurrences in the Arabian Peninsula appear to follow a distinct environmental regime. Conclusion: Overall, we documented a variable environmental "fingerprint" for areas suitable for HPAI-H5N1 transmission. PMID:19619336

  8. Improving In Vitro to In Vivo Extrapolation by Incorporating Toxicokinetic Measurements: A Case Study of Lindane-Induced Neurotoxicity

    EPA Science Inventory

    Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicit...

  9. Using exposure prediction tools to link exposure and dosimetry for risk based decisions: a case study with phthalates

    EPA Science Inventory

    The Population Life-course Exposure to Health Effects Modeling (PLETHEM) platform being developed provides a tool that links results from emerging toxicity testing tools to exposure estimates for humans as defined by the USEPA. A reverse dosimetry case study using phthalates was ...

  10. Acoustic testing of a 1.5 pressure ratio low tip speed fan with casing tip bleed (QEP Fan B scale model)

    NASA Technical Reports Server (NTRS)

    Kazin, S. B.; Minzner, W. R.; Paas, J. E.

    1971-01-01

    A scale model of the bypass flow region of a 1.5 pressure ratio, single-stage, low tip speed fan was tested with a rotor tip casing bleed slot to determine its effects on noise generation. The bleed slot was located 1/2 inch (1.3 cm) upstream of the rotor leading edge and was configured as a continuous opening around the circumference. The bleed manifold system was operated over a range of bleed rates corresponding to as much as 6% of the fan flow at approach thrust and 4.25% of the fan flow at takeoff thrust. Acoustic results indicate that a bleed rate of 4% of the fan flow reduces the fan's maximum approach 200 foot (61.0 m) sideline PNL by 0.5 PNdB, and the corresponding takeoff thrust noise by 1.1 PNdB, below the levels with zero bleed. However, comparison of the standard casing (no bleed slot) and the slotted bleed casing with zero bleed shows that the bleed slot itself caused a noise increase.

  11. Testing goodness of fit in regression: a general approach for specified alternatives.

    PubMed

    Solari, Aldo; le Cessie, Saskia; Goeman, Jelle J

    2012-12-10

    When fitting generalized linear models or the Cox proportional hazards model, it is important to have tools to test for lack of fit. Because lack of fit comes in all shapes and sizes, distinguishing among different types of lack of fit is of practical importance. We argue that an adequate diagnosis of lack of fit requires a specified alternative model. Such specification identifies the type of lack of fit the test is directed against, so that if we reject the null hypothesis, we know the direction of the departure from the model. The goodness-of-fit approach of this paper makes it possible to treat different types of lack of fit within a unified general framework and to consider many existing tests as special cases. Connections with penalized likelihood and random effects are discussed, and the application of the proposed approach is illustrated with medical examples. Tailored functions for goodness-of-fit testing have been implemented in the R package globaltest. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Modeling the High Speed Research Cycle 2B Longitudinal Aerodynamic Database Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, E. A.; Proffitt, M. S.

    1999-01-01

    The data for longitudinal non-dimensional aerodynamic coefficients in the High Speed Research Cycle 2B aerodynamic database were modeled using polynomial expressions identified with an orthogonal function modeling technique. The discrepancy between the tabular aerodynamic data and the polynomial models was tested and shown to be less than 15 percent for drag, lift, and pitching moment coefficients over the entire flight envelope. Most of this discrepancy was traced to smoothing of local measurement noise and to the omission of mass case 5 data in the modeling process. A simulation check case showed that the polynomial models provided a compact and accurate representation of the nonlinear aerodynamic dependencies contained in the HSR Cycle 2B tabular aerodynamic database.

  13. Development of a One-Equation Transition/Turbulence Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    EDWARDS,JACK R.; ROY,CHRISTOPHER J.; BLOTTNER,FREDERICK G.

    2000-09-26

    This paper reports on the development of a unified one-equation model for the prediction of transitional and turbulent flows. An eddy viscosity transport equation for non-turbulent fluctuation growth, based on that proposed by Warren and Hassan (Journal of Aircraft, Vol. 35, No. 5), is combined with the Spalart-Allmaras one-equation model for turbulent fluctuation growth. Blending of the two equations is accomplished through a multidimensional intermittency function based on the work of Dhawan and Narasimha (Journal of Fluid Mechanics, Vol. 3, No. 4). The model predicts both the onset and extent of transition. Low-speed test cases include transitional flow over a flat plate, a single-element airfoil, and a multi-element airfoil in landing configuration. High-speed test cases include transitional Mach 3.5 flow over a 5-degree cone and Mach 6 flow over a flared-cone configuration. Results are compared with experimental data, and the spatial accuracy of selected predictions is analyzed.
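
    The intermittency blending can be illustrated with the classical one-dimensional Dhawan-Narasimha distribution (the paper generalizes this to a multidimensional function); the onset location and extent scale below are assumed values:

    ```python
    # Classical Dhawan-Narasimha intermittency distribution:
    #   gamma(x) = 1 - exp(-0.412 * ((x - x_t)/lam)^2) for x >= x_t, else 0,
    # where x_t is transition onset and lam a transition-extent length scale.
    import numpy as np

    def intermittency(x, x_t, lam):
        xi = np.maximum(x - x_t, 0.0) / lam
        return 1.0 - np.exp(-0.412 * xi**2)

    x = np.linspace(0.0, 1.0, 6)        # streamwise stations, m (illustrative)
    print(intermittency(x, x_t=0.3, lam=0.2))
    # The blended eddy viscosity is then nu_eff = (1 - gamma)*nu_nonturb + gamma*nu_turb.
    ```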

  14. GPCR-I-TASSER: A hybrid approach to G protein-coupled receptor structure modeling and the application to the human genome

    PubMed Central

    Zhang, Jian; Yang, Jianyi; Jang, Richard; Zhang, Yang

    2015-01-01

    Experimental structure determination remains very difficult for G protein-coupled receptors (GPCRs). We propose a new hybrid protocol to construct GPCR structure models that integrates experimental mutagenesis data with ab initio transmembrane (TM) helix assembly simulations. The method was tested on 24 known GPCRs, where the ab initio TM-helix assembly procedure constructed the correct fold for 20 cases. When combined with weak-homology and sparse mutagenesis restraints, the method generated correct folds for all the tested cases, with an average C-alpha RMSD of 2.4 Å in the TM regions. The new hybrid protocol was applied to model all 1026 GPCRs in the human genome, of which 923 have a high confidence score and are expected to have correct folds; these include many pharmaceutically important families with no previously solved structures, including Trace amine, Prostanoid, Releasing hormone, Melanocortin, Vasopressin and Neuropeptide Y receptors. The results demonstrate new progress on genome-wide structure modeling of transmembrane proteins. PMID:26190572

  15. Automated Test Case Generation for an Autopilot Requirement Prototype

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael

    2011-01-01

    Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and in the rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution, which allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component of the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
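
    The symbolic-execution idea can be illustrated with an SMT solver: encode a branch of the automation logic as a path condition and solve for concrete user inputs that exercise it. Z3 is used here only as a stand-in; the variables and the branch are hypothetical, not ADEPT's actual model.

    ```python
    # Solve a path condition for concrete test inputs with Z3.
    from z3 import Int, Bool, Solver, And, sat

    altitude = Int("selected_altitude")     # hypothetical autopilot inputs
    vs_mode = Bool("vs_mode_armed")

    s = Solver()
    # Path condition for a hypothetical "capture altitude" branch:
    s.add(And(altitude > 0, altitude < 50000, vs_mode))

    if s.check() == sat:
        m = s.model()
        print("test input:", m[altitude], m[vs_mode])   # e.g. 1, True
    # The design's expected behavior at this input then serves as the test oracle.
    ```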

  16. Long memory and multifractality: A joint test

    NASA Astrophysics Data System (ADS)

    Goddard, John; Onali, Enrico

    2016-06-01

    The properties of statistical tests for hypotheses concerning the parameters of the multifractal model of asset returns (MMAR) are investigated, using Monte Carlo techniques. We show that, in the presence of multifractality, conventional tests of long memory tend to over-reject the null hypothesis of no long memory. Our test addresses this issue by jointly estimating long memory and multifractality. The estimation and test procedures are applied to exchange rate data for 12 currencies. Among the nested model specifications that are investigated, in 11 out of 12 cases, daily returns are most appropriately characterized by a variant of the MMAR that applies a multifractal time-deformation process to NIID returns. There is no evidence of long memory.

  17. Real-Time Extended Interface Automata for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi; Man, Tianlong; Liu, Bin

    2014-01-01

    Testing and verification of the interfaces between software components are particularly important due to the large number of complex interactions, which traditional modeling languages struggle to describe, particularly with respect to temporal information and the control of test inputs. This paper presents real-time extended interface automata (RTEIA), which add a clearer and more detailed description of temporal information through the use of time words. We also establish an input interface automaton for every input, in order to handle input control and interface coverage flexibly when applied in the software testing field. Detailed definitions of the RTEIA and of the test case generation algorithm are provided in this paper. The feasibility and efficiency of this method have been verified in the testing of a real aircraft braking system. PMID:24892080

  18. Applying cost accounting to operating room staffing in otolaryngology: time-driven activity-based costing and outpatient adenotonsillectomy.

    PubMed

    Balakrishnan, Karthik; Goico, Brian; Arjmand, Ellis M

    2015-04-01

    (1) To describe the application of a detailed cost-accounting method (time-driven activity-based costing) to operating room personnel costs, avoiding the proxy use of hospital and provider charges. (2) To model potential cost efficiencies using different staffing models with the case study of outpatient adenotonsillectomy. Prospective cost analysis case study. Tertiary pediatric hospital. All otolaryngology providers and otolaryngology operating room staff at our institution. Time-driven activity-based costing demonstrated precise per-case and per-minute calculation of personnel costs. We identified several areas of unused personnel capacity in a basic staffing model. Per-case personnel costs decreased by 23.2% by allowing a surgeon to run 2 operating rooms, despite doubling all other staff. Further cost reductions up to a total of 26.4% were predicted with additional staffing rearrangements. Time-driven activity-based costing allows detailed understanding of not only personnel costs but also how personnel time is used. This in turn allows testing of alternative staffing models to decrease unused personnel capacity and increase efficiency. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
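
    Time-driven activity-based costing reduces to multiplying each role's capacity cost rate (cost per minute) by the minutes that role spends per case and summing over roles. A sketch with illustrative figures, not the study's data:

    ```python
    # Per-case personnel cost under time-driven activity-based costing.
    roles = {
        # role: (cost per minute of capacity, minutes used per adenotonsillectomy)
        "surgeon":           (10.00, 35),
        "anesthesiologist":  (7.50, 40),
        "circulating_nurse": (1.20, 55),
        "scrub_tech":        (0.90, 55),
    }

    per_case = sum(rate * minutes for rate, minutes in roles.values())
    print(f"Personnel cost per case: ${per_case:.2f}")
    # Letting one surgeon run two rooms spreads the surgeon's capacity cost over
    # more cases, which is how the staffing-model comparisons find their savings.
    ```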

  19. The multicategory case of the sequential Bayesian pixel selection and estimation procedure

    NASA Technical Reports Server (NTRS)

    Pore, M. D.; Dennis, T. B. (Principal Investigator)

    1980-01-01

    A Bayesian technique for stratified proportion estimation, and a sampling scheme based on minimizing the mean squared error of this estimator, were developed and tested on LANDSAT multispectral scanner data, using the beta density function to model the prior distribution in the two-class case. An extension of this procedure to the k-class case is considered. A generalization of the beta function is shown to be a density function for the general case, which allows the procedure to be extended.
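
    In the two-class case, the Beta prior is conjugate to the binomial likelihood, so the posterior for the class proportion stays in the Beta family. A minimal sketch of the update, with assumed prior parameters and pixel counts:

    ```python
    # Beta-binomial conjugate update: prior Beta(a, b), k "class 1" pixels of n
    # sampled, posterior Beta(a + k, b + n - k) with mean (a + k) / (a + b + n).
    def beta_posterior_mean(a, b, k, n):
        return (a + k) / (a + b + n)

    a, b = 2.0, 5.0        # prior belief: class 1 is fairly uncommon
    k, n = 37, 100         # labeled sample of pixels
    print(f"Posterior mean proportion: {beta_posterior_mean(a, b, k, n):.3f}")
    # Beta(39, 68): the prior pulls the raw 0.37 slightly toward the prior mean 2/7.
    ```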

  20. Constant-parameter capture-recapture models

    USGS Publications Warehouse

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  1. A Model for Random Student Drug Testing

    ERIC Educational Resources Information Center

    Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle

    2011-01-01

    The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…

  2. Testing Specific Hypotheses Concerning Latent Group Differences in Multi-group Covariance Structure Analysis with Structured Means.

    ERIC Educational Resources Information Center

    Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    In multigroup covariance structure analysis with structured means, the traditional latent selection model is formulated as a special case of phenotypic selection. Illustrations with real and simulated data demonstrate how one can test specific hypotheses concerning selection on latent variables. (SLD)

  3. The European Thoracic Surgery Database project: modelling the risk of in-hospital death following lung resection.

    PubMed

    Berrisford, Richard; Brunelli, Alessandro; Rocco, Gaetano; Treasure, Tom; Utley, Martin

    2005-08-01

    To identify pre-operative factors associated with in-hospital mortality following lung resection and to construct a risk model that could be used prospectively to inform decisions and retrospectively to enable fair comparisons of outcomes. Data were submitted to the European Thoracic Surgery Database from 27 units in 14 countries. We analysed data concerning all patients that had a lung resection. Logistic regression was used with a random sample of 60% of cases to identify pre-operative factors associated with in-hospital mortality and to build a model of risk. The resulting model was tested on the remaining 40% of patients. A second model based on age and ppoFEV1% was developed for risk of in-hospital death amongst tumour resection patients. Of the 3426 adult patients that had a first lung resection for whom mortality data were available, 66 died within the same hospital admission. Within the data used for model development, dyspnoea (according to the Medical Research Council classification), ASA (American Society of Anaesthesiologists) score, class of procedure and age were found to be significantly associated with in-hospital death in a multivariate analysis. The logistic model developed on these data displayed predictive value when tested on the remaining data. Two models of the risk of in-hospital death amongst adult patients undergoing lung resection have been developed. The models show predictive value and can be used to distinguish between high-risk and low-risk patients. Amongst the test data, the model developed for all diagnoses performed well at low risk, underestimated mortality at medium risk and overestimated mortality at high risk. The second model for resection of lung neoplasms was developed after establishing the performance of the first model and so could not be tested robustly. That said, we were encouraged by its performance over the entire range of estimated risk. The first of these two models could be regarded as an evaluation based on clinically available criteria, while the second uses data obtained from objective measurement. We are optimistic that further model development and testing will provide a tool suitable for case mix adjustment.

  4. Laboratory Study on Macro-Features of Wave Breaking Over Bars and Artificial Reefs

    DTIC Science & Technology

    1990-07-01

    Prototype and model conditions of Case CE400 (pilot test); list of design parameters for base tests. ... bar configurations, and the procedure was repeated. A pilot test was performed as a trial of the methodology and validation of the criterion on bar depth given by Larson and Kraus (1989) prior to actual testing. In this pilot test, the wave conditions and equilibrium bar formed in a ...

  5. Wind Tunnel Interference Effects on Tilt Rotor Testing Using Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Koning, Witold J. F.

    2015-01-01

    Experimental techniques to measure rotorcraft aerodynamic performance are widely used. However, most of them are either unable to capture interference effects from bodies, or require an extremely large computational budget. The objective of the present research is to develop an XV-15 Tilt Rotor Research Aircraft rotor model for investigation of wind tunnel wall interference using a novel Computational Fluid Dynamics (CFD) solver for rotorcraft, RotCFD. In RotCFD, a mid-fidelity URANS solver is used with an incompressible flow model and a realizable k-ε turbulence model. The rotor is, however, not modeled using a computationally expensive, unsteady viscous body-fitted grid, but is instead modeled using a blade element model with a momentum source approach. Various flight modes of the XV-15 isolated rotor, including hover, tilt, and airplane mode, have been simulated and correlated to existing experimental and theoretical data. The rotor model is subsequently used for wind tunnel wall interference simulations in the National Full-Scale Aerodynamics Complex (NFAC) at NASA Ames Research Center in California. The results from the validation of the isolated rotor performance showed good correlation with experimental and theoretical data and were on par with known theoretical analyses. In RotCFD the setup, grid generation, and running of cases are faster than in many CFD codes, which makes it a useful engineering tool. Performance predictions need not be as accurate as those of high-fidelity CFD codes, as long as wall effects can be properly simulated. For both test sections of the NFAC, wall interference was examined by simulating the XV-15 rotor in the test section of the wind tunnel and with an identical grid but extended boundaries in free field. Both cases were also examined with an isolated rotor or with the rotor mounted on the modeled geometry of the Tiltrotor Test Rig (TTR). A 'quasi linear trim' was used to trim the thrust for the rotor so that power could be compared as the unique variable. Power differences between the free-field and wind-tunnel cases ranged from -7% to 0% in the 80- by 120-Foot Wind Tunnel test section and from -1.6% to 4.8% in the 40- by 80-Foot Wind Tunnel, depending on the TTR orientation, tunnel velocity, and blade setting. The TTR will be used in 2016 to test the Bell 609 rotor in a similar fashion to the research in this report.

  6. Injection characteristics study of high-pressure direct injector for Compressed Natural Gas (CNG) using experimental and analytical method

    NASA Astrophysics Data System (ADS)

    Taha, Z.; Rahim, MF Abdul; Mamat, R.

    2017-10-01

    The injection characteristics of a direct injector affect the mixture formation and combustion processes. In addition, the injector studied here was converted from gasoline operation for CNG application. The mass flow rate of the CNG direct injector was therefore measured by independently testing a single injector on a test bench. The first case investigated the effect of CNG injection pressure, and the second case evaluated the effect of the pulse-width of the injection duration. An analytical model was also developed to predict the mass flow rate of the injector. The injector was operated in a choked condition in both the experiments and the simulation studies. In Case 1, the mass flow rate through the injector was shown to vary linearly with injection pressure. Over the tested injection pressures of 20 bar to 60 bar, the resulting mass flow rates were in the range of 0.4 g/s to 1.2 g/s, which meets the theoretical flow rate required by the engine. In Case 2, however, the average mass flow rate at short injection durations was lower than that recorded in Case 1. At an injection pressure of 50 bar, the average mass flow rates for Case 2 and Case 1 were 0.7 g/s and 1.1 g/s, respectively. Also, the mass flow rate measured at short injection durations fluctuated between 0.2 g/s and 1.3 g/s without any noticeable trend. The injector model was able to predict the trend of the mass flow rate at different injection pressures but unable to track the fluctuating behavior at short injection durations.
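
    The reported linearity with injection pressure is what the standard isentropic choked-orifice relation predicts, since choked mass flow scales directly with upstream pressure. A minimal sketch follows; the discharge coefficient, orifice area, and methane gas properties are assumed values, not the paper's calibration.

        # Hedged sketch: choked mass flow through an orifice (values assumed).
        from math import sqrt

        def choked_mass_flow(p0_pa, t0_k, gamma=1.32, R=518.3, cd=0.8, area_m2=1.0e-6):
            """Mass flow (kg/s) through a choked orifice at upstream pressure p0."""
            term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
            return cd * area_m2 * p0_pa * sqrt(gamma / (R * t0_k)) * term

        # Linear in p0, consistent with the reported 20-60 bar pressure sweep:
        for p_bar in (20, 40, 60):
            print(p_bar, "bar ->", round(1e3 * choked_mass_flow(p_bar * 1e5, 300.0), 2), "g/s")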

  7. External Validation of a Case-Mix Adjustment Model for the Standardized Reporting of 30-Day Stroke Mortality Rates in China.

    PubMed

    Yu, Ping; Pan, Yuesong; Wang, Yongjun; Wang, Xianwei; Liu, Liping; Ji, Ruijun; Meng, Xia; Jing, Jing; Tong, Xu; Guo, Li; Wang, Yilong

    2016-01-01

    A case-mix adjustment model has been developed and externally validated, demonstrating promise. However, the model has not been thoroughly tested among populations in China. In our study, we evaluated the performance of the model in Chinese patients with acute stroke. The case-mix adjustment model A includes items on age, presence of atrial fibrillation on admission, National Institutes of Health Stroke Severity Scale (NIHSS) score on admission, and stroke type. Model B is similar to Model A but includes only the consciousness component of the NIHSS score. Both model A and B were evaluated to predict 30-day mortality rates in 13,948 patients with acute stroke from the China National Stroke Registry. The discrimination of the models was quantified by c-statistic. Calibration was assessed using Pearson's correlation coefficient. The c-statistic of model A in our external validation cohort was 0.80 (95% confidence interval, 0.79-0.82), and the c-statistic of model B was 0.82 (95% confidence interval, 0.81-0.84). Excellent calibration was reported in the two models with Pearson's correlation coefficient (0.892 for model A, p<0.001; 0.927 for model B, p = 0.008). The case-mix adjustment model could be used to effectively predict 30-day mortality rates in Chinese patients with acute stroke.
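
    For orientation, the c-statistic is the area under the ROC curve, and the calibration check correlates observed with predicted mortality across risk groups. A minimal sketch with synthetic data (not the study's cohort):

        # Hedged sketch: discrimination (c-statistic) and calibration by risk decile.
        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        y_prob = rng.uniform(0.01, 0.5, 5000)   # synthetic predicted 30-day risks
        y_obs = rng.binomial(1, y_prob)         # synthetic observed outcomes

        print("c-statistic:", roc_auc_score(y_obs, y_prob))

        # Calibration: mean predicted vs observed mortality within risk deciles
        deciles = np.digitize(y_prob, np.quantile(y_prob, np.linspace(0.1, 0.9, 9)))
        pred = [y_prob[deciles == d].mean() for d in range(10)]
        obs = [y_obs[deciles == d].mean() for d in range(10)]
        print("calibration r:", pearsonr(pred, obs)[0])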

  8. Emergent Theorisations in Modelling the Teaching of Two Science Teachers

    NASA Astrophysics Data System (ADS)

    Monteiro, Rute; Carrillo, José; Aguaded, Santiago

    2008-05-01

    The main goal of this study is to understand the teacher’s thoughts and action when he/she is immersed in the activity of teaching. To do so, it describes the procedures used to model two teachers’ practice with respect to the topic of Plant Diversity. Starting from a consideration of the theoretical constructs of script, routine and improvisation, this modelling basically corresponds to a microanalysis of the teacher’s beliefs, goals and knowledge, as highlighted in the classroom activity. From the process of modelling certain theorisations emerge, corresponding to abstractions gained from concrete cases. They allow us to foreground strong relationships between the beliefs and actions, and the knowledge and objectives of the teacher in action. Envisaged as conjectures rather than generalisations, these abstractions could possibly be extended to other cases, and tested out with new case studies, questioning their formulation or perhaps demonstrating that the limits of their applicability do not go beyond the original cases.

  9. A new radio propagation model at 2.4 GHz for wireless medical body sensors in outdoor environment.

    PubMed

    Yang, Daniel S

    2013-01-01

    This study investigates the effect of antenna height, receive antenna placement on the human body, and distance between transmitter and receiver on the loss of wireless signal power, in order to develop a wireless propagation model for wireless body sensors. Although many studies looked at the effect of distance, few were found that methodically investigated the effect of antenna height and antenna placement on the human body. Transmit antenna heights of 1, 2, and 3 meters, receive antenna heights of 1 and 1.65 meters, "on-body" and "off-body" placements of the receive antenna, and a total of 11 distances ranging from 1 to 45 meters are tested in relation to received power in dBm. Multiple regression is used to analyze the data. The significance of a variable is tested by comparing its p-value with alpha, and model fit is assessed using the adjusted R² and the standard deviation (s) of the residuals. It is found that an increase in antenna height increases received power, but only for the transmit antenna. The receive antenna height has a surprising, opposite effect in the on-body case and an insignificant effect in the off-body case. To formalize the propagation model, coefficient values from the multiple regression are incorporated in an extension of the log-distance model to produce a new empirical model for the on-body and off-body cases; this new empirical model could conceivably be utilized to design more reliable wireless links for medical body sensors.
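
    A minimal sketch of the kind of model described, a log-distance path-loss formula extended with regression terms for transmit-antenna height and on/off-body placement; every coefficient below is a placeholder, not the paper's fitted value.

        # Hedged sketch of an extended log-distance propagation model (Python).
        import math

        def received_power_dbm(d_m, tx_height_m, on_body, d0_m=1.0,
                               p0_dbm=-40.0, n=2.2, b_tx=1.5, b_body=-3.0):
            """Predicted received power (dBm) at distance d_m; coefficients assumed."""
            path_loss = 10.0 * n * math.log10(d_m / d0_m)  # classic log-distance term
            return p0_dbm - path_loss + b_tx * tx_height_m + (b_body if on_body else 0.0)

        print(received_power_dbm(10.0, tx_height_m=2.0, on_body=True))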

  10. Interest in Genetic Testing in Ashkenazi Jewish Parkinson’s Disease Patients and Their Unaffected Relatives

    PubMed Central

    Gupte, Manisha; Alcalay, Roy N.; Mejia-Santana, Helen; Raymond, Deborah; Saunders-Pullman, Rachel; Roos, Ernest; Orbe-Reily, Martha; Tang, Ming-X; Mirelman, Anat; Ozelius, Laurie; Orr-Urtreger, Avi; Clark, Lorraine; Giladi, Nir; Bressman, Susan

    2014-01-01

    Our objective was to explore interest in genetic testing among Ashkenazi Jewish (AJ) Parkinson's Disease (PD) cases and first-degree relatives, as genetic testing for LRRK2 G2019S is widely available. Approximately 18% of AJ PD cases carry G2019S mutations; penetrance estimates vary between 24% and 100% by age 80. A Genetic Attitude Questionnaire (GAQ) was administered at two New York sites to PD families unaware of LRRK2 G2019S mutation status. The association of G2019S, age, education, gender, and family history of PD with desire for genetic testing (outcome) was modeled using logistic regression. One hundred eleven PD cases and 77 relatives completed the GAQ. Both PD cases and relatives had excellent PD-specific genetic knowledge. Among PD cases, 32.6% "definitely" and 41.1% "probably" wanted testing, if offered "now." Among relatives, 23.6% "definitely" and 36.1% "probably" wanted testing "now." Desire for testing in relatives increased incrementally based on hypothetical risk of PD. The most important reasons for testing in probands and relatives were: if it influenced medication response, identifying no mutation, and early prevention and treatment. In logistic regression, older age was associated with less desire for testing in probands (OR = 0.921, 95% CI 0.868–0.977, p = 0.009). Both probands and relatives express interest in genetic testing, despite no link to current treatment or prevention. PMID:25127731

  11. One size does not fit all: Adapting mark-recapture and occupancy models for state uncertainty

    USGS Publications Warehouse

    Kendall, W.L.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    Multistate capture-recapture models continue to be employed with greater frequency to test hypotheses about metapopulation dynamics and life history, and more recently disease dynamics. In recent years efforts have begun to adjust these models for cases where there is uncertainty about an animal's state upon capture. These efforts can be categorized into models that permit misclassification between two states to occur in either direction or one direction, where state is certain for a subset of individuals or is always uncertain, and where estimation is based on one sampling occasion per period of interest or multiple sampling occasions per period. State uncertainty also arises in modeling patch occupancy dynamics. I consider several case studies involving bird and marine mammal studies that illustrate how misclassified states can arise, and outline model structures for properly utilizing the data that are produced. In each case misclassification occurs in only one direction (thus there is a subset of individuals or patches where state is known with certainty), and there are multiple sampling occasions per period of interest. For the cases involving capture-recapture data I allude to a general model structure that could include each example as a special case. However, this collection of cases also illustrates how difficult it is to develop a model structure that can be directly useful for answering every ecological question of interest and account for every type of data from the field.

  12. Automobile exhaust as a means of suicide: an experimental study with a proposed model.

    PubMed

    Morgen, C; Schramm, J; Kofoed, P; Steensberg, J; Theilade, P

    1998-07-01

    Experiments were conducted to investigate the concentration of carbon monoxide (CO) in a car cabin under conditions simulating suicide attempts with different vehicles and different start situations, and a mathematical model describing the concentration of CO in the cabin was constructed. Three cars were set up to donate the exhaust. The first vehicle did not have any catalyst, the second was equipped with a malfunctioning three-way catalyst, and the third was equipped with a well-functioning three-way catalyst. The three different starting situations were cold, tepid, and warm engine start, respectively. Measurements of the CO concentrations were made both in the cabin and in the exhaust pipe. Lethal concentrations were measured in the cabin using all three vehicles as the donor car, including the vehicle with the well-functioning catalyst. In most cases the model gave a good prediction of the CO concentration in the cabin. Four case studies of cars used for suicides are described; in each case, measurements of CO were made in both the cabin and the exhaust under different starting conditions, and the mathematical model was tested on these cases. In most cases the model predictions were good.
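
    A plausible minimal form for such a model is a one-compartment mass balance, dC/dt = (Q/V)(C_exhaust - C), where Q is the exhaust flow leaking into the cabin and V the cabin volume. The sketch below uses assumed parameters, not the paper's fitted ones.

        # Hedged sketch: cabin CO build-up from a one-compartment mass balance.
        V = 3.0        # cabin volume, m^3 (assumed)
        Q = 0.002      # exhaust flow into the cabin, m^3/s (assumed)
        C_exh = 0.04   # CO volume fraction in exhaust (assumed, no catalyst)

        dt, t_end = 1.0, 1800.0  # 30 minutes in 1 s steps
        C = 0.0
        for _ in range(int(t_end / dt)):
            C += dt * (Q / V) * (C_exh - C)  # inflow of exhaust CO, dilution outflow
        print("cabin CO after 30 min: %.0f ppm" % (C * 1e6))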

  13. Model-based risk assessment and public health analysis to prevent Lyme disease

    PubMed Central

    Sabounchi, Nasim S.; Roome, Amanda; Spathis, Rita; Garruto, Ralph M.

    2017-01-01

    The number of Lyme disease (LD) cases in the northeastern United States has been dramatically increasing with over 300 000 new cases each year. This is due to numerous factors interacting over time including low public awareness of LD, risk behaviours and clothing choices, ecological and climatic factors, an increase in rodents within ecologically fragmented peri-urban built environments and an increase in tick density and infectivity in such environments. We have used a system dynamics (SD) approach to develop a simulation tool to evaluate the significance of risk factors in replicating historical trends of LD cases, and to investigate the influence of different interventions, such as increasing awareness, controlling clothing risk and reducing mouse populations, in reducing LD risk. The model accurately replicates historical trends of LD cases. Among several interventions tested using the simulation model, increasing public awareness most significantly reduces the number of LD cases. This model provides recommendations for LD prevention, including further educational programmes to raise awareness and control behavioural risk. This model has the potential to be used by the public health community to assess the risk of exposure to LD. PMID:29291075
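
    As an illustration only of the system-dynamics style of simulation described (not the authors' model), a single stock of annual cases can be driven by exposure and damped by rising public awareness:

        # Hedged sketch: toy stock-and-flow loop; structure and numbers illustrative.
        contacts = 10000.0     # person-tick encounters per year (assumed)
        infect_rate = 0.15     # infections per risky encounter (assumed)
        awareness = 0.2        # fraction of risky encounters avoided (assumed)

        cases = []
        for year in range(10):
            cases.append(contacts * infect_rate * (1.0 - awareness))
            awareness = min(0.9, awareness + 0.05)  # awareness campaign ramps up
        print([round(c) for c in cases])  # declining annual LD cases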

  14. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    NASA Astrophysics Data System (ADS)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

    Hurdle negative binomial regression is a method that can be used for discrete dependent variables with excess zeros and under- or overdispersion. It uses a two-part approach: the first part, the zero hurdle model, estimates the zero elements of the dependent variable, and the second part, the truncated negative binomial model, estimates the nonzero elements (positive integers). The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable. Maximum likelihood estimation (MLE) was used to assess the parameter estimators. Hurdle negative binomial regression for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia; these are count data which contain zero values in some observations and a variety of other values elsewhere. This study also aims to obtain the parameter estimator and test statistic of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
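
    In symbols, the uncensored two-part model is commonly written as below, with hurdle probability π and negative binomial pmf f_NB; the right-censoring adjustment studied in the paper is layered on top of this basic form:

        P(Y = y) =
          \begin{cases}
            \pi, & y = 0, \\
            (1 - \pi) \, \dfrac{f_{NB}(y;\, r, p)}{1 - f_{NB}(0;\, r, p)}, & y = 1, 2, \dots
          \end{cases}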

  15. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    NASA Astrophysics Data System (ADS)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example for such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
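
    As a generic sketch of what such a modal formulation looks like (the paper's exact PAC-MAN expansion is more involved), the 2D field around a cylindrical scatterer can be expanded in cylindrical harmonics,

        p(r, \varphi) = \sum_{m=0}^{\infty} \left[ A_m \, J_m(k r) + B_m \, H_m^{(2)}(k r) \right] \cos(m \varphi),

    where J_m is the Bessel function, H_m^{(2)} the outgoing Hankel function, and the coefficients A_m and B_m follow from the boundary conditions on the cut cylinder. The cost is essentially one-dimensional because only the sum over the mode index m must be evaluated.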

  16. Models of verbal working memory capacity: what does it take to make them work?

    PubMed

    Cowan, Nelson; Rouder, Jeffrey N; Blume, Christopher L; Saults, J Scott

    2012-07-01

    Theories of working memory (WM) capacity limits will be more useful when we know what aspects of performance are governed by the limits and what aspects are governed by other memory mechanisms. Whereas considerable progress has been made on models of WM capacity limits for visual arrays of separate objects, less progress has been made in understanding verbal materials, especially when words are mentally combined to form multiword units or chunks. Toward a more comprehensive theory of capacity limits, we examined models of forced-choice recognition of words within printed lists, using materials designed to produce multiword chunks in memory (e.g., leather brief case). Several simple models were tested against data from a variety of list lengths and potential chunk sizes, with test conditions that only imperfectly elicited the interword associations. According to the most successful model, participants retained about 3 chunks on average in a capacity-limited region of WM, with some chunks being only subsets of the presented associative information (e.g., leather brief case retained with leather as one chunk and brief case as another). The addition to the model of an activated long-term memory component unlimited in capacity was needed. A fixed-capacity limit appears critical to account for immediate verbal recognition and other forms of WM. We advance a model-based approach that allows capacity to be assessed despite other important processing contributions. Starting with a psychological-process model of WM capacity developed to understand visual arrays, we arrive at a more unified and complete model. Copyright 2012 APA, all rights reserved.

  17. Models of Verbal Working Memory Capacity: What Does It Take to Make Them Work?

    PubMed Central

    Cowan, Nelson; Rouder, Jeffrey N.; Blume, Christopher L.; Saults, J. Scott

    2013-01-01

    Theories of working memory (WM) capacity limits will be more useful when we know what aspects of performance are governed by the limits and what aspects are governed by other memory mechanisms. Whereas considerable progress has been made on models of WM capacity limits for visual arrays of separate objects, less progress has been made in understanding verbal materials, especially when words are mentally combined to form multi-word units or chunks. Toward a more comprehensive theory of capacity limits, we examine models of forced-choice recognition of words within printed lists, using materials designed to produce multi-word chunks in memory (e.g., leather brief case). Several simple models were tested against data from a variety of list lengths and potential chunk sizes, with test conditions that only imperfectly elicited the inter-word associations. According to the most successful model, participants retained about 3 chunks on average in a capacity-limited region of WM, with some chunks being only subsets of the presented associative information (e.g., leather brief case retained with leather as one chunk and brief case as another). The addition to the model of an activated long-term memory (LTM) component unlimited in capacity was needed. A fixed capacity limit appears critical to account for immediate verbal recognition and other forms of WM. We advance a model-based approach that allows capacity to be assessed despite other important processing contributions. Starting with a psychological-process model of WM capacity developed to understand visual arrays, we arrive at a more unified and complete model. PMID:22486726

  18. Male Power and Female Victimization: Towards a Theory of Interracial Rape.

    ERIC Educational Resources Information Center

    LaFree, Gary D.

    1982-01-01

    Tests two models of Black offender/White victim (BW) rape, using data from 443 rape cases. Results did not support the normative model which correlates BW rape with increased social interaction between Black men and White women and only partially supported the conflict model that correlates BW rape with increased Black politicization. (Author/AM)

  19. Aerothermal modeling program, phase 1

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; Reynolds, R.; Ball, I.; Berry, R.; Johnson, K.; Mongia, H.

    1983-01-01

    Aerothermal submodels used in analytical combustor models are analyzed. The models described include turbulence and scalar transport, gaseous fuel combustion, spray evaporation/combustion, soot formation and oxidation, and radiation. The computational scheme is discussed in relation to boundary conditions and convergence criteria. Also presented are the data base for benchmark-quality test cases and an analysis of simple flows.

  20. Correlates of Incident Cognitive Impairment in the REasons for Geographic and Racial Differences in Stroke (REGARDS) Study

    PubMed Central

    Gillett, Sarah R.; Thacker, Evan L.; Letter, Abraham J.; McClure, Leslie A.; Wadley, Virginia G.; Unverzagt, Frederick W.; Kissela, Brett M.; Kennedy, Richard E.; Glasser, Stephen P.; Levine, Deborah A.; Cushman, Mary

    2015-01-01

    Objective To identify approximately 500 cases of incident cognitive impairment (ICI) in a large, national sample adapting an existing cognitive test-based case definition and to examine relationships of vascular risk factors with ICI. Method Participants were from the REGARDS study, a national sample of 30,239 African-American and white Americans. Participants included in this analysis had normal cognitive screening and no history of stroke at baseline, and at least one follow-up cognitive assessment with a three-test battery (TTB). Regression-based norms were applied to TTB scores to identify cases of ICI. Logistic regression was used to model associations with baseline vascular risk factors. Results We identified 495 participants with ICI out of 17,630 eligible participants. In multivariable modeling, income (OR 1.83, CI 1.27-2.62), stroke belt residence (OR 1.45, CI 1.18-1.78), history of transient ischemic attack (OR 1.90, CI 1.29-2.81), coronary artery disease (OR 1.32, CI 1.02-1.70), diabetes (OR 1.48, CI 1.17-1.87), obesity (OR 1.40, CI 1.05-1.86), and incident stroke (OR 2.73, CI 1.52-4.90) were associated with ICI. Conclusions We adapted a previously validated cognitive test-based case definition to identify cases of ICI. Many previously identified risk factors were associated with ICI, supporting the criterion-related validity of our definition. PMID:25978342

  1. A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.

    PubMed

    Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco

    2018-01-01

    Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA was coded in the Python language, and is largely based on a simplified formulation of the very popular and recognized AERMOD model. The model allows users to define, in a GIS environment, thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be completely managed in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its application to very complex test cases. The tests show that the processing times are satisfactory and that the definition of sources and receptors and the output retrieval are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD. Copyright © 2017 Elsevier B.V. All rights reserved.
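
    For orientation, the kernel that Gaussian dispersion models of this family evaluate per source-receptor pair is the classic plume formula; the sketch below is a generic version with fixed dispersion coefficients, not CAREA's AERMOD-based formulation (in which σy and σz grow with downwind distance and stability class).

        # Hedged sketch of the standard Gaussian plume kernel (parameters assumed).
        import math

        def plume_conc(q_g_s, u_m_s, y, z, h_stack, sig_y, sig_z):
            """Concentration (g/m^3) at crosswind offset y and height z."""
            lateral = math.exp(-y**2 / (2 * sig_y**2))
            vertical = (math.exp(-(z - h_stack)**2 / (2 * sig_z**2)) +
                        math.exp(-(z + h_stack)**2 / (2 * sig_z**2)))  # ground reflection
            return q_g_s / (2 * math.pi * u_m_s * sig_y * sig_z) * lateral * vertical

        # An area source is then approximated by summing many such point sources.
        print(plume_conc(1.0, 3.0, y=0.0, z=1.5, h_stack=10.0, sig_y=35.0, sig_z=18.0))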

  2. Comparative Logic Modeling for Policy Analysis: The Case of HIV Testing Policy Change at the Department of Veterans Affairs

    PubMed Central

    Langer, Erika M; Gifford, Allen L; Chan, Kee

    2011-01-01

    Objective Logic models have been used to evaluate policy programs, plan projects, and allocate resources. Logic Modeling for policy analysis has been used rarely in health services research but can be helpful in evaluating the content and rationale of health policies. Comparative Logic Modeling is used here on human immunodeficiency virus (HIV) policy statements from the Department of Veterans Affairs (VA) and Centers for Disease Control and Prevention (CDC). We created visual representations of proposed HIV screening policy components in order to evaluate their structural logic and research-based justifications. Data Sources and Study Design We performed content analysis of VA and CDC HIV testing policy documents in a retrospective case study. Data Collection Using comparative Logic Modeling, we examined the content and primary sources of policy statements by the VA and CDC. We then quantified evidence-based causal inferences within each statement. Principal Findings VA HIV testing policy structure largely replicated that of the CDC guidelines. Despite similar design choices, chosen research citations did not overlap. The agencies used evidence to emphasize different components of the policies. Conclusion Comparative Logic Modeling can be used by health services researchers and policy analysts more generally to evaluate structural differences in health policies and to analyze research-based rationales used by policy makers. PMID:21689094

  3. The vertical distribution of nutrients and oxygen 18 in the upper Arctic Ocean

    NASA Astrophysics Data System (ADS)

    BjöRk, GöRan

    1990-09-01

    The observed vertical nutrient distribution including a maximum at about 100 m depth in the Arctic Ocean is investigated using a one-dimensional time-dependent circulation model together with a simple biological model. The circulation model includes a shelf-forced circulation. This is thought to take place in a box from which the outflow is specified regarding temperature and volume flux at different salinities. It has earlier been shown that the circulation model is able to reproduce the observed mean salinity and temperature stratification in the Arctic Ocean. Before introducing nutrients in the model a test is performed using the conservative tracer δ18O (the 18O/16O ratio) as one extra state variable in order to verify the circulation model. It is shown that the field measurements can be simulated. The result is, however, rather sensitive to the tracer concentration in the Bering Strait inflow. The nutrients nitrate, phosphate, and silicate are then treated by coupling a simple biological model to the circulation model. The biological model describes some overall effects of production, sinking, and decomposition of organic matter. First a standard case of the biological model is presented. This is followed by some modified cases. It is shown that the observed nutrient distribution including the maximum can be generated. The available nutrient data from the Arctic Ocean are not sufficient to decide which among the cases is the most likely to occur. One case is, however, chosen as the best case. A nutrient budget and estimates of the magnitudes of the new production are presented for this case.

  4. How well can wave runup be predicted? comment on Laudier et al. (2011) and Stockdon et al. (2006)

    USGS Publications Warehouse

    Plant, Nathaniel G.; Stockdon, Hilary F.

    2015-01-01

    Laudier et al. (2011) suggested that there may be a systematic bias error in runup predictions using a model developed by Stockdon et al. (2006). Laudier et al. tested cases that sampled beach and wave conditions that differed from those used to develop the Stockdon et al. model. Based on our re-analysis, we found that in two of the three Laudier et al. cases observed overtopping was actually consistent with the Stockdon et al. predictions. In these cases, the revised predictions indicated substantial overtopping with, in one case, a freeboard deficit of 1 m. In the third case, the revised prediction had a low likelihood of overtopping, which reflected a large uncertainty due to wave conditions that included a broad and bi-modal frequency distribution. The discrepancy between Laudier et al. results and our re-analysis appear to be due, in part, to simplifications made by Laudier et al. when they implemented a reduced version of the Stockdon et al. model.
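
    For context, the Stockdon et al. (2006) parameterization at issue is commonly quoted in the following form for the 2% exceedance runup; the sketch below encodes that commonly quoted form (coefficients taken on trust from the literature, not re-derived here).

        # Hedged sketch: Stockdon et al. (2006) R2% runup, as commonly quoted.
        import math

        def runup_r2(h0, l0, beta_f):
            """h0: deep-water wave height (m); l0: deep-water wavelength (m);
            beta_f: foreshore beach slope."""
            setup = 0.35 * beta_f * math.sqrt(h0 * l0)
            swash = math.sqrt(h0 * l0 * (0.563 * beta_f**2 + 0.004)) / 2.0
            return 1.1 * (setup + swash)

        # L0 from deep-water dispersion, L0 = g T^2 / (2 pi); e.g. H0 = 2 m, T = 10 s:
        l0 = 9.81 * 10.0**2 / (2.0 * math.pi)
        print("R2%% = %.2f m" % runup_r2(2.0, l0, beta_f=0.08))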

  5. Oxygen Mass Transport in Stented Coronary Arteries.

    PubMed

    Murphy, Eoin A; Dunne, Adrian S; Martin, David M; Boyle, Fergal J

    2016-02-01

    Oxygen deficiency, known as hypoxia, in arterial walls has been linked to increased intimal hyperplasia, which is the main adverse biological process causing in-stent restenosis. Stent implantation has significant effects on the oxygen transport into the arterial wall. Elucidating these effects is critical to optimizing future stent designs. In this study the most advanced oxygen transport model developed to date was assessed in two test cases and used to compare three coronary stent designs. Additionally, the predicted results from four simplified blood oxygen transport models are compared in the two test cases. The advanced model showed good agreement with experimental measurements within the mass-transfer boundary layer and at the luminal surface; however, more work is needed in predicting the oxygen transport within the arterial wall. Simplifying the oxygen transport model within the blood flow produces significant errors in predicting the oxygen transport in arteries. This study can be used as a guide for all future numerical studies in this area and the advanced model could provide a powerful tool in aiding design of stents and other cardiovascular devices.

  6. A Unique Finite Element Modeling of the Periodic Wave Transformation over Sloping and Barred Beaches by Beji and Nadaoka's Extended Boussinesq Equations

    PubMed Central

    Jabbari, Mohammad Hadi; Sayehbani, Mesbah; Reisinezhad, Arsham

    2013-01-01

    This paper presents a numerical model based on the one-dimensional Beji and Nadaoka Extended Boussinesq equations for simulation of periodic wave shoaling and its decomposition over morphological beaches. A Galerkin finite element method and an Adams-Bashforth-Moulton predictor-corrector method are employed for spatial and temporal discretization, respectively. For direct application of the linear finite element method in spatial discretization, an auxiliary variable is introduced, and a particular numerical scheme is offered to rewrite the equations in lower-order form. The stability of the suggested numerical method is also analyzed. Subsequently, in order to demonstrate the ability of the presented model, four different test cases are considered. In these test cases, dispersive and nonlinearity effects of periodic waves over sloping beaches and barred beaches, which are common coastal profiles, are investigated. Outputs are compared with other existing numerical and experimental data. Finally, it is concluded that the current model can be further developed to model any morphological development of coastal profiles. PMID:23853534

  7. Tracking people and cars using 3D modeling and CCTV.

    PubMed

    Edelman, Gerda; Bijhold, Jurrien

    2010-10-10

    The aim of this study was to find a method for the reconstruction of movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed in which video streams are synchronized and displayed in a 3D model using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a recording of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work we found that the 3D model created more situational awareness, which made it easier to track people across multiple video streams. Based on the experiences from the experimental setup and the case, recommendations are formulated for use in practice. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  8. Modeling Stationary Lithium-Ion Batteries for Optimization and Predictive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri A; Shi, Ying; Christensen, Dane T

    Accurately modeling stationary battery storage behavior is crucial to understand and predict its limitations in demand-side management scenarios. In this paper, a lithium-ion battery model was derived to estimate lifetime and state-of-charge for building-integrated use cases. The proposed battery model aims to balance speed and accuracy when modeling battery behavior for real-time predictive control and optimization. In order to achieve these goals, a mixed modeling approach was taken, which incorporates regression fits to experimental data and an equivalent circuit to model battery behavior. A comparison of the proposed battery model output to actual data from the manufacturer validates the modeling approach taken in the paper. Additionally, a dynamic test case demonstrates the effects of using regression models to represent internal resistance and capacity fading.
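
    A minimal sketch of the mixed approach described: regression fits (here simple polynomials with made-up coefficients) feed an equivalent-circuit terminal-voltage calculation. All coefficients are illustrative, not the paper's fits.

        # Hedged sketch: equivalent circuit with regression-fitted parameters.
        def ocv(soc):
            return 3.0 + 1.2 * soc - 0.3 * soc**2        # assumed OCV-vs-SOC fit

        def internal_resistance(soc, n_cycles):
            r0 = 0.05 + 0.02 * (1.0 - soc)               # assumed resistance fit
            return r0 * (1.0 + 1e-4 * n_cycles)          # assumed fading term

        def terminal_voltage(soc, current_a, n_cycles=0):
            # Ideal OCV source in series with a lumped internal resistance
            return ocv(soc) - current_a * internal_resistance(soc, n_cycles)

        print(terminal_voltage(soc=0.8, current_a=10.0, n_cycles=500))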

  9. Modeling Stationary Lithium-Ion Batteries for Optimization and Predictive Control: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raszmann, Emma; Baker, Kyri; Shi, Ying

    Accurately modeling stationary battery storage behavior is crucial to understand and predict its limitations in demand-side management scenarios. In this paper, a lithium-ion battery model was derived to estimate lifetime and state-of-charge for building-integrated use cases. The proposed battery model aims to balance speed and accuracy when modeling battery behavior for real-time predictive control and optimization. In order to achieve these goals, a mixed modeling approach was taken, which incorporates regression fits to experimental data and an equivalent circuit to model battery behavior. A comparison of the proposed battery model output to actual data from the manufacturer validates the modeling approach taken in the paper. Additionally, a dynamic test case demonstrates the effects of using regression models to represent internal resistance and capacity fading.

  10. Comparison between phenomenological and ab-initio reaction and relaxation models in DSMC

    NASA Astrophysics Data System (ADS)

    Sebastião, Israel B.; Kulakhmetov, Marat; Alexeenko, Alina

    2016-11-01

    New state-specific vibrational-translational energy exchange and dissociation models, based on ab-initio data, are implemented in the direct simulation Monte Carlo (DSMC) method and compared to the established Larsen-Borgnakke (LB) and total collision energy (TCE) phenomenological models. For consistency, both the LB and TCE models are calibrated with QCT-calculated O2+O data. The model comparison test cases include 0-D thermochemical relaxation under adiabatic conditions and 1-D normal shockwave calculations. The results show that both the ME-QCT-VT and LB models can reproduce vibrational relaxation accurately, but the TCE model is unable to reproduce nonequilibrium rates even when it is calibrated to accurate equilibrium rates. The new reaction model does capture QCT-calculated nonequilibrium rates. For all investigated cases, we discuss the prediction differences based on the new model features.

  11. Temperature Dependent Modal Test/Analysis Correlation of X-34 Fastrac Composite Rocket Nozzle

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Brunty, Joseph A. (Technical Monitor)

    2001-01-01

    A unique high temperature modal test and model correlation/update program has been performed on the composite nozzle of the FASTRAC engine for the NASA X-34 Reusable Launch Vehicle. The program was required to provide an accurate high temperature model of the nozzle for incorporation into the engine system structural dynamics model for loads calculation; this model is significantly different from the ambient case due to the large decrease in composite stiffness properties due to heating. The high-temperature modal test was performed during a hot-fire test of the nozzle. Previously, a series of high fidelity modal tests and finite element model correlation of the nozzle in a free-free configuration had been performed. This model was then attached to a modal-test verified model of the engine hot-fire test stand and the ambient system mode shapes were identified. A reduced set of accelerometers was then attached to the nozzle, the engine fired full-duration, and the frequency peaks corresponding to the ambient nozzle modes individually isolated and tracked as they decreased during the test. To update the finite-element model of the nozzle to these frequency curves, the percentage differences of the anisotropic composite moduli due to temperature variation from ambient, which had been used in the initial modeling and which were obtained by small sample coupon testing, were multiplied by an iteratively determined constant factor. These new properties were used to create high-temperature nozzle models corresponding to 10 second engine operation increments and tied into the engine system model for loads determination.

  12. A comparison of two central difference schemes for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Maksymiuk, C. M.; Swanson, R. C.; Pulliam, T. H.

    1990-01-01

    Five viscous transonic airfoil cases were computed by two significantly different computational fluid dynamics codes: an explicit finite-volume algorithm with multigrid, and an implicit finite-difference approximate-factorization method with eigenvector diagonalization. Both methods are described in detail, and their performance on the test cases is compared. The codes utilized the same grids, turbulence model, and computer to provide the truest test of the algorithms. The two approaches produce very similar results, which, for attached flows, also agree well with experimental results; however, the explicit code is considerably faster.

  13. LAVA Simulations for the AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Housman, Jeffrey A.; Sozer, Emre; Moini-Yekta , Shayan; Kiris, Cetin C.

    2014-01-01

    Computational simulations using the Launch Ascent and Vehicle Aerodynamics (LAVA) framework are presented for the First AIAA Sonic Boom Prediction Workshop test cases. The framework is utilized with both structured overset and unstructured meshing approaches. The three workshop test cases include an axisymmetric body, a Delta Wing-Body model, and a complete low-boom supersonic transport concept. Solution sensitivity to mesh type and sizing, and to several numerical convective flux discretization choices, is presented and discussed. Favorable comparisons between the computational simulations and experimental data for near- and mid-field pressure signatures were obtained.

  14. Test and model correlation of the atmospheric emission photometric imager fiberglass pedestal

    NASA Technical Reports Server (NTRS)

    Lee, H. M., III; Barker, L. A.

    1990-01-01

    The correlation of static loads testing with finite element modeling is presented for the fiberglass pedestal used on the Atmospheric Emission Photometric Imaging (AEPI) experiment. This payload is to be launched in the space shuttle as part of the ATLAS-1 experiment. Strain gauge data from rosettes around the highly loaded base are compared to the same load case run for the Spacelab 1 testing done in 1981. Correlation of the model and test data was accomplished through comparison of the composite stress invariant using the expected flight loads for the ATLAS-1 mission. Where appropriate, the Tsai-Wu failure criterion was used in the development of the key margins of safety. Margins of safety are all positive for the pedestal and are reported.

  15. Simulating run-up on steep slopes with operational Boussinesq models; capabilities, spurious effects and instabilities

    NASA Astrophysics Data System (ADS)

    Løvholt, F.; Lynett, P.; Pedersen, G.

    2013-06-01

    Tsunamis induced by rock slides plunging into fjords constitute a severe threat to local coastal communities. The rock slide impact may give rise to highly non-linear waves in the near field, and because the wave lengths are relatively short, frequency dispersion comes into play. Fjord systems are rugged with steep slopes, and modeling non-linear dispersive waves in this environment with simultaneous run-up is demanding. We have run an operational Boussinesq-type TVD (total variation diminishing) model using different run-up formulations. Two different tests are considered: inundation on steep slopes and propagation in a trapezoidal channel. In addition, a set of Lagrangian models serves as reference models. Demanding test cases, with solitary wave amplitudes ranging from 0.1 to 0.5 and slopes ranging from 10 to 50°, were applied. Different run-up formulations yielded clearly different accuracy and stability, and only some provided accuracy similar to the reference models. The test cases revealed that the model was prone to instabilities for large non-linearity and fine resolution. Some of the instabilities were linked with false breaking during the first positive inundation, which was not observed for the reference models. None of the models were able to handle the bore forming during drawdown, however. The instabilities are linked to short-crested undulations on the grid scale, and appear on fine resolution during inundation. As a consequence, convergence was not always obtained. There is reason to believe that the instability may be a general problem for Boussinesq models in fjords.

  16. Climate Change Implications for Tropical Islands: Interpolating and Interpreting Statistically Downscaled GCM Projections for Management and Planning

    Treesearch

    Azad Henareh Khalyani; William A. Gould; Eric Harmsen; Adam Terando; Maya Quinones; Jaime A. Collazo

    2016-01-01

  17. Deployment Repeatability

    DTIC Science & Technology

    2016-04-01

    ... Modeling is suitable for well-characterized parts, and stochastic modeling techniques can be used for sensitivity analysis and for generating a large cohort of trials to spot unusual cases. However, deployment repeatability is inherently a nonlinear phenomenon, which makes modeling difficult. ... 1. Test the flight model

  18. Automated Generation and Assessment of Autonomous Systems Test Cases

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Friberg, Kenneth H.; Horvath, Gregory A.

    2008-01-01

    This slide presentation reviews issues concerning the verification and validation testing of autonomous spacecraft, which routinely culminates in the exploration of anomalous or faulted mission-like scenarios, using the work involved in the Dawn mission's tests as examples. Prioritizing which scenarios to develop usually comes down to focusing on the most vulnerable areas and ensuring the best return on investment of test time. Rules-of-thumb strategies often come into play, such as injecting applicable anomalies prior to, during, and after system state changes, or creating cases that ensure good safety-net algorithm coverage. Although experience and judgment in test selection can lead to high levels of confidence about the majority of a system's autonomy, it is likely that important test cases are overlooked. One method to fill in potential test coverage gaps is to automatically generate and execute test cases using algorithms that ensure desirable properties about the coverage, for example, generating cases for all possible fault monitors and across all state change boundaries. Of course, the scope of coverage is determined by the test environment capabilities, where a faster-than-real-time, high-fidelity, software-only simulation would allow the broadest coverage. Even real-time systems that can be replicated and run in parallel, and that have reliable set-up and operations features, provide an excellent resource for automated testing. Making detailed predictions for the outcome of such tests can be difficult, and when algorithmic means are employed to produce hundreds or even thousands of cases, generating predictions individually is impractical; generating predictions with tools requires executable models of the design and environment that themselves require a complete test program. Therefore, evaluating the results of a large number of mission scenario tests poses special challenges. A good approach to address this problem is to automatically score the results based on a range of metrics. Although the specific means of scoring depends highly on the application, the use of formal scoring metrics has high value in identifying and prioritizing anomalies, and in presenting an overall picture of the state of the test program. In this paper we present a case study based on automatic generation and assessment of faulted test runs for the Dawn mission, and discuss its role in optimizing the allocation of resources for completing the test program.
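
    A minimal sketch of the generation-and-scoring idea (every monitor, boundary, and metric name below is hypothetical, not Dawn's):

        # Hedged sketch: enumerate fault x boundary x timing cases, then rank runs.
        from itertools import product

        fault_monitors = ["thruster_stuck", "star_tracker_drop", "cpu_reset"]
        state_changes = ["launch->cruise", "cruise->approach", "approach->orbit"]
        timings = ["before", "during", "after"]

        test_cases = [{"fault": f, "boundary": b, "when": t}
                      for f, b, t in product(fault_monitors, state_changes, timings)]

        def score(run_log):
            """Toy metric: weight unexpected errors heavily, safing events lightly."""
            return 10 * run_log.get("unexpected_errors", 0) + run_log.get("safing_events", 0)

        # Rank (here, dummy) run logs so anomalous runs surface first for review
        runs = [(tc, score({"unexpected_errors": 0, "safing_events": 1})) for tc in test_cases]
        runs.sort(key=lambda r: r[1], reverse=True)
        print(len(test_cases), "generated cases; top score:", runs[0][1])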

  19. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy

    PubMed Central

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867

  20. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy.

    PubMed

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy.
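
    The precision effect reported above can be illustrated with a conjugate Beta-Binomial mortality sketch, far simpler than the study's actual models; the data and the informative prior below are assumed for illustration.

        # Hedged sketch: vague vs informative prior in a Beta-Binomial model.
        from scipy import stats

        deaths, trees = 12, 200          # hypothetical mortality data for one species

        priors = {"vague": (1.0, 1.0),           # flat Beta(1, 1)
                  "informative": (6.0, 94.0)}    # centered near 6% (assumed prior data)

        for name, (a, b) in priors.items():
            post = stats.beta(a + deaths, b + trees - deaths)
            lo, hi = post.ppf([0.025, 0.975])
            print("%-11s mean=%.3f  95%% CI=(%.3f, %.3f)" % (name, post.mean(), lo, hi))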

  1. Development of a claim review and payment model utilizing diagnosis related groups under the Korean health insurance.

    PubMed

    Shin, Y S; Yeom, Y K; Hwang, H

    1993-02-01

    This paper describes the development of a claim review and payment model utilizing diagnosis related groups (DRGs) for the fee-for-service-based payment system of the Korean health insurance. The present review process, which examines all claims manually on a case-by-case basis, has been considered inefficient, costly, and time-consuming. Differences in case mix among hospitals are controlled in the proposed model using the Korean DRGs, which were developed by modifying the US DRG system. An empirical test of the model indicated that it can enhance the efficiency as well as the credibility and objectivity of the claim review. Furthermore, it is expected that it can contribute effectively to medical cost containment and to optimal practice patterns of hospitals by establishing a useful mechanism for monitoring the performance of hospitals. However, the performance of this model needs to be upgraded by refining the Korean DRGs, which play a key role in the model.

  2. Comparison of model propeller tests with airfoil theory

    NASA Technical Reports Server (NTRS)

    Durand, William F; Lesley, E P

    1925-01-01

    The purpose of the investigation covered by this report was the examination of the degree of approach which may be anticipated between laboratory tests on model airplane propellers and results computed by the airfoil theory, based on tests of airfoils representative of successive blade sections. It is known that the corrections of angles of attack and for aspect ratio, speed, and interference rest either on experimental data or on somewhat uncertain theoretical assumptions. The general situation as regards these four sets of corrections is far from satisfactory, and while it is recognized that occasion exists for the consideration of such corrections, their determination in any given case is a matter of considerable uncertainty. There exists at the present time no theory generally accepted and sufficiently comprehensive to indicate the amount of such corrections, and the application to individual cases of the experimental data available is, at best, uncertain. While the results of this first phase of the investigation are less positive than had been hoped might be the case, the establishment of the general degree of approach between the two sets of results which might be anticipated on the basis of this simpler mode of application seems to have been desirable.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadgu, Teklu; Appel, Gordon John

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  4. MAGNETO-FRICTIONAL MODELING OF CORONAL NONLINEAR FORCE-FREE FIELDS. I. TESTING WITH ANALYTIC SOLUTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Keppens, R.; Xia, C.

    2016-09-10

    We report our implementation of the magneto-frictional method in the Message Passing Interface Adaptive Mesh Refinement Versatile Advection Code (MPI-AMRVAC). The method aims at applications where local adaptive mesh refinement (AMR) is essential to make follow-up dynamical modeling affordable. We quantify its performance in both domain-decomposed uniform grids and block-adaptive AMR computations, using all frequently employed force-free, divergence-free, and other vector comparison metrics. As test cases, we revisit the semi-analytic solution of Low and Lou in both Cartesian and spherical geometries, along with the topologically challenging Titov–Démoulin model. We compare different combinations of spatial and temporal discretizations, and find that the fourth-order central difference with a local Lax–Friedrichs dissipation term in a single-step marching scheme is an optimal combination. The initial condition is provided by the potential field, which is the potential field source surface model in spherical geometry. Various boundary conditions are adopted, ranging from fully prescribed cases where all boundaries are assigned with the semi-analytic models, to solar-like cases where only the magnetic field at the bottom is known. Our results demonstrate that all the metrics compare favorably to previous works in both Cartesian and spherical coordinates. Cases with several AMR levels perform in accordance with their effective resolutions. The magneto-frictional method in MPI-AMRVAC allows us to model a region of interest with high spatial resolution and large field of view simultaneously, as required by observation-constrained extrapolations using vector data provided with modern instruments. The applications of the magneto-frictional method to observations are shown in an accompanying paper.
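
    As an illustration of the favored discretization (not the MPI-AMRVAC implementation itself), the sketch below applies a fourth-order central difference with a local Lax-Friedrichs dissipation term, in a single-step marching scheme, to a 1D periodic advection problem.

        # Hedged sketch: 4th-order central difference + local Lax-Friedrichs term.
        import numpy as np

        n, c = 200, 1.0
        dx = 1.0 / n
        x = np.arange(n) * dx
        u = np.exp(-200.0 * (x - 0.5)**2)  # smooth initial profile

        def rhs(u):
            # (-u[i+2] + 8 u[i+1] - 8 u[i-1] + u[i-2]) / (12 dx): 4th-order d/dx
            dudx = (-np.roll(u, -2) + 8*np.roll(u, -1)
                    - 8*np.roll(u, 1) + np.roll(u, 2)) / (12.0 * dx)
            # Local Lax-Friedrichs dissipation scaled by the local wave speed |c|
            diss = abs(c) * (np.roll(u, -1) - 2.0*u + np.roll(u, 1)) / (2.0 * dx)
            return -c * dudx + diss

        dt = 0.4 * dx / abs(c)
        mass0 = u.sum() * dx
        for _ in range(500):               # single-step (forward Euler) marching
            u = u + dt * rhs(u)
        print("mass drift:", u.sum() * dx - mass0)  # periodic scheme conserves mass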

  5. Development of the GPM Observatory Thermal Vacuum Test Model

    NASA Technical Reports Server (NTRS)

    Yang, Kan; Peabody, Hume

    2012-01-01

    A software-based thermal modeling process was documented for generating the thermal panel settings necessary to simulate worst-case on-orbit flight environments in an observatory-level thermal vacuum test setup. The method for creating such a thermal model involves four major steps: (1) determining the major thermal zones for test, as indicated by the major dissipating components on the spacecraft, and mapping the major heat flows between these components; (2) finding the flight-equivalent sink temperatures for these test thermal zones; (3) determining the thermal test ground support equipment (GSE) design and initial thermal panel settings based on the equivalent sink temperatures; and (4) adjusting the panel settings in the test model to match heat flows and temperatures with the flight model. The observatory test thermal model developed from this process allows quick predictions of the performance of the thermal vacuum test design. In this work, the method described above was applied to the Global Precipitation Measurement (GPM) core observatory spacecraft, a joint project between NASA and the Japan Aerospace Exploration Agency (JAXA), which is currently being integrated at NASA Goddard Space Flight Center for launch in early 2014. Preliminary results show that the heat flows and temperatures of the test model match well with the flight thermal model, indicating that the test model can simulate on-orbit conditions fairly accurately. However, further analysis is needed to determine the best test configuration possible to validate the GPM thermal design before the start of environmental testing later this year. Also, while this analysis method has been applied solely to GPM, it should be emphasized that the same process can be applied to any mission to develop an effective test setup and panel settings that accurately simulate on-orbit thermal environments.
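
    Step (2) of the process, reducing a flight radiative environment to equivalent sink temperatures, follows from a single-node radiative balance. A hedged sketch of that balance (the single-node simplification and the example fluxes are ours, not the GPM analysis):

    ```python
    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

    def equivalent_sink_temperature(q_solar, q_albedo, q_ir, alpha_s, eps_ir):
        """Single-node equivalent sink temperature [K] for one thermal zone.

        q_solar, q_albedo, q_ir : incident environmental fluxes [W/m^2]
        alpha_s : solar absorptance of the zone's outer surface
        eps_ir  : IR emissivity of the same surface
        """
        q_absorbed = alpha_s * (q_solar + q_albedo) + eps_ir * q_ir
        return (q_absorbed / (eps_ir * SIGMA)) ** 0.25

    # Example: hot-case fluxes for a nadir-facing radiator (illustrative numbers)
    print(equivalent_sink_temperature(q_solar=0.0, q_albedo=100.0, q_ir=230.0,
                                      alpha_s=0.2, eps_ir=0.85))
    ```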

  6. Nursing research on a first aid model of double personnel for major burn patients.

    PubMed

    Wu, Weiwei; Shi, Kai; Jin, Zhenghua; Liu, Shuang; Cai, Duo; Zhao, Jingchun; Chi, Cheng; Yu, Jiaao

    2015-03-01

    This study explored the effect of a first aid model employing two nurses on the efficient rescue operation time and the efficient resuscitation time for major burn patients. A two-nurse model of first aid was designed for major burn patients. The model includes a division of labor between the first aid nurses and the re-organization of emergency carts. The clinical effectiveness of the process was examined in a retrospective chart review of 156 major burn patients experiencing shock and low blood volume who were admitted to the intensive care unit of the department of burn surgery between November 2009 and June 2013. Of the 156 major burn cases, the 87 patients who received first aid using the double-personnel model were assigned to the test group and the 69 patients who received first aid using the standard first aid model were assigned to the control group. The efficient rescue operation time and the efficient resuscitation time were compared between the two groups, using Student's t-tests to compare the mean differences. Statistically significant differences between the two groups were found on both measures (both P < 0.05), with the test group having lower times than the control group. The efficient rescue operation time was 14.90 ± 3.31 min in the test group and 30.42 ± 5.65 min in the control group. The efficient resuscitation time was 7.4 ± 3.2 h in the test group and 9.5 ± 2.7 h in the control group. A two-nurse first aid model based on scientifically validated procedures and a reasonable division of labor can shorten the efficient rescue operation time and the efficient resuscitation time for major burn patients. Given these findings, the model appears to be worthy of clinical application.
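
    For reference, the reported group comparison can be reproduced from the summary statistics alone; a minimal SciPy sketch using the means, standard deviations, and group sizes from the abstract (the equal-variance assumption is ours):

    ```python
    from scipy import stats

    # Efficient rescue operation time (min), mean +/- SD and n as reported above
    t, p = stats.ttest_ind_from_stats(mean1=14.90, std1=3.31, nobs1=87,   # test group
                                      mean2=30.42, std2=5.65, nobs2=69)   # control group
    print(f"t = {t:.2f}, p = {p:.3g}")  # p well below 0.05, as reported
    ```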

  7. Psychosocial work environment and myocardial infarction: improving risk estimation by combining two complementary job stress models in the SHEEP Study

    PubMed Central

    Peter, R; Siegrist, J; Hallqvist, J; Reuterwall, C; Theorell, T

    2002-01-01

    Objectives: Associations between two alternative formulations of job stress, derived from the effort-reward imbalance model and the job strain model, and first non-fatal acute myocardial infarction were studied. Whereas the job strain model concentrates on situational (extrinsic) characteristics, the effort-reward imbalance model analyses distinct person (intrinsic) characteristics in addition to situational ones. In view of these conceptual differences, the hypothesis was tested that combining information from the two models improves the risk estimation of acute myocardial infarction. Methods: 951 male and female myocardial infarction cases and 1147 referents aged 45–64 years of the Stockholm Heart Epidemiology Program (SHEEP) case-control study underwent a clinical examination. Information on job stress and adverse health behaviours was derived from standardised questionnaires. Results: Multivariate analysis showed moderately increased odds ratios for either model. Yet, with respect to the effort-reward imbalance model, gender-specific effects were found: in men the extrinsic component contributed to risk estimation, whereas in women it was the intrinsic component. Controlling each job stress model for the other, in order to test the independent effect of either approach, did not show systematically increased odds ratios. An improved estimation of acute myocardial infarction risk resulted from combining information from the two models by defining groups characterised by simultaneous exposure to effort-reward imbalance and job strain (men: odds ratio 2.02 (95% confidence interval (CI) 1.34 to 3.07); women: odds ratio 2.19 (95% CI 1.11 to 4.28)). Conclusions: Findings show an improved risk estimation of acute myocardial infarction by combining information from the two job stress models under study. Moreover, gender-specific effects of the two components of the effort-reward imbalance model were observed. PMID:11896138

  8. The Potential Benefits of Advanced Casing Treatment for Noise Attenuation in Ultra-High Bypass Ratio Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Elliott, David

    2007-01-01

    In order to increase stall margin in a high-bypass ratio turbofan engine, an advanced casing treatment was developed that extracted a small amount of flow from the casing behind the fan and injected it back in front of the fan. Several different configurations of this casing treatment were designed by varying the distance of the extraction and injection points, as well as varying the amount of flow. These casing treatments were tested on a 55.9 cm (22 in.) scale model of the Pratt & Whitney Advanced Ducted Propulsor in the NASA Glenn 9 by 15 Low Speed Wind Tunnel. While all of the casing treatment configurations showed the expected increase in stall margin, a few of the designs showed a potential noise benefit for certain engine speeds. This paper will show the casing treatments and the results of the testing as well as propose further research in this area. With better prediction and design techniques, future casing treatment configurations could be developed that may result in an optimized casing treatment that could conceivably reduce the noise further.

  9. Influence of model errors in optimal sensor placement

    NASA Astrophysics Data System (ADS)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placement for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained by Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error, as well as on the definition of the correlation function. A constant and an exponential correlation function, each depending on the distance between sensors, are first assumed; then a formulation depending on both distance and the modal vectors is proposed. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the presence of model errors are tested on 2D and 3D benchmark case studies. The quality of the obtained sensor configuration is measured through independent assessment criteria. Finally, the results obtained by applying the proposed procedure to a real 5-span steel footbridge are described. The proposed method also allows higher modes to be estimated more accurately when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor positions when uncertainties occur.
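
    The abstract does not spell out the placement algorithm itself, so the sketch below uses Kammer's Effective Independence method as a closely related stand-in: like the entropy-based approach, it ranks sensor configurations through the Fisher information matrix built from the mode shapes (the toy mode shapes are placeholders, not the paper's benchmark cases):

    ```python
    import numpy as np

    def efi_placement(Phi, n_sensors):
        """Effective Independence (EfI) sensor placement.

        Phi : (n_dof, n_modes) mode shape matrix, n_dof >= n_modes.
        Iteratively deletes the DOF that contributes least to the determinant
        of the Fisher information matrix Phi_s^T Phi_s of the retained set.
        """
        idx = list(range(Phi.shape[0]))
        while len(idx) > n_sensors:
            P = Phi[idx, :]
            # EfI values: diagonal of the projector P (P^T P)^-1 P^T
            E = np.einsum('ij,jk,ik->i', P, np.linalg.inv(P.T @ P), P)
            del idx[int(np.argmin(E))]
        return idx

    # Toy example: first 3 sine mode shapes of a 12-DOF chain
    x = np.linspace(0, np.pi, 12)
    Phi = np.column_stack([np.sin((k + 1) * x) for k in range(3)])
    print(efi_placement(Phi, 5))
    ```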

  10. Identification and validation of biomarkers of IgV(H) mutation status in chronic lymphocytic leukemia using microfluidics quantitative real-time polymerase chain reaction technology.

    PubMed

    Abruzzo, Lynne V; Barron, Lynn L; Anderson, Keith; Newman, Rachel J; Wierda, William G; O'Brien, Susan; Ferrajoli, Alessandra; Luthra, Madan; Talwalkar, Sameer; Luthra, Rajyalakshmi; Jones, Dan; Keating, Michael J; Coombes, Kevin R

    2007-09-01

    To develop a model incorporating relevant prognostic biomarkers for untreated chronic lymphocytic leukemia patients, we re-analyzed the raw data from four published gene expression profiling studies. We selected 88 candidate biomarkers linked to immunoglobulin heavy-chain variable region gene (IgV(H)) mutation status and produced a reliable and reproducible microfluidics quantitative real-time polymerase chain reaction array. We applied this array to a training set of 29 purified samples from previously untreated patients. In an unsupervised analysis, the samples clustered into two groups. Using a cutoff point of 2% deviation from the germline IgV(H) sequence, one group contained all 14 IgV(H)-unmutated samples; the other contained all 15 mutated samples. We confirmed the differential expression of 37 of the candidate biomarkers using two-sample t-tests. Next, we constructed 16 different models to predict IgV(H) mutation status and evaluated their performance on an independent test set of 20 new samples. Nine models correctly classified 11 of 11 IgV(H)-mutated cases and eight of nine IgV(H)-unmutated cases, with some models using as few as three to seven genes. Thus, we can classify cases with 95% accuracy based on the expression of as few as three genes.

  11. Integration of environmental simulation models with satellite remote sensing and geographic information systems technologies: case studies

    USGS Publications Warehouse

    Steyaert, Louis T.; Loveland, Thomas R.; Brown, Jesslyn F.; Reed, Bradley C.

    1993-01-01

    Environmental modelers are testing and evaluating a prototype land cover characteristics database for the conterminous United States developed by the EROS Data Center of the U.S. Geological Survey and the University of Nebraska Center for Advanced Land Management Information Technologies. This database was developed from multitemporal, 1-kilometer advanced very high resolution radiometer (AVHRR) data for 1990 and various ancillary data sets such as elevation, ecological regions, and selected climatic normals. Several case studies using this database were analyzed to illustrate the integration of satellite remote sensing and geographic information systems technologies with land-atmosphere interaction models at a variety of spatial and temporal scales. The case studies are representative of contemporary environmental simulation modeling at local to regional levels in global change research, land and water resource management, and environmental risk assessment. The case studies feature land surface parameterizations for atmospheric mesoscale and global climate models; biogenic-hydrocarbon emissions models; distributed-parameter watershed and other hydrological models; and various ecological models covering ecosystem dynamics, biogeochemical cycles, ecotone variability, and equilibrium vegetation. The case studies demonstrate the importance of multitemporal AVHRR data for developing and maintaining a flexible, near-realtime land cover characteristics database. Moreover, such a flexible database is needed to derive various vegetation classification schemes, to aggregate data for nested models, to develop remote sensing algorithms, and to provide data on dynamic landscape characteristics. The case studies illustrate how such a database supports research on spatial heterogeneity, land use, sensitivity analysis, and scaling issues involving regional extrapolations and parameterizations of dynamic land processes within simulation models.

  12. Modeling the low-velocity impact characteristics of woven glass epoxy composite laminates using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Mathivanan, N. Rajesh; Mouli, Chandra

    2012-12-01

    In this work, a new methodology based on artificial neural networks (ANN) has been developed to study the low-velocity impact characteristics of woven glass epoxy laminates of EP3 grade. To train and test the networks, multiple impact cases were generated using statistical analysis of variance (ANOVA). Experimental tests were performed using an instrumented falling-weight impact-testing machine. Different impact velocities and impact energies on different thicknesses of laminates were taken as the input parameters of the ANN model. The model is a feed-forward back-propagation neural network. Using the input/output data of the experiments, the model was trained and tested. Further, the effects of the low-velocity impact response of the laminates at different energy levels were investigated by studying the cause-effect relationship among the influential factors using response surface methodology. The most significant of the input parameters was determined through ANOVA.
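
    As a hedged illustration of the kind of feed-forward back-propagation model described, the sketch below fits a small network to synthetic impact data (scikit-learn stands in for whatever toolbox the authors used; the inputs, output, and data are placeholders):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Placeholder training data: [impact velocity (m/s), impact energy (J),
    # laminate thickness (mm)] -> peak contact force (kN)
    rng = np.random.default_rng(0)
    X = rng.uniform([2.0, 5.0, 2.0], [6.0, 50.0, 6.0], size=(60, 3))
    y = 0.8 * X[:, 1] ** 0.5 * X[:, 2] + rng.normal(0, 0.5, 60)  # synthetic response

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                       random_state=0))
    model.fit(X, y)
    print(model.predict([[4.0, 20.0, 4.0]]))  # predicted force for a new case
    ```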

  13. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  14. Radiant Energy Measurements from a Scaled Jet Engine Axisymmetric Exhaust Nozzle for a Baseline Code Validation Case

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1994-01-01

    A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.

  15. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE PAGES

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir; ...

    2017-08-19

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  16. Basic research on design analysis methods for rotorcraft vibrations

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1991-01-01

    The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was ensured by restricting the identification process to designated physical parameters only, and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used; different damping values might then be associated with riveted, bolted, and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.
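
    Restricting identification to designated physical parameters, as above, turns model updating into a small nonlinear least-squares problem. A minimal sketch in the spirit of the simple first test, where the "measured" response is synthesized from an assumed model (the two-DOF chain and the single stiffness-scale parameter are our illustrative choices):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def natural_freqs(E_scale, m=1.0, k0=1.0e4):
        """Natural frequencies (rad/s) of a 2-DOF chain whose stiffnesses all
        scale with one physical parameter, e.g. a material modulus."""
        k = E_scale * k0
        K = np.array([[2.0 * k, -k], [-k, k]])
        w2 = np.linalg.eigvalsh(K / m)      # eigenvalues of M^-1 K with M = m*I
        return np.sqrt(w2)

    # "Measured" frequencies synthesized from a true parameter value of 1.3
    f_meas = natural_freqs(1.3)

    # Identify the parameter from the frequency residuals
    res = least_squares(lambda p: natural_freqs(p[0]) - f_meas,
                        x0=[1.0], bounds=(0.1, 10.0))
    print(res.x)  # recovers ~1.3
    ```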

  17. Comparative evaluation of the IPCC AR5 CMIP5 versus the AR4 CMIP3 model ensembles for regional precipitation and their extremes over South America

    NASA Astrophysics Data System (ADS)

    Tolen, J.; Kodra, E. A.; Ganguly, A. R.

    2011-12-01

    The assertion that higher-resolution experiments or more sophisticated process models within the IPCC AR5 CMIP5 suite of global climate model ensembles improve precipitation projections over the IPCC AR4 CMIP3 suite remains a hypothesis that needs to be rigorously tested. The question is particularly important for local to regional assessments at scales relevant for the management of critical infrastructures and key resources, particularly for the attributes of severe precipitation events, for example the intensity, frequency, and duration of extreme precipitation. Our case study is South America, where precipitation and its extremes play a central role in sustaining natural, built, and human systems. To test the hypothesis that CMIP5 improves over CMIP3 in this regard, spatial and temporal measures of prediction skill are constructed and computed by comparing climate model hindcasts with the NCEP-II reanalysis data, considered here as surrogate observations, for the entire globe and for South America. In addition, gridded precipitation observations over South America based on rain gauge measurements are considered. The results suggest that the utility of the next generation of global climate models over the current generation needs to be carefully evaluated on a case-by-case basis before being communicated to resource managers and policy makers.

  18. A hybrid Bayesian hierarchical model combining cohort and case-control studies for meta-analysis of diagnostic tests: Accounting for partial verification bias.

    PubMed

    Ma, Xiaoye; Chen, Yong; Cole, Stephen R; Chu, Haitao

    2016-12-01

    To account for between-study heterogeneity in meta-analysis of diagnostic accuracy studies, bivariate random effects models have been recommended to jointly model the sensitivities and specificities. As study design and population vary, the definition of disease status or severity could differ across studies. Consequently, sensitivity and specificity may be correlated with disease prevalence. To account for this dependence, a trivariate random effects model had been proposed. However, the proposed approach can only include cohort studies with information estimating study-specific disease prevalence. In addition, some diagnostic accuracy studies only select a subset of samples to be verified by the reference test. It is known that ignoring unverified subjects may lead to partial verification bias in the estimation of prevalence, sensitivities, and specificities in a single study. However, the impact of this bias on a meta-analysis has not been investigated. In this paper, we propose a novel hybrid Bayesian hierarchical model combining cohort and case-control studies and correcting partial verification bias at the same time. We investigate the performance of the proposed methods through a set of simulation studies. Two case studies on assessing the diagnostic accuracy of gadolinium-enhanced magnetic resonance imaging in detecting lymph node metastases and of adrenal fluorine-18 fluorodeoxyglucose positron emission tomography in characterizing adrenal masses are presented. © The Author(s) 2014.

  19. A Hybrid Bayesian Hierarchical Model Combining Cohort and Case-control Studies for Meta-analysis of Diagnostic Tests: Accounting for Partial Verification Bias

    PubMed Central

    Ma, Xiaoye; Chen, Yong; Cole, Stephen R.; Chu, Haitao

    2014-01-01

    To account for between-study heterogeneity in meta-analysis of diagnostic accuracy studies, bivariate random effects models have been recommended to jointly model the sensitivities and specificities. As study design and population vary, the definition of disease status or severity could differ across studies. Consequently, sensitivity and specificity may be correlated with disease prevalence. To account for this dependence, a trivariate random effects model had been proposed. However, the proposed approach can only include cohort studies with information estimating study-specific disease prevalence. In addition, some diagnostic accuracy studies only select a subset of samples to be verified by the reference test. It is known that ignoring unverified subjects may lead to partial verification bias in the estimation of prevalence, sensitivities and specificities in a single study. However, the impact of this bias on a meta-analysis has not been investigated. In this paper, we propose a novel hybrid Bayesian hierarchical model combining cohort and case-control studies and correcting partial verification bias at the same time. We investigate the performance of the proposed methods through a set of simulation studies. Two case studies on assessing the diagnostic accuracy of gadolinium-enhanced magnetic resonance imaging in detecting lymph node metastases and of adrenal fluorine-18 fluorodeoxyglucose positron emission tomography in characterizing adrenal masses are presented. PMID:24862512

  20. Pneumococcal vaccine targeting strategy for older adults: customized risk profiling.

    PubMed

    Balicer, Ran D; Cohen, Chandra J; Leibowitz, Morton; Feldman, Becca S; Brufman, Ilan; Roberts, Craig; Hoshen, Moshe

    2014-02-12

    Current pneumococcal vaccine campaigns take a broad, primarily age-based approach to immunization targeting, overlooking many clinical and administrative considerations necessary in disease prevention and resource planning for specific patient populations. We aim to demonstrate the utility of a population-specific predictive model for hospital-treated pneumonia to direct effective vaccine targeting. Data were extracted for 1,053,435 members of an Israeli HMO, age 50 and older, during the study period 2008-2010. We developed and validated a logistic regression model to predict hospital-treated pneumonia using training and test samples, including a set of standard and population-specific risk factors. The model's predictive value was tested by prospectively identifying cases of pneumonia and invasive pneumococcal disease (IPD), and was compared to the existing international paradigm for patient immunization targeting. In a multivariate regression, age, comorbidity burden, and previous pneumonia events were most strongly positively associated with hospital-treated pneumonia. The model predicting hospital-treated pneumonia yielded a c-statistic of 0.80. Utilizing the predictive model, the highest-risk 17% of the study validation population was targeted, detecting 54% of the members subsequently treated in hospital for pneumonia during the follow-up period. The high-risk population identified through this model included 46% of the follow-up year's IPD cases and 27% of community-treated pneumonia cases. These outcomes compare with international guidelines for pneumococcal disease risk, which accurately identified only 35% of hospitalized pneumonia cases, 41% of IPD cases, and 21% of community-treated pneumonia cases. We demonstrate that a customized model for vaccine targeting performs better than international guidelines; therefore, risk modeling may allow for more precise vaccine targeting and resource allocation than current national and international guidelines. Health care managers and policy-makers may consider the strategic potential of utilizing clinical and administrative databases for creating population-specific risk prediction models to inform vaccination campaigns. Copyright © 2013 Elsevier Ltd. All rights reserved.
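
    The c-statistic of 0.80 quoted above is the area under the ROC curve of the fitted logistic model. A minimal sketch of that workflow on synthetic data (the predictors, coefficients, and sample size are illustrative, not the HMO data):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 5000
    # Illustrative predictors: age, comorbidity count, prior pneumonia events
    X = np.column_stack([rng.uniform(50, 90, n),
                         rng.poisson(2, n),
                         rng.poisson(0.3, n)])
    logit = -9.0 + 0.08 * X[:, 0] + 0.35 * X[:, 1] + 0.9 * X[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated hospitalizations

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("c-statistic:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    ```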

  1. A Meteorological Model's Dependence on Radiation Update Frequency

    NASA Technical Reports Server (NTRS)

    Eastman, Joseph L.; Peters-Lidard, Christa; Tao, Wei-Kuo; Kumar, Sujay; Tian, Yudong; Lang, Stephen E.; Zeng, Xiping

    2004-01-01

    Numerical weather models are used to simulate circulations in the atmosphere, including clouds and precipitation, by applying a set of mathematical equations over a three-dimensional grid. The grid is composed of discrete points at which the meteorological variables are defined. As computing power continues to rise these models are being used at finer grid spacing, but they must still cover a wide range of scales. Some of the physics that must be accounted for in the model cannot be explicitly resolved, and their effects, therefore, must be estimated or "parameterized". Some of these parameterizations are computationally expensive. To alleviate the problem, they are not always updated at the time resolution of the model, the assumption being that the impact will be small. In this study, a coupled land-atmosphere model is used to assess the impact of less frequent updates of the computationally expensive radiation physics for a case on June 6, 2002, that occurred during a field experiment over the central plains known as the International H2O Project (IHOP). The model was tested using both the original conditions, which were dry, and modified conditions wherein moisture was added to the lower part of the atmosphere to produce clouds and precipitation (i.e., a wet case). For each of the conditions (i.e., dry and wet), four sets of experiments were conducted wherein the model was run for a period of 24 hours and the radiation fields (including both incoming solar and outgoing longwave) were updated every 1, 3, 10, and 100 time steps. Statistical tests indicated that average quantities of surface variables for both the dry and wet cases were the same for the various update frequencies. However, spatially the results could be quite different, especially in the wet case after it began to rain. The near-surface wind field was found to be different most of the time, even for the dry case. In the wet case, rain intensities and average vertical profiles of heating associated with cloudy areas were found to differ for the various radiation update frequencies. The latter implies that the mean state of the model could be different as a result of not updating the radiation fields every time step, which has important implications for longer-term climate studies.
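
    Operationally, updating radiation every N time steps amounts to caching the radiative tendencies and refreshing them only periodically; schematically (the stand-in physics below is ours, not the coupled land-atmosphere model):

    ```python
    def run(n_steps, rad_update_every, state, dynamics_step, radiation_step):
        """Advance a toy model, refreshing radiative tendencies every
        `rad_update_every` steps and reusing the cached values otherwise."""
        cached_rad = radiation_step(state)            # compute at t = 0
        for step in range(1, n_steps + 1):
            if step % rad_update_every == 0:
                cached_rad = radiation_step(state)    # expensive, done rarely
            state = dynamics_step(state, cached_rad)  # cheap, done every step
        return state

    # Example with trivial stand-in physics: the state is a single temperature
    final = run(24 * 60, rad_update_every=10, state=290.0,
                dynamics_step=lambda T, r: T + 0.001 * r,
                radiation_step=lambda T: 300.0 - T)
    print(final)
    ```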

  2. Multicomponent ensemble models to forecast induced seismicity

    NASA Astrophysics Data System (ADS)

    Király-Proag, E.; Gischig, V.; Zechar, J. D.; Wiemer, S.

    2018-01-01

    In recent years, human-induced seismicity has become an increasingly relevant topic due to its economic and social implications. Several models and approaches have been developed to explain the underlying physical processes or to forecast induced seismicity. They range from simple statistical models to coupled numerical models incorporating complex physics. We advocate the need for forecast testing as currently the best method for ascertaining whether or not models are capable of reasonably accounting for the key governing physical processes. Moreover, operational forecast models are of great interest for supporting on-site decision-making in projects entailing induced earthquakes. We previously introduced a standardized framework following the guidelines of the Collaboratory for the Study of Earthquake Predictability, the Induced Seismicity Test Bench, to test, validate, and rank induced seismicity models. In this study, we describe how to construct multicomponent ensemble models based on Bayesian weightings that deliver more accurate forecasts than individual models in the cases of the Basel 2006 and Soultz-sous-Forêts 2004 enhanced geothermal stimulation projects. For this, we examine five calibrated variants of two significantly different model groups: (1) Shapiro and Smoothed Seismicity, based on the seismogenic index, a simple modified Omori-law-type seismicity decay, and temporally weighted smoothed seismicity; (2) Hydraulics and Seismicity, based on numerically modelled pore pressure evolution that triggers seismicity via the Mohr-Coulomb failure criterion. We also demonstrate how the individual and ensemble models would perform as part of an operational Adaptive Traffic Light System. Investigating seismicity forecasts based on a range of potential injection scenarios, we use forecast periods of different durations to compute the occurrence probabilities of seismic events M ≥ 3. We show that in the case of the Basel 2006 geothermal stimulation the models forecast hazardous levels of seismicity days before the occurrence of felt events.
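
    The Bayesian weighting behind such ensembles can be stated compactly: with equal priors, each member's weight is proportional to the exponentiated log-likelihood it earned on past data. A minimal sketch (the log-likelihoods and forecast rates are placeholders):

    ```python
    import numpy as np

    def bayesian_ensemble_weights(log_likelihoods):
        """Posterior model weights from log-likelihoods on a calibration period,
        assuming equal prior probability for each model."""
        ll = np.asarray(log_likelihoods, dtype=float)
        w = np.exp(ll - ll.max())     # subtract the max for numerical stability
        return w / w.sum()

    def ensemble_forecast(rates, weights):
        """Weighted combination of the member models' forecast rates."""
        return np.dot(weights, rates)

    # Placeholder numbers: five calibrated model variants
    ll = [-120.4, -118.9, -125.2, -119.6, -122.0]
    w = bayesian_ensemble_weights(ll)
    rates = np.array([3.1, 2.7, 4.0, 2.9, 3.5])   # forecast event rates, M >= 3
    print(w, ensemble_forecast(rates, w))
    ```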

  3. Estimating error rates for firearm evidence identifications in forensic science

    PubMed Central

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680

  4. Estimating error rates for firearm evidence identifications in forensic science.

    PubMed

    Song, John; Vorburger, Theodore V; Chu, Wei; Yen, James; Soons, Johannes A; Ott, Daniel B; Zhang, Nien Fan

    2018-03-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. Published by Elsevier B.V.
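
    Given fitted probability mass functions for matching and non-matching CMC scores, the cumulative error rates at a declaration threshold are tail sums. The sketch below uses simple binomial stand-ins for the fitted models (the cell count and per-cell probabilities are placeholders; the paper's actual PMF models are more refined):

    ```python
    from scipy import stats

    N_CELLS = 40                      # illustrative number of correlation cells
    p_match, p_nonmatch = 0.7, 0.05   # placeholder per-cell pass probabilities

    def error_rates(threshold):
        """False-negative rate: matching pairs scoring below the threshold.
        False-positive rate: non-matching pairs scoring at or above it."""
        fn = stats.binom.cdf(threshold - 1, N_CELLS, p_match)
        fp = stats.binom.sf(threshold - 1, N_CELLS, p_nonmatch)
        return fn, fp

    for c in (6, 10, 15):
        fn, fp = error_rates(c)
        print(f"threshold C={c}: FNR={fn:.2e}, FPR={fp:.2e}")
    ```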

  5. Stages of syphilis in South China - a multilevel analysis of early diagnosis.

    PubMed

    Wong, Ngai Sze; Huang, Shujie; Zheng, Heping; Chen, Lei; Zhao, Peizhen; Tucker, Joseph D; Yang, Li Gang; Goh, Beng Tin; Yang, Bin

    2017-01-31

    Early diagnosis of syphilis and timely treatment can effectively reduce ongoing syphilis transmission and morbidity. We examined the factors associated with the early diagnosis of syphilis to inform syphilis screening strategic planning. In an observational study, we analyzed reported syphilis cases in Guangdong Province, China (from 2014 to mid-2015), accessed from the national case-based surveillance system. We categorized primary and secondary syphilis cases as early diagnosis and latent and tertiary syphilis as delayed diagnosis. Univariate analyses and multivariable logistic regressions were performed to identify the factors associated with early diagnosis. We also examined the factors associated with early diagnosis at the individual and city levels in multilevel logistic regression models with cases nested by city (n = 21), adjusted for age at diagnosis and gender. Among 83,944 diagnosed syphilis cases, 22% were early diagnoses. The city-level early diagnosis rate ranged from 7 to 46%, consistent with the substantial geographic variation shown in the multilevel model. Early diagnosis was associated with presenting to specialist clinics for screening, being male, and having attained a higher education level. Cases tested in institutions and hospitals, and cases diagnosed in hospitals, were less likely to receive an early diagnosis. At the city level, cases living in a city equipped with more hospitals per capita were less likely to receive an early diagnosis. To enhance early diagnosis of syphilis, city-specific syphilis screening strategies with a mix of passive and client/provider-initiated testing might be a useful approach.

  6. Implicit Large Eddy Simulation of a wingtip vortex at Re_c = 1.2×10^6

    NASA Astrophysics Data System (ADS)

    Lombard, Jean-Eloi; Moxey, Dave; Sherwin, Spencer; SherwinLab Team

    2015-11-01

    We present recent developments in numerical methods for performing a Large Eddy Simulation (LES) of the formation and evolution of a wingtip vortex. The development of these vortices in the near wake, in combination with the large Reynolds numbers present in these cases, make these types of test cases particularly challenging to investigate numerically. To demonstrate the method's viability, we present results from numerical simulations of flow over a NACA 0012 profile wingtip at Re_c = 1.2×10^6 and compare them against experimental data, which is to date the highest Reynolds number achieved for a LES that has been correlated with experiments for this test case. Our model correlates favorably with experiment, both for the characteristic jetting in the primary vortex and pressure distribution on the wing surface. The proposed method is of general interest for the modeling of transitioning vortex dominated flows over complex geometries. McLaren Racing/Royal Academy of Engineering Research Chair.

  7. An Affine Invariant Bivariate Version of the Sign Test.

    DTIC Science & Technology

    1987-06-01

    Key words: affine invariance, bivariate quantile, bivariate symmetry model, generalized median, influence function, permutation test, normal efficiency... calculate a bivariate version of the influence function, and the resulting form is bounded, as is the case for the univariate sign test, and shows the... terms of a bivariate analogue of Hampel's (1974) influence function. The latter, though usually defined as a von-Mises derivative of certain

  8. Development and Validation of a Predictive Model to Identify Individuals Likely to Have Undiagnosed Chronic Obstructive Pulmonary Disease Using an Administrative Claims Database.

    PubMed

    Moretz, Chad; Zhou, Yunping; Dhamane, Amol D; Burslem, Kate; Saverno, Kim; Jain, Gagan; Devercelli, Giovanna; Kaila, Shuchita; Ellis, Jeffrey J; Hernandez, Gemzel; Renda, Andrew

    2015-12-01

    Despite the importance of early detection, delayed diagnosis of chronic obstructive pulmonary disease (COPD) is relatively common. Approximately 12 million people in the United States have undiagnosed COPD. Diagnosis of COPD is essential for the timely implementation of interventions, such as smoking cessation programs, drug therapies, and pulmonary rehabilitation, which are aimed at improving outcomes and slowing disease progression. To develop and validate a predictive model to identify patients likely to have undiagnosed COPD using administrative claims data. A predictive model was developed and validated utilizing a retrospective cohort of patients with and without a COPD diagnosis (cases and controls), aged 40-89, with a minimum of 24 months of continuous health plan enrollment (Medicare Advantage Prescription Drug [MAPD] and commercial plans), identified between January 1, 2009, and December 31, 2012, using Humana's claims database. Stratified random sampling based on plan type (commercial or MAPD) and index year was performed to ensure that cases and controls had a similar distribution of these variables. Cases and controls were compared to identify demographic, clinical, and health care resource utilization (HCRU) characteristics associated with a COPD diagnosis. Stepwise logistic regression (SLR), neural networking, and decision trees were used to develop a series of models. The models were trained, validated, and tested on randomly partitioned subsets of the sample (Training, Validation, and Test data subsets). Measures used to evaluate and compare the models included the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). The optimal model was selected based on the AUC on the Test data subset. A total of 50,880 cases and 50,880 controls were included, with MAPD patients comprising 92% of the study population. Compared with controls, cases had a statistically significantly higher comorbidity burden and HCRU (including hospitalizations, emergency room visits, and medical procedures). The optimal predictive model was generated using SLR, which included 34 variables that were statistically significantly associated with a COPD diagnosis. After adjusting for covariates, anticholinergic bronchodilators (OR = 3.336) and tobacco cessation counseling (OR = 2.871) were found to have a large influence on the model. The final predictive model had an AUC of 0.754, sensitivity of 60%, specificity of 78%, PPV of 73%, and an NPV of 66%. This claims-based predictive model provides an acceptable level of accuracy in identifying patients likely to have undiagnosed COPD in a large national health plan. Identification of patients with undiagnosed COPD may enable timely management and lead to improved health outcomes and reduced COPD-related health care expenditures.
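
    The four reported accuracy measures all derive from one confusion matrix; the helper below makes the relationships explicit, with illustrative counts chosen to reproduce the reported 60% sensitivity, 78% specificity, 73% PPV, and 66% NPV:

    ```python
    def diagnostic_metrics(tp, fp, tn, fn):
        """Classification metrics from confusion-matrix counts."""
        return {
            "sensitivity": tp / (tp + fn),   # recall on true cases
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),           # positive predictive value
            "npv": tn / (tn + fn),           # negative predictive value
        }

    # Illustrative counts for 1000 cases and 1000 controls
    print(diagnostic_metrics(tp=600, fp=220, tn=780, fn=400))
    ```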

  9. Rocket Combustion Modelling Test Case RCM-3. Numerical Calculation of MASCOTTE 60 bar Case with THESEE

    DTIC Science & Technology

    2001-03-01

    flame length is about 230 mm. Figure 10 shows three characteristic structures of a cryogenic flame: a first expansion cone of length L1 = 15×D_lox... correctly represented. However, the computed flame length is longer than the experimental data. This phenomenon is due to the droplet injection

  10. A Case Study of Co-Teaching in an Inclusive Secondary High-Stakes World History I Classroom

    ERIC Educational Resources Information Center

    van Hover, Stephanie; Hicks, David; Sayeski, Kristin

    2012-01-01

    In order to provide increasing support for students with disabilities in inclusive classrooms in high-stakes testing contexts, some schools have implemented co-teaching models. This qualitative case study explores how 1 special education teacher (Anna) and 1 general education history teacher (John) make sense of working together in an inclusive…

  11. Passing the Test: Ecological Regression Analysis in the Los Angeles County Case and Beyond.

    ERIC Educational Resources Information Center

    Lichtman, Allan J.

    1991-01-01

    Statistical analysis of racially polarized voting prepared for the Garza v County of Los Angeles (California) (1990) voting rights case is reviewed to demonstrate that ecological regression is a flexible, robust technique that illuminates the reality of ethnic voting, and superior to the neighborhood model supported by the defendants. (SLD)

  12. Feasibility of a new model for early detection of patients with multidrug-resistant tuberculosis in a developed setting of eastern China.

    PubMed

    Liu, Zhengwei; Pan, Aizhen; Wu, BeiBei; Zhou, Lin; He, Haibo; Meng, Qiong; Chen, Songhua; Pang, Yu; Wang, Xiaomeng

    2017-10-01

    The poor detection rate of multidrug-resistant tuberculosis (MDR-TB) highlights the urgent need to explore new case-finding models to improve the detection of MDR-TB in China. The aim of this study was to evaluate the feasibility of a new model that combines molecular diagnostics and sputum transportation for early detection of patients with MDR-TB in Zhejiang. From May 2014 to January 2015, TB suspects were continuously enrolled at six county-level designated TB hospitals in Zhejiang. Each patient gave three sputum samples, which were submitted to the laboratory for smear microscopy, solid culture, and GeneXpert. The specimens from rifampin (RIF)-resistant cases detected by GeneXpert, and positive cultures, were transported from county-level to prefecture-level laboratories for line probe assay (LPA) analysis and drug susceptibility testing (DST). The performance and detection interval of the new model were compared with those of the conventional model. A total of 3151 sputum specimens were collected from TB suspects. The sensitivity of GeneXpert for detecting culture-positive cases was 92.7% (405/437), and its specificity was 91.3% (2428/2659). Of 16 RIF-resistant cases detected by DST, GeneXpert correctly identified 15, yielding a sensitivity of 93.8% (15/16). The specificity of GeneXpert for detecting RIF susceptibility was 100.0% (383/383). The average interval to diagnosis with the conventional DST model was 56.5 days, ranging from 43 to 71 days, which was significantly longer than that with GeneXpert plus LPA (22.2 days, P < 0.01). Our data demonstrate that the combination of improved molecular TB tests and sputum transportation could significantly shorten the time required for detection of MDR-TB, which will bring benefits for preventing an epidemic of MDR-TB in this high-prevalence setting. © 2017 John Wiley & Sons Ltd.

  13. Towards Additive Manufacture of Functional, Spline-Based Morphometric Models of Healthy and Diseased Coronary Arteries: In Vitro Proof-of-Concept Using a Porcine Template.

    PubMed

    Jewkes, Rachel; Burton, Hanna E; Espino, Daniel M

    2018-02-02

    The aim of this study is to assess the additive manufacture of morphometric models of healthy and diseased coronary arteries. Using a dissected porcine coronary artery, a model was developed with the use of computer aided engineering, with splines used to design arteries in health and disease. The model was altered to demonstrate four cases of stenosis displaying varying severity, based on published morphometric data available. Both an Objet Eden 250 printer and a Solidscape 3Z Pro printer were used in this analysis. A wax printed model was set into a flexible thermoplastic and was valuable for experimental testing with helical flow patterns observed in healthy models, dominating the distal LAD (left anterior descending) and left circumflex arteries. Recirculation zones were detected in all models, but were visibly larger in the stenosed cases. Resin models provide useful analytical tools for understanding the spatial relationships of blood vessels, and could be applied to preoperative planning techniques, but were not suitable for physical testing. In conclusion, it is feasible to develop blood vessel models enabling experimental work; further, through additive manufacture of bio-compatible materials, there is the possibility of manufacturing customized replacement arteries.

  14. Towards Additive Manufacture of Functional, Spline-Based Morphometric Models of Healthy and Diseased Coronary Arteries: In Vitro Proof-of-Concept Using a Porcine Template

    PubMed Central

    Jewkes, Rachel; Burton, Hanna E.; Espino, Daniel M.

    2018-01-01

    The aim of this study is to assess the additive manufacture of morphometric models of healthy and diseased coronary arteries. Using a dissected porcine coronary artery, a model was developed with the use of computer aided engineering, with splines used to design arteries in health and disease. The model was altered to demonstrate four cases of stenosis displaying varying severity, based on published morphometric data available. Both an Objet Eden 250 printer and a Solidscape 3Z Pro printer were used in this analysis. A wax printed model was set into a flexible thermoplastic and was valuable for experimental testing with helical flow patterns observed in healthy models, dominating the distal LAD (left anterior descending) and left circumflex arteries. Recirculation zones were detected in all models, but were visibly larger in the stenosed cases. Resin models provide useful analytical tools for understanding the spatial relationships of blood vessels, and could be applied to preoperative planning techniques, but were not suitable for physical testing. In conclusion, it is feasible to develop blood vessel models enabling experimental work; further, through additive manufacture of bio-compatible materials, there is the possibility of manufacturing customized replacement arteries. PMID:29393899

  15. A deterministic model of electron transport for electron probe microanalysis

    NASA Astrophysics Data System (ADS)

    Bünger, J.; Richter, S.; Torrilhon, M.

    2018-01-01

    Within the last decades significant improvements in the spatial resolution of electron probe microanalysis (EPMA) were obtained by instrumental enhancements. In contrast, the quantification procedures essentially remained unchanged. As the classical procedures assume either homogeneity or a multi-layered structure of the material, they limit the spatial resolution of EPMA. The possibilities of improving the spatial resolution through more sophisticated quantification procedures are therefore almost untouched. We investigate a new analytical model (the M1-model) for the quantification procedure, based on fast and accurate modelling of electron-X-ray-matter interactions in complex materials using a deterministic approach to solve the electron transport equations. We outline the derivation of the model from the Boltzmann equation for electron transport using the method of moments with a minimum entropy closure, and present first numerical results for three different test cases (homogeneous, thin film, and interface). Taking Monte Carlo as a reference, the results for the three test cases show that the M1-model is able to reproduce the electron dynamics in EPMA applications very well. Compared to classical analytical models like XPP and PAP, the M1-model is more accurate and far more flexible, which indicates the potential of deterministic models of electron transport to further increase the spatial resolution of EPMA.
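
    For orientation, a minimum-entropy (M1) closure truncates the moment hierarchy of the transport equation at the first moment and closes the second with an Eddington tensor; a commonly quoted generic form (our sketch of the standard closure, not the paper's EPMA-specific system) reads:

    ```latex
    % Schematic moment system with M1 minimum-entropy closure
    \partial_s \psi^{(0)} + \nabla_x \cdot \psi^{(1)} = S^{(0)}, \qquad
    \partial_s \psi^{(1)} + \nabla_x \cdot \psi^{(2)} = S^{(1)},
    \qquad
    \psi^{(2)} = \left( \frac{1-\chi}{2}\,\mathrm{I}
               + \frac{3\chi-1}{2}\,
                 \frac{\psi^{(1)} \otimes \psi^{(1)}}{|\psi^{(1)}|^{2}} \right) \psi^{(0)},
    \qquad
    \chi(f) = \frac{3+4f^{2}}{5+2\sqrt{4-3f^{2}}}, \quad
    f = \frac{|\psi^{(1)}|}{\psi^{(0)}}.
    ```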

  16. Verification test of the SURF and SURFplus models in xRage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    2016-05-18

    As a verification test of the SURF and SURFplus models in the xRage code we use a propagating underdriven detonation wave in 1-D. This is one of the only test cases for which an accurate solution can be determined based on the theoretical structure of the solution. The solution consists of a steady ZND reaction zone profile joined with a scale-invariant rarefaction or Taylor wave and followed by a constant state. The end of the reaction profile and the head of the rarefaction coincide with the sonic CJ state of the detonation wave. The constant state is required to match a rigid wall boundary condition. For a test case, we use PBX 9502 with the same EOS and burn rate as previously used to test the shock detector algorithm utilized by the SURF model. The detonation wave is propagated for 10 μs (slightly under 80 mm). As expected, the pointwise errors are largest in the neighborhood of discontinuities: the pressure discontinuity at the lead shock front and the pressure-derivative discontinuities at the head and tail of the rarefaction. As a quantitative measure of the overall accuracy, the L2 norm of the difference between the numerical pressure and the exact solution is used. Results are presented for simulations using both a uniform grid and an adaptive grid that refines the reaction zone.
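
    The quantitative measure used above, the L2 norm of the pressure error, is simple to compute on either a uniform or an adaptive grid if the cell widths are carried along; a minimal sketch (the field values and grid are placeholders):

    ```python
    import numpy as np

    def l2_error(p_num, p_exact, cell_widths):
        """Discrete, width-weighted L2 norm of the error on a 1-D grid:
        sqrt( sum_i w_i (p_i - p_exact_i)^2 / sum_i w_i )."""
        diff2 = (np.asarray(p_num) - np.asarray(p_exact)) ** 2
        w = np.asarray(cell_widths)
        return np.sqrt(np.sum(w * diff2) / np.sum(w))

    # Example: a smeared numerical step versus an idealized discontinuity
    x = np.linspace(0.0, 80.0, 801)          # mm
    exact = np.where(x < 40.0, 30.0, 0.1)    # idealized pressure profile, GPa
    numer = 30.0 / (1 + np.exp((x - 40.0) / 0.5)) + 0.1
    print(l2_error(numer, exact, np.full_like(x, 0.1)))
    ```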

  17. Interactive modelling with stakeholders in two cases in flood management

    NASA Astrophysics Data System (ADS)

    Leskens, Johannes; Brugnach, Marcela

    2013-04-01

    New policies on flood management, called Multi-Level Safety (MLS), demand an integral and collaborative approach. The goal of MLS is to minimize flood risks through a coherent package of protection measures, crisis management, and flood resilience measures. To achieve this, various stakeholders, such as water boards, municipalities, and provinces, have to collaborate in composing these measures. Besides the many advantages of this integral and collaborative approach, the decision-making environment also becomes more complex. Participants have to consider more criteria than they used to and have to take a wide network of participants into account, all with specific perspectives, cultures, and preferences. In response, sophisticated models have been developed to support decision-makers in grasping this complexity. These models provide predictions of flood events and offer the opportunity to test the effectiveness of various measures under different criteria. Recent advances in computation speed and model flexibility allow stakeholders to interact directly with a hydrological-hydraulic model during meetings. Besides giving a better understanding of the decision content, these interactive models are intended to support the incorporation of stakeholder knowledge in modelling and to foster mutual understanding of the different perspectives of stakeholders. To explore the support interactive modelling can give to integral and collaborative policies such as MLS, we tested a prototype of an interactive flood model (3Di) against a conventional model (Sobek) in two cases. The two cases comprised the design of flood protection measures in Amsterdam and a flood event exercise in Delft. These case studies yielded two main results. First, we observed that in the exploration phase of a decision-making process, stakeholders participated actively in interactive modelling sessions. This increased the technical understanding of complex problems and the insight into the effectiveness of various integral measures. Second, when measures became more concrete, the model played a minor role, as stakeholders were still bound to the goals, responsibilities, and budgets of their own organizations. Model results in this phase are mainly used politically, to maximize the goals of particular organizations.

  18. Mountain Bike Wheel Endurance Testing and Modeling

    DTIC Science & Technology

    2012-01-01

    at a tire pressure of 276 kPa. At very low load the rubber casing of the tire is relatively compliant, but its stiffness increases rapidly as the... "Empirical Model for Determining the Radial Force-Deflection Characteristics of Off-Road Bicycle Tyres," International Journal of Vehicle Design, 17(4

  19. Convective Systems over the South China Sea: Cloud-Resolving Model Simulations.

    NASA Astrophysics Data System (ADS)

    Tao, W.-K.; Shie, C.-L.; Simpson, J.; Braun, S.; Johnson, R. H.; Ciesielski, P. E.

    2003-12-01

    The two-dimensional version of the Goddard Cumulus Ensemble (GCE) model is used to simulate two South China Sea Monsoon Experiment (SCSMEX) convective periods [18–26 May (prior to and during the monsoon onset) and 2–11 June (after the onset of the monsoon) 1998]. Observed large-scale advective tendencies for potential temperature, water vapor mixing ratio, and horizontal momentum are used as the main forcing governing the GCE model in a semiprognostic manner. The June SCSMEX case has stronger forcing in both temperature and water vapor, stronger low-level vertical shear of the horizontal wind, and larger convective available potential energy (CAPE). The temporal variation of the model-simulated rainfall and of the time- and domain-averaged heating and moisture budgets compares well with those diagnostically determined from soundings. However, the model results have higher temporal variability. The model underestimates the rainfall by 17% to 20% compared to that based on soundings. The GCE model-simulated rainfall for June is in very good agreement with the Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) and the Global Precipitation Climatology Project (GPCP). Overall, the model agrees better with observations for the June case than for the May case. The model-simulated energy budgets indicate that the two largest terms for both cases are net condensation (heating/drying) and imposed large-scale forcing (cooling/moistening); these two terms are opposite in sign, however. The model results also show larger latent heat fluxes for the May case, although more rainfall is simulated for the June case. Net radiation (solar heating and longwave cooling) amounts to about 34% and 25%, respectively, of the net condensation (condensation minus evaporation) for the May and June cases. Sensible heat fluxes do not contribute to rainfall in either of the SCSMEX cases. Two types of organized convective systems, unicell (May case) and multicell (June case), are simulated by the model. They are determined by the observed mean U-wind shear (unidirectional versus reversed shear profiles above midlevels). Several sensitivity tests are performed to examine the impact of the radiation, microphysics, and large-scale mean horizontal wind on the organization and intensity of the SCSMEX convective systems.

  20. Correlation of spacecraft thermal mathematical models to reference data

    NASA Astrophysics Data System (ADS)

    Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier

    2018-03-01

    Model-to-test correlation is a frequent problem in spacecraft thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that allow a good fit to be reached between the TMM results and test data, in order to reduce the uncertainty of the mathematical model. Quite often this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but the use of a mathematical tool could facilitate the work. The correlation process can be considered as the minimization of the error of the model results with regard to the reference data. In this paper, a simple method is presented that is suitable for solving the TMM-to-test correlation problem, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. In addition, in simple cases this method allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered: one more academic, and the other the TMM of an electronics box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of the singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown. The error in parameter space is used to assess the quality of the correlation results in both models.
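
    The core update step described above can be compressed to a few lines: stack the residuals of all load cases, build the Jacobian by finite differences, and correct the parameters with the Moore-Penrose pseudo-inverse. A minimal sketch on a toy two-node model (the model function, parameters, and load cases are our illustrative choices, not the PHI TMM):

    ```python
    import numpy as np

    def correlate(model, p0, t_ref, n_iter=10, h=1e-6):
        """Iterative TMM-to-test correlation: p_{k+1} = p_k + J^+ (T_ref - T(p_k)).

        model : maps a parameter vector p -> stacked temperatures, all load cases
        t_ref : stacked reference (test) temperatures
        """
        p = np.asarray(p0, dtype=float)
        for _ in range(n_iter):
            r = t_ref - model(p)
            J = np.column_stack([(model(p + h * e) - model(p)) / h
                                 for e in np.eye(p.size)])  # finite-difference Jacobian
            p = p + np.linalg.pinv(J) @ r                   # Moore-Penrose update
        return p

    # Toy conductive/radiative two-node model, two load cases stacked together
    def model(p):
        k, eps = p                             # conductance and emissivity
        q = np.array([10.0, 25.0])             # dissipations of the two load cases
        t_node = 280.0 + q / k                 # conductively coupled node
        t_rad = (q / (eps * 5.67e-8)) ** 0.25  # radiator node
        return np.concatenate([t_node, t_rad])

    t_ref = model(np.array([0.8, 0.82]))        # synthetic "test" data
    print(correlate(model, [1.0, 0.9], t_ref))  # recovers ~[0.8, 0.82]
    ```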
