Sample records for static code analysis

  1. Analysis of SMA Hybrid Composite Structures using Commercial Codes

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Patel, Hemant D.

    2004-01-01

    A thermomechanical model for shape memory alloy (SMA) actuators and SMA hybrid composite (SMAHC) structures has been recently implemented in the commercial finite element codes MSC.Nastran and ABAQUS. The model may be easily implemented in any code that has the capability for analysis of laminated composite structures with temperature dependent material properties. The model is also relatively easy to use and requires input of only fundamental engineering properties. A brief description of the model is presented, followed by discussion of implementation and usage in the commercial codes. Results are presented from static and dynamic analysis of SMAHC beams of two types: a beam clamped at each end and a cantilevered beam. Nonlinear static (post-buckling) and random response analyses are demonstrated for the first specimen. Static deflection (shape) control is demonstrated for the cantilevered beam. Approaches for modeling SMAHC material systems with embedded SMA in ribbon and small round wire product forms are demonstrated and compared. The results from the commercial codes are compared to those from a research code as validation of the commercial implementations; excellent correlation is achieved in all cases.

  2. A Semantic Analysis Method for Scientific and Engineering Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
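    The following is a minimal sketch of the idea described above, not the paper's tool: semantic declarations (here, physical units) are attached to primitive variables, and a small "expert parser" checks expressions for dimensional consistency. The variable names and the unit table are illustrative assumptions.

    ```python
    # Minimal sketch (not the paper's tool): attach semantic declarations
    # (here, physical units) to primitive variables and let a small "expert
    # parser" check expressions for dimensional consistency.
    import ast

    # Hypothetical semantic declarations for primitive variables:
    # name -> unit exponents (mass, length, time).
    DECLS = {
        "rho": {"kg": 1, "m": -3, "s": 0},   # density
        "u":   {"kg": 0, "m": 1,  "s": -1},  # velocity
        "p":   {"kg": 1, "m": -1, "s": -2},  # pressure
    }

    def units_of(node):
        """Recursively infer the unit of an expression node."""
        if isinstance(node, ast.Name):
            return DECLS[node.id]
        if isinstance(node, ast.Constant):
            return {"kg": 0, "m": 0, "s": 0}          # dimensionless literal
        if isinstance(node, ast.BinOp):
            left, right = units_of(node.left), units_of(node.right)
            if isinstance(node.op, (ast.Add, ast.Sub)):
                if left != right:
                    raise TypeError(f"unit mismatch: {left} vs {right}")
                return left
            if isinstance(node.op, ast.Mult):
                return {k: left[k] + right[k] for k in left}
            if isinstance(node.op, ast.Div):
                return {k: left[k] - right[k] for k in left}
        raise TypeError(f"unsupported expression: {ast.dump(node)}")

    # Dynamic pressure rho*u*u/2 has the units of pressure; rho + u would not.
    expr = ast.parse("rho * u * u / 2", mode="eval").body
    assert units_of(expr) == DECLS["p"]
    ```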

  3. Analysis of SMA Hybrid Composite Structures in MSC.Nastran and ABAQUS

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Patel, Hemant D.

    2005-01-01

    A thermoelastic constitutive model for shape memory alloy (SMA) actuators and SMA hybrid composite (SMAHC) structures was recently implemented in the commercial finite element codes MSC.Nastran and ABAQUS. The model may be easily implemented in any code that has the capability for analysis of laminated composite structures with temperature dependent material properties. The model is also relatively easy to use and requires input of only fundamental engineering properties. A brief description of the model is presented, followed by discussion of implementation and usage in the commercial codes. Results are presented from static and dynamic analysis of SMAHC beams of two types: a beam clamped at each end and a cantilever beam. Nonlinear static (post-buckling) and random response analyses are demonstrated for the first specimen. Static deflection (shape) control is demonstrated for the cantilever beam. Approaches for modeling SMAHC material systems with embedded SMA in ribbon and small round wire product forms are demonstrated and compared. The results from the commercial codes are compared to those from a research code as validation of the commercial implementations; excellent correlation is achieved in all cases.

  4. IKOS: A Framework for Static Analysis based on Abstract Interpretation (Tool Paper)

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume P.; Laserna, Jorge A.; Shi, Nija; Venet, Arnaud Jean

    2014-01-01

    The RTCA standard (DO-178C) for developing avionic software and getting certification credits includes an extension (DO-333) that describes how developers can use static analysis in certification. In this paper, we give an overview of the IKOS static analysis framework, which helps develop static analyses that are both precise and scalable. IKOS harnesses the power of Abstract Interpretation and makes it accessible to a larger class of static analysis developers by separating concerns such as code parsing, model development, abstract domain management, results management, and analysis strategy. The benefits of the approach are demonstrated by a buffer overflow analysis applied to flight control systems.

  5. Augmenting Traditional Static Analysis With Commonly Available Metadata

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Devin

    Developers and security analysts have been using static analysis for a long time to analyze programs for defects and vulnerabilities with some success. Generally a static analysis tool is run on the source code for a given program, flagging areas of code that need to be further inspected by a human analyst. These areas may be obvious bugs like potential buffer overflows, information leakage flaws, or the use of uninitialized variables. These tools tend to work fairly well - every year they find many important bugs. These tools are more impressive considering the fact that they only examine the source code, which may be very complex. Now consider the amount of data available that these tools do not analyze. There are many pieces of information that would prove invaluable for finding bugs in code, things such as a history of bug reports, a history of all changes to the code, information about committers, etc. By leveraging all this additional data, it is possible to find more bugs with less user interaction, as well as track useful metrics such as number and type of defects injected by committer. This dissertation provides a method for leveraging development metadata to find bugs that would otherwise be difficult to find using standard static analysis tools. We showcase two case studies that demonstrate the ability to find zero-day vulnerabilities in large and small software projects by finding new vulnerabilities in the cpython and Roundup open source projects.
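    As a rough illustration of the metadata-augmentation idea (not the dissertation's implementation), the sketch below re-ranks warnings from any static analyzer by per-file churn taken from the git history; the warning tuples, file names, and repository layout are hypothetical.

    ```python
    # Illustrative sketch (not the dissertation's tool): re-rank static-analysis
    # warnings using readily available development metadata -- here, per-file
    # change counts from the version-control history.
    import subprocess
    from collections import Counter

    def change_counts(repo="."):
        """Count how often each file appears in the commit history."""
        log = subprocess.run(
            ["git", "-C", repo, "log", "--name-only", "--pretty=format:"],
            capture_output=True, text=True, check=True).stdout
        return Counter(line for line in log.splitlines() if line.strip())

    def rerank(warnings, repo="."):
        """warnings: list of (file, line, message) from any static analyzer.
        Files that churn more are assumed more likely to hide real defects."""
        churn = change_counts(repo)
        return sorted(warnings, key=lambda w: churn[w[0]], reverse=True)

    # Example: warnings from a hypothetical analyzer run, highest-churn file first.
    findings = [("util/parse.c", 120, "possible buffer overflow"),
                ("core/eval.c", 88, "use of uninitialized variable")]
    print(rerank(findings))
    ```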

  6. Digital microarray analysis for digital artifact genomics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger; Handley, James; Williams, Deborah

    2013-06-01

    We implement a Spatial Voting (SV) based analogy of microarray analysis for digital gene marker identification in malware code sections. We examine a famous set of malware formally analyzed by Mandiant and code named Advanced Persistent Threat (APT1). APT1 is a Chinese organization formed with specific intent to infiltrate and exploit US resources. Mandiant provided a detailed behavior and string analysis report for the 288 malware samples available. We performed an independent analysis using a new alternative to the traditional dynamic analysis and static analysis we call Spatial Analysis (SA). We perform unsupervised SA on the APT1 originating malware code sections and report our findings. We also show the results of SA performed on some members of the families associated by Mandiant. We conclude that SV based SA is a practical fast alternative to dynamic analysis and static analysis.

  7. Nonlinear static and dynamic finite element analysis of an eccentrically loaded graphite-epoxy beam

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Jackson, Karen E.; Jones, Lisa E.

    1991-01-01

    The Dynamic Crash Analysis of Structures (DYCAST) and NIKE3D nonlinear finite element codes were used to model the static and impulsive response of an eccentrically loaded graphite-epoxy beam. A 48-ply unidirectional composite beam was tested under an eccentric axial compressive load until failure. This loading configuration was chosen to highlight the capabilities of two finite element codes for modeling a highly nonlinear, large deflection structural problem which has an exact solution. These codes are currently used to perform dynamic analyses of aircraft structures under impact loads to study crashworthiness and energy absorbing capabilities. Both beam and plate element models were developed to compare with the experimental data using the DYCAST and NIKE3D codes.

  8. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model, and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input/output data.

  9. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
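    As a loose analogy to the static-reaction sensitivity capability described above (LSENS itself uses LSODE and the decoupled direct method), the sketch below integrates a stiff two-step kinetics model with a BDF method and estimates one sensitivity coefficient by finite differences; the mechanism and rate constants are invented for illustration.

    ```python
    # Rough analogy (not LSENS itself): integrate a stiff two-step kinetics
    # model with a BDF integrator and estimate the sensitivity of a species
    # concentration to a rate coefficient by finite differences.
    import numpy as np
    from scipy.integrate import solve_ivp

    def rates(t, y, k1, k2):
        a, b, c = y
        return [-k1 * a, k1 * a - k2 * b, k2 * b]   # A -> B -> C

    def final_b(k1, k2=1.0e4, t_end=10.0):
        sol = solve_ivp(rates, (0.0, t_end), [1.0, 0.0, 0.0],
                        method="BDF", args=(k1, k2), rtol=1e-8, atol=1e-12)
        return sol.y[1, -1]

    k1 = 0.5
    eps = 1e-6 * k1
    # Forward-difference sensitivity d[B]/dk1 at t_end.
    sens = (final_b(k1 + eps) - final_b(k1)) / eps
    print(f"d[B]/dk1 ~ {sens:.3e}")
    ```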

  10. a Virtual Trip to the Schwarzschild-De Sitter Black Hole

    NASA Astrophysics Data System (ADS)

    Bakala, Pavel; Hledík, Stanislav; Stuchlík, Zdenĕk; Truparová, Kamila; Čermák, Petr

    2008-09-01

    We developed a realistic, fully general relativistic computer code for simulation of optical projection in a strong, spherically symmetric gravitational field. Standard theoretical analysis of optical projection for an observer in the vicinity of a Schwarzschild black hole is extended to black hole spacetimes with a repulsive cosmological constant, i.e., Schwarzschild-de Sitter (SdS) spacetimes. The influence of the cosmological constant is investigated for static observers and observers radially free-falling from the static radius. The simulation includes effects of gravitational lensing, multiple images, Doppler and gravitational frequency shift, as well as the amplification of intensity. The code generates images of the static observer's sky and movie simulations for radially free-falling observers. Techniques of parallel programming are applied to get high performance and fast run of the simulation code.

  11. Precise and Efficient Static Array Bound Checking for Large Embedded C Programs

    NASA Technical Reports Server (NTRS)

    Venet, Arnaud

    2004-01-01

    In this paper we describe the design and implementation of a static array-bound checker for a family of embedded programs: the flight control software of recent Mars missions. These codes are large (up to 250 KLOC), pointer intensive, heavily multithreaded and written in an object-oriented style, which makes their analysis very challenging. We designed a tool called C Global Surveyor (CGS) that can analyze the largest code in a couple of hours with a precision of 80%. The scalability and precision of the analyzer are achieved by using an incremental framework in which a pointer analysis and a numerical analysis of array indices mutually refine each other. CGS has been designed so that it can distribute the analysis over several processors in a cluster of machines. To the best of our knowledge this is the first distributed implementation of static analysis algorithms. Throughout the paper we will discuss the scalability setbacks that we encountered during the construction of the tool and their impact on the initial design decisions.
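    A toy illustration of the interval (numerical) analysis that underlies such array-bound checking is sketched below; CGS itself couples a pointer analysis with the numerical analysis over large C code bases, which this fragment does not attempt.

    ```python
    # Toy sketch of the interval-analysis idea behind static array-bound
    # checking: propagate value ranges for indices and compare them against
    # the declared bounds of the array being accessed.
    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    def check_access(index: Interval, array_len: int) -> str:
        """Classify an access a[index] against the declared bounds [0, len)."""
        if index.lo >= 0 and index.hi < array_len:
            return "safe"
        if index.hi < 0 or index.lo >= array_len:
            return "definite error"
        return "potential out-of-bounds (warning)"

    # Loop counter i in [0, 9], access a[i + 2] on an array of length 10:
    i = Interval(0, 9)
    print(check_access(i + Interval(2, 2), 10))   # potential out-of-bounds
    ```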

  12. DARKDROID: Exposing the Dark Side of Android Marketplaces

    DTIC Science & Technology

    2016-06-01

    Fragments recovered from the report record: our approaches can detect apps containing both intentional and unintentional vulnerabilities, such as unsafe code loading mechanisms. Subject terms: Security, Static Analysis, Dynamic Analysis, Malware Detection, Vulnerability Scanning. Stated objectives include analyzing applications in a DoD context and developing sophisticated whole-system static analyses to detect malicious Android applications.

  13. An empirical comparison of a dynamic software testability metric to static cyclomatic complexity

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.; Miller, Keith W.; Payne, Jeffrey E.

    1993-01-01

    This paper compares the dynamic testability prediction technique termed 'sensitivity analysis' to the static testability technique termed cyclomatic complexity. The application that we chose in this empirical study is a CASE generated version of a B-737 autoland system. For the B-737 system we analyzed, we isolated those functions that we predict are more prone to hide errors during system/reliability testing. We also analyzed the code with several other well-known static metrics. This paper compares and contrasts the results of sensitivity analysis to the results of the static metrics.

  14. Code query by example

    NASA Astrophysics Data System (ADS)

    Vaucouleur, Sebastien

    2011-02-01

    We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.
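    The sketch below illustrates the query-by-example flavor of lightweight static analysis in a hedged way: a suspect idiom is given as a small code example and other code is flagged when its AST matches the example's shape. Python and the chosen pattern are purely illustrative assumptions; the paper targets ERP customisation code.

    ```python
    # Hedged illustration of the "query by example" idea: express a suspect
    # idiom as a tiny example and flag code whose AST matches its shape.
    import ast

    EXAMPLE = "open(path)"   # example of the pattern to look for: a bare open()
                             # call, e.g. one not wrapped in a 'with' statement

    def shape(node):
        """A crude structural fingerprint of a call expression."""
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            return ("call", node.func.id, len(node.args))
        return None

    QUERY = shape(ast.parse(EXAMPLE, mode="eval").body)

    def find_matches(source):
        """Return line numbers whose AST shape matches the example query."""
        matches = []
        for node in ast.walk(ast.parse(source)):
            if shape(node) == QUERY:
                matches.append(node.lineno)
        return matches

    code = "f = open(cfg)\nwith open(cfg) as g:\n    pass\n"
    print(find_matches(code))   # flags both calls; a real tool would filter 'with'
    ```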

  15. LSENS, a general chemical kinetics and sensitivity analysis code for gas-phase reactions: User's guide

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1993-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include static system, steady, one-dimensional, inviscid flow, shock initiated reaction, and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.

  16. A CFD/CSD Interaction Methodology for Aircraft Wings

    NASA Technical Reports Server (NTRS)

    Bhardwaj, Manoj K.

    1997-01-01

    With advanced subsonic transports and military aircraft operating in the transonic regime, it is becoming important to determine the effects of the coupling between aerodynamic loads and elastic forces. Since aeroelastic effects can contribute significantly to the design of these aircraft, there is a strong need in the aerospace industry to predict these aero-structure interactions computationally. To perform static aeroelastic analysis in the transonic regime, high fidelity computational fluid dynamics (CFD) analysis tools must be used in conjunction with high fidelity computational structural dynamics (CSD) analysis tools due to the nonlinear behavior of the aerodynamics in the transonic regime. There is also a need to be able to use a wide variety of CFD and CSD tools to predict these aeroelastic effects in the transonic regime. Because source codes are not always available, it is necessary to couple the CFD and CSD codes without alteration of the source codes. In this study, an aeroelastic coupling procedure is developed which will perform static aeroelastic analysis using any CFD and CSD code with little code integration. The aeroelastic coupling procedure is demonstrated on an F/A-18 Stabilator using NASTD (an in-house McDonnell Douglas CFD code) and NASTRAN. In addition, the Aeroelastic Research Wing (ARW-2) is used for demonstration of the aeroelastic coupling procedure by using ENSAERO (NASA Ames Research Center CFD code) and a finite element wing-box code (developed as part of this research).

  17. An easily implemented static condensation method for structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.

    1990-01-01

    A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
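    For reference, the operation being treated as a black box is ordinary static (Guyan) condensation; a minimal numpy sketch follows, with a two-spring example whose condensed stiffness can be checked by hand. This is textbook material, not the authors' code.

    ```python
    # Minimal numpy sketch of static (Guyan) condensation: eliminate the
    # "condensed" DOFs c and keep the "retained" DOFs r via
    #   K_red = K_rr - K_rc * inv(K_cc) * K_cr.
    import numpy as np

    def condense(K, retained):
        """Statically condense the stiffness matrix K onto the retained DOFs."""
        n = K.shape[0]
        r = np.array(retained)
        c = np.setdiff1d(np.arange(n), r)
        Krr, Krc = K[np.ix_(r, r)], K[np.ix_(r, c)]
        Kcr, Kcc = K[np.ix_(c, r)], K[np.ix_(c, c)]
        return Krr - Krc @ np.linalg.solve(Kcc, Kcr)

    # Two springs in series (k1 between DOFs 0-1, k2 between 1-2); condensing
    # out the middle DOF recovers the series stiffness k1*k2/(k1+k2) = 2.
    k1, k2 = 3.0, 6.0
    K = np.array([[ k1,  -k1,     0.0],
                  [-k1,   k1+k2, -k2 ],
                  [ 0.0, -k2,     k2 ]])
    print(condense(K, retained=[0, 2]))
    ```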

  18. Study of Geometric Porosity on Static Stability and Drag Using Computational Fluid Dynamics for Rigid Parachute Shapes

    NASA Technical Reports Server (NTRS)

    Greathouse, James S.; Schwing, Alan M.

    2015-01-01

    This paper explores use of computational fluid dynamics to study the effect of geometric porosity on static stability and drag for NASA's Multi-Purpose Crew Vehicle main parachute. Both of these aerodynamic characteristics are of interest in parachute design, and computational methods promise designers the ability to perform detailed parametric studies and other design iterations with a level of control previously unobtainable using ground or flight testing. The approach presented here uses a canopy structural analysis code to define the inflated parachute shapes on which structured computational grids are generated. These grids are used by the computational fluid dynamics code OVERFLOW and are modeled as rigid, impermeable bodies for this analysis. Comparisons to Apollo drop test data are shown as preliminary validation of the technique. Results include several parametric sweeps through design variables in order to better understand the trade between static stability and drag. Finally, designs that maximize static stability with a minimal loss in drag are suggested for further study in subscale ground and flight testing.

  19. Supporting secure programming in web applications through interactive static analysis.

    PubMed

    Zhu, Jun; Xie, Jing; Lipford, Heather Richter; Chu, Bill

    2014-07-01

    Many security incidents are caused by software developers' failure to adhere to secure programming practices. Static analysis tools have been used to detect software vulnerabilities. However, their wide usage by developers is limited by the special training required to write rules customized to application-specific logic. Our approach is interactive static analysis, to integrate static analysis into Integrated Development Environment (IDE) and provide in-situ secure programming support to help developers prevent vulnerabilities during code construction. No additional training is required nor are there any assumptions on ways programs are built. Our work is motivated in part by the observation that many vulnerabilities are introduced due to failure to practice secure programming by knowledgeable developers. We implemented a prototype interactive static analysis tool as a plug-in for Java in Eclipse. Our technical evaluation of our prototype detected multiple zero-day vulnerabilities in a large open source project. Our evaluations also suggest that false positives may be limited to a very small class of use cases.

  20. Supporting secure programming in web applications through interactive static analysis

    PubMed Central

    Zhu, Jun; Xie, Jing; Lipford, Heather Richter; Chu, Bill

    2013-01-01

    Many security incidents are caused by software developers’ failure to adhere to secure programming practices. Static analysis tools have been used to detect software vulnerabilities. However, their wide usage by developers is limited by the special training required to write rules customized to application-specific logic. Our approach is interactive static analysis, to integrate static analysis into Integrated Development Environment (IDE) and provide in-situ secure programming support to help developers prevent vulnerabilities during code construction. No additional training is required nor are there any assumptions on ways programs are built. Our work is motivated in part by the observation that many vulnerabilities are introduced due to failure to practice secure programming by knowledgeable developers. We implemented a prototype interactive static analysis tool as a plug-in for Java in Eclipse. Our technical evaluation of our prototype detected multiple zero-day vulnerabilities in a large open source project. Our evaluations also suggest that false positives may be limited to a very small class of use cases. PMID:25685513

  1. electromagnetics, eddy current, computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gartling, David

    TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.

  2. Methodology for fast detection of false sharing in threaded scientific codes

    DOEpatents

    Chung, I-Hsin; Cong, Guojing; Murata, Hiroki; Negishi, Yasushi; Wen, Hui-Fang

    2014-11-25

    A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region. A mapping detection library correlates memory access instructions in the identified code region with variables and arrays in the identified code region while a processor is running the identified code region. The mapping detection library identifies one or more instructions at risk, in the identified code region, which are subject to an analysis by a false sharing detection library. A false sharing detection library performs a run-time analysis of the one or more instructions at risk while the processor is re-running the identified code region. The false sharing detection library determines, based on the performed run-time analysis, whether two different portions of the cache memory line are accessed by the generated binary code.
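    A hedged sketch of the run-time check described above: given a trace of memory accesses for the instructions at risk, report cache lines whose different portions are touched by different threads. The trace format and the 64-byte line size are assumptions for illustration, not the patented implementation.

    ```python
    # Hedged sketch of the false-sharing check: given a trace of
    # (thread, address, is_write) accesses, report cache lines whose
    # different portions are written to by different threads.
    from collections import defaultdict

    LINE = 64  # assumed cache-line size in bytes

    def false_sharing_candidates(trace):
        """trace: iterable of (thread_id, address, is_write)."""
        lines = defaultdict(set)   # cache line -> {(thread, offset, is_write)}
        for tid, addr, is_write in trace:
            lines[addr // LINE].add((tid, addr % LINE, is_write))
        suspects = []
        for line, touches in lines.items():
            threads = {t for t, _, _ in touches}
            offsets = {o for _, o, _ in touches}
            writes = any(w for _, _, w in touches)
            if len(threads) > 1 and len(offsets) > 1 and writes:
                suspects.append(line)
        return suspects

    # Two threads writing adjacent 8-byte counters that share one line:
    trace = [(0, 0x1000, True), (1, 0x1008, True)] * 100
    print(false_sharing_candidates(trace))
    ```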

  3. The STAGS computer code

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Brogan, F. A.

    1978-01-01

    Basic information about the computer code STAGS (Structural Analysis of General Shells) is presented to describe to potential users the scope of the code and the solution procedures that are incorporated. Primarily, STAGS is intended for analysis of shell structures, although it has been extended to more complex shell configurations through the inclusion of springs and beam elements. The formulation is based on a variational approach in combination with local two dimensional power series representations of the displacement components. The computer code includes options for analysis of linear or nonlinear static stress, stability, vibrations, and transient response. Material as well as geometric nonlinearities are included. A few examples of applications of the code are presented for further illustration of its scope.

  4. The FORTRAN static source code analyzer program (SAP) user's guide, revision 1

    NASA Technical Reports Server (NTRS)

    Decker, W.; Taylor, W.; Eslinger, S.

    1982-01-01

    The FORTRAN Static Source Code Analyzer Program (SAP) User's Guide (Revision 1) is presented. SAP is a software tool designed to assist Software Engineering Laboratory (SEL) personnel in conducting studies of FORTRAN programs. SAP scans FORTRAN source code and produces reports that present statistics and measures of statements and structures that make up a module. This document is a revision of the previous SAP user's guide, Computer Sciences Corporation document CSC/TM-78/6045. SAP Revision 1 is the result of program modifications to provide several new reports, additional complexity analysis, and recognition of all statements described in the FORTRAN 77 standard. This document provides instructions for operating SAP and contains information useful in interpreting SAP output.

  5. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented and demonstrated for a commercial code. The examples are based on finite element models of the Mixed-Mode Bending (MMB) specimen. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, quasi-static benchmark examples were created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Good agreement between the results obtained from the automated propagation analysis and the benchmark results could be achieved by selecting input parameters that had previously been determined during analyses of mode I Double Cantilever Beam and mode II End Notched Flexure specimens. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.
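    For context, the standard two-dimensional VCCT relations that such propagation capabilities build on are given below (textbook form; sign conventions for the nodal forces vary between implementations):

    ```latex
    % Standard 2-D VCCT relations: energy release rates at a crack-tip node
    % from the nodal forces there and the relative displacements of the node
    % pair one element behind the tip.
    \begin{align}
      G_I    &= \frac{1}{2\,\Delta a\, b}\; Z_i \,\bigl(w_\ell - w_{\ell^*}\bigr), &
      G_{II} &= \frac{1}{2\,\Delta a\, b}\; X_i \,\bigl(u_\ell - u_{\ell^*}\bigr), \\
      G_T    &= G_I + G_{II}, &
      \frac{G_{II}}{G_T} &= \text{mixed-mode ratio,}
    \end{align}
    ```

    where Delta a is the element length at the delamination front, b the width, X_i and Z_i the forces at the crack-tip node, and the parenthesized terms the relative sliding and opening displacements of the node pair behind the tip.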

  6. Transient analysis using conical shell elements

    NASA Technical Reports Server (NTRS)

    Yang, J. C. S.; Goeller, J. E.; Messick, W. T.

    1973-01-01

    The use of the NASTRAN conical shell element in static, eigenvalue, and direct transient analyses is demonstrated. The results of a NASTRAN static solution of an externally pressurized ring-stiffened cylinder agree well with a theoretical discontinuity analysis. Good agreement is also obtained between the NASTRAN direct transient response of a uniform cylinder to a dynamic end load and one-dimensional solutions obtained using a method of characteristics stress wave code and a standing wave solution. Finally, a NASTRAN eigenvalue analysis is performed on a hydroballistic model idealized with conical shell elements.

  7. Comparison of Space Shuttle Hot Gas Manifold analysis to air flow data

    NASA Technical Reports Server (NTRS)

    Mcconnaughey, P. K.

    1988-01-01

    This paper summarizes several recent analyses of the Space Shuttle Main Engine Hot Gas Manifold and compares predicted flow environments to air flow data. Codes used in these analyses include INS3D, PAGE, PHOENICS, and VAST. Both laminar (Re = 250, M = 0.30) and turbulent (Re = 1.9 million, M = 0.30) results are discussed, with the latter being compared to data for system losses, outer wall static pressures, and manifold exit Mach number profiles. Comparison of predicted results for the turbulent case to air flow data shows that the analysis using INS3D predicted system losses within 1 percent error, while the PHOENICS, PAGE, and VAST codes erred by 31, 35, and 47 percent, respectively. The INS3D, PHOENICS, and PAGE codes did a reasonable job of predicting outer wall static pressure, while the PHOENICS code predicted exit Mach number profiles with acceptable accuracy. INS3D was approximately an order of magnitude more efficient than the other codes in terms of code speed and memory requirements. In general, it is seen that complex internal flows in manifold-like geometries can be predicted with a limited degree of confidence, and further development is necessary to improve both efficiency and accuracy of codes if they are to be used as design tools for complex three-dimensional geometries.

  8. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-02-01

    The results of a research activity aimed at providing a finite element capability for analyzing turbo-machinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern day computers with a single processing unit was estimated at 3 billion floating point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built-up from flat thin shell elements.

  9. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80

    NASA Technical Reports Server (NTRS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-01-01

    The results of a research activity aimed at providing a finite element capability for analyzing turbo-machinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern day computers with a single processing unit was estimated at 3 billion floating point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built-up from flat thin shell elements.

  10. High Temperature Composite Analyzer (HITCAN) demonstration manual, version 1.0

    NASA Technical Reports Server (NTRS)

    Singhal, S. N.; Lackney, J. J.; Murthy, P. L. N.

    1993-01-01

    This manual comprises a variety of demonstration cases for the HITCAN (HIgh Temperature Composite ANalyzer) code. HITCAN is a general purpose computer program for predicting nonlinear global structural and local stress-strain response of arbitrarily oriented, multilayered high temperature metal matrix composite structures. HITCAN is written in FORTRAN 77 computer language and has been configured and executed on the NASA Lewis Research Center CRAY XMP and YMP computers. Detailed description of all program variables and terms used in this manual may be found in the User's Manual. The demonstration includes various cases to illustrate the features and analysis capabilities of the HITCAN computer code. These cases include: (1) static analysis, (2) nonlinear quasi-static (incremental) analysis, (3) modal analysis, (4) buckling analysis, (5) fiber degradation effects, (6) fabrication-induced stresses for a variety of structures; namely, beam, plate, ring, shell, and built-up structures. A brief discussion of each demonstration case with the associated input data file is provided. Sample results taken from the actual computer output are also included.

  11. PCC Framework for Program-Generators

    NASA Technical Reports Server (NTRS)

    Kong, Soonho; Choi, Wontae; Yi, Kwangkeun

    2009-01-01

    In this paper, we propose a proof-carrying code framework for program-generators. The enabling technique is abstract parsing, a static string analysis technique, which is used as a component for generating and validating certificates. Our framework provides an efficient solution for certifying program-generators whose safety properties are expressed in terms of the grammar representing the generated program. The fixed-point solution of the analysis is generated and attached with the program-generator on the code producer side. The consumer receives the code with a fixed-point solution and validates that the received fixed point is indeed a fixed point of the received code. This validation can be done in a single pass.
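    A hedged sketch of the certificate-validation idea: the producer ships a claimed fixed point together with the code, and the consumer re-applies the transfer functions once and accepts only if nothing changes. The toy dataflow problem below stands in for the string/grammar analysis of the actual framework; all names are illustrative.

    ```python
    # Hedged sketch of single-pass fixed-point validation: the consumer applies
    # every node's transfer function once to the claimed solution and accepts
    # the certificate only if the result is unchanged.
    def one_step(facts, transfer, preds):
        """Apply each node's transfer function once to the claimed solution."""
        new = {}
        for n in facts:
            incoming = set().union(*(facts[p] for p in preds[n])) if preds[n] else set()
            new[n] = transfer[n](incoming)
        return new

    def validate(facts, transfer, preds):
        return one_step(facts, transfer, preds) == facts

    # Toy control-flow graph: node 0 generates fact "x", node 1 just propagates.
    preds = {0: [], 1: [0]}
    transfer = {0: lambda inp: inp | {"x"},
                1: lambda inp: inp}
    claimed = {0: {"x"}, 1: {"x"}}            # certificate from the producer
    print(validate(claimed, transfer, preds))  # True: accepted in one pass
    ```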

  12. Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  13. Development and Application of Benchmark Examples for Mode II Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  14. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  15. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  16. Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture

    NASA Astrophysics Data System (ADS)

    Meng, Chunfang

    2017-03-01

    We present Defmod, an open source (linear) finite element code that enables us to efficiently model the crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for (quasi-)static problems and an explicit solver for dynamic problems. The fault constraint is implemented via Lagrange Multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.
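    As a cartoon of the adaptive quasi-static/dynamic switching described above (far removed from Defmod's finite element formulation), the runnable toy below loads a spring-block system quasi-statically and lets a friction-style failure criterion trigger episodic stress-drop events; all parameters are invented.

    ```python
    # Schematic, runnable toy of the hybrid idea: quasi-static elastic loading
    # of a spring-block system, with a failure (friction) criterion that
    # triggers episodic "rupture" events, after which quasi-static loading
    # resumes.  This is a spring-slider cartoon, not Defmod's formulation.
    def spring_slider(t_end=10.0, dt=0.01, k=1.0, v_load=1.0,
                      tau_static=5.0, tau_dynamic=3.0):
        t, stress, events = 0.0, 0.0, []
        while t < t_end:
            stress += k * v_load * dt          # quasi-static elastic loading
            if stress >= tau_static:           # failure criterion reached
                events.append(t)               # episodic "rupture"
                stress = tau_dynamic           # friction law: drop to dynamic level
            t += dt
        return events

    print(spring_slider())   # times of the stick-slip events
    ```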

  17. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  18. Architectural Analysis of Systems Based on the Publisher-Subscriber Style

    NASA Technical Reports Server (NTRS)

    Ganesan, Dharmalingam; Lindvall, Mikael; Ruley, Lamont; Wiegand, Robert; Ly, Vuong; Tsui, Tina

    2010-01-01

    Architectural styles impose constraints on both the topology and the interaction behavior of involved parties. In this paper, we propose an approach for analyzing implemented systems based on the publisher-subscriber architectural style. From the style definition, we derive a set of reusable questions and show that some of them can be answered statically whereas others are best answered using dynamic analysis. The paper explains how the results of static analysis can be used to orchestrate dynamic analysis. The proposed method was successfully applied on NASA's Goddard Mission Services Evolution Center (GMSEC) software product line. The results show that the GMSEC has a) a novel reusable vendor-independent middleware abstraction layer that allows NASA's missions to configure the middleware of interest without changing the publishers' or subscribers' source code, and b) some high priority bugs due to behavioral discrepancies, which had eluded testing and code reviews, among different implementations of the same APIs for different vendors.

  19. TORO II: A finite element computer program for nonlinear quasi-static problems in electromagnetics: Part 1, Theoretical background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gartling, D.K.

    The theoretical and numerical background for the finite element computer program, TORO II, is presented in detail. TORO II is designed for the multi-dimensional analysis of nonlinear, electromagnetic field problems described by the quasi-static form of Maxwell's equations. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in TORO II are also outlined. Instructions for the use of the code are documented in SAND96-0903; examples of problems analyzed with the code are also provided in the user's manual. 24 refs., 8 figs.

  20. C++ software quality in the ATLAS experiment: tools and experience

    NASA Astrophysics Data System (ADS)

    Martin-Haugh, S.; Kluth, S.; Seuster, R.; Snyder, S.; Obreshkov, E.; Roe, S.; Sherwood, P.; Stewart, G. A.

    2017-10-01

    In this paper we explain how the C++ code quality is managed in ATLAS using a range of tools from compile-time through to run time testing and reflect on the substantial progress made in the last two years largely through the use of static analysis tools such as Coverity®, an industry-standard tool which enables quality comparison with general open source C++ code. Other available code analysis tools are also discussed, as is the role of unit testing with an example of how the GoogleTest framework can be applied to our codebase.

  1. Structural integrity of a confinement vessel for testing nuclear fuels for space propulsion

    NASA Astrophysics Data System (ADS)

    Bergmann, V. L.

    Nuclear propulsion systems for rockets could significantly reduce the travel time to distant destinations in space. However, long before such a concept can become reality, a significant effort must be invested in analysis and ground testing to guide the development of nuclear fuels. Any testing in support of development of nuclear fuels for space propulsion must be safely contained to prevent the release of radioactive materials. This paper describes analyses performed to assess the structural integrity of a test confinement vessel. The confinement structure, a stainless steel pressure vessel with bolted flanges, was designed for operating static pressures in accordance with the ASME Boiler and Pressure Vessel Code. In addition to the static operating pressures, the confinement barrier must withstand static overpressures from off-normal conditions without releasing radioactive material. Results from axisymmetric finite element analyses are used to evaluate the response of the confinement structure under design and accident conditions. For the static design conditions, the stresses computed from the ASME code are compared with the stresses computed by the finite element method.

  2. Application of a transonic potential flow code to the static aeroelastic analysis of three-dimensional wings

    NASA Technical Reports Server (NTRS)

    Whitlow, W., Jr.; Bennett, R. M.

    1982-01-01

    Since the aerodynamic theory is nonlinear, the method requires the coupling of two iterative processes - an aerodynamic analysis and a structural analysis. A full potential analysis code, FLO22, is combined with a linear structural analysis to yield aerodynamic load distributions on and deflections of elastic wings. This method was used to analyze an aeroelastically-scaled wind tunnel model of a proposed executive-jet transport wing and an aeroelastic research wing. The results are compared with the corresponding rigid-wing analyses, and some effects of elasticity on the aerodynamic loading are noted.

  3. Speech processing using maximum likelihood continuity mapping

    DOEpatents

    Hogden, John E.

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  4. Speech processing using maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.E.

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  5. Progress in The Semantic Analysis of Scientific Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.

  6. Finite Element Analysis of M15 and M19 Mines Under Wheeled Vehicle Load

    DTIC Science & Technology

    2008-03-01

    the plate statically. An implicit finite element option in a code called LSDYNA was used to model the pressure generated in the explosive by the... figure 4 for the M19 mines. Maximum pressure in the explosive for each mine calculated by the LSDYNA code is shown for a variety of plate sizes and weights...

  7. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  8. Proceedings of the U.S. Army Symposium on Gun Dynamics (5th) Held in Rensselaerville, New York on 23-25 September 1987

    DTIC Science & Technology

    1987-09-01

    have shown that gun barrel heating, and hence thermal expansion, is both axially and circumferentially asymmetric. Circumferential, or cross-barrel...element code, which ended in the selection of ABAQUS. The code will perform static, dynamic, and thermal analysis on a broad range of structures...analysis may be performed by a user supplied FORTRAN subroutine which is automatically linked to the code and supplements the standard ABAQUS

  9. The effects of different representations on static structure analysis of computer malware signatures.

    PubMed

    Narayanan, Ajit; Chen, Yi; Pang, Shaoning; Tao, Ban

    2013-01-01

    The continuous growth of malware presents a problem for internet computing due to increasingly sophisticated techniques for disguising malicious code through mutation and the time required to identify signatures for use by antiviral software systems (AVS). Malware modelling has focused primarily on semantics due to the intended actions and behaviours of viral and worm code. The aim of this paper is to evaluate a static structure approach to malware modelling using the growing malware signature databases now available. We show that, if malware signatures are represented as artificial protein sequences, it is possible to apply standard sequence alignment techniques in bioinformatics to improve accuracy of distinguishing between worm and virus signatures. Moreover, aligned signature sequences can be mined through traditional data mining techniques to extract metasignatures that help to distinguish between viral and worm signatures. All bioinformatics and data mining analyses were performed with publicly available tools, including Weka.
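
    As a rough illustration of the representation idea (and not the paper's actual pipeline, which used standard bioinformatics and data mining tools), the sketch below encodes two hypothetical hexadecimal signatures as artificial protein strings and scores their similarity with a basic Needleman-Wunsch global alignment. The nibble-to-residue mapping and the scoring parameters are assumptions.

      # Hex signatures as "artificial proteins" plus a simple global alignment score.
      AMINO = "ACDEFGHIKLMNPQRS"  # one residue letter per hex nibble (assumed encoding)

      def to_protein(hex_signature):
          return "".join(AMINO[int(ch, 16)] for ch in hex_signature.lower())

      def align_score(a, b, match=2, mismatch=-1, gap=-2):
          # Needleman-Wunsch global alignment score via dynamic programming.
          rows, cols = len(a) + 1, len(b) + 1
          dp = [[0] * cols for _ in range(rows)]
          for i in range(rows):
              dp[i][0] = i * gap
          for j in range(cols):
              dp[0][j] = j * gap
          for i in range(1, rows):
              for j in range(1, cols):
                  diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
          return dp[-1][-1]

      sig_a = to_protein("deadbeef0102")   # hypothetical virus signature fragment
      sig_b = to_protein("deadc0de0102")   # hypothetical worm signature fragment
      print(align_score(sig_a, sig_b))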

  10. The Effects of Different Representations on Static Structure Analysis of Computer Malware Signatures

    PubMed Central

    Narayanan, Ajit; Chen, Yi; Pang, Shaoning; Tao, Ban

    2013-01-01

    The continuous growth of malware presents a problem for internet computing due to increasingly sophisticated techniques for disguising malicious code through mutation and the time required to identify signatures for use by antiviral software systems (AVS). Malware modelling has focused primarily on semantics due to the intended actions and behaviours of viral and worm code. The aim of this paper is to evaluate a static structure approach to malware modelling using the growing malware signature databases now available. We show that, if malware signatures are represented as artificial protein sequences, it is possible to apply standard sequence alignment techniques in bioinformatics to improve accuracy of distinguishing between worm and virus signatures. Moreover, aligned signature sequences can be mined through traditional data mining techniques to extract metasignatures that help to distinguish between viral and worm signatures. All bioinformatics and data mining analyses were performed with publicly available tools, including Weka. PMID:23983644

  11. Displacement measurement with nanoscale resolution using a coded micro-mark and digital image correlation

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Ma, Chengfu; Chen, Yuhang

    2014-12-01

    A method for simple and reliable displacement measurement with nanoscale resolution is proposed. The measurement is realized by combining a common optical microscopy imaging of a specially coded nonperiodic microstructure, namely two-dimensional zero-reference mark (2-D ZRM), and subsequent correlation analysis of the obtained image sequence. The autocorrelation peak contrast of the ZRM code is maximized with well-developed artificial intelligence algorithms, which enables robust and accurate displacement determination. To improve the resolution, subpixel image correlation analysis is employed. Finally, we experimentally demonstrate the quasi-static and dynamic displacement characterization ability of a micro 2-D ZRM.

  12. Low-speed Aerodynamic Investigations of a Hybrid Wing Body Configuration

    NASA Technical Reports Server (NTRS)

    Vicroy, Dan D.; Gatlin, Gregory M.; Jenkins, Luther N.; Murphy, Patrick C.; Carter, Melissa B.

    2014-01-01

    Two low-speed static wind tunnel tests and a water tunnel static and dynamic forced-motion test have been conducted on a hybrid wing-body (HWB) twinjet configuration. These tests, in addition to computational fluid dynamics (CFD) analysis, have provided a comprehensive dataset of the low-speed aerodynamic characteristics of this nonproprietary configuration. In addition to force and moment measurements, the tests included surface pressures, flow visualization, and off-body particle image velocimetry measurements. This paper will summarize the results of these tests and highlight the data that is available for code comparison or additional analysis.

  13. Individuals Achieve More Accurate Results with Meters That Are Codeless and Employ Dynamic Electrochemistry

    PubMed Central

    Rao, Anoop; Wiley, Meg; Iyengar, Sridhar; Nadeau, Dan; Carnevale, Julie

    2010-01-01

    Background Studies have shown that controlling blood glucose can reduce the onset and progression of the long-term microvascular and neuropathic complications associated with the chronic course of diabetes mellitus. Improved glycemic control can be achieved by frequent testing combined with changes in medication, exercise, and diet. Technological advancements have enabled improvements in analytical accuracy of meters, and this paper explores two such parameters to which that accuracy can be attributed. Methods Four blood glucose monitoring systems (with or without dynamic electrochemistry algorithms, codeless or requiring coding prior to testing) were evaluated and compared with respect to their accuracy. Results Altogether, 108 blood glucose values were obtained for each system from 54 study participants and compared with the reference values. The analysis depicted in the International Organization for Standardization table format indicates that the devices with dynamic electrochemistry and the codeless feature had the highest proportion of acceptable results overall (System A, 101/103). Results were significant when compared at the 10% bias level with meters that were codeless and utilized static electrochemistry (p = .017) or systems that had static electrochemistry but needed coding (p = .008). Conclusions Analytical performance of these blood glucose meters differed significantly depending on their technologic features. Meters that utilized dynamic electrochemistry and did not require coding were more accurate than meters that used static electrochemistry or required coding. PMID:20167178

  14. Individuals achieve more accurate results with meters that are codeless and employ dynamic electrochemistry.

    PubMed

    Rao, Anoop; Wiley, Meg; Iyengar, Sridhar; Nadeau, Dan; Carnevale, Julie

    2010-01-01

    Studies have shown that controlling blood glucose can reduce the onset and progression of the long-term microvascular and neuropathic complications associated with the chronic course of diabetes mellitus. Improved glycemic control can be achieved by frequent testing combined with changes in medication, exercise, and diet. Technological advancements have enabled improvements in analytical accuracy of meters, and this paper explores two such parameters to which that accuracy can be attributed. Four blood glucose monitoring systems (with or without dynamic electrochemistry algorithms, codeless or requiring coding prior to testing) were evaluated and compared with respect to their accuracy. Altogether, 108 blood glucose values were obtained for each system from 54 study participants and compared with the reference values. The analysis depicted in the International Organization for Standardization table format indicates that the devices with dynamic electrochemistry and the codeless feature had the highest proportion of acceptable results overall (System A, 101/103). Results were significant when compared at the 10% bias level with meters that were codeless and utilized static electrochemistry (p = .017) or systems that had static electrochemistry but needed coding (p = .008). Analytical performance of these blood glucose meters differed significantly depending on their technologic features. Meters that utilized dynamic electrochemistry and did not require coding were more accurate than meters that used static electrochemistry or required coding. 2010 Diabetes Technology Society.

  15. An Experiment in Scientific Code Semantic Analysis

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, distributed expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. The parsers will automatically recognize and document some static, semantic concepts and locate some program semantic errors. Results are shown for a subroutine test case and a collection of combustion code routines. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.

  16. Static Verification for Code Contracts

    NASA Astrophysics Data System (ADS)

    Fähndrich, Manuel

    The Code Contracts project [3] at Microsoft Research enables programmers on the .NET platform to author specifications in existing languages such as C# and VisualBasic. To take advantage of these specifications, we provide tools for documentation generation, runtime contract checking, and static contract verification.

  17. RICH: OPEN-SOURCE HYDRODYNAMIC SIMULATION ON A MOVING VORONOI MESH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yalinewich, Almog; Steinberg, Elad; Sari, Re’em

    2015-02-01

    We present here RICH, a state-of-the-art two-dimensional hydrodynamic code based on Godunov’s method, on an unstructured moving mesh (the acronym stands for Racah Institute Computational Hydrodynamics). This code is largely based on the code AREPO. It differs from AREPO in the interpolation and time-advancement schemes, as well as in a novel parallelization scheme based on Voronoi tessellation. Using our code, we study the pros and cons of a moving mesh (in comparison to a static mesh). We also compare its accuracy to other codes. Specifically, we show that our implementation of external sources and time-advancement scheme is more accurate and robust than AREPO when the mesh is allowed to move. We performed a parameter study of the cell rounding mechanism (Lloyd iterations) and its effects. We find that in most cases a moving mesh gives better results than a static mesh, but it is not universally true. In the case where matter moves in one way and a sound wave is traveling in the other way (such that relative to the grid the wave is not moving) a static mesh gives better results than a moving mesh. We perform an analytic analysis for finite difference schemes that reveals that a Lagrangian simulation is better than an Eulerian simulation in the case of a highly supersonic flow. Moreover, we show that Voronoi-based moving mesh schemes suffer from an error, which is resolution independent, due to inconsistencies between the flux calculation and the change in the area of a cell. Our code is publicly available as open source and designed in an object-oriented, user-friendly way that facilitates incorporation of new algorithms and physical processes.

  18. Performance and Architecture Lab Modeling Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.

  19. Static Stretching of the Hamstring Muscle for Injury Prevention in Football Codes: a Systematic Review

    PubMed Central

    Rogan, Slavko; Wüst, Dirk; Schwitter, Thomas; Schmidtbleicher, Dietmar

    2012-01-01

    Purpose Hamstring injuries are common among football players. There is still disagreement regarding prevention. The aim of this review is to determine whether static stretching reduces hamstring injuries in football codes. Methods A systematic literature search was conducted on the online databases PubMed, PEDro, Cochrane, Web of Science, Bisp and Clinical Trial register. Study results were presented descriptively, and the quality of the studies was assessed using Cochrane's ‘risk of bias’ tool. Results The review identified 35 studies, including four analysis studies. These studies show deficiencies in the quality of study designs. Conclusion The study protocols varied in terms of the length of intervention and follow-up. No RCT studies are available; RCTs should be conducted in the near future. PMID:23785569

  20. Equivalent Linearization Analysis of Geometrically Nonlinear Random Vibrations Using Commercial Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2002-01-01

    Two new equivalent linearization implementations for geometrically nonlinear random vibrations are presented. Both implementations are based upon a novel approach for evaluating the nonlinear stiffness within commercial finite element codes and are suitable for use with any finite element code having geometrically nonlinear static analysis capabilities. The formulation includes a traditional force-error minimization approach and a relatively new version of a potential energy-error minimization approach, which has been generalized for multiple degree-of-freedom systems. Results for a simply supported plate under random acoustic excitation are presented and comparisons of the displacement root-mean-square values and power spectral densities are made with results from a nonlinear time domain numerical simulation.
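
    The scalar analogue below illustrates the equivalent linearization idea behind such implementations: for a Duffing-type oscillator under white-noise excitation, the cubic stiffness is replaced by an equivalent linear stiffness proportional to the mean-square response, and the two are iterated to convergence. The properties and spectral density are assumed values, and this single-degree-of-freedom Python sketch is not the multiple degree-of-freedom finite element procedure of the paper.

      # Equivalent linearization of a Duffing oscillator
      # m*x'' + c*x' + k*x + k3*x**3 = f(t), with f(t) white noise of two-sided PSD S0.
      # Replace the cubic term by an equivalent stiffness k_eq = k + 3*k3*E[x^2]
      # and iterate, using the linear SDOF result E[x^2] = pi*S0/(c*k_eq).
      import math

      m, c, k, k3 = 1.0, 0.05, 1.0e4, 5.0e9   # assumed system properties
      S0 = 1.0e-3                              # assumed force spectral density

      k_eq = k
      for _ in range(200):
          sigma2 = math.pi * S0 / (c * k_eq)   # mean-square displacement of linear system
          k_new = k + 3.0 * k3 * sigma2        # force-error-minimizing equivalent stiffness
          if abs(k_new - k_eq) < 1e-8 * k_eq:
              break
          k_eq = k_new

      print(f"equivalent stiffness {k_eq:.1f} N/m, rms displacement {sigma2**0.5:.3e} m")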

  1. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  2. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX-80

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-11-01

    The finite element method has proven to be an invaluable tool for analysis and design of complex, high performance systems, such as bladed-disk assemblies in aircraft turbofan engines. However, as the problem size increases, the computation time required by conventional computers can be prohibitively high. Parallel processing computers provide the means to overcome these computation time limits. This report summarizes the results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment. A special purpose code, named with the acronym SAPNEW, has been developed to perform static and eigen analysis of multi-degree-of-freedom blade models built-up from flat thin shell elements. SAPNEW provides a stand-alone capability for static and eigen analysis on the Alliant FX/80, a parallel processing computer. A preprocessor, named with the acronym NTOS, has been developed to accept NASTRAN input decks and convert them to the SAPNEW format to make SAPNEW more readily used by researchers at NASA Lewis Research Center.

  3. Statistical inference of static analysis rules

    NASA Technical Reports Server (NTRS)

    Engler, Dawson Richards (Inventor)

    2009-01-01

    Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the at least one correctness rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the correctness rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of validity is determined for each code instance as a function of the counted number of observances and counted number of violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of a violated correctness rule.
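
    A minimal sketch of this kind of ranking is shown below. The scoring function (a z-statistic comparing the observance ratio to an assumed background probability) and the example counts are illustrative assumptions, not the patented formula.

      # Rank rule violations by a likelihood that the rule is really required:
      # instances whose rule is observed often and violated rarely rank highest.
      import math
      from collections import namedtuple

      Instance = namedtuple("Instance", "name observances violations")

      def likelihood_of_validity(inst, p0=0.9):
          # p0: assumed probability that the rule would be observed by chance.
          n = inst.observances + inst.violations
          if n == 0:
              return 0.0
          p = inst.observances / n
          return (p - p0) * math.sqrt(n) / math.sqrt(p0 * (1.0 - p0))

      instances = [
          Instance("lock(a) is followed by unlock(a)", observances=120, violations=1),
          Instance("return value of alloc() is checked", observances=8, violations=5),
      ]

      for inst in sorted(instances, key=likelihood_of_validity, reverse=True):
          if inst.violations:
              print(f"{inst.name}: score {likelihood_of_validity(inst):.2f}, "
                    f"{inst.violations} possible error(s)")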

  4. Commercial turbofan engine exhaust nozzle flow analyses using PAB3D

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Uenishi, K.; Carlson, John R.; Keith, B. D.

    1992-01-01

    Recent developments of a three-dimensional (PAB3D) code have paved the way for a computational investigation of complex aircraft aerodynamic components. The PAB3D code was developed for solving the simplified Reynolds Averaged Navier-Stokes equations in a three-dimensional multiblock/multizone structured mesh domain. The present analysis was applied to commercial turbofan exhaust flow systems. Solution sensitivity to grid density is presented. Laminar flow solutions were developed for all grids and two-equation k-epsilon solutions were developed for selected grids. Static pressure distributions, mass flow and thrust quantities were calculated for on-design engine operating conditions. Good agreement between predicted surface static pressures and experimental data was observed at different locations. Mass flow was predicted within 0.2 percent of experimental data. Thrust forces were typically within 0.4 percent of experimental data.

  5. Palm: Easing the Burden of Analytical Performance Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Hoisie, Adolfy

    2014-06-01

    Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.

  6. An Aeroelastic Analysis of a Thin Flexible Membrane

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Bartels, Robert E.; Kandil, Osama A.

    2007-01-01

    Studies have shown that significant vehicle mass and cost savings are possible with the use of ballutes for aero-capture. Through NASA's In-Space Propulsion program, a preliminary examination of ballute sensitivity to geometry and Reynolds number was conducted, and a single-pass coupling between an aero code and a finite element solver was used to assess the static aeroelastic effects. There remain, however, a variety of open questions regarding the dynamic aeroelastic stability of membrane structures for aero-capture, with the primary challenge being the prediction of the membrane flutter onset. The purpose of this paper is to describe and begin addressing these issues. The paper includes a review of the literature associated with the structural analysis of membranes and membrane flutter. Flow/structure analysis coupling and hypersonic flow solver options are also discussed. An approach is proposed for tackling this problem that starts with a relatively simple geometry and develops and evaluates analysis methods and procedures. This preliminary study considers a computationally manageable 2-dimensional problem. The membrane structural models used in the paper include a nonlinear finite-difference model for static and dynamic analysis and a NASTRAN finite element membrane model for nonlinear static and linear normal modes analysis. Both structural models are coupled with a structured compressible flow solver for static aeroelastic analysis. For dynamic aeroelastic analyses, the NASTRAN normal modes are used in the structured compressible flow solver and 3rd order piston theories were used with the finite difference membrane model to simulate flutter onset. Results from the various static and dynamic aeroelastic analyses are compared.

  7. Comparison of Damage Path Predictions for Composite Laminates by Explicit and Standard Finite Element Analysis Tools

    NASA Technical Reports Server (NTRS)

    Bogert, Philip B.; Satyanarayana, Arunkumar; Chunchu, Prasad B.

    2006-01-01

    Splitting, ultimate failure load and the damage path in center notched composite specimens subjected to in-plane tension loading are predicted using progressive failure analysis methodology. A 2-D Hashin-Rotem failure criterion is used in determining intra-laminar fiber and matrix failures. This progressive failure methodology has been implemented in the Abaqus/Explicit and Abaqus/Standard finite element codes through user written subroutines "VUMAT" and "USDFLD" respectively. A 2-D finite element model is used for predicting the intra-laminar damages. Analysis results obtained from the Abaqus/Explicit and Abaqus/Standard code show good agreement with experimental results. The importance of modeling delamination in progressive failure analysis methodology is recognized for future studies. The use of an explicit integration dynamics code for simple specimen geometry and static loading establishes a foundation for future analyses where complex loading and nonlinear dynamic interactions of damage and structure will necessitate it.

  8. Computer code for preliminary sizing analysis of axial-flow turbines

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1992-01-01

    This mean diameter flow analysis uses a stage average velocity diagram as the basis for the computational efficiency. Input design requirements include power or pressure ratio, flow rate, temperature, pressure, and rotative speed. Turbine designs are generated for any specified number of stages and for any of three types of velocity diagrams (symmetrical, zero exit swirl, or impulse) or for any specified stage swirl split. Exit turning vanes can be included in the design. The program output includes inlet and exit annulus dimensions, exit temperature and pressure, total and static efficiencies, flow angles, and last stage absolute and relative Mach numbers. An analysis is presented along with a description of the computer program input and output with sample cases. The analysis and code presented herein are modifications of those described in NASA-TN-D-6702. These modifications improve modeling rigor and extend code applicability.
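
    The fragment below shows the kind of back-of-the-envelope mean-line arithmetic such a sizing analysis starts from: required specific work from power and mass flow, Euler work per stage from blade speed and a stage loading coefficient, and hence a stage count. All numbers and the loading coefficient are assumptions for illustration, not outputs of the NASA code.

      # Rough mean-line stage-count estimate for an axial-flow turbine.
      import math

      power_w    = 2.0e6     # required shaft power, W (assumed)
      mass_flow  = 10.0      # kg/s (assumed)
      mean_speed = 350.0     # mean blade speed U, m/s (assumed)
      loading    = 1.8       # stage loading coefficient psi = dh0/U^2 (assumed)

      specific_work  = power_w / mass_flow        # J/kg for the whole turbine
      work_per_stage = loading * mean_speed**2    # J/kg available per stage
      n_stages = math.ceil(specific_work / work_per_stage)
      print(f"specific work {specific_work / 1000:.0f} kJ/kg -> {n_stages} stage(s)")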

  9. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1992-01-01

    Off design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static to inlet total pressure ratios were calculated by using a quasi-three dimensional inviscid code. The off design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  10. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1991-01-01

    Off design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static to inlet total pressure ratios were calculated by using a quasi-three dimensional inviscid code. The off design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  11. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard, however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement as well as delamination length versus applied load/displacement relationships from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.
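
    For reference, the sketch below shows the basic VCCT bookkeeping that such propagation analyses rely on: mode I and mode II energy release rates computed from the crack-tip nodal forces and the relative displacements of the node pair just behind the tip in a two-dimensional, four-node-element model. The input numbers are hypothetical and chosen only to exercise the formula.

      # Virtual crack closure technique (VCCT), 2-D four-node elements:
      # G_I = Fy*dv / (2*da*b), G_II = Fx*du / (2*da*b).
      def vcct_2d(Fx, Fy, du, dv, delta_a, width):
          """Return (G_I, G_II, G_total, mode mixity G_II/G_T)."""
          area = 2.0 * delta_a * width
          G_I  = Fy * dv / area
          G_II = Fx * du / area
          G_T  = G_I + G_II
          return G_I, G_II, G_T, (G_II / G_T if G_T else 0.0)

      # Hypothetical crack-tip quantities for an MMB-like 2-D model.
      G_I, G_II, G_T, mixity = vcct_2d(Fx=8.0, Fy=12.0,            # N
                                       du=1.5e-5, dv=2.0e-5,       # m
                                       delta_a=0.5e-3, width=25.0e-3)
      print(f"G_I={G_I:.1f} G_II={G_II:.1f} G_T={G_T:.1f} J/m^2, G_II/G_T={mixity:.2f}")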

  12. MO-F-16A-01: Implementation of MPPG TPS Verification Tests On Various Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smilowitz, J; Bredfeldt, J; Geurts, M

    2014-06-15

    Purpose: To demonstrate the implementation of the Medical Physics Practice Guideline (MPPG) for dose calculation and beam parameters verification of treatment planning systems (TPS). Methods: We implemented the draft TPS MPPG for three linacs: Varian Trilogy, TomoHDA and Elekta Infinity. Static and modulated test plans were created. The static fields are different from those used in commissioning. Data was collected using ion chambers and diodes in a scanning water tank, Delta4 phantom and a custom phantom. MatLab and Microsoft Excel were used to create analysis tools to compare reference DICOM dose with scan data. This custom code allowed for the interpolation, registration, and gamma analysis of arbitrary dose profiles. It will be provided as open source code. IMRT fields were validated with Delta4 registration and comparison tools. The time for each task was recorded. Results: The tests confirmed the strengths, and revealed some limitations, of our TPS. The agreement between calculated and measured dose was reported for all beams. For static fields, percent depth dose and profiles were analyzed with criteria in the draft MPPG. The results reveal areas of slight mismatch with the model (MLC leaf penumbra, buildup region). For TomoTherapy, the IMRT plan 2%/2 mm gamma analysis revealed poorest agreement in the low dose regions. For one static test plan for all 10 MV Trilogy photon beams, the plan generation, scan queue creation, data collection, data analysis and report took 2 hours, excluding tank setup. Conclusions: We have demonstrated the implementation feasibility of the TPS MPPG. This exercise generated an open source tool for dose comparisons between scan data and DICOM dose data. An easily reproducible and efficient infrastructure with streamlined data collection was created for repeatable robust testing of the TPS. The tests revealed minor discrepancies in our models and areas for improvement that are being investigated.
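
    A generic one-dimensional gamma-index comparison of the kind such dose-profile analysis tools perform is sketched below, using the 2%/2 mm criteria mentioned in the abstract. It is a textbook gamma computation on synthetic profiles, not the authors' MatLab/Excel tool.

      # 1-D gamma analysis of a measured dose profile against a reference profile.
      import numpy as np

      def gamma_1d(x_ref, d_ref, x_meas, d_meas, dose_crit=0.02, dist_crit=2.0):
          """Gamma value at each measured point (global dose normalization)."""
          d_norm = dose_crit * d_ref.max()
          gammas = np.empty(len(x_meas))
          for i, (xm, dm) in enumerate(zip(x_meas, d_meas)):
              dist = (x_ref - xm) / dist_crit
              dose = (d_ref - dm) / d_norm
              gammas[i] = np.sqrt(dist**2 + dose**2).min()
          return gammas

      x    = np.linspace(-50.0, 50.0, 201)             # position, mm
      ref  = np.exp(-(x / 30.0)**2)                    # synthetic reference profile
      meas = 1.01 * np.exp(-((x - 1.0) / 30.0)**2)     # synthetic measured profile
      g = gamma_1d(x, ref, x, meas)
      print(f"gamma pass rate: {100.0 * (g <= 1.0).mean():.1f}%")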

  13. Dynamic Analysis of Spur Gear Transmissions (DANST). PC Version 3.00 User Manual

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Lin, Hsiang Hsi; Delgado, Irebert R.

    1996-01-01

    DANST is a FORTRAN computer program for static and dynamic analysis of spur gear systems. The program can be used for parametric studies to predict the static transmission error, dynamic load, tooth bending stress and other properties of spur gears as they are influenced by operating speed, torque, stiffness, damping, inertia, and tooth profile. DANST performs geometric modeling and dynamic analysis for low- or high-contact-ratio spur gears. DANST can simulate gear systems with contact ratios ranging from one to three. It was designed to be easy to use and it is extensively documented in several previous reports and by comments in the source code. This report describes installing and using a new PC version of DANST, covers input data requirements and presents examples.

  14. Artificial neural network prediction of aircraft aeroelastic behavior

    NASA Astrophysics Data System (ADS)

    Pesonen, Urpo Juhani

    An Artificial Neural Network that predicts aeroelastic behavior of aircraft is presented. The neural net was designed to predict the shape of a flexible wing in static flight conditions using results from a structural analysis and an aerodynamic analysis performed with traditional computational tools. To generate reliable training and testing data for the network, an aeroelastic analysis code using these tools as components was designed and validated. To demonstrate the advantages and reliability of Artificial Neural Networks, a network was also designed and trained to predict airfoil maximum lift at low Reynolds numbers where wind tunnel data was used for the training. Finally, a neural net was designed and trained to predict the static aeroelastic behavior of a wing without the need to iterate between the structural and aerodynamic solvers.

  15. A Model for Simulating the Response of Aluminum Honeycomb Structure to Transverse Loading

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Czabaj, Michael W.; Jackson, Wade C.

    2012-01-01

    A 1-dimensional material model was developed for simulating the transverse (thickness-direction) loading and unloading response of aluminum honeycomb structure. The model was implemented as a user-defined material subroutine (UMAT) in the commercial finite element analysis code, ABAQUS(Registered TradeMark)/Standard. The UMAT has been applied to analyses for simulating quasi-static indentation tests on aluminum honeycomb-based sandwich plates. Comparison of analysis results with data from these experiments shows overall good agreement. Specifically, analyses of quasi-static indentation tests yielded accurate global specimen responses. Predicted residual indentation was also in reasonable agreement with measured values. Overall, this simple model does not involve a significant computational burden, which makes it more tractable to simulate other damage mechanisms in the same analysis.
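
    The general shape of a one-dimensional transverse crush law of this type (linear elastic to a peak, a near-constant crush plateau, then steep densification) is sketched below. The moduli, stresses, and densification strain are assumed values for illustration and are not the parameters of the actual UMAT.

      # Piecewise loading curve for honeycomb transverse crush (illustrative values).
      E_ELASTIC   = 1.0e9    # Pa, initial elastic modulus (assumed)
      PEAK_STRESS = 2.0e6    # Pa, crush initiation stress (assumed)
      PLATEAU     = 1.5e6    # Pa, crush plateau stress (assumed)
      E_DENSE     = 5.0e8    # Pa, densification modulus (assumed)
      EPS_DENSE   = 0.70     # strain at onset of densification (assumed)

      def crush_stress(strain):
          eps_peak = PEAK_STRESS / E_ELASTIC
          if strain <= eps_peak:
              return E_ELASTIC * strain          # initial elastic response
          if strain <= EPS_DENSE:
              return PLATEAU                     # cell-wall buckling plateau
          return PLATEAU + E_DENSE * (strain - EPS_DENSE)   # densification

      for eps in (0.001, 0.1, 0.5, 0.8):
          print(f"strain {eps:5.3f} -> stress {crush_stress(eps) / 1e6:5.2f} MPa")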

  16. NPSS Multidisciplinary Integration and Analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Rasche, Joseph; Simons, Todd A.; Hoyniak, Daniel

    2006-01-01

    The objective of this task was to enhance the capability of the Numerical Propulsion System Simulation (NPSS) by expanding its reach into the high-fidelity multidisciplinary analysis area. This task investigated numerical techniques to convert from cold static to hot running geometry of compressor blades. Numerical calculations of blade deformations were iteratively done with high fidelity flow simulations together with high fidelity structural analysis of the compressor blade. The flow simulations were performed with the Advanced Ducted Propfan Analysis (ADPAC) code, while structural analyses were performed with the ANSYS code. High fidelity analyses were used to evaluate the effects on performance of: variations in tip clearance, uncertainty in manufacturing tolerance, variable inlet guide vane scheduling, and the effects of rotational speed on the hot running geometry of the compressor blades.

  17. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

    In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  18. User's Guide for ENSAERO_FE Parallel Finite Element Solver

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Guruswamy, Guru P.

    1999-01-01

    A high fidelity parallel static structural analysis capability is created and interfaced to the multidisciplinary analysis package ENSAERO-MPI of Ames Research Center. This new module replaces ENSAERO's lower fidelity simple finite element and modal modules. Full aircraft structures may be more accurately modeled using the new finite element capability. Parallel computation is performed by breaking the full structure into multiple substructures. This approach is conceptually similar to ENSAERO's multizonal fluid analysis capability. The new substructure code is used to solve the structural finite element equations for each substructure in parallel. NASTRAN/COSMIC is utilized as a front end for this code. Its full library of elements can be used to create an accurate and realistic aircraft model. It is used to create the stiffness matrices for each substructure. The new parallel code then uses an iterative preconditioned conjugate gradient method to solve the global structural equations for the substructure boundary nodes.
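
    A generic Jacobi-preconditioned conjugate gradient iteration of the kind used for the substructure boundary equations is sketched below; it is a textbook PCG solve on a small stand-in stiffness matrix, not the ENSAERO_FE module itself.

      # Jacobi-preconditioned conjugate gradient solve of K u = f.
      import numpy as np

      def pcg(K, f, tol=1e-8, max_iter=1000):
          M_inv = 1.0 / np.diag(K)          # Jacobi (diagonal) preconditioner
          u = np.zeros_like(f)
          r = f - K @ u
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Kp = K @ p
              alpha = rz / (p @ Kp)
              u += alpha * p
              r -= alpha * Kp
              if np.linalg.norm(r) < tol * np.linalg.norm(f):
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return u

      # Small symmetric positive-definite stand-in stiffness matrix.
      K = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
      f = np.array([1.0, 2.0, 3.0])
      print(pcg(K, f))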

  19. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr

    2015-12-31

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  20. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.

  1. An examination of the interrater reliability between practitioners and researchers on the static-99.

    PubMed

    Quesada, Stephen P; Calkins, Cynthia; Jeglic, Elizabeth L

    2014-11-01

    Many studies have validated the psychometric properties of the Static-99, the most widely used measure of sexual offender recidivism risk. However, much of this research relied on instrument coding completed by well-trained researchers. This study is the first to examine the interrater reliability (IRR) of the Static-99 between practitioners in the field and researchers. Using archival data from a sample of 1,973 formerly incarcerated sex offenders, field raters' scores on the Static-99 were compared with those of researchers. Overall, clinicians and researchers had excellent IRR on Static-99 total scores, with IRR coefficients ranging from "substantial" to "outstanding" for the 10 individual items of the scale. The most common causes of discrepancies were coding manual errors, followed by item subjectivity, inaccurate item scoring, and calculation errors. These results offer important data with regard to the frequency and perceived nature of scoring errors. © The Author(s) 2013.

  2. Comparison of Predicted and Measured Attenuation of Turbine Noise from a Static Engine Test

    NASA Technical Reports Server (NTRS)

    Chien, Eugene W.; Ruiz, Marta; Yu, Jia; Morin, Bruce L.; Cicon, Dennis; Schwieger, Paul S.; Nark, Douglas M.

    2007-01-01

    Aircraft noise has become an increasing concern for commercial airlines. Worldwide demand for quieter aircraft is increasing, making the prediction of engine noise suppression one of the most important fields of research. The Low-Pressure Turbine (LPT) can be an important noise source during the approach condition for commercial aircraft. The National Aeronautics and Space Administration (NASA), Pratt & Whitney (P&W), and Goodrich Aerostructures (Goodrich) conducted a joint program to validate a method for predicting turbine noise attenuation. The method includes noise-source estimation, acoustic treatment impedance prediction, and in-duct noise propagation analysis. Two noise propagation prediction codes, Eversman Finite Element Method (FEM) code [1] and the CDUCT-LaRC [2] code, were used in this study to compare the predicted and the measured turbine noise attenuation from a static engine test. In this paper, the test setup, test configurations and test results are detailed in Section II. A description of the input parameters, including estimated noise modal content (in terms of acoustic potential), and acoustic treatment impedance values are provided in Section III. The prediction-to-test correlation study results are illustrated and discussed in Section IV and V for the FEM and the CDUCT-LaRC codes, respectively, and a summary of the results is presented in Section VI.

  3. SAP- FORTRAN STATIC SOURCE CODE ANALYZER PROGRAM (IBM VERSION)

    NASA Technical Reports Server (NTRS)

    Manteufel, R.

    1994-01-01

    The FORTRAN Static Source Code Analyzer program, SAP, was developed to automatically gather statistics on the occurrences of statements and structures within a FORTRAN program and to provide for the reporting of those statistics. Provisions have been made for weighting each statistic and to provide an overall figure of complexity. Statistics, as well as figures of complexity, are gathered on a module by module basis. Overall summed statistics are also accumulated for the complete input source file. SAP accepts as input syntactically correct FORTRAN source code written in the FORTRAN 77 standard language. In addition, code written using features in the following languages is also accepted: VAX-11 FORTRAN, IBM S/360 FORTRAN IV Level H Extended; and Structured FORTRAN. The SAP program utilizes two external files in its analysis procedure. A keyword file allows flexibility in classifying statements and in marking a statement as either executable or non-executable. A statistical weight file allows the user to assign weights to all output statistics, thus allowing the user flexibility in defining the figure of complexity. The SAP program is written in FORTRAN IV for batch execution and has been implemented on a DEC VAX series computer under VMS and on an IBM 370 series computer under MVS. The SAP program was developed in 1978 and last updated in 1985.
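
    A toy analogue of this statistic-gathering approach is sketched below: statement-type occurrences in FORTRAN-like source are counted and combined with user-assigned weights into a single figure of complexity. The keyword classification and the weights are illustrative stand-ins for SAP's external keyword and weight files.

      # Count statement types in fixed-form FORTRAN-like source and compute a
      # weighted figure of complexity (keywords and weights are illustrative).
      from collections import Counter

      KEYWORDS = ("IF", "DO", "GOTO", "CALL", "READ", "WRITE")
      WEIGHTS  = {"IF": 2.0, "DO": 2.0, "GOTO": 3.0, "CALL": 1.0,
                  "READ": 1.0, "WRITE": 1.0, "ASSIGNMENT": 0.5}

      def classify(line):
          if not line.strip():
              return None
          if line[:1].upper() in ("C", "*", "!"):   # fixed-form comment in column 1
              return None
          stripped = line.strip().upper()
          for kw in KEYWORDS:
              if stripped.startswith(kw):
                  return kw
          return "ASSIGNMENT" if "=" in stripped else None

      def figure_of_complexity(source_lines):
          counts = Counter(filter(None, map(classify, source_lines)))
          return counts, sum(WEIGHTS[k] * n for k, n in counts.items())

      source = [
          "      DO 10 I = 1, N",
          "        IF (A(I) .GT. 0.0) CALL SCALE(A, I)",
          "        B(I) = A(I) * 2.0",
          "   10 CONTINUE",
      ]
      print(figure_of_complexity(source))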

  4. SAP- FORTRAN STATIC SOURCE CODE ANALYZER PROGRAM (DEC VAX VERSION)

    NASA Technical Reports Server (NTRS)

    Merwarth, P. D.

    1994-01-01

    The FORTRAN Static Source Code Analyzer program, SAP, was developed to automatically gather statistics on the occurrences of statements and structures within a FORTRAN program and to provide for the reporting of those statistics. Provisions have been made for weighting each statistic and to provide an overall figure of complexity. Statistics, as well as figures of complexity, are gathered on a module by module basis. Overall summed statistics are also accumulated for the complete input source file. SAP accepts as input syntactically correct FORTRAN source code written in the FORTRAN 77 standard language. In addition, code written using features in the following languages is also accepted: VAX-11 FORTRAN, IBM S/360 FORTRAN IV Level H Extended; and Structured FORTRAN. The SAP program utilizes two external files in its analysis procedure. A keyword file allows flexibility in classifying statements and in marking a statement as either executable or non-executable. A statistical weight file allows the user to assign weights to all output statistics, thus allowing the user flexibility in defining the figure of complexity. The SAP program is written in FORTRAN IV for batch execution and has been implemented on a DEC VAX series computer under VMS and on an IBM 370 series computer under MVS. The SAP program was developed in 1978 and last updated in 1985.

  5. Single Event Upset Rate Estimates for a 16-K CMOS (Complementary Metal Oxide Semiconductor) SRAM (Static Random Access Memory).

    DTIC Science & Technology

    1986-09-30

    TABLES: I. SA3240 Single Event Upset Test, 1140-MeV Krypton, 9/18/84... II. CRUP Simulation...cosmic ray interaction analysis described in the remainder of this report was calculated using the CRUP computer code [3] modified for funneling. The...CRUP code requires, as inputs, the size of a depletion region specified as a rectangular parallelepiped with dimensions a x b x c, the effective funnel

  6. Quasi-Static Indentation Analysis of Carbon-Fiber Laminates.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briggs, Timothy; English, Shawn Allen; Nelson, Stacy Michelle

    2015-12-01

    A series of quasi-static indentation experiments are conducted on carbon fiber reinforced polymer laminates with a systematic variation of thicknesses and fixture boundary conditions. Different deformation mechanisms and their resulting damage mechanisms are activated by changing the thickness and boundary conditions. The quasi-static indentation experiments have been shown to achieve damage mechanisms similar to impact and penetration, however without strain rate effects. The low rate allows for the detailed analysis of the load response. Moreover, interrupted tests allow for the incremental analysis of various damage mechanisms and progressions. The experimentally tested specimens are non-destructively evaluated (NDE) with optical imaging, ultrasonics and computed tomography. The load displacement responses and the NDE are then utilized in numerical simulations for the purpose of model validation and vetting. The accompanying numerical simulation work serves two purposes. First, the results further reveal the time sequence of events and the meaning behind load drops not clear from NDE. Second, the simulations demonstrate insufficiencies in the code and can then direct future efforts for development.

  7. A Flexible and Non-intrusive Approach for Computing Complex Structural Coverage Metrics

    NASA Technical Reports Server (NTRS)

    Whalen, Michael W.; Person, Suzette J.; Rungta, Neha; Staats, Matt; Grijincu, Daniela

    2015-01-01

    Software analysis tools and techniques often leverage structural code coverage information to reason about the dynamic behavior of software. Existing techniques instrument the code with the required structural obligations and then monitor the execution of the compiled code to report coverage. Instrumentation based approaches often incur considerable runtime overhead for complex structural coverage metrics such as Modified Condition/Decision Coverage (MC/DC). Code instrumentation, in general, has to be approached with great care to ensure it does not modify the behavior of the original code. Furthermore, instrumented code cannot be used in conjunction with other analyses that reason about the structure and semantics of the code under test. In this work, we introduce a non-intrusive preprocessing approach for computing structural coverage information. It uses a static partial evaluation of the decisions in the source code and a source-to-bytecode mapping to generate the information necessary to efficiently track structural coverage metrics during execution. Our technique is flexible; the results of the preprocessing can be used by a variety of coverage-driven software analysis tasks, including automated analyses that are not possible for instrumented code. Experimental results in the context of symbolic execution show the efficiency and flexibility of our nonintrusive approach for computing code coverage information.
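
    As a small illustration of the MC/DC obligation such coverage metrics track (not of the paper's partial-evaluation and bytecode-mapping machinery), the sketch below checks, for each condition of a decision, whether the recorded test vectors contain a pair that differ only in that condition and flip the decision outcome.

      # Check which conditions of a decision have shown an independent effect
      # (the core MC/DC obligation) given the condition vectors seen in testing.
      from itertools import combinations

      def decision(a, b, c):
          return (a or b) and c

      # Condition vectors observed during test execution (illustrative).
      executed = [(True, False, True), (False, False, True),
                  (False, True, True), (False, True, False)]

      def mcdc_covered(executed, decision, n_conditions):
          covered = set()
          for v1, v2 in combinations(executed, 2):
              differing = [i for i in range(n_conditions) if v1[i] != v2[i]]
              if len(differing) == 1 and decision(*v1) != decision(*v2):
                  covered.add(differing[0])
          return covered

      print("conditions with demonstrated independent effect:",
            sorted(mcdc_covered(executed, decision, 3)))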

  8. Application of Benchmark Examples to Assess the Single and Mixed-Mode Static Delamination Propagation Capabilities in ANSYS

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The application of benchmark examples for the assessment of quasi-static delamination propagation capabilities is demonstrated for ANSYS. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation in commercial finite element codes based on the virtual crack closure technique (VCCT). The examples selected are based on two-dimensional finite element models of Double Cantilever Beam (DCB), End-Notched Flexure (ENF), Mixed-Mode Bending (MMB) and Single Leg Bending (SLB) specimens. First, the quasi-static benchmark examples were recreated for each specimen using the current implementation of VCCT in ANSYS . Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in the finite element software. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for three-dimensional solid models is required.

  9. Application of a personal computer for the uncoupled vibration analysis of wind turbine blade and counterweight assemblies

    NASA Technical Reports Server (NTRS)

    White, P. R.; Little, R. R.

    1985-01-01

    A research effort was undertaken to develop personal computer based software for vibrational analysis. The software was developed to analytically determine the natural frequencies and mode shapes for the uncoupled lateral vibrations of the blade and counterweight assemblies used in a single bladed wind turbine. The uncoupled vibration analysis was performed in both the flapwise and chordwise directions for static rotor conditions. The effects of rotation on the uncoupled flapwise vibration of the blade and counterweight assemblies were evaluated for various rotor speeds up to 90 rpm. The theory, used in the vibration analysis codes, is based on a lumped mass formulation for the blade and counterweight assemblies. The codes are general so that other designs can be readily analyzed. The input for the codes is generally interactive to facilitate usage. The output of the codes is both tabular and graphical. Listings of the codes are provided. Predicted natural frequencies of the first several modes show reasonable agreement with experimental results. The analysis codes were originally developed on a DEC PDP 11/34 minicomputer and then downloaded and modified to run on an ITT XTRA personal computer. Studies conducted to evaluate the efficiency of running the programs on a personal computer as compared with the minicomputer indicated that, with the proper combination of hardware and software options, the efficiency of using a personal computer exceeds that of a minicomputer.
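
    The core computation in such codes is a generalized eigenvalue solve on lumped mass and stiffness matrices. A minimal stand-in with a hypothetical three-station chain is sketched below; the mass and stiffness values are assumed and are not taken from the blade or counterweight models.

      # Natural frequencies of a lumped-mass chain: solve K*phi = w^2 * M*phi.
      import numpy as np
      from scipy.linalg import eigh

      m = 2.5        # lumped mass per station, kg (assumed)
      k = 1.0e5      # interstation stiffness, N/m (assumed)

      M = np.diag([m, m, m])
      K = k * np.array([[ 2.0, -1.0,  0.0],
                        [-1.0,  2.0, -1.0],
                        [ 0.0, -1.0,  1.0]])   # fixed at the root, free at the tip

      eigvals, modes = eigh(K, M)               # generalized symmetric eigenproblem
      freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)
      print("natural frequencies [Hz]:", np.round(freqs_hz, 2))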

  10. Quasi-Static Evolution, Catastrophe, and Failed Eruption of Solar Flux Ropes

    DTIC Science & Technology

    2016-12-30

    Naval Research Laboratory, Washington, DC 20375-5320, NRL/MR/6794--16-9710. Quasi-Static Evolution, Catastrophe, and “Failed” Eruption of Solar Flux Ropes...evolution of solar flux ropes subject to slowly increasing magnetic energy, encompassing quasi-static evolution, “catastrophic” transition to an eruptive

  11. Development and validation of a low-frequency modeling code for high-moment transmitter rod antennas

    NASA Astrophysics Data System (ADS)

    Jordan, Jared Williams; Sternberg, Ben K.; Dvorak, Steven L.

    2009-12-01

    The goal of this research is to develop and validate a low-frequency modeling code for high-moment transmitter rod antennas to aid in the design of future low-frequency TX antennas with high magnetic moments. To accomplish this goal, a quasi-static modeling algorithm was developed to simulate finite-length, permeable-core, rod antennas. This quasi-static analysis is applicable for low frequencies where eddy currents are negligible, and it can handle solid or hollow cores with winding insulation thickness between the antenna's windings and its core. The theory was programmed in Matlab, and the modeling code has the ability to predict the TX antenna's gain, maximum magnetic moment, saturation current, series inductance, and core series loss resistance, provided the user enters the corresponding complex permeability for the desired core magnetic flux density. In order to utilize the linear modeling code to model the effects of nonlinear core materials, it is necessary to use the correct complex permeability for a specific core magnetic flux density. In order to test the modeling code, we demonstrated that it can accurately predict changes in the electrical parameters associated with variations in the rod length and the core thickness for antennas made out of low carbon steel wire. These tests demonstrate that the modeling code was successful in predicting the changes in the rod antenna characteristics under high-current nonlinear conditions due to changes in the physical dimensions of the rod provided that the flux density in the core was held constant in order to keep the complex permeability from changing.

  12. FASOR - A second generation shell of revolution code

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1978-01-01

    An integrated computer program entitled Field Analysis of Shells of Revolution (FASOR), currently under development for NASA, is described. When completed, this code will treat prebuckling, buckling, initial postbuckling and vibrations under axisymmetric static loads as well as linear response and bifurcation under asymmetric static loads. Although these modes of response are treated by existing programs, FASOR extends the class of problems treated to include general anisotropy and transverse shear deformations of stiffened laminated shells. At the same time, a primary goal is to develop a program which is free of the usual problems of modeling, numerical convergence and ill-conditioning, laborious problem setup, limitations on problem size and interpretation of output. The field method is briefly described, the shell differential equations are cast in a suitable form for solution by this method and essential aspects of the input format are presented. Numerical results are given for both unstiffened and stiffened anisotropic cylindrical shells and compared with previously published analytical solutions.

  13. Thermohydrodynamic analysis of cryogenic liquid turbulent flow fluid film bearings

    NASA Technical Reports Server (NTRS)

    Andres, Luis San

    1993-01-01

    A thermohydrodynamic analysis is presented and a computer code developed for prediction of the static and dynamic force response of hydrostatic journal bearings (HJB's), annular seals or damper bearing seals, and fixed arc pad bearings for cryogenic liquid applications. The study includes the most important flow characteristics found in cryogenic fluid film bearings such as flow turbulence, fluid inertia, liquid compressibility and thermal effects. The analysis and computational model devised allow the determination of the flow field in cryogenic fluid film bearings along with the dynamic force coefficients for rotor-bearing stability analysis.

  14. An Expert System for the Development of Efficient Parallel Code

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.

  15. Space Shuttle Main Engine structural analysis and data reduction/evaluation. Volume 7: High pressure fuel turbo-pump third stage impeller analysis

    NASA Technical Reports Server (NTRS)

    Pool, Kirby V.

    1989-01-01

    This volume summarizes the analysis used to assess the structural life of the Space Shuttle Main Engine (SSME) High Pressure Fuel Turbo-Pump (HPFTP) Third Stage Impeller. This analysis was performed in three phases, all using the DIAL finite element code. The first phase was a static stress analysis to determine the mean (non-varying) stress and static margin of safety for the part. The loads involved were steady state pressure and centrifugal force due to spinning. The second phase of the analysis was a modal survey to determine the vibrational modes and natural frequencies of the impeller. The third phase was a dynamic response analysis to determine the alternating component of the stress due to time varying pressure impulses at the outlet (diffuser) side of the impeller. The results of the three phases of the analysis show that the Third Stage Impeller operates very near the upper limits of its capability at full power level (FPL) loading. The static loading alone creates stresses in some areas of the shroud which exceed the yield point of the material. Additional cyclic loading due to the dynamic force could lead to a significant reduction in the life of this part. The cyclic stresses determined in the dynamic response phase of this study are based on an assumption regarding the magnitude of the forcing function.
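
    As a rough illustration of the static bookkeeping described above (not the DIAL analysis itself), a static margin of safety can be formed from an allowable stress and the computed mean stress; the sign convention and the example numbers are assumptions for illustration.

        def margin_of_safety(allowable_stress: float, applied_stress: float,
                             factor_of_safety: float = 1.0) -> float:
            """Static margin of safety: MS = allowable / (FS * applied) - 1.
            A negative value means the applied stress exceeds the allowable."""
            return allowable_stress / (factor_of_safety * applied_stress) - 1.0

        # Hypothetical values in MPa: yield allowable 900, computed mean stress 950.
        print(margin_of_safety(900.0, 950.0))  # negative -> static loading exceeds yield, as in the shroud regions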

  16. A5: Automated Analysis of Adversarial Android Applications

    DTIC Science & Technology

    2014-06-03

    A5 first invokes the DED [11] decompiler to create Java classes from the Android application code, and then uses Soot [30] for further analysis. Hardware features such as Bluetooth, Wi-Fi, and sensors are very common in physical devices but are simply not present in the emulated analysis environment. A5 builds on tools such as Androguard [1] and Soot [30]; deficiencies in these tools may also manifest in A5, and the bytecode static analysis is limited in what it can find.

  17. Thermohydrodynamic Analysis of Cryogenic Liquid Turbulent Flow Fluid Film Bearings

    NASA Technical Reports Server (NTRS)

    San Andres, Luis

    1996-01-01

    This report describes a thermohydrodynamic analysis and computer programs for the prediction of the static and dynamic force response of fluid film bearings for cryogenic applications. The research performed addressed effectively the most important theoretical and practical issues related to the operation and performance of cryogenic fluid film bearings. Five computer codes have been licensed by the Texas A&M University to NASA centers and contractors and a total of 14 technical papers have been published.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sesigur, Haluk; Cili, Feridun

    Seismic isolation is an effective design strategy to mitigate seismic hazard, wherein the structure and its contents are protected from the damaging effects of an earthquake. This paper presents the Hangar Project at Sabiha Goekcen Airport, located in Istanbul, Turkey. A seismic isolation system in which the isolation layer is arranged at the top of the columns was selected. The seismic hazard analysis, superstructure design, isolator design and testing were based on the Uniform Building Code (1997) and met all requirements of the Turkish Earthquake Code (2007). The substructure, which has steel vertical trusses on the facades and RC H-shaped columns along the middle axis of the building, was designed with an R factor limited to 2.0 in accordance with the Turkish Earthquake Code. To verify the effectiveness of the isolation system, nonlinear static and dynamic analyses were performed. The analyses revealed that the isolated building has a base shear approximately one quarter of that of the non-isolated structure.

  19. Development of Benchmark Examples for Quasi-Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  20. Static voltage distribution between turns of secondary winding of air-core spiral strip transformer and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Hong-bo; Liu, Jin-liang; Cheng, Xin-bing; Zhang, Yu

    2011-09-01

    The static voltage distribution between winding turns has a great impact on the output characteristics and lifetime of the air-core spiral strip pulse transformer (ACSSPT). In this paper, the winding inductance was calculated from electromagnetic theory so that the static voltage distribution between turns of the secondary winding of the ACSSPT could be analyzed conveniently. According to the theoretical analysis, a voltage gradient due to the turn-to-turn capacitance was clearly noticeable across the ground turns. Simulation results from the Pspice and CST EM Studio codes showed that the voltage distribution between turns of the secondary winding increased linearly from the output turn to the ground turn. In experiments, the difference in voltage increment between the ground turns and the output turns of a 20-turn secondary winding was almost 50%, which is believed to be responsible for premature breakdown of the insulation, particularly between the ground turns. The experimental results confirmed the theoretical analysis and simulation results, which is of practical value for the design of stable, long-lifetime ACSSPTs. A new ACSSPT with an improved structure has been used successfully and reliably in intense electron beam accelerators.

  1. Static electricity: A literature review

    NASA Astrophysics Data System (ADS)

    Crow, Rita M.

    1991-11-01

    The major concern with static electricity is its discharging in a flammable atmosphere which can explode and cause a fire. Textile materials can have their electrical resistivity decreased by the addition of antistatic finishes, imbedding conductive particles into the fibres or by adding metal fibers to the yarns. The test methods used in the studies of static electricity include measuring the static properties of materials, of clothed persons, and of the ignition energy of flammable gases. Surveys have shown that there is sparse evidence for fires definitively being caused by static electricity. However, the 'worst-case' philosophy has been adopted and a static electricity safety code is described, including correct grounding procedures and the wearing of anti-static clothing and footwear.

  2. Tools for Designing and Analyzing Structures

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.

    2005-01-01

    Structural Design and Analysis Toolset is a collection of approximately 26 Microsoft Excel spreadsheet programs, each of which performs calculations within a different subdiscipline of structural design and analysis. These programs present input and output data in user-friendly, menu-driven formats. Although these programs cannot solve complex cases like those treated by larger finite element codes, these programs do yield quick solutions to numerous common problems more rapidly than the finite element codes, thereby making it possible to quickly perform multiple preliminary analyses - e.g., to establish approximate limits prior to detailed analyses by the larger finite element codes. These programs perform different types of calculations, as follows: 1. determination of geometric properties for a variety of standard structural components; 2. analysis of static, vibrational, and thermal- gradient loads and deflections in certain structures (mostly beams and, in the case of thermal-gradients, mirrors); 3. kinetic energies of fans; 4. detailed analysis of stress and buckling in beams, plates, columns, and a variety of shell structures; and 5. temperature dependent properties of materials, including figures of merit that characterize strength, stiffness, and deformation response to thermal gradients
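
    A toy example of the kind of closed-form check such spreadsheets automate (the formula is the standard end-loaded cantilever result, not something extracted from the toolset):

        def cantilever_tip_deflection(load_n: float, length_m: float,
                                      youngs_modulus_pa: float, second_moment_m4: float) -> float:
            """Tip deflection (m) of a cantilever beam with a point load at the free end:
            delta = P * L**3 / (3 * E * I)."""
            return load_n * length_m ** 3 / (3.0 * youngs_modulus_pa * second_moment_m4)

        # Illustrative numbers: 1 kN load, 2 m aluminum beam (E = 70 GPa), I = 8e-6 m^4.
        print(cantilever_tip_deflection(1.0e3, 2.0, 70.0e9, 8.0e-6))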

  3. Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity

    NASA Astrophysics Data System (ADS)

    Miah, Md Mamun

    This research describes the importance of hydro-geomechanical coupling in the geologic subsurface environment arising from fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking the flow simulation code TOUGH2 and the geomechanics modeling code PyLith. The numerical formulation of each code is discussed to demonstrate their modeling capabilities. The computational framework involves sequential coupling and the solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slip. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The modeling capability for coupled poroelasticity is validated by benchmarking against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault under the combined effect of far-field tectonic loading and fluid injection, using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation. This is attributed to the increased total stress in the domain and to not accounting for pressure on the fault. However, this issue is resolved in the final chapter in simulating a single-event earthquake dynamic rupture. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation. This is confirmed by a sensitivity analysis showing that an increase in injection well distance results in delayed slip nucleation and rupture propagation on the fault.
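
    A schematic sketch of the sequential (loosely coupled) loop described above, with placeholder flow_step and geomechanics_step functions standing in for TOUGH2 and PyLith; only the structure of the loop follows the abstract, everything else is hypothetical.

        def flow_step(state, dt):
            """Placeholder for a TOUGH2-like flow solve; returns an updated pressure field."""
            state["pressure"] = [p + 0.1 * dt for p in state["pressure"]]  # toy pressurization
            return state["pressure"]

        def geomechanics_step(pressure, dt):
            """Placeholder for a PyLith-like solve: effective stress and fault slip from the pressure field."""
            effective_stress = [s - p for s, p in zip([10.0] * len(pressure), pressure)]  # sigma' = sigma - p (toy)
            fault_slip = max(0.0, 0.01 * (max(pressure) - 5.0))  # toy slip criterion
            return effective_stress, fault_slip

        state = {"pressure": [1.0, 1.2, 0.9]}
        dt, t_end, t = 1.0, 5.0, 0.0
        while t < t_end:
            pressure = flow_step(state, dt)                 # flow sub-problem
            stress, slip = geomechanics_step(pressure, dt)  # geomechanics sub-problem, one-way pass of pressure
            t += dt
        print(stress, slip)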

  4. Fundamental period of Italian reinforced concrete buildings: comparison between numerical, experimental and Italian code simplified values

    NASA Astrophysics Data System (ADS)

    Ditommaso, Rocco; Carlo Ponzo, Felice; Auletta, Gianluca; Iacovino, Chiara; Nigro, Antonella

    2015-04-01

    The aim of this study is a comparison among the fundamental periods of reinforced concrete buildings evaluated using the simplified approach proposed by the Italian seismic code (NTC 2008), numerical models, and real values retrieved from an experimental campaign performed on several buildings located in the Basilicata region (Italy). With the intention of proposing simplified relationships to evaluate the fundamental period of reinforced concrete buildings, scientists and engineers have performed several numerical and experimental campaigns, on different structures all around the world, to calibrate different kinds of formulas. Most of the formulas retrieved from both numerical and experimental analyses provide vibration periods smaller than those suggested by the Italian seismic code. However, it is well known that the fundamental period of a structure plays a key role in the correct evaluation of the spectral acceleration for seismic static analyses. Generally, simplified approaches impose the use of safety factors greater than those related to in-depth nonlinear analyses, with the aim of covering possible unexpected uncertainties. Using the simplified formula proposed by the Italian seismic code, the fundamental period is considerably higher than the fundamental periods experimentally evaluated on real structures, with the consequence that the spectral acceleration adopted in the seismic static analysis may be significantly different from the real spectral acceleration. This approach could produce a decrease in the safety factors obtained using linear and nonlinear seismic static analyses. Finally, the authors suggest a possible update of the Italian seismic code formula for the simplified estimation of the fundamental period of vibration of existing RC buildings, taking into account both elastic and inelastic structural behaviour and the interaction between structural and non-structural elements. Acknowledgements: This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2014 - RS4 ''Seismic observatory of structures and health monitoring''. References: R. Ditommaso, M. Vona, M. R. Gallipoli and M. Mucciarelli (2013). Evaluation and considerations about fundamental periods of damaged reinforced concrete buildings. Nat. Hazards Earth Syst. Sci., 13, 1903-1912, 2013. www.nat-hazards-earth-syst-sci.net/13/1903/2013. doi:10.5194/nhess-13-1903-2013
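
    For orientation, the simplified code estimate discussed above has the familiar form T1 = C1*H^(3/4); the coefficient C1 = 0.075 for RC frame buildings is the value commonly associated with NTC 2008/Eurocode 8 and is quoted here from memory rather than from the paper.

        def simplified_fundamental_period(height_m: float, c1: float = 0.075) -> float:
            """Code-type simplified fundamental period T1 = C1 * H**0.75 (seconds)."""
            return c1 * height_m ** 0.75

        # Illustrative 5-storey RC building, roughly 15 m tall.
        print(simplified_fundamental_period(15.0))  # ~0.57 s; measured periods are typically shorter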

  5. Effect of URM infills on seismic vulnerability of Indian code designed RC frame buildings

    NASA Astrophysics Data System (ADS)

    Haldar, Putul; Singh, Yogendra; Paul, D. K.

    2012-03-01

    Unreinforced Masonry (URM) is the most common partitioning material in framed buildings in India and many other countries. Although it is well-known that under lateral loading the behavior and modes of failure of the frame buildings change significantly due to infill-frame interaction, the general design practice is to treat infills as nonstructural elements and their stiffness, strength and interaction with the frame is often ignored, primarily because of difficulties in simulation and lack of modeling guidelines in design codes. The Indian Standard, like many other national codes, does not provide explicit insight into the anticipated performance and associated vulnerability of infilled frames. This paper presents an analytical study on the seismic performance and fragility analysis of Indian code-designed RC frame buildings with and without URM infills. Infills are modeled as diagonal struts as per ASCE 41 guidelines and various modes of failure are considered. HAZUS methodology along with nonlinear static analysis is used to compare the seismic vulnerability of bare and infilled frames. The comparative study suggests that URM infills result in a significant increase in the seismic vulnerability of RC frames and their effect needs to be properly incorporated in design codes.
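
    As a hedged illustration of the strut idealization mentioned above, the Mainstone-type equivalent strut width relation commonly attributed to FEMA 356/ASCE 41 is sketched below; the formula is reproduced from memory and the input values are invented, so both should be checked against the standard before use.

        import math

        def equivalent_strut_width(e_frame, i_col, h_col, e_infill, t_infill, h_infill, theta_rad, r_infill):
            """Mainstone-type equivalent diagonal strut width (FEMA 356 / ASCE 41 style relation, from memory):
            lambda1 = [E_me * t_inf * sin(2*theta) / (4 * E_fe * I_col * h_inf)]**0.25
            a = 0.175 * (lambda1 * h_col)**(-0.4) * r_inf
            """
            lam1 = (e_infill * t_infill * math.sin(2.0 * theta_rad) /
                    (4.0 * e_frame * i_col * h_infill)) ** 0.25
            return 0.175 * (lam1 * h_col) ** (-0.4) * r_infill

        # Hypothetical inputs in kN and m: frame E = 25e6 kN/m^2, I_col = 2.1e-3 m^4, storey 3 m,
        # infill E = 5e6 kN/m^2, thickness 0.23 m, clear height 2.7 m, clear length 4 m.
        print(equivalent_strut_width(25.0e6, 2.1e-3, 3.0, 5.0e6, 0.23, 2.7,
                                     math.atan(2.7 / 4.0), math.hypot(2.7, 4.0)))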

  6. Beam-dynamics codes used at DARHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Jr., Carl August

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  7. Benchmark of FDNS CFD Code For Direct Connect RBCC Test Data

    NASA Technical Reports Server (NTRS)

    Ruf, J. H.

    2000-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with experimental data from the Pennsylvania State University's (PSU) Propulsion Engineering Research Center (PERC) rocket based combined cycle (RBCC) rocket-ejector experiments. The PERC RBCC experimental hardware was in a direct-connect configuration in diffusion and afterburning (DAB) operation. The objective of the present work was to validate the Finite Difference Navier Stokes (FDNS) CFD code for the rocket-ejector mode internal fluid mechanics and combustion phenomena. A second objective was determine the best application procedures to use FDNS as a predictive/engineering tool. Three-dimensional CFD analysis was performed. Solution methodology and grid requirements are discussed. CFD results are compared to experimental data for static pressure, Raman Spectroscopy species distribution data and RBCC net thrust and specified impulse.

  8. A method for the design of transonic flexible wings

    NASA Technical Reports Server (NTRS)

    Smith, Leigh Ann; Campbell, Richard L.

    1990-01-01

    Methodology was developed for designing airfoils and wings at transonic speeds which includes a technique that can account for static aeroelastic deflections. This procedure is capable of designing either supercritical or more conventional airfoil sections. Methods for including viscous effects are also illustrated and are shown to give accurate results. The methodology developed is an interactive system containing three major parts. A design module was developed which modifies airfoil sections to achieve a desired pressure distribution. This design module works in conjunction with an aerodynamic analysis module, which for this study is a small perturbation transonic flow code. Additionally, an aeroelastic module is included which determines the wing deformation due to the calculated aerodynamic loads. Because of the modular nature of the method, it can be easily coupled with any aerodynamic analysis code.

  9. Mining Program Source Code for Improving Software Quality

    DTIC Science & Technology

    2013-01-01

    The project conducts static verification on the software application under analysis to detect defects around APIs. Papers published with ARO support include work by Tao Xie, Suresh Thummalapenta, and David Lo.

  10. Probabilistic evaluation of uncertainties and risks in aerospace components

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.

    1992-01-01

    A methodology is presented for the computational simulation of primitive variable uncertainties, and attention is given to the simulation of specific aerospace components. Specific examples treated encompass a probabilistic material behavior model, as well as static, dynamic, and fatigue/damage analyses of a turbine blade in a mistuned bladed rotor in the SSME turbopumps. An account is given of the use of the NESSUS probabilistic finite element analysis code.

  11. International Aerospace and Ground Conference on Lightning and Static Electricity (15th) Held in Atlantic City, New Jersey on October 6 - 8, 1992. Addendum

    DTIC Science & Technology

    1992-11-01

    November 1992. Addendum to the 1992 International Aerospace and Ground Conference on Lightning and Static Electricity, held October 6-8, 1992, and sponsored by the Federal Aviation Administration Technical Center (ACD-230). From the addendum: the program runs well on an IBM PC or compatible 386 with a 387 math co-processor and a VGA monitor; for this study, streamers were added.

  12. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.

  13. Precise and Scalable Static Program Analysis of NASA Flight Software

    NASA Technical Reports Server (NTRS)

    Brat, G.; Venet, A.

    2005-01-01

    Recent NASA mission failures (e.g., Mars Polar Lander and Mars Orbiter) illustrate the importance of having an efficient verification and validation process for such systems. One software error, as simple as it may be, can cause the loss of an expensive mission, or lead to budget overruns and crunched schedules. Unfortunately, traditional verification methods cannot guarantee the absence of errors in software systems. Therefore, we have developed the CGS static program analysis tool, which can exhaustively analyze large C programs. CGS analyzes the source code and identifies statements in which arrays are accessed out of bounds, or, pointers are used outside the memory region they should address. This paper gives a high-level description of CGS and its theoretical foundations. It also reports on the use of CGS on real NASA software systems used in Mars missions (from Mars PathFinder to Mars Exploration Rover) and on the International Space Station.
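
    CGS itself performs abstract interpretation over large C programs; the toy Python sketch below only illustrates the underlying idea (interval analysis used to classify an array access as safe, possibly unsafe, or definitely out of bounds) and is not the CGS algorithm.

        from dataclasses import dataclass

        @dataclass
        class Interval:
            lo: int
            hi: int
            def __add__(self, other: "Interval") -> "Interval":
                return Interval(self.lo + other.lo, self.hi + other.hi)

        def check_access(index: Interval, array_length: int) -> str:
            """Report whether every concrete index in the interval is a legal array access."""
            if 0 <= index.lo and index.hi < array_length:
                return "safe"
            if index.hi < 0 or index.lo >= array_length:
                return "definitely out of bounds"
            return "possibly out of bounds"

        # Abstract value of `i` inside `for (i = 0; i < 10; i++)`: [0, 9].
        i = Interval(0, 9)
        print(check_access(i, 10))                    # safe
        print(check_access(i + Interval(1, 1), 10))   # possibly out of bounds: a[i+1] can reach a[10]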

  14. Calculating Reuse Distance from Source Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, Sri Hari Krishna; Hovland, Paul

    The efficient use of a system is of paramount importance in high-performance computing. Applications need to be engineered for future systems even before the architecture of such a system is clearly known. Static performance analysis that generates performance bounds is one way to approach the task of understanding application behavior. Performance bounds provide an upper limit on the performance of an application on a given architecture. Predicting cache hierarchy behavior and accesses to main memory is a requirement for accurate performance bounds. This work presents our static reuse distance algorithm to generate reuse distance histograms. We then use these histograms to predict cache miss rates. Experimental results for kernels studied show that the approach is accurate.
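
    The paper derives reuse distances statically; purely to illustrate the metric itself, the sketch below computes reuse (stack) distances from a concrete address trace, which is the dynamic counterpart of the histograms described above.

        from collections import OrderedDict, Counter

        def reuse_distance_histogram(trace):
            """Reuse (stack) distance per access: the number of distinct addresses touched since the
            previous access to the same address; float('inf') marks cold misses."""
            lru = OrderedDict()          # most-recently-used address is last
            histogram = Counter()
            for addr in trace:
                if addr in lru:
                    # Distinct addresses used since the last touch of `addr`.
                    distance = len(lru) - 1 - list(lru).index(addr)
                    del lru[addr]
                else:
                    distance = float("inf")
                histogram[distance] += 1
                lru[addr] = True
            return histogram

        print(reuse_distance_histogram(["a", "b", "c", "a", "b", "b"]))
        # {inf: 3, 2: 2, 0: 1} -> with an LRU cache of 3 or more lines, only the cold misses remain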

  15. New Tool Released for Engine-Airframe Blade-Out Structural Simulations

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles

    2004-01-01

    Researchers at the NASA Glenn Research Center have enhanced a general-purpose finite element code, NASTRAN, for engine-airframe structural simulations during steady-state and transient operating conditions. For steady-state simulations, the code can predict critical operating speeds, natural modes of vibration, and forced response (e.g., cabin noise and component fatigue). The code can be used to perform static analysis to predict engine-airframe response and component stresses due to maneuver loads. For transient response, the simulation code can be used to predict response due to bladeoff events and subsequent engine shutdown and windmilling conditions. In addition, the code can be used as a pretest analysis tool to predict the results of the bladeout test required for FAA certification of new and derivative aircraft engines. Before the present analysis code was developed, all the major aircraft engine and airframe manufacturers in the United States and overseas were performing similar types of analyses to ensure the structural integrity of engine-airframe systems. Although there were many similarities among the analysis procedures, each manufacturer was developing and maintaining its own structural analysis capabilities independently. This situation led to high software development and maintenance costs, complications with manufacturers exchanging models and results, and limitations in predicting the structural response to the desired degree of accuracy. An industry-NASA team was formed to overcome these problems by developing a common analysis tool that would satisfy all the structural analysis needs of the industry and that would be available and supported by a commercial software vendor so that the team members would be relieved of maintenance and development responsibilities. Input from all the team members was used to ensure that everyone's requirements were satisfied and that the best technology was incorporated into the code. Furthermore, because the code would be distributed by a commercial software vendor, it would be more readily available to engine and airframe manufacturers, as well as to nonaircraft companies that did not previously have access to this capability.

  16. Modeling of a Turbofan Engine with Ice Crystal Ingestion in the NASA Propulsion System Laboratory

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Jorgenson, Philip C. E.; Jones, Scott M.; Nili, Samaun

    2017-01-01

    The main focus of this study is to apply a computational tool for the flow analysis of the turbine engine that has been tested with ice crystal ingestion in the Propulsion Systems Laboratory (PSL) at NASA Glenn Research Center. The PSL has been used to test a highly instrumented Honeywell ALF502R-5A (LF11) turbofan engine at simulated altitude operating conditions. Test data analysis with an engine cycle code and a compressor flow code was conducted to determine the values of key icing parameters, that can indicate the risk of ice accretion, which can lead to engine rollback (un-commanded loss of engine thrust). The full engine aerothermodynamic performance was modeled with the Honeywell Customer Deck specifically created for the ALF502R-5A engine. The mean-line compressor flow analysis code, which includes a code that models the state of the ice crystal, was used to model the air flow through the fan-core and low pressure compressor. The results of the compressor flow analyses included calculations of the ice-water flow rate to air flow rate ratio (IWAR), the local static wet bulb temperature, and the particle melt ratio throughout the flow field. It was found that the assumed particle size had a large effect on the particle melt ratio, and on the local wet bulb temperature. In this study the particle size was varied parametrically to produce a non-zero calculated melt ratio in the exit guide vane (EGV) region of the low pressure compressor (LPC) for the data points that experienced a growth of blockage there, and a subsequent engine called rollback (CRB). At data points where the engine experienced a CRB having the lowest wet bulb temperature of 492 degrees Rankine at the EGV trailing edge, the smallest particle size that produced a non-zero melt ratio (between 3 percent - 4 percent) was on the order of 1 micron. This value of melt ratio was utilized as the target for all other subsequent data points analyzed, while the particle size was varied from 1 micron - 9.5 microns to achieve the target melt ratio. For data points that did not experience a CRB which had static wet bulb temperatures in the EGV region below 492 degrees Rankine, a non-zero melt ratio could not be achieved even with a 1 micron ice particle size. The highest value of static wet bulb temperature for data points that experienced engine CRB was 498 degrees Rankine with a particle size of 9.5 microns. Based on this study of the LF11 engine test data, the range of static wet bulb temperature at the EGV exit for engine CRB was in the narrow range of 492 degrees Rankine - 498 degrees Rankine , while the minimum value of IWAR was 0.002. The rate of blockage growth due to ice accretion and boundary layer growth was estimated by scaling from a known blockage growth rate that was determined in a previous study. These results obtained from the LF11 engine analysis formed the basis of a unique “icing wedge.”

  17. Finite element modelling of crash response of composite aerospace sub-floor structures

    NASA Astrophysics Data System (ADS)

    McCarthy, M. A.; Harte, C. G.; Wiggenraad, J. F. M.; Michielsen, A. L. P. J.; Kohlgrüber, D.; Kamoulakos, A.

    Composite energy-absorbing structures for use in aircraft are being studied within a European Commission research programme (CRASURV - Design for Crash Survivability). One of the aims of the project is to evaluate the current capabilities of crashworthiness simulation codes for composites modelling. This paper focuses on the computational analysis using explicit finite element analysis, of a number of quasi-static and dynamic tests carried out within the programme. It describes the design of the structures, the analysis techniques used, and the results of the analyses in comparison to the experimental test results. It has been found that current multi-ply shell models are capable of modelling the main energy-absorbing processes at work in such structures. However some deficiencies exist, particularly in modelling fabric composites. Developments within the finite element code are taking place as a result of this work which will enable better representation of composite fabrics.

  18. A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry

    DTIC Science & Technology

    2014-05-29

    The testbed includes a set of modulation waveforms, LDPC for the FEC codes, and several sets of published telemetry channel sounding data as its channel models. Low-density parity-check (LDPC) codes with tunable code rates are supported, and both static and dynamic telemetry channel models are included.

  19. Automated verification of flight software. User's manual

    NASA Technical Reports Server (NTRS)

    Saib, S. H.

    1982-01-01

    AVFS (Automated Verification of Flight Software), a collection of tools for analyzing source programs written in FORTRAN and AED, is documented. The quality and reliability of flight software are improved by: (1) indented listings of source programs, (2) static analysis to detect inconsistencies in the use of variables and parameters, (3) automated documentation, (4) instrumentation of source code, (5) retesting guidance, (6) analysis of assertions, (7) symbolic execution, (8) generation of verification conditions, and (9) simplification of verification conditions. Use of AVFS in the verification of flight software is described.

  20. REDIR: Automated Static Detection of Obfuscated Anti-Debugging Techniques

    DTIC Science & Technology

    2014-03-27

    The method supports analyzing code samples that resist other forms of analysis, is resistant to common obfuscation techniques, and uses the Data/Frame sensemaking theory to guide the detection process.

  1. Theory-Driven Models for Correcting Fight or Flight Imbalance in Gulf War Illness

    DTIC Science & Technology

    2011-09-01

    The work included static and dynamic analysis of safety code. The project proposes that severe physical or psychological insult to the endocrine and immune systems can displace them from a normal regulatory equilibrium, and it aims to use the dynamics of these systems to reset control of the HPA-immune axis to normal. Negotiation of sub-awards to the CFIDS Association has been completed.

  2. Investigation of the Finite Element Software Packages at KSC

    NASA Technical Reports Server (NTRS)

    Lu, Chu-Ho

    1991-01-01

    The useful and powerful features of NASTRAN and three real world problems for the testing of the capabilities of different NASTRAN versions are discussed. The test problems involve direct transient analysis, nonlinear analysis, and static analysis. The experiences in using graphics software packages are also discussed. It was found that MSC/XL can be more useful if it can be improved to generate picture files of the analysis results and to extend its capabilities to support finite element codes other than MSC/NASTRAN. It was found that the current version of SDRC/I-DEAS (version VI) may have bugs in the module 'Data Loader'.

  3. Combining Static Model Checking with Dynamic Enforcement Using the Statecall Policy Language

    NASA Astrophysics Data System (ADS)

    Madhavapeddy, Anil

    Internet protocols encapsulate a significant amount of state, making implementing the host software complex. In this paper, we define the Statecall Policy Language (SPL) which provides a usable middle ground between ad-hoc coding and formal reasoning. It enables programmers to embed automata in their code which can be statically model-checked using SPIN and dynamically enforced. The performance overheads are minimal, and the automata also provide higher-level debugging capabilities. We also describe some practical uses of SPL by describing the automata used in an SSH server written entirely in OCaml/SPL.
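
    SPL compiles embedded automata both to SPIN models and to runtime monitors; as a language-neutral illustration of the dynamic-enforcement half (with hypothetical states and events, not the SSH automaton from the paper), a monitor can simply reject any transition that is not in the automaton's table.

        class ProtocolViolation(Exception):
            pass

        class Monitor:
            """Tiny runtime monitor: allowed (state, event) -> next_state transitions; everything else is rejected."""
            def __init__(self, initial, transitions):
                self.state = initial
                self.transitions = transitions
            def fire(self, event):
                key = (self.state, event)
                if key not in self.transitions:
                    raise ProtocolViolation(f"event {event!r} not allowed in state {self.state!r}")
                self.state = self.transitions[key]

        # Hypothetical handshake automaton: hello must precede auth, auth must precede data.
        m = Monitor("start", {("start", "hello"): "greeted",
                              ("greeted", "auth"): "authed",
                              ("authed", "data"): "authed"})
        m.fire("hello"); m.fire("auth"); m.fire("data")   # accepted
        m.fire("data")                                    # still accepted (self-loop)
        # m.fire("hello")  # would raise ProtocolViolation: 'hello' is not allowed in state 'authed'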

  4. A generic interface between COSMIC/NASTRAN and PATRAN (R)

    NASA Technical Reports Server (NTRS)

    Roschke, Paul N.; Premthamkorn, Prakit; Maxwell, James C.

    1990-01-01

    Despite its powerful analytical capabilities, COSMIC/NASTRAN lacks adequate post-processing adroitness. PATRAN, on the other hand is widely accepted for its graphical capabilities. A nonproprietary, public domain code mnemonically titled CPI (for COSMIC/NASTRAN-PATRAN Interface) is designed to manipulate a large number of files rapidly and efficiently between the two parent codes. In addition to PATRAN's results file preparation, CPI also prepares PATRAN's P/PLOT data files for xy plotting. The user is prompted for necessary information during an interactive session. Current implementation supports NASTRAN's displacement approach including the following rigid formats: (1) static analysis, (2) normal modal analysis, (3) direct transient response, and (4) modal transient response. A wide variety of data blocks are also supported. Error trapping is given special consideration. A sample session with CPI illustrates its simplicity and ease of use.

  5. Results of comparative RBMK neutron computation using VNIIEF codes (cell computation, 3D statics, 3D kinetics). Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.

    1995-12-31

    In conformity with the protocol of the Workshop under Contract “Assessment of RBMK reactor safety using modern Western Codes”, VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether VNIIEF codes are suitable for RBMK-type reactor safety assessment computations. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), covering cell, polycell, and burnup computations; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computations with the NESTLE code (USA), performed in the geometry and using the neutron constants presented by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These kinetics computations were performed in two formulations, both developed in collaboration with NIKIET. The first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole with imitation of control and protection system (CPS) controls movement in a core.

  6. A pedagogical walkthrough of computational modeling and simulation of Wnt signaling pathway using static causal models in MATLAB.

    PubMed

    Sinha, Shriprakash

    2016-12-01

    Simulation study in systems biology involving computational experiments dealing with Wnt signaling pathways abound in literature but often lack a pedagogical perspective that might ease the understanding of beginner students and researchers in transition, who intend to work on the modeling of the pathway. This paucity might happen due to restrictive business policies which enforce an unwanted embargo on the sharing of important scientific knowledge. A tutorial introduction to computational modeling of Wnt signaling pathway in a human colorectal cancer dataset using static Bayesian network models is provided. The walkthrough might aid biologists/informaticians in understanding the design of computational experiments that is interleaved with exposition of the Matlab code and causal models from Bayesian network toolbox. The manuscript elucidates the coding contents of the advance article by Sinha (Integr. Biol. 6:1034-1048, 2014) and takes the reader in a step-by-step process of how (a) the collection and the transformation of the available biological information from literature is done, (b) the integration of the heterogeneous data and prior biological knowledge in the network is achieved, (c) the simulation study is designed, (d) the hypothesis regarding a biological phenomena is transformed into computational framework, and (e) results and inferences drawn using d-connectivity/separability are reported. The manuscript finally ends with a programming assignment to help the readers get hands-on experience of a perturbation project. Description of Matlab files is made available under GNU GPL v3 license at the Google code project on https://code.google.com/p/static-bn-for-wnt-signaling-pathway and https://sites.google.com/site/shriprakashsinha/shriprakashsinha/projects/static-bn-for-wnt-signaling-pathway. Latest updates can be found in the latter website.

  7. Using the ALEGRA Code for Analysis of Quasi-Static Magnetization of Metals

    DTIC Science & Technology

    2015-09-01

    The formulation uses the covariant Levi-Civita skew-symmetric tensor; tensorial notation permits presenting all the equations in universal covariant (i.e., coordinate-independent) form, and the components of the metric tensors numerically coincide with the corresponding values of the Kronecker symbol δij. Two mesh configurations were used in the simulations: (1) a body-fitted irregular mesh and (2) a regular mesh.

  8. Hardware-Independent Proofs of Numerical Programs

    NASA Technical Reports Server (NTRS)

    Boldo, Sylvie; Nguyen, Thi Minh Tuyen

    2010-01-01

    On recent architectures, a numerical program may give different answers depending on the execution hardware and the compilation. Our goal is to formally prove properties about numerical programs that are true for multiple architectures and compilers. We propose an approach that states the rounding error of each floating-point computation whatever the environment. This approach is implemented in the Frama-C platform for static analysis of C code. Small case studies using this approach are entirely and automatically proved.

  9. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    NASA Technical Reports Server (NTRS)

    Ray, Baishakhi; Kim, Miryung; Person, Suzette J.; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.

  10. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    NASA Technical Reports Server (NTRS)

    Ray, Baishakhi; Kim, Miryung; Person,Suzette; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.

  11. New coding advances for deep space communications

    NASA Technical Reports Server (NTRS)

    Yuen, Joseph H.

    1987-01-01

    Advances made in error-correction coding for deep space communications are described. The code believed to be the best is a (15, 1/6) convolutional code, with maximum likelihood decoding; when it is concatenated with a 10-bit Reed-Solomon code, it achieves a bit error rate of 10 to the -6th, at a bit SNR of 0.42 dB. This code outperforms the Voyager code by 2.11 dB. The use of source statistics in decoding convolutionally encoded Voyager images from the Uranus encounter is investigated, and it is found that a 2 dB decoding gain can be achieved.

  12. Aeroelastic Ground Wind Loads Analysis Tool for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Ivanco, Thomas G.

    2016-01-01

    Launch vehicles are exposed to ground winds during rollout and on the launch pad that can induce static and dynamic loads. Of particular concern are the dynamic loads caused by vortex shedding from nearly-cylindrical structures. When the frequency of vortex shedding nears that of a lightly damped structural mode, the dynamic loads can be more than an order of magnitude greater than mean drag loads. Accurately predicting vehicle response to vortex shedding during the design and analysis cycles is difficult and typically exceeds the practical capabilities of modern computational fluid dynamics codes. Therefore, mitigating the ground wind loads risk typically requires wind-tunnel tests of dynamically-scaled models that are time-consuming and expensive to conduct. In recent years, NASA has developed a ground wind loads analysis tool for launch vehicles to fill this analytical capability gap in order to provide predictions for prelaunch static and dynamic loads. This paper includes a background of the ground wind loads problem and the current state-of-the-art. It then discusses the history and significance of the analysis tool and the methodology used to develop it. Finally, results of the analysis tool are compared to wind-tunnel and full-scale data of various geometries and Reynolds numbers.
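
    The lock-in concern described above can be illustrated with the standard Strouhal relation f = St*U/D, with St of roughly 0.2 for circular cylinders; the structural frequency and dimensions below are made-up numbers, not values from the analysis tool.

        def shedding_frequency(wind_speed_mps: float, diameter_m: float, strouhal: float = 0.2) -> float:
            """Vortex shedding frequency (Hz) from the Strouhal relation f = St * U / D."""
            return strouhal * wind_speed_mps / diameter_m

        # Hypothetical vehicle: 5 m diameter, first bending mode at 0.45 Hz.
        mode_hz = 0.45
        for u in (5.0, 10.0, 15.0):
            f = shedding_frequency(u, 5.0)
            print(f"U = {u:4.1f} m/s  f_shed = {f:.2f} Hz  near lock-in: {abs(f - mode_hz) / mode_hz < 0.15}")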

  13. KEGGParser: parsing and editing KEGG pathway maps in Matlab.

    PubMed

    Arakelyan, Arsen; Nersisyan, Lilit

    2013-02-15

    KEGG pathway database is a collection of manually drawn pathway maps accompanied with KGML format files intended for use in automatic analysis. KGML files, however, do not contain the required information for complete reproduction of all the events indicated in the static image of a pathway map. Several parsers and editors of KEGG pathways exist for processing KGML files. We introduce KEGGParser-a MATLAB based tool for KEGG pathway parsing, semiautomatic fixing, editing, visualization and analysis in MATLAB environment. It also works with Scilab. The source code is available at http://www.mathworks.com/matlabcentral/fileexchange/37561.

  14. Program Helps To Determine Chemical-Reaction Mechanisms

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.; Radhakrishnan, K.

    1995-01-01

    General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase, chemical-kinetics problems. Provides for efficient and accurate chemical-kinetics computations and provides for sensitivity analysis for variety of problems, including problems involving nonisothermal conditions. Incorporates mathematical models for static system, steady one-dimensional inviscid flow, reaction behind incident shock wave (with boundary-layer correction), and perfectly stirred reactor. Computations of equilibrium properties performed for following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with exception of NAMELIST extensions used for input.

  15. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

    New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for static loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC-NASTRAN, Version 67.5.

  16. Modeling the Fluid Withdraw and Injection Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Meng, C.

    2016-12-01

    We present an open source numerical code, Defmod, that allows one to model induced seismicity in an efficient and standalone manner. Fluid-withdrawal- and injection-induced earthquakes have been a great concern to industries including oil/gas, wastewater disposal, and CO2 sequestration, and the ability to numerically model induced seismicity has long been desired. To do so, one has to consider at least two processes: a steady process that describes the inducing and aseismic stages before and between the seismic events, and an abrupt process that describes the dynamic fault rupture accompanied by seismic energy radiation during the events. The steady process can be adequately modeled by a quasi-static model, while the abrupt process has to be modeled by a dynamic model. In most published modeling works, only one of these processes is considered: geomechanicists and reservoir engineers focus more on the quasi-static modeling, whereas geophysicists and seismologists focus more on the dynamic modeling. The finite element code Defmod combines these two models into a hybrid model that uses a failure criterion and frictional laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loading, e.g. due to reservoir fluid withdrawal and/or injection, and by dynamic loading, e.g. due to foregoing earthquakes. We demonstrate a case study of the 2013 Azle earthquake.
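
    A conceptual sketch of the adaptive switching described for Defmod; the Coulomb criterion, state variables, and thresholds below are illustrative assumptions, not the Defmod implementation.

        def coulomb_failure(shear_stress, normal_stress, pore_pressure, friction=0.6, cohesion=0.0):
            """Coulomb criterion with effective normal stress: tau >= c + mu * (sigma_n - p)."""
            return shear_stress >= cohesion + friction * (normal_stress - pore_pressure)

        def advance(state, dt_static=86400.0, dt_dynamic=0.01):
            """One outer step: stay quasi-static until the fault reaches failure, then take dynamic sub-steps."""
            if not coulomb_failure(state["tau"], state["sigma_n"], state["p"]):
                state["p"] += 0.05 * dt_static / 86400.0     # toy injection-driven pressurization (MPa/day)
                return "quasi-static", dt_static
            state["tau"] *= 0.8                               # toy stress drop during rupture
            return "dynamic", dt_dynamic

        state = {"tau": 28.0, "sigma_n": 50.0, "p": 0.0}      # MPa, hypothetical
        for _ in range(100):
            regime, dt = advance(state)
            if regime == "dynamic":
                print("rupture nucleates at p =", round(state["p"], 2), "MPa")
                break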

  17. Analysis of an unswept propfan blade with a semiempirical dynamic stall model

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Kaza, K. R. V.

    1989-01-01

    The time history response of a propfan wind tunnel model with dynamic stall is studied analytically. The response obtained from the analysis is compared with available experimental data. The governing equations of motion are formulated in terms of blade normal modes which are calculated using the COSMIC-NASTRAN computer code. The response analysis considered the blade plunging and pitching motions. The lift, drag and moment coefficients for angles of attack below the static stall angle are obtained from a quasi-steady theory. For angles above static stall angles, a semiempirical dynamic stall model based on a correction to angle of attack is used to obtain lift, drag and moment coefficients. Using these coefficients, the aerodynamic forces are calculated at a selected number of strips, and integrated to obtain the total generalized forces. The combined momentum-blade element theory is used to calculate the induced velocity. The semiempirical stall model predicted a limit cycle oscillation near the setting angle at which large vibratory stresses were observed in an experiment. The predicted mode and frequency of oscillation also agreed with those measured in the experiment near the setting angle.

  18. Sensitivity analysis of a wing aeroelastic response

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.

    1991-01-01

    A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
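
    A small numerical illustration of the Global Sensitivity Equations idea (not the FAST/ELAPS coupling): for two coupled responses Y1(X, Y2) and Y2(X, Y1), the total derivatives follow from the local partials by solving a small linear system; the partial-derivative values below are arbitrary.

        import numpy as np

        # Coupled toy system: y1 = f1(x, y2), y2 = f2(x, y1).
        # Local (partial) sensitivities, e.g. from analytic differentiation or finite differences.
        df1_dx, df1_dy2 = 2.0, 0.3      # illustrative values
        df2_dx, df2_dy1 = -1.5, 0.5

        # GSE: [[1, -df1_dy2], [-df2_dy1, 1]] @ [dy1/dx, dy2/dx]^T = [df1_dx, df2_dx]^T
        A = np.array([[1.0, -df1_dy2],
                      [-df2_dy1, 1.0]])
        b = np.array([df1_dx, df2_dx])
        dy_dx = np.linalg.solve(A, b)
        print("dy1/dx =", dy_dx[0], " dy2/dx =", dy_dx[1])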

  19. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  20. Limit analysis, rammed earth material and Casagrande test

    NASA Astrophysics Data System (ADS)

    El-Nabouch, Ranime; Pastor, Joseph; Bui, Quoc-Bao; Plé, Olivier

    2018-02-01

    The present paper is concerned with the simulation of the Casagrande test carried out on a rammed earth material for wall-type structures in the framework of Limit Analysis (LA). In a preliminary study, the material is considered as a homogeneous Coulomb material, and existing LA static and kinematic codes are used for the simulation of the test. In each loading case, static and kinematic bounds coincide; the corresponding exact solution is a two-rigid-block mechanism together with a quasi-constant stress vector and a velocity jump also constant along the interface, for the three loading cases. In a second study, to take into account the influence of compressive loadings related to the porosity of the material, an elliptic criterion (denoted Cohesive Cam-Clay, CCC) is defined based on recent homogenization results about the hollow sphere model for porous Coulomb materials. Finally, original finite element formulations of the static and mixed kinematic methods for the CCC material are developed and applied to the Casagrande test. The results are the same as above, except that this time the velocity jump depends on the compressive loading, which is more realistic but does not fully satisfy the experimental observations. Therefore, the possible extensions of this work towards non-standard direct methods are analyzed in the conclusion section.

  1. Internal Flow Analysis of Large L/D Solid Rocket Motors

    NASA Technical Reports Server (NTRS)

    Laubacher, Brian A.

    2000-01-01

Traditionally, Solid Rocket Motor (SRM) internal ballistic performance has been analyzed and predicted with either zero-dimensional (volume filling) codes or one-dimensional ballistics codes. One dimensional simulation of SRM performance is only necessary for ignition modeling, or for motors that have large length to port diameter ratios which exhibit an axial "pressure drop" during the early burn times. This type of prediction works quite well for many types of motors; however, when motor aspect ratios get large, and port to throat ratios get closer to one, two dimensional effects can become significant. The initial propellant grain configuration for the Space Shuttle Reusable Solid Rocket Motor (RSRM) was analyzed with 2-D, steady, axi-symmetric computational fluid dynamics (CFD). The results of the CFD analysis show that the steady-state performance prediction at the initial burn geometry, in general, agrees well with 1-D transient prediction results at an early time; however, significant features of the 2-D flow are captured with the CFD results that would otherwise go unnoticed. Capturing these subtle differences gives a greater confidence to modeling accuracy, and additional insight with which to model secondary internal flow effects like erosive burning. Detailed analysis of the 2-D flowfield has led to the discovery of its hidden 1-D isentropic behavior, and provided the means for a thorough and simplified understanding of internal solid rocket motor flow. Performance parameters such as nozzle stagnation pressure, static pressure drop, characteristic velocity, thrust and specific impulse are discussed in detail and compared for different modeling and prediction methods. The performance predicted using both the 1-D codes and the CFD results is compared with measured data obtained from static tests of the RSRM. The differences and limitations of predictions using 1-D and 2-D flow fields are discussed, and some suggestions for the design of large L/D motors and, more critically, motors with port to throat ratios near one, are covered.

  2. Analysis of supersonic plug nozzle flowfield and heat transfer

    NASA Technical Reports Server (NTRS)

    Murthy, S. N. B.; Sheu, W. H.

    1988-01-01

    A number of problems pertaining to the flowfield in a plug nozzle, designed as a supersonic thruster nozzle, with provision for cooling the plug with a coolant stream admitted parallel to the plug wall surface, were studied. First, an analysis was performed of the inviscid, nonturbulent, gas dynamic interaction between the primary hot stream and the secondary coolant stream. A numerical prediction code for establishing the resulting flowfield with a dividing surface between the two streams, for various combinations of stagnation and static properties of the two streams, was utilized for illustrating the nature of interactions. Secondly, skin friction coefficient, heat transfer coefficient and heat flux to the plug wall were analyzed under smooth flow conditions (without shocks or separation) for various coolant flow conditions. A numerical code was suitably modified and utilized for the determination of heat transfer parameters in a number of cases for which data are available. Thirdly, an analysis was initiated for modeling turbulence processes in transonic shock-boundary layer interaction without the appearance of flow separation.

  3. In situ determination of the static inductance and resistance of a plasma focus capacitor bank.

    PubMed

    Saw, S H; Lee, S; Roy, F; Chong, P L; Vengadeswaran, V; Sidik, A S M; Leong, Y W; Singh, A

    2010-05-01

    The static (unloaded) electrical parameters of a capacitor bank are of utmost importance for the purpose of modeling the system as a whole when the capacitor bank is discharged into its dynamic electromagnetic load. Using a physical short circuit across the electromagnetic load is usually technically difficult and is unnecessary. The discharge can be operated at the highest pressure permissible in order to minimize current sheet motion, thus simulating zero dynamic load, to enable bank parameters, static inductance L(0), and resistance r(0) to be obtained using lightly damped sinusoid equations given the bank capacitance C(0). However, for a plasma focus, even at the highest permissible pressure it is found that there is significant residual motion, so that the assumption of a zero dynamic load introduces unacceptable errors into the determination of the circuit parameters. To overcome this problem, the Lee model code is used to fit the computed current trace to the measured current waveform. Hence the dynamics is incorporated into the solution and the capacitor bank parameters are computed using the Lee model code, and more accurate static bank parameters are obtained.
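
    The Lee model code couples the circuit equation to the plasma sheath dynamics; purely as a simpler illustration of the underlying fitting idea, the sketch below assumes a motionless (zero dynamic load) discharge and fits the lightly damped RLC sinusoid to a measured current trace to recover L0 and r0 given C0. All numerical values are placeholders.

      import numpy as np
      from scipy.optimize import curve_fit

      C0 = 30e-6    # known bank capacitance (F), assumed for the sketch
      V0 = 12e3     # charging voltage (V), assumed for the sketch

      def rlc_current(t, L0, r0):
          """Lightly damped short-circuit discharge current of an RLC bank."""
          alpha = r0 / (2.0 * L0)
          omega = np.sqrt(1.0 / (L0 * C0) - alpha**2)
          return (V0 / (omega * L0)) * np.exp(-alpha * t) * np.sin(omega * t)

      # t_meas, i_meas would normally come from a digitized current waveform;
      # synthetic data are used here so the sketch runs stand-alone.
      t_meas = np.linspace(0.0, 20e-6, 500)
      i_meas = rlc_current(t_meas, 110e-9, 12e-3)
      (L0_fit, r0_fit), _ = curve_fit(rlc_current, t_meas, i_meas, p0=(100e-9, 10e-3))
      print(L0_fit, r0_fit)   # estimated static inductance (H) and resistance (ohm)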

  4. In situ determination of the static inductance and resistance of a plasma focus capacitor bank

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saw, S. H.; Institute for Plasma Focus Studies, 32 Oakpark Drive, Chadstone, Victoria 3148; Lee, S.

    2010-05-15

The static (unloaded) electrical parameters of a capacitor bank are of utmost importance for the purpose of modeling the system as a whole when the capacitor bank is discharged into its dynamic electromagnetic load. Using a physical short circuit across the electromagnetic load is usually technically difficult and is unnecessary. The discharge can be operated at the highest pressure permissible in order to minimize current sheet motion, thus simulating zero dynamic load, to enable bank parameters, static inductance L0, and resistance r0 to be obtained using lightly damped sinusoid equations given the bank capacitance C0. However, for a plasma focus, even at the highest permissible pressure it is found that there is significant residual motion, so that the assumption of a zero dynamic load introduces unacceptable errors into the determination of the circuit parameters. To overcome this problem, the Lee model code is used to fit the computed current trace to the measured current waveform. Hence the dynamics is incorporated into the solution and the capacitor bank parameters are computed using the Lee model code, and more accurate static bank parameters are obtained.

  5. Runtime Detection of C-Style Errors in UPC Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirkelbauer, P; Liao, C; Panas, T

    2011-09-29

Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

  6. The FORTRAN static source code analyzer program (SAP) system description

    NASA Technical Reports Server (NTRS)

    Decker, W.; Taylor, W.; Merwarth, P.; Oneill, M.; Goorevich, C.; Waligora, S.

    1982-01-01

A source code analyzer program (SAP) designed to assist personnel in conducting studies of FORTRAN programs is described. The SAP scans FORTRAN source code and produces reports that present statistics and measures of the statements and structures that make up a module. The processing performed by SAP and the routines, COMMON blocks, and files used by SAP are described. The system generation procedure for SAP is also presented.
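
    As a rough, hypothetical illustration of the kind of statement counting such an analyzer performs (this is not the SAP implementation), the sketch below tallies a few statement categories in a fixed-form FORTRAN source file:

      from collections import Counter

      def fortran_statement_stats(path):
          """Count comment, continuation and a few executable statement types
          in fixed-form FORTRAN source (very rough keyword matching)."""
          counts = Counter()
          keywords = ("IF", "CALL", "RETURN", "COMMON", "GOTO", "GO TO", "DO ")
          with open(path) as src:
              for line in src:
                  if line[:1] in ("C", "c", "*"):       # comment line
                      counts["comment"] += 1
                      continue
                  if len(line) > 5 and line[5] not in (" ", "0"):
                      counts["continuation"] += 1       # non-blank column 6
                      continue
                  stmt = line[6:72].strip().upper()
                  if not stmt:
                      continue
                  counts["statements"] += 1
                  for kw in keywords:
                      if stmt.startswith(kw):
                          counts[kw.strip()] += 1
                          break
          return counts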

  7. Predicting Attack-Prone Components with Source Code Static Analyzers

    DTIC Science & Technology

    2009-05-01

models to determine if additional metrics are required to increase the accuracy of the model: non-security SCSA warnings, code churn and size, the count of faults found manually during development, and the measure of coupling between components. The dependent variable ... is the count of vulnerabilities reported by testing and those found in the field. We evaluated our model on three commercial telecommunications ...
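
    The report's actual regression models are not reproduced in this record; purely as a hypothetical illustration of predicting attack-prone components from static-analysis warnings and churn metrics, a minimal scikit-learn sketch might look like the following (feature names and all numbers are invented placeholders):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical per-component metrics:
      # [SCSA warnings, code churn, size (KLOC), coupling]
      X = np.array([[12, 340, 4.1, 7],
                    [ 0,  15, 0.9, 2],
                    [ 5, 120, 2.3, 5],
                    [ 1,  30, 1.1, 1]])
      # 1 = vulnerability reported in test or field, 0 = none (placeholder labels)
      y = np.array([1, 0, 1, 0])

      model = LogisticRegression().fit(X, y)
      print(model.predict_proba([[8, 200, 3.0, 6]]))   # attack-proneness estimate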

  8. A note on adding viscoelasticity to earthquake simulators

    USGS Publications Warehouse

    Pollitz, Fred

    2017-01-01

    Here, I describe how time‐dependent quasi‐static stress transfer can be implemented in an earthquake simulator code that is used to generate long synthetic seismicity catalogs. Most existing seismicity simulators use precomputed static stress interaction coefficients to rapidly implement static stress transfer in fault networks with typically tens of thousands of fault patches. The extension to quasi‐static deformation, which accounts for viscoelasticity of Earth’s ductile lower crust and mantle, involves the precomputation of additional interaction coefficients that represent time‐dependent stress transfer among the model fault patches, combined with defining and evolving additional state variables that track this stress transfer. The new approach is illustrated with application to a California‐wide synthetic fault network.
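
    As an illustrative sketch of the interaction-coefficient idea (not the implementation described in the paper), the fragment below contrasts purely static stress transfer with a quasi-static extension in which precomputed viscoelastic contributions relax on an assumed single relaxation time; all names and the relaxation form are assumptions.

      import numpy as np

      def static_stress_transfer(K_static, slip):
          """Elastic stress change on every patch due to slip on every patch,
          using the precomputed static interaction matrix."""
          return K_static @ slip

      def quasistatic_step(K_static, K_visco, tau, state, slip, dt):
          """One time step: instantaneous elastic transfer plus a relaxing
          viscoelastic state variable per patch (single relaxation time tau)."""
          state = state * np.exp(-dt / tau) + K_visco @ slip
          return K_static @ slip + state, state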

  9. High-Fidelity Buckling Analysis of Composite Cylinders Using the STAGS Finite Element Code

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.

    2014-01-01

    Results from previous shell buckling studies are presented that illustrate some of the unique and powerful capabilities in the STAGS finite element analysis code that have made it an indispensable tool in structures research at NASA over the past few decades. In particular, prototypical results from the development and validation of high-fidelity buckling simulations are presented for several unstiffened thin-walled compression-loaded graphite-epoxy cylindrical shells along with a discussion on the specific methods and user-defined subroutines in STAGS that are used to carry out the high-fidelity simulations. These simulations accurately account for the effects of geometric shell-wall imperfections, shell-wall thickness variations, local shell-wall ply-gaps associated with the fabrication process, shell-end geometric imperfections, nonuniform applied end loads, and elastic boundary conditions. The analysis procedure uses a combination of nonlinear quasi-static and transient dynamic solution algorithms to predict the prebuckling and unstable collapse response characteristics of the cylinders. Finally, the use of high-fidelity models in the development of analysis-based shell-buckling knockdown (design) factors is demonstrated.

  10. Risk-Informed External Hazards Analysis for Seismic and Flooding Phenomena for a Generic PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parisi, Carlo; Prescott, Steve; Ma, Zhegang

This report describes the activities performed during FY2017 for the US-DOE Light Water Reactor Sustainability Risk-Informed Safety Margin Characterization (LWRS-RISMC), Industry Application #2. The scope of Industry Application #2 is to deliver a risk-informed external hazards safety analysis for a representative nuclear power plant. Following the advancements made during the previous FYs (toolkit identification, model development), FY2017 focused on increasing the level of realism of the analysis and improving the tools and the coupling methodologies. In particular, the following objectives were achieved: calculation of building pounding and its effects on component seismic fragility; development of SAPHIRE code PRA models for a 3-loop Westinghouse PWR; set-up of a methodology for performing static-dynamic PRA coupling between the SAPHIRE and EMRALD codes; coupling of RELAP5-3D/RAVEN for performing Best-Estimate Plus Uncertainty analysis and automatic limit surface search; and execution of sample calculations demonstrating the capabilities of the toolkit in performing risk-informed external hazards safety analyses.

  11. Flow analysis for the nacelle of an advanced ducted propeller at high angle-of-attack and at cruise with boundary layer control

    NASA Technical Reports Server (NTRS)

    Hwang, D. P.; Boldman, D. R.; Hughes, C. E.

    1994-01-01

    An axisymmetric panel code and a three dimensional Navier-Stokes code (used as an inviscid Euler code) were verified for low speed, high angle of attack flow conditions. A three dimensional Navier-Stokes code (used as an inviscid code), and an axisymmetric Navier-Stokes code (used as both viscous and inviscid code) were also assessed for high Mach number cruise conditions. The boundary layer calculations were made by using the results from the panel code or Euler calculation. The panel method can predict the internal surface pressure distributions very well if no shock exists. However, only Euler and Navier-Stokes calculations can provide a good prediction of the surface static pressure distribution including the pressure rise across the shock. Because of the high CPU time required for a three dimensional Navier-Stokes calculation, only the axisymmetric Navier-Stokes calculation was considered at cruise conditions. The use of suction and tangential blowing boundary layer control to eliminate the flow separation on the internal surface was demonstrated for low free stream Mach number and high angle of attack cases. The calculation also shows that transition from laminar flow to turbulent flow on the external cowl surface can be delayed by using suction boundary layer control at cruise flow conditions. The results were compared with experimental data where possible.

  12. Applying Jlint to Space Exploration Software

    NASA Technical Reports Server (NTRS)

    Artho, Cyrille; Havelund, Klaus

    2004-01-01

Java is a very successful programming language which is also becoming widespread in embedded systems, where software correctness is critical. Jlint is a simple but highly efficient static analyzer that checks a Java program for several common errors, such as null pointer exceptions and overflow errors. It also includes checks for multi-threading problems, such as deadlocks and data races. The case study described here shows the effectiveness of Jlint in finding such errors; an analysis of the false positives among the multi-threading warnings gives an insight into design patterns commonly used in multi-threaded code. The results show that a few analysis techniques are sufficient to avoid almost all false positives. These techniques include investigating all possible callers and a few code idioms. Verifying the correct application of these patterns is still crucial, because their correct usage is not trivial.

  13. Effect of Microscopic Damage Events on Static and Ballistic Impact Strength of Triaxial Braid Composites

    NASA Technical Reports Server (NTRS)

    Littell, Justin D.; Binienda, Wieslaw K.; Arnold, William A.; Roberts, Gary D.; Goldberg, Robert K.

    2010-01-01

The reliability of impact simulations for aircraft components made with triaxial-braided carbon-fiber composites is currently limited by inadequate material property data and lack of validated material models for analysis. Methods to characterize the material properties used in the analytical models from a systematically obtained set of test data are also lacking. A macroscopic finite element based analytical model to analyze the impact response of these materials has been developed. The stiffness and strength properties utilized in the material model are obtained from a set of quasi-static in-plane tension, compression and shear coupon level tests. Full-field optical strain measurement techniques are applied in the testing, and the results are used to help in characterizing the model. The unit cell of the braided composite is modeled as a series of shell elements, where each element is modeled as a laminated composite. The braided architecture can thus be approximated within the analytical model. The transient dynamic finite element code LS-DYNA is utilized to conduct the finite element simulations, and an internal LS-DYNA constitutive model is utilized in the analysis. Methods to obtain the stiffness and strength properties required by the constitutive model from the available test data are developed. Simulations of quasi-static coupon tests and impact tests of a representative braided composite are conducted. Overall, the developed method shows promise, but improvements that are needed in test and analysis methods for better predictive capability are examined.

  14. Continuous integration and quality control for scientific software

    NASA Astrophysics Data System (ADS)

    Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.

    2013-08-01

Modern software has to be stable, portable, fast and reliable. This is becoming more and more important for scientific software as well. But it requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, to manage the code base and to control the different development versions. While each project can be compiled separately, the whole code base can also be compiled with one central “Makefile”. This is used to create automated, nightly builds. Additionally, all sources are inspected automatically with static code analysis and inspection tools, which check for well-known error situations, memory and resource leaks, performance issues, or style issues. In combination with an automatic documentation generator it is possible to create the developer documentation directly from the code and the inline comments. All reports and generated information are presented as HTML pages on a Web server. Because this environment increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available to scientific communities. One regular customer is already the developer group of the DiFX software correlator project.

  15. Bricklayer Static Analysis

    NASA Astrophysics Data System (ADS)

    Harris, Christopher

In the U.S., science and math are taking the spotlight in education, and rightfully so, as they directly impact economic progress. Curiously absent is computer science, which, despite its numerous job opportunities and growth, does not receive as much focus. This thesis develops a source code analysis framework using language translation and machine learning classifiers to analyze programs written in Bricklayer, for the purposes of programmatically identifying the relative success or failure of a student's Bricklayer program, helping teachers scale the number of students they can support, and providing better messaging. The thesis uses as a case study a set of student programs to demonstrate the possibilities of the framework.

  16. Analysis of the effects of non-supine sleeping positions on the stress, strain, deformation and intraocular pressure of the human eye

    NASA Astrophysics Data System (ADS)

    Volpe, Peter A.

    This thesis presents analytical models, finite element models and experimental data to investigate the response of the human eye to loads that can be experienced when in a non-supine sleeping position. The hypothesis being investigated is that non-supine sleeping positions can lead to stress, strain and deformation of the eye as well as changes in intraocular pressure (IOP) that may exacerbate vision loss in individuals who have glaucoma. To investigate the quasi-static changes in stress and internal pressure, a Fluid-Structure Interaction simulation was performed on an axisymmetrical model of an eye. Common Aerospace Engineering methods for analyzing pressure vessels and hyperelastic structural walls are applied to developing a suitable model. The quasi-static pressure increase was used in an iterative code to analyze changes in IOP over time.

  17. Assessment of ALEGRA Computation for Magnetostatic Configurations

    DOE PAGES

    Grinfeld, Michael; Niederhaus, John Henry; Porwitzky, Andrew

    2016-03-01

Here, a closed-form solution is described for the equilibrium configurations of the magnetic field in a simple heterogeneous domain. This problem and its solution are used for rigorous assessment of the accuracy of the ALEGRA code in the quasistatic limit. By the equilibrium configuration we understand the static condition, or the stationary states without macroscopic current. The analysis includes quite a general class of 2D solutions for which a linear isotropic metallic matrix is placed inside a stationary magnetic field approaching a constant value H i° at infinity. The process of evolution of the magnetic fields inside and outside the inclusion, and the parameters for which the quasi-static approach provides self-consistent results, are also explored. Lastly, it is demonstrated that under spatial mesh refinement, ALEGRA converges to the analytic solution for the interior of the inclusion at the expected rate, for both body-fitted and regular rectangular meshes.

  18. An approximate theoretical method for modeling the static thrust performance of non-axisymmetric two-dimensional convergent-divergent nozzles. M.S. Thesis - George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.

    1995-01-01

An analytical/numerical method has been developed to predict the static thrust performance of non-axisymmetric, two-dimensional convergent-divergent exhaust nozzles. Thermodynamic nozzle performance effects due to over- and underexpansion are modeled using one-dimensional compressible flow theory. Boundary layer development and skin friction losses are calculated using an approximate integral momentum method based on the classic Kármán-Pohlhausen solution. Angularity effects are included with these two models in a computational Nozzle Performance Analysis Code, NPAC. In four different case studies, results from NPAC are compared to experimental data obtained from subscale nozzle testing to demonstrate the capabilities and limitations of the NPAC method. In several cases, the NPAC prediction matched experimental gross thrust efficiency data to within 0.1 percent at a design NPR, and to within 0.5 percent at off-design conditions.
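
    As an illustration of the one-dimensional compressible-flow ingredient of such a method (a generic sketch, not NPAC itself; boundary-layer and angularity losses are omitted), the fragment below solves the isentropic area-ratio relation for the exit Mach number and forms the ideal static thrust including the pressure-area term:

      import numpy as np
      from scipy.optimize import brentq

      g = 1.4   # ratio of specific heats, assumed

      def area_ratio(M):
          """A/A* for isentropic flow at Mach number M."""
          return (1.0 / M) * ((2.0 / (g + 1)) * (1.0 + 0.5 * (g - 1) * M**2)) ** ((g + 1) / (2 * (g - 1)))

      def ideal_static_thrust(p0, T0, p_amb, A_throat, A_exit, R=287.0):
          """Ideal thrust of a choked convergent-divergent nozzle (SI units)."""
          Me = brentq(lambda M: area_ratio(M) - A_exit / A_throat, 1.0001, 10.0)  # supersonic branch
          pe = p0 * (1.0 + 0.5 * (g - 1) * Me**2) ** (-g / (g - 1))
          Te = T0 / (1.0 + 0.5 * (g - 1) * Me**2)
          Ve = Me * np.sqrt(g * R * Te)
          mdot = p0 * A_throat * np.sqrt(g / (R * T0)) * (2.0 / (g + 1)) ** ((g + 1) / (2 * (g - 1)))
          return mdot * Ve + (pe - p_amb) * A_exit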

  19. Recent advances in the modelling of crack growth under fatigue loading conditions

    NASA Technical Reports Server (NTRS)

    Dekoning, A. U.; Tenhoeve, H. J.; Henriksen, T. K.

    1994-01-01

Fatigue crack growth associated with cyclic (secondary) plastic flow near a crack front is modelled using an incremental formulation. A new description of threshold behaviour under small load cycles is included. Quasi-static crack extension under high load excursions is described using an incremental formulation of the R-curve (crack growth resistance curve) concept. The integration of the equations is discussed. For constant amplitude load cycles the results will be compared with existing crack growth laws. It will be shown that the model also properly describes interaction effects of fatigue crack growth and quasi-static crack extension. To evaluate the more general applicability the model is included in the NASGRO computer code for damage tolerance analysis. For this purpose the NASGRO program was provided with the CORPUS and the STRIP-YIELD models for computation of the crack opening load levels. The implementation is discussed and recent results of the verification are presented.

  20. EBR-II Static Neutronic Calculations by PHISICS / MCNP6 codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paolo Balestra; Carlo Parisi; Andrea Alfonsi

    2016-02-01

The International Atomic Energy Agency (IAEA) launched a Coordinated Research Project (CRP) on the Shutdown Heat Removal Tests (SHRT) performed in the '80s at the Experimental Breeder Reactor II (EBR-II), USA. The scope of the CRP is to improve and validate the simulation tools for the study and the design of liquid metal cooled fast reactors. Moreover, training the next generation of fast reactor analysts is also considered a goal of the CRP. In this framework, a static neutronic model was developed using state-of-the-art neutron transport codes such as SCALE/PHISICS (deterministic solution) and MCNP6 (stochastic solution). A comparison between the two solutions is briefly illustrated in this summary.

  1. From Verified Models to Verifiable Code

    NASA Technical Reports Server (NTRS)

    Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.

  2. A computer program for the geometrically nonlinear static and dynamic analysis of arbitrarily loaded shells of revolution, theory and users manual

    NASA Technical Reports Server (NTRS)

    Ball, R. E.

    1972-01-01

    A digital computer program known as SATANS (static and transient analysis, nonlinear, shells) for the geometrically nonlinear static and dynamic response of arbitrarily loaded shells of revolution is presented. Instructions for the preparation of the input data cards and other information necessary for the operation of the program are described in detail and two sample problems are included. The governing partial differential equations are based upon Sanders' nonlinear thin shell theory for the conditions of small strains and moderately small rotations. The governing equations are reduced to uncoupled sets of four linear, second order, partial differential equations in the meridional and time coordinates by expanding the dependent variables in a Fourier sine or cosine series in the circumferential coordinate and treating the nonlinear modal coupling terms as pseudo loads. The derivatives with respect to the meridional coordinate are approximated by central finite differences, and the displacement accelerations are approximated by the implicit Houbolt backward difference scheme with a constant time interval. The boundaries of the shell may be closed, free, fixed, or elastically restrained. The program is coded in the FORTRAN 4 language and is dimensioned to allow a maximum of 10 arbitrary Fourier harmonics and a maximum product of the total number of meridional stations and the total number of Fourier harmonics of 200. The program requires 155,000 bytes of core storage.
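
    To make the Houbolt scheme mentioned above concrete, here is a generic sketch (not the SATANS code, and damping is omitted) of one implicit Houbolt step for a linear system M u'' + K u = f, using the standard backward-difference approximation of the acceleration; the three starting solution vectors are assumed given.

      import numpy as np

      def houbolt_step(M, K, f_next, u_n, u_nm1, u_nm2, dt):
          """Advance M u'' + K u = f one step with the Houbolt scheme, where
          u''_{n+1} ~ (2 u_{n+1} - 5 u_n + 4 u_{n-1} - u_{n-2}) / dt**2."""
          A = 2.0 * M / dt**2 + K
          b = f_next + M @ (5.0 * u_n - 4.0 * u_nm1 + u_nm2) / dt**2
          return np.linalg.solve(A, b)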

  3. Heat transfer, thermal stress analysis and the dynamic behaviour of high power RF structures. [MARC and SUPERFISH codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKeown, J.; Labrie, J.P.

    1983-08-01

A general purpose finite element computer code called MARC is used to calculate the temperature distribution and dimensional changes in linear accelerator rf structures. Both steady state and transient behaviour are examined with the computer model. Combining results from MARC with the cavity evaluation computer code SUPERFISH, the static and dynamic behaviour of a structure under power is investigated. Structure cooling is studied to minimize loss in shunt impedance and frequency shifts during high power operation. Results are compared with an experimental test carried out on a cw 805 MHz on-axis coupled structure at an energy gradient of 1.8 MeV/m. The model has also been used to compare the performance of on-axis and coaxial structures and has guided the mechanical design of structures suitable for average gradients in excess of 2.0 MeV/m at 2.45 GHz.

  4. NRA8-21 Cycle 2 RBCC Turbopump Risk Reduction

    NASA Technical Reports Server (NTRS)

    Ferguson, Thomas V.; Williams, Morgan; Marcu, Bogdan

    2004-01-01

    This project was composed of three sub-tasks. The objective of the first task was to use the CFD code INS3D to generate both on- and off-design predictions for the consortium optimized impeller flowfield. The results of the flow simulations are given in the first section. The objective of the second task was to construct a turbomachinery testing database comprised of measurements made on several different impellers, an inducer and a diffuser. The data was in the form of static pressure measurements as well as laser velocimeter measurements of velocities and flow angles within the stated components. Several databases with this information were created for these components. The third subtask objective was two-fold: first, to validate the Enigma CFD code for pump diffuser analysis, and secondly, to perform steady and unsteady analyses on some wide flow range diffuser concepts using Enigma. The code was validated using the consortium optimized impeller database and then applied to two different concepts for wide flow diffusers.

  5. Numerical analysis of stiffened shells of revolution. Volume 4: Engineer's program manual for STARS-2S shell theory automated for rotational structures - 2 (statics) digital computer program

    NASA Technical Reports Server (NTRS)

    Svalbonas, V.; Ogilvie, P.

    1973-01-01

    The engineering programming information for the digital computer program for analyzing shell structures is presented. The program is designed to permit small changes such as altering the geometry or a table size to fit the specific requirements. Each major subroutine is discussed and the following subjects are included: (1) subroutine description, (2) pertinent engineering symbols and the FORTRAN coded counterparts, (3) subroutine flow chart, and (4) subroutine FORTRAN listing.

  6. Preventing SQL Code Injection by Combining Static and Runtime Analysis

    DTIC Science & Technology

    2008-05-01

attacker changes the developer's intended structure of an SQL command by inserting new SQL keywords or operators. (Su and Wassermann provide a ... FROM books WHERE author = ' ' GROUP BY rating. We use a symbol as a placeholder for the indeterminate part of the command (in this ... dialects of SQL.) In our model, we mark transitions that correspond to externally defined strings with that symbol. To illustrate, Figure 2 shows the SQL

  7. Effects of cosmic rays on single event upsets

    NASA Technical Reports Server (NTRS)

    Venable, D. D.; Zajic, V.; Lowe, C. W.; Olidapupo, A.; Fogarty, T. N.

    1989-01-01

Assistance was provided to the Brookhaven Single Event Upset (SEU) Test Facility. Computer codes were developed for fragmentation and secondary radiation affecting Very Large Scale Integration (VLSI) in space. A computer controlled CV (HP4192) test was developed for Terman analysis. Also developed were high speed parametric tests which are independent of operator judgment and a charge pumping technique for measurement of D_it(E). The X-ray secondary effects and parametric degradation as a function of dose rate were simulated. The SPICE simulation of static RAMs with various resistor filters was tested.

  8. Custom Coordination Environments for Lanthanoids: Tripodal Ligands Achieve Near-Perfect Octahedral Coordination for Two Dysprosium-Based Molecular Nanomagnets.

    PubMed

    Lim, Kwang Soo; Baldoví, José J; Jiang, ShangDa; Koo, Bong Ho; Kang, Dong Won; Lee, Woo Ram; Koh, Eui Kwan; Gaita-Ariño, Alejandro; Coronado, Eugenio; Slota, Michael; Bogani, Lapo; Hong, Chang Seop

    2017-05-01

    Controlling the coordination sphere of lanthanoid complexes is a challenging critical step toward controlling their relaxation properties. Here we present the synthesis of hexacoordinated dysprosium single-molecule magnets, where tripodal ligands achieve a near-perfect octahedral coordination. We perform a complete experimental and theoretical investigation of their magnetic properties, including a full single-crystal magnetic anisotropy analysis. The combination of electrostatic and crystal-field computational tools (SIMPRE and CONDON codes) allows us to explain the static behavior of these systems in detail.

  9. Finite element analyses for seismic shear wall international standard problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Y.J.; Hofmayer, C.H.

Two identical reinforced concrete (RC) shear walls, which consist of web, flanges and massive top and bottom slabs, were tested up to ultimate failure under earthquake motions at the Nuclear Power Engineering Corporation's (NUPEC) Tadotsu Engineering Laboratory, Japan. NUPEC provided the dynamic test results to the OECD (Organization for Economic Cooperation and Development), Nuclear Energy Agency (NEA) for use as an International Standard Problem (ISP). The shear walls were intended to be part of a typical reactor building. One of the major objectives of the Seismic Shear Wall ISP (SSWISP) was to evaluate various seismic analysis methods for concrete structures used for design and seismic margin assessment. It also offered a unique opportunity to assess the state-of-the-art in nonlinear dynamic analysis of reinforced concrete shear wall structures under severe earthquake loadings. As a participant of the SSWISP workshops, Brookhaven National Laboratory (BNL) performed finite element analyses under the sponsorship of the U.S. Nuclear Regulatory Commission (USNRC). Three types of analysis were performed, i.e., monotonic static (push-over), cyclic static and dynamic analyses. Additional monotonic static analyses were performed by two consultants, F. Vecchio of the University of Toronto (UT) and F. Filippou of the University of California at Berkeley (UCB). The analysis results by BNL and the consultants were presented during the second workshop in Yokohama, Japan in 1996. A total of 55 analyses were presented during the workshop by 30 participants from 11 different countries. The major findings on the presented analysis methods, as well as engineering insights regarding the applicability and reliability of the FEM codes, are described in detail in this report. 16 refs., 60 figs., 16 tabs.

  10. On Flowfield Periodicity in the NASA Transonic Flutter Cascade. Part 2; Numerical Study

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; McFarland, Eric R.; Wood, Jerry R.; Lepicovsky, Jan

    2000-01-01

    The transonic flutter cascade facility at NASA Glenn Research Center was redesigned based on a combined program of experimental measurements and numerical analyses. The objectives of the redesign were to improve the periodicity of the cascade in steady operation, and to better quantify the inlet and exit flow conditions needed for CFD predictions. Part I of this paper describes the experimental measurements, which included static pressure measurements on the blade and endwalls made using both static taps and pressure sensitive paints, cobra probe measurements of the endwall boundary layers and blade wakes, and shadowgraphs of the wave structure. Part II of this paper describes three CFD codes used to analyze the facility, including a multibody panel code, a quasi-three-dimensional viscous code, and a fully three-dimensional viscous code. The measurements and analyses both showed that the operation of the cascade was heavily dependent on the configuration of the sidewalls. Four configurations of the sidewalls were studied and the results are described. For the final configuration, the quasi-three-dimensional viscous code was used to predict the location of mid-passage streamlines for a perfectly periodic cascade. By arranging the tunnel sidewalls to approximate these streamlines, sidewall interference was minimized and excellent periodicity was obtained.

  11. Reliability Analysis of Brittle Material Structures - Including MEMS(?) - With the CARES/Life Program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    2002-01-01

Brittle materials are being used, or considered, for a wide variety of high tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing components to sustain repeated load without fracturing while using the minimum amount of material requires the use of a probabilistic design methodology. The CARES/Life code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. For this presentation an overview of the CARES/Life program will be provided. Emphasis will be placed on describing the latest enhancements to the code for reliability analysis with time varying loads and temperatures (fully transient reliability analysis). Also, early efforts in investigating the validity of using Weibull statistics, the basis of the CARES/Life program, to characterize the strength of MEMS structures will be described, as well as the version of CARES/Life for MEMS (CARES/MEMS) being prepared, which incorporates single crystal and edge flaw reliability analysis capability. It is hoped this talk will open a dialog for potential collaboration in the area of MEMS testing and life prediction.

  12. Status and future plans for open source QuickPIC

    NASA Astrophysics Data System (ADS)

    An, Weiming; Decyk, Viktor; Mori, Warren

    2017-10-01

QuickPIC is a three dimensional (3D) quasi-static particle-in-cell (PIC) code developed based on the UPIC framework. It can be used for efficiently modeling plasma based accelerator (PBA) problems. With the quasi-static approximation, QuickPIC can use different time scales for calculating the beam (or laser) evolution and the plasma response, and a 3D plasma wake field can be simulated using a two-dimensional (2D) PIC code where the time variable is ξ = ct - z and z is the beam propagation direction. QuickPIC can be a thousand times faster than a conventional PIC code when simulating a PBA. It uses an MPI/OpenMP hybrid parallel algorithm, which can be run on either a laptop or the largest supercomputer. The open source QuickPIC is an object-oriented program with high level classes written in Fortran 2003. It can be found at https://github.com/UCLA-Plasma-Simulation-Group/QuickPIC-OpenSource.git

  13. Security Enhancement Mechanism Based on Contextual Authentication and Role Analysis for 2G-RFID Systems

    PubMed Central

    Tang, Wan; Chen, Min; Ni, Jin; Yang, Ximin

    2011-01-01

The traditional Radio Frequency Identification (RFID) system, in which the information maintained in tags is passive and static, has no intelligent decision-making ability to suit application and environment dynamics. The Second-Generation RFID (2G-RFID) system, referred to as 2G-RFID-sys, is an evolution of the traditional RFID system to ensure better quality of service in future networks. Due to the openness of the active mobile codes in the 2G-RFID system, the realization of conveying intelligence brings a critical issue: how can we make sure the backend system will interpret and execute mobile codes in the right way without misuse so as to avoid malicious attacks? To address this issue, this paper expands the concept of Role-Based Access Control (RBAC) by introducing context-aware computing, and then designs a secure middleware for backend systems, named Two-Level Security Enhancement Mechanism or 2L-SEM, in order to ensure the usability and validity of the mobile code through contextual authentication and role analysis. According to the given contextual restrictions, 2L-SEM can filter out the illegal and invalid mobile codes contained in tags. Finally, a reference architecture and its typical application are given to illustrate the implementation of 2L-SEM in a 2G-RFID system, along with the simulation results to evaluate how the proposed mechanism can guarantee secure execution of mobile codes for the system. PMID:22163983

  14. Security enhancement mechanism based on contextual authentication and role analysis for 2G-RFID systems.

    PubMed

    Tang, Wan; Chen, Min; Ni, Jin; Yang, Ximin

    2011-01-01

The traditional Radio Frequency Identification (RFID) system, in which the information maintained in tags is passive and static, has no intelligent decision-making ability to suit application and environment dynamics. The Second-Generation RFID (2G-RFID) system, referred to as 2G-RFID-sys, is an evolution of the traditional RFID system to ensure better quality of service in future networks. Due to the openness of the active mobile codes in the 2G-RFID system, the realization of conveying intelligence brings a critical issue: how can we make sure the backend system will interpret and execute mobile codes in the right way without misuse so as to avoid malicious attacks? To address this issue, this paper expands the concept of Role-Based Access Control (RBAC) by introducing context-aware computing, and then designs a secure middleware for backend systems, named Two-Level Security Enhancement Mechanism or 2L-SEM, in order to ensure the usability and validity of the mobile code through contextual authentication and role analysis. According to the given contextual restrictions, 2L-SEM can filter out the illegal and invalid mobile codes contained in tags. Finally, a reference architecture and its typical application are given to illustrate the implementation of 2L-SEM in a 2G-RFID system, along with the simulation results to evaluate how the proposed mechanism can guarantee secure execution of mobile codes for the system.

  15. Transient analysis techniques in performing impact and crash dynamic studies

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Winter, R.

    1989-01-01

Because of the emphasis being placed on crashworthiness as a design requirement, increasing demands are being made by various organizations to analyze a wide range of complex structures that must perform safely when subjected to severe impact loads, such as those generated in a crash event. The ultimate goal of crashworthiness design and analysis is to produce vehicles with the ability to reduce the dynamic forces experienced by the occupants to specified levels, while maintaining a survivable envelope around them during a specified crash event. DYCAST is a nonlinear structural dynamic finite element computer code that started from the PLANS system of finite element programs for static nonlinear structural analysis. The essential features of DYCAST are outlined.

  16. Parallel Numerical Simulations of Water Reservoirs

    NASA Astrophysics Data System (ADS)

    Torres, Pedro; Mangiavacchi, Norberto

    2010-11-01

The study of the water flow and scalar transport in water reservoirs is important for the determination of the water quality during the initial stages of the reservoir filling and during the life of the reservoir. For this purpose, a parallel 2D finite element code for solving the incompressible Navier-Stokes equations coupled with scalar transport was implemented using the message-passing programming model, in order to perform simulations of hydropower water reservoirs in a computer cluster environment. The spatial discretization is based on the MINI element that satisfies the Babuska-Brezzi (BB) condition, which provides sufficient conditions for a stable mixed formulation. All the distributed data structures needed in the different stages of the code, such as preprocessing, solving, and post-processing, were implemented using the PETSc library. The resulting linear systems for the velocity and the pressure fields were solved using the projection method, implemented by an approximate block LU factorization. In order to increase the parallel performance in the solution of the linear systems, we employ the static condensation method for solving the intermediate velocity at vertex and centroid nodes separately. We compare performance results of the static condensation method with the approach of solving the complete system. In our tests the static condensation method shows better performance for large problems, at the cost of an increased memory usage. Performance results for other intensive parts of the code in a computer cluster are also presented.
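
    As a generic illustration of the static condensation step described above (a sketch under assumed block names, not the authors' PETSc implementation), the centroid (bubble) unknowns of the MINI element can be eliminated element by element before the vertex system is solved:

      import numpy as np

      def condense(K_vv, K_vc, K_cv, K_cc, f_v, f_c):
          """Eliminate centroid DOFs 'c' from
                 [K_vv K_vc] [u_v]   [f_v]
                 [K_cv K_cc] [u_c] = [f_c]
          and return the condensed (Schur complement) vertex system."""
          K_cc_inv = np.linalg.inv(K_cc)            # small, element-local block
          S = K_vv - K_vc @ K_cc_inv @ K_cv
          g = f_v - K_vc @ K_cc_inv @ f_c
          return S, g

      def recover_centroid(K_cv, K_cc, f_c, u_v):
          """Back-substitute for the eliminated centroid unknowns."""
          return np.linalg.solve(K_cc, f_c - K_cv @ u_v)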

  17. Methods, media, and systems for detecting attack on a digital processing device

    DOEpatents

    Stolfo, Salvatore J.; Li, Wei-Jen; Keromylis, Angelos D.; Androulaki, Elli

    2014-07-22

    Methods, media, and systems for detecting attack are provided. In some embodiments, the methods include: comparing at least part of a document to a static detection model; determining whether attacking code is included in the document based on the comparison of the document to the static detection model; executing at least part of the document; determining whether attacking code is included in the document based on the execution of the at least part of the document; and if attacking code is determined to be included in the document based on at least one of the comparison of the document to the static detection model and the execution of the at least part of the document, reporting the presence of an attack. In some embodiments, the methods include: selecting a data segment in at least one portion of an electronic document; determining whether the arbitrarily selected data segment can be altered without causing the electronic document to result in an error when processed by a corresponding program; in response to determining that the arbitrarily selected data segment can be altered, arbitrarily altering the data segment in the at least one portion of the electronic document to produce an altered electronic document; and determining whether the corresponding program produces an error state when the altered electronic document is processed by the corresponding program.

  18. Methods, media, and systems for detecting attack on a digital processing device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stolfo, Salvatore J.; Li, Wei-Jen; Keromytis, Angelos D.

Methods, media, and systems for detecting attack are provided. In some embodiments, the methods include: comparing at least part of a document to a static detection model; determining whether attacking code is included in the document based on the comparison of the document to the static detection model; executing at least part of the document; determining whether attacking code is included in the document based on the execution of the at least part of the document; and if attacking code is determined to be included in the document based on at least one of the comparison of the document to the static detection model and the execution of the at least part of the document, reporting the presence of an attack. In some embodiments, the methods include: selecting a data segment in at least one portion of an electronic document; determining whether the arbitrarily selected data segment can be altered without causing the electronic document to result in an error when processed by a corresponding program; in response to determining that the arbitrarily selected data segment can be altered, arbitrarily altering the data segment in the at least one portion of the electronic document to produce an altered electronic document; and determining whether the corresponding program produces an error state when the altered electronic document is processed by the corresponding program.

  19. Fluid Film Bearing Code Development

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The next generation of rocket engine turbopumps is being developed by industry through Government-directed contracts. These turbopumps will use fluid film bearings because they eliminate the life and shaft-speed limitations of rolling-element bearings, increase turbopump design flexibility, and reduce the need for turbopump overhauls and maintenance. The design of the fluid film bearings for these turbopumps, however, requires sophisticated analysis tools to model the complex physical behavior characteristic of fluid film bearings operating at high speeds with low viscosity fluids. State-of-the-art analysis and design tools are being developed at the Texas A&M University under a grant guided by the NASA Lewis Research Center. The latest version of the code, HYDROFLEXT, is a thermohydrodynamic bulk flow analysis with fluid compressibility, full inertia, and fully developed turbulence models. It can predict the static and dynamic force response of rigid and flexible pad hydrodynamic bearings and of rigid and tilting pad hydrostatic bearings. The Texas A&M code is a comprehensive analysis tool, incorporating key fluid phenomenon pertinent to bearings that operate at high speeds with low-viscosity fluids typical of those used in rocket engine turbopumps. Specifically, the energy equation was implemented into the code to enable fluid properties to vary with temperature and pressure. This is particularly important for cryogenic fluids because their properties are sensitive to temperature as well as pressure. As shown in the figure, predicted bearing mass flow rates vary significantly depending on the fluid model used. Because cryogens are semicompressible fluids and the bearing dynamic characteristics are highly sensitive to fluid compressibility, fluid compressibility effects are also modeled. The code contains fluid properties for liquid hydrogen, liquid oxygen, and liquid nitrogen as well as for water and air. Other fluids can be handled by the code provided that the user inputs information that relates the fluid transport properties to the temperature.

  20. Active magnetic bearing control loop modeling for a finite element rotordynamics code

    NASA Technical Reports Server (NTRS)

    Genta, Giancarlo; Delprete, Cristiana; Carabelli, Stefano

    1994-01-01

    A mathematical model of an active electromagnetic bearing which includes the actuator, the sensor and the control system is developed and implemented in a specialized finite element code for rotordynamic analysis. The element formulation and its incorporation in the model of the machine are described in detail. A solution procedure, based on a modal approach in which the number of retained modes is controlled by the user, is then shown together with other procedures for computing the steady-state response to both static and unbalance forces. An example of application shows the numerical results obtained on a model of an electric motor suspended on a five active-axis magnetic suspension. The comparison of some of these results with the experimental characteristics of the actual system shows the ability of the present model to predict its performance.

  1. MATLAB Stability and Control Toolbox Trim and Static Stability Module

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis

    2012-01-01

    MATLAB Stability and Control Toolbox (MASCOT) utilizes geometric, aerodynamic, and inertial inputs to calculate air vehicle stability in a variety of critical flight conditions. The code is based on fundamental, non-linear equations of motion and is able to translate results into a qualitative, graphical scale useful to the non-expert. MASCOT was created to provide the conceptual aircraft designer accurate predictions of air vehicle stability and control characteristics. The code takes as input mass property data in the form of an inertia tensor, aerodynamic loading data, and propulsion (i.e. thrust) loading data. Using fundamental nonlinear equations of motion, MASCOT then calculates vehicle trim and static stability data for the desired flight condition(s). Available flight conditions include six horizontal and six landing rotation conditions with varying options for engine out, crosswind, and sideslip, plus three take-off rotation conditions. Results are displayed through a unique graphical interface developed to provide the non-stability and control expert conceptual design engineer a qualitative scale indicating whether the vehicle has acceptable, marginal, or unacceptable static stability characteristics. If desired, the user can also examine the detailed, quantitative results.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brochard, J.; Charras, T.; Ghoudi, M.

    Modifications to a computer code for ductile fracture assessment of piping systems with postulated circumferential through-wall cracks under static or dynamic loading are very briefly described. The modifications extend the capabilities of the CASTEM2000 code to the determination of fracture parameters under creep conditions. The main advantage of the approach is that thermal loads can be evaluated as secondary stresses. The code is applicable to piping systems for which crack propagation predictions differ significantly depending on whether thermal stresses are considered as primary or secondary stresses.

  3. DECEL1 Users Manual. A Fortran IV Program for Computing the Static Deflections of Structural Cable Arrays.

    DTIC Science & Technology

    1980-08-01

Figure 14. Current profile.

  4. A domain decomposition approach to implementing fault slip in finite-element models of quasi-static and dynamic crustal deformation

    USGS Publications Warehouse

    Aagaard, Brad T.; Knepley, M.G.; Williams, C.A.

    2013-01-01

    We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.
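
    To illustrate the Lagrange-multiplier idea in general terms (a minimal sketch under assumed names, not PyLith's data structures or its custom preconditioner), prescribed fault slip d can be imposed on a stiffness system K u = f through a constraint matrix C whose rows difference the degrees of freedom across the fault:

      import numpy as np

      def solve_with_fault_slip(K, C, f, d):
          """Solve the saddle-point system
                 [K  C^T] [u]   [f]
                 [C   0 ] [l] = [d]
          so that C @ u = d enforces the prescribed slip and l holds the
          fault tractions (Lagrange multipliers)."""
          n, m = K.shape[0], C.shape[0]
          A = np.block([[K, C.T],
                        [C, np.zeros((m, m))]])
          b = np.concatenate([f, d])
          sol = np.linalg.solve(A, b)
          return sol[:n], sol[n:]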

  5. Scaling effects in the static and dynamic response of graphite-epoxy beam-columns. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.

    1990-01-01

Scale model technology represents one method of investigating the behavior of advanced, weight-efficient composite structures under a variety of loading conditions. It is necessary, however, to understand the limitations involved in testing scale model structures before the technique can be fully utilized. These limitations, or scaling effects, are characterized in the large deflection response and failure of composite beams. Scale model beams were loaded with an eccentric axial compressive load designed to produce large bending deflections and global failure. A dimensional analysis was performed on the composite beam-column loading configuration to determine a model law governing the system response. An experimental program was developed to validate the model law under both static and dynamic loading conditions. Laminate stacking sequences including unidirectional, angle ply, cross ply, and quasi-isotropic were tested to examine a diversity of composite response and failure modes. The model beams were loaded under scaled test conditions until catastrophic failure. A large deflection beam solution was developed to compare with the static experimental results and to analyze beam failure. Also, the finite element code DYCAST (DYnamic Crash Analysis of STructure) was used to model both the static and impulsive beam response. Static test results indicate that the unidirectional and cross ply beam responses scale as predicted by the model law, even under severe deformations. In general, failure modes were consistent between scale models within a laminate family; however, a significant scale effect was observed in strength. The scale effect in strength which was evident in the static tests was also observed in the dynamic tests. Scaling of load and strain time histories between the scale model beams and the prototypes was excellent for the unidirectional beams, but inconsistent results were obtained for the angle ply, cross ply, and quasi-isotropic beams. Results show that valuable information can be obtained from testing on scale model composite structures, especially in the linear elastic response region. However, due to scaling effects in the strength behavior of composite laminates, caution must be used in extrapolating data taken from a scale model test when that test involves failure of the structure.

  6. Influence of Waste Tyre Crumb Rubber on Compressive Strength, Static Modulus of Elasticity and Flexural Strength of Concrete

    NASA Astrophysics Data System (ADS)

    Haridharan, M. K.; Bharathi Murugan, R.; Natarajan, C.; Muthukannan, M.

    2017-07-01

    In this paper, an experimental investigation was carried out to find the compressive strength, static modulus of elasticity, and flexural strength of concrete mixtures in which natural sand was partially replaced with Waste Tyre Crumb Rubber (WTCR). River sand was replaced with five different percentages (5%, 10%, 15%, 20% and 25%) of WTCR by volume. The main objective of the experimental investigation is to relate the static modulus of elasticity and flexural strength to the compressive strength of concrete containing WTCR. The experimentally obtained static modulus of elasticity and flexural strength results are compared with the theoretical values recommended by various national design codes.
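
    For context, design codes typically estimate the static modulus of elasticity and the flexural strength (modulus of rupture) from the compressive strength; the ACI-style expressions Ec = 4700*sqrt(f'c) and fr = 0.62*sqrt(f'c) (both in MPa) are a common example. The sketch below evaluates these generic expressions for invented strength values; it does not reproduce the paper's measurements or the specific national codes it compares against.

      import math

      def elastic_modulus_mpa(fck_mpa):
          """ACI-style estimate of the static modulus of elasticity: Ec = 4700*sqrt(f'c) [MPa]."""
          return 4700.0 * math.sqrt(fck_mpa)

      def flexural_strength_mpa(fck_mpa):
          """ACI-style estimate of the modulus of rupture: fr = 0.62*sqrt(f'c) [MPa]."""
          return 0.62 * math.sqrt(fck_mpa)

      # Invented compressive strengths for mixes with increasing rubber replacement.
      for fck in (35.0, 32.0, 28.0, 24.0):
          print(f"f'c = {fck:4.1f} MPa -> Ec ~ {elastic_modulus_mpa(fck):7.0f} MPa, "
                f"fr ~ {flexural_strength_mpa(fck):4.2f} MPa")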

  7. An Expert Assistant for Computer Aided Parallelization

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Chun, Robert; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    A prototype implementation of an expert system was developed to assist the user in the computer-aided parallelization process. The system interfaces to tools for automatic parallelization and performance analysis. By fusing static program structure information and dynamic performance analysis data, the expert system can help the user to filter, correlate, and interpret the data gathered by the existing tools. Sections of the code that show poor performance and require further attention are rapidly identified, and suggestions for improvements are presented to the user. In this paper we describe the components of the expert system and discuss its interface to the existing tools. We present a case study to demonstrate its successful use in full-scale scientific applications.

  8. Numerical investigation of galloping instabilities in Z-shaped profiles.

    PubMed

    Gomez, Ignacio; Chavez, Miguel; Alonso, Gustavo; Valero, Eusebio

    2014-01-01

    Aeroelastic effects are relatively common in the design of modern civil constructions such as office blocks, airport terminal buildings, and factories. Typical flexible structures exposed to the action of wind are shading devices, normally slats or louvers. A typical cross-section for such elements is a Z-shaped profile, made up of a central web and two side wings. Galloping instabilities are often determined in practice using the Glauert-Den Hartog criterion. This criterion relies on accurate predictions of the dependence of the aerodynamic force coefficients on the angle of attack. This paper presents the results of a numerical parametric analysis performed on different Z-shaped louvers to determine translational galloping instability regions. These numerical results have been validated against a parametric analysis of Z-shaped profiles based on static wind tunnel tests. In order to perform this validation, the DLR TAU Code, which is a standard code within the European aeronautical industry, has been used. The study focuses on the numerical prediction of galloping, which is presented visually through stability maps. Comparisons between numerical and experimental data are presented for various meshes and turbulence models.
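
    The Glauert-Den Hartog criterion predicts onset of transverse galloping where dCl/dalpha + Cd < 0, which is why accurate force-coefficient curves versus angle of attack matter. A minimal sketch of evaluating the criterion from tabulated coefficients follows; the coefficient values are invented for illustration and are not the paper's data.

      import numpy as np

      # Tabulated force coefficients versus angle of attack (invented values for illustration),
      # e.g. as obtained from static wind tunnel tests or CFD.
      alpha_deg = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0])
      cl = np.array([0.60, 0.35, 0.00, -0.40, -0.55, -0.30])
      cd = np.array([0.25, 0.20, 0.18, 0.20, 0.26, 0.35])

      dcl_dalpha = np.gradient(cl, np.radians(alpha_deg))   # lift-curve slope, per radian

      # Glauert-Den Hartog coefficient: transverse galloping is possible where H < 0.
      H = dcl_dalpha + cd
      for a, h in zip(alpha_deg, H):
          print(f"alpha = {a:6.1f} deg   H = dCl/dalpha + Cd = {h:6.2f}   "
                f"{'potentially unstable' if h < 0 else 'stable'}")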

  9. Characterization, Modeling, and Failure Analysis of Composite Structure Materials under Static and Dynamic Loading

    NASA Astrophysics Data System (ADS)

    Werner, Brian Thomas

    Composite structures have long been used in many industries where it is advantageous to reduce weight while maintaining high stiffness and strength. Composites can now be found in an ever broadening range of applications: sporting equipment, automobiles, marine and aerospace structures, and energy production. These structures are typically sandwich panels composed of fiber reinforced polymer composite (FRPC) facesheets which provide the stiffness and the strength and a low density polymeric foam core that adds bending rigidity with little additional weight. The expanding use of composite structures exposes them to high energy, high velocity dynamic loadings which produce multi-axial dynamic states of stress. This circumstance can present quite a challenge to designers, as composite structures are highly anisotropic and display properties that are sensitive to loading rates. Computer codes are continually in development to assist designers in the creation of safe, efficient structures. While the design of an optimal composite structure is more complex, engineers can take advantage of the effect of enhanced energy dissipation displayed by a composite when loaded at high strain rates. In order to build and verify effective computer codes, the underlying assumptions must be verified by laboratory experiments. Many of these codes look to use a micromechanical approach to determine the response of the structure. For this, the material properties of the constituent materials must be verified, three-dimensional constitutive laws must be developed, and failure of these materials must be investigated under static and dynamic loading conditions. In this study, simple models are sought not only to ease their implementation into such codes, but to allow for efficient characterization of new materials that may be developed. Characterization of composite materials and sandwich structures is a costly, time intensive process. A constituent based design approach evaluates potential combinations of materials in a much faster and more efficient manner.

  10. An Experiment in Scientific Program Understanding

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Owen, Karl (Technical Monitor)

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. Results are shown for three intensively studied codes and seven blind test cases; all test cases are state of the art scientific codes. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.

  11. Integrated verification and testing system (IVTS) for HAL/S programs

    NASA Technical Reports Server (NTRS)

    Senn, E. H.; Ames, K. R.; Smith, K. A.

    1983-01-01

    The IVTS is a large software system designed to support user-controlled verification analysis and testing activities for programs written in the HAL/S language. The system is composed of a user interface and user command language, analysis tools and an organized data base of host system files. The analysis tools are of four major types: (1) static analysis, (2) symbolic execution, (3) dynamic analysis (testing), and (4) documentation enhancement. The IVTS requires a split HAL/S compiler, divided at the natural separation point between the parser/lexical analyzer phase and the target machine code generator phase. The IVTS uses the internal program form (HALMAT) between these two phases as primary input for the analysis tools. The dynamic analysis component requires some way to 'execute' the object HAL/S program. The execution medium may be an interpretive simulation or an actual host or target machine.

  12. THRSTER: A THRee-STream Ejector Ramjet Analysis and Design Tool

    NASA Technical Reports Server (NTRS)

    Chue, R. S.; Sabean, J.; Tyll, J.; Bakos, R. J.

    2000-01-01

    An engineering tool for analyzing ejectors in rocket based combined cycle (RBCC) engines has been developed. A key technology for multi-cycle RBCC propulsion systems is the ejector, which functions as the compression stage of the ejector ramjet cycle. The THRee STream Ejector Ramjet analysis tool was developed to analyze the complex aerothermodynamic and combustion processes that occur in this device. The formulated model consists of three quasi-one-dimensional streams, one each for the ejector primary flow, the secondary flow, and the mixed region. The model space-marches through the mixer, combustor, and nozzle to evaluate the solution along the engine. In its present form, the model is intended for an analysis mode in which the diffusion rates of the primary and secondary streams into the mixed stream are stipulated. The model offers the ability to analyze the highly two-dimensional ejector flowfield while still benefiting from the simplicity and speed of an engineering tool. To validate the developed code, wall static pressure measurements from the Penn-State and NASA-ART RBCC experiments were compared with the results generated by the code. The calculated solutions were generally found to be in satisfactory agreement with the pressure measurements along the engines, although further modeling effort may be required when a strong shock train is formed at the rocket exhaust. The range of parameters over which the code generates valid results is presented and discussed.

  13. THRSTER: A Three-Stream Ejector Ramjet Analysis and Design Tool

    NASA Technical Reports Server (NTRS)

    Chue, R. S.; Sabean, J.; Tyll, J.; Bakos, R. J.; Komar, D. R. (Technical Monitor)

    2000-01-01

    An engineering tool for analyzing ejectors in rocket based combined cycle (RBCC) engines has been developed. A key technology for multi-cycle RBCC propulsion systems is the ejector, which functions as the compression stage of the ejector ramjet cycle. The THRee STream Ejector Ramjet analysis tool was developed to analyze the complex aerothermodynamic and combustion processes that occur in this device. The formulated model consists of three quasi-one-dimensional streams, one each for the ejector primary flow, the secondary flow, and the mixed region. The model space-marches through the mixer, combustor, and nozzle to evaluate the solution along the engine. In its present form, the model is intended for an analysis mode in which the diffusion rates of the primary and secondary streams into the mixed stream are stipulated. The model offers the ability to analyze the highly two-dimensional ejector flowfield while still benefiting from the simplicity and speed of an engineering tool. To validate the developed code, wall static pressure measurements from the Penn-State and NASA-ART RBCC experiments were compared with the results generated by the code. The calculated solutions were generally found to be in satisfactory agreement with the pressure measurements along the engines, although further modeling effort may be required when a strong shock train is formed at the rocket exhaust. The range of parameters over which the code generates valid results is presented and discussed.

  14. Predicting the Reliability of Brittle Material Structures Subjected to Transient Proof Test and Service Loading

    NASA Astrophysics Data System (ADS)

    Nemeth, Noel N.; Jadaan, Osama M.; Palfi, Tamas; Baker, Eric H.

    Brittle materials today are being used, or considered, for a wide variety of high tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing brittle material components to sustain repeated load without fracturing while using the minimum amount of material requires the use of a probabilistic design methodology. The NASA CARES/Life (Ceramics Analysis and Reliability Evaluation of Structures/Life) code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. This capability includes predicting the time-dependent failure probability of ceramic components against catastrophic rupture when subjected to transient thermomechanical loads (including cyclic loads). The developed methodology allows for changes in material response that can occur with temperature or time (i.e., changing fatigue and Weibull parameters with temperature or time). This article gives an overview of the transient reliability methodology and describes how it is extended to account for proof testing. The CARES/Life code has been modified to have the ability to interface with commercially available finite element analysis (FEA) codes executed for transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
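
    At the core of this kind of reliability prediction is a Weibull weakest-link calculation over the stressed volume of the component. The sketch below shows only the simplest two-parameter, volume-flaw form of that calculation with invented stresses and parameters; the CARES/Life formulation (multiaxial stresses, transient loads, slow crack growth, proof-test censoring) is considerably more general.

      import numpy as np

      # Two-parameter Weibull (volume-flaw) probability of failure over finite elements:
      #   Pf = 1 - exp( - sum_i V_i * (sigma_i / sigma_0)**m )   for tensile sigma_i
      m = 10.0           # Weibull modulus (invented)
      sigma_0 = 400.0    # Weibull scale parameter (invented, units consistent with V)

      element_volume = np.array([2.0, 2.0, 1.5, 1.0])          # element volumes
      element_stress = np.array([180.0, 220.0, 250.0, 150.0])  # first principal stresses, MPa

      tensile = np.clip(element_stress, 0.0, None)             # only tensile stress drives failure
      risk_of_rupture = np.sum(element_volume * (tensile / sigma_0) ** m)
      pf = 1.0 - np.exp(-risk_of_rupture)
      print(f"probability of failure: {pf:.3e}")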

  15. A New Modular Approach for Tightly Coupled Fluid/Structure Analysis

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru

    2003-01-01

    Static aeroelastic computations are made using a C++ executive suitable for closely coupled fluid/structure interaction studies. The fluid flow is modeled using the Euler/Navier-Stokes equations and the structure is modeled using finite elements. FORTRAN-based fluids and structures codes are integrated under the C++ environment. The flow and structural solvers are treated as separate object files. The data flow between fluids and structures is accomplished using I/O. Results are demonstrated for transonic flow over a partially flexible surface, a configuration that is important for aerospace vehicles. Use of this development to accurately predict flow-induced structural failure will be demonstrated.

  16. Static Analysis Using Abstract Interpretation

    NASA Technical Reports Server (NTRS)

    Arthaud, Maxime

    2017-01-01

    Short presentation about static analysis and most particularly abstract interpretation. It starts with a brief explanation on why static analysis is used at NASA. Then, it describes the IKOS (Inference Kernel for Open Static Analyzers) tool chain. Results on NASA projects are shown. Several well known algorithms from the static analysis literature are then explained (such as pointer analyses, memory analyses, weak relational abstract domains, function summarization, etc.). It ends with interesting problems we encountered (such as C++ analysis with exception handling, or the detection of integer overflow).
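
    As a reminder of what abstract interpretation computes, the toy sketch below runs a tiny interval-domain analysis by hand: abstract values are propagated through an addition and merged at a control-flow join, after which a range property can be proved without executing the program. This is only an illustration of the idea, not the IKOS tool chain or its abstract domains.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Interval:
          """Abstract value: all numbers in [lo, hi]."""
          lo: float
          hi: float

          def __add__(self, other):
              # Abstract transformer for addition.
              return Interval(self.lo + other.lo, self.hi + other.hi)

          def join(self, other):
              # Least upper bound, used to merge values at control-flow join points.
              return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

      # x comes from input and is known to lie in [0, 10]; a branch adds either 1 or 5.
      x = Interval(0, 10)
      y = (x + Interval(1, 1)).join(x + Interval(5, 5))   # at the join, y is in [1, 15]
      print(y)
      assert y.hi < 16   # the property "y < 16" is proved without running the program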

  17. Measurement and Prediction of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams

    NASA Technical Reports Server (NTRS)

    Davis, Brian; Turner, Travis L.; Seelecke, Stefan

    2008-01-01

    An experimental and numerical investigation into the static and dynamic responses of shape memory alloy hybrid composite (SMAHC) beams is performed to provide quantitative validation of a recently commercialized numerical analysis/design tool for SMAHC structures. The SMAHC beam specimens consist of a composite matrix with embedded pre-strained SMA actuators, which act against the mechanical boundaries of the structure when thermally activated to adaptively stiffen the structure. Numerical results are produced from the numerical model as implemented into the commercial finite element code ABAQUS. A rigorous experimental investigation is undertaken to acquire high fidelity measurements including infrared thermography and projection moire interferometry for full-field temperature and displacement measurements, respectively. High fidelity numerical results are also obtained from the numerical model and include measured parameters, such as geometric imperfection and thermal load. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.

  18. SEQADAPT: an adaptable system for the tracking, storage and analysis of high throughput sequencing experiments.

    PubMed

    Burdick, David B; Cavnor, Chris C; Handcock, Jeremy; Killcoyne, Sarah; Lin, Jake; Marzolf, Bruz; Ramsey, Stephen A; Rovira, Hector; Bressler, Ryan; Shmulevich, Ilya; Boyle, John

    2010-07-14

    High throughput sequencing has become an increasingly important tool for biological research. However, the existing software systems for managing and processing these data have not provided the flexible infrastructure that research requires. Existing software solutions provide static and well-established algorithms in a restrictive package. However as high throughput sequencing is a rapidly evolving field, such static approaches lack the ability to readily adopt the latest advances and techniques which are often required by researchers. We have used a loosely coupled, service-oriented infrastructure to develop SeqAdapt. This system streamlines data management and allows for rapid integration of novel algorithms. Our approach also allows computational biologists to focus on developing and applying new methods instead of writing boilerplate infrastructure code. The system is based around the Addama service architecture and is available at our website as a demonstration web application, an installable single download and as a collection of individual customizable services.

  19. SEQADAPT: an adaptable system for the tracking, storage and analysis of high throughput sequencing experiments

    PubMed Central

    2010-01-01

    Background High throughput sequencing has become an increasingly important tool for biological research. However, the existing software systems for managing and processing these data have not provided the flexible infrastructure that research requires. Results Existing software solutions provide static and well-established algorithms in a restrictive package. However as high throughput sequencing is a rapidly evolving field, such static approaches lack the ability to readily adopt the latest advances and techniques which are often required by researchers. We have used a loosely coupled, service-oriented infrastructure to develop SeqAdapt. This system streamlines data management and allows for rapid integration of novel algorithms. Our approach also allows computational biologists to focus on developing and applying new methods instead of writing boilerplate infrastructure code. Conclusion The system is based around the Addama service architecture and is available at our website as a demonstration web application, an installable single download and as a collection of individual customizable services. PMID:20630057

  20. Balancing Dynamic Strength of Spur Gears Operated at Extended Center Distance

    NASA Technical Reports Server (NTRS)

    Lin, Hsiang Hsi; Liou, Chuen-Huei; Oswald, Fred B.; Townsend, Dennis P.

    1996-01-01

    This paper presents an analytical study on using hob offset to balance the dynamic tooth strength of spur gears operated at a center distance greater than the standard value. This study is an extension of a static study by Mabie and others. The study was limited to the offset values that assure the pinion and gear teeth will neither be undercut nor become pointed. The analysis presented in this paper was performed using DANST-PC, a new version of the NASA gear dynamics code. The operating speed of the transmission influences the amount of hob offset required to equalize the dynamic stresses in the pinion and gear. The optimum hob offset for the pinion was found to vary within a small range as the speed changes. The optimum value is generally greater than the optimum value found by static procedures. For gears that must operate over a wide range of speeds, an average offset value may be used.

  1. Description of a Pressure Measurement Technique for Obtaining Surface Static Pressures of a Radial Turbine

    NASA Technical Reports Server (NTRS)

    Dicicco, L. Danielle; Nowlin, Brent C.; Tirres, Lizet

    1992-01-01

    The aerodynamic performance of a solid uncooled version of a cooled radial turbine was evaluated in the Small Engine Components Test Facility Turbine rig at the NASA Lewis Research Center. Specifically, an experiment was conducted to measure rotor surface static pressures. This was the first time surface static pressures had been measured on a radial turbine at NASA Lewis. These pressures were measured by a modified Rotating Data Package (RDP), a standard product manufactured by Scanivalve, Inc. Described here are the RDP and the modifications that were made, as well as the checkout, installation, and testing procedures. The data presented are compared to analytical results obtained from NASA's MERIDL TSONIC BLAYER (MTSB) code.

  2. Description of a pressure measurement technique for obtaining surface static pressures of a radial turbine

    NASA Technical Reports Server (NTRS)

    Dicicco, L. D.; Nowlin, Brent C.; Tirres, Lizet

    1992-01-01

    The aerodynamic performance of a solid uncooled version of a cooled radial turbine was evaluated in the Small Engine Components Test Facility Turbine rig at the NASA Lewis Research Center. Specifically, an experiment was conducted to measure rotor surface static pressures. This was the first time surface static pressures had been measured on a radial turbine at NASA Lewis. These pressures were measured by a modified Rotating Data Package (RDP), a standard product manufactured by Scanivalve, Inc. Described here are the RDP and the modifications that were made, as well as the checkout, installation, and testing procedures. The data presented are compared to analytical results obtained from NASA's MERIDL TSONIC BLAYER (MTSB) code.

  3. Langley 14- by 22-foot subsonic tunnel test engineer's data acquisition and reduction manual

    NASA Technical Reports Server (NTRS)

    Quinto, P. Frank; Orie, Nettie M.

    1994-01-01

    The Langley 14- by 22-Foot Subsonic Tunnel is used to test a large variety of aircraft and nonaircraft models. To support these investigations, a data acquisition system has been developed that has both static and dynamic capabilities. The static data acquisition and reduction system is described; the hardware and software of this system are explained. The theory and equations used to reduce the data obtained in the wind tunnel are presented; the computer code is not included.

  4. Static-stress analysis of dual-axis safety vessel

    NASA Astrophysics Data System (ADS)

    Bultman, D. H.

    1992-11-01

    An 8 ft diameter safety vessel, made of HSLA-100 steel, is evaluated to determine its ability to contain the quasi-static residual pressure from a high explosive (HE) blast. The safety vessel is designed for use with the Dual-Axis Radiographic Hydrotest (DARHT) facility being developed at Los Alamos National Laboratory. A smaller confinement vessel fits inside the safety vessel and contains the actual explosion, and the safety vessel functions as a second layer of containment in the unlikely case of a confinement vessel leak. The safety vessel is analyzed as a pressure vessel based on the ASME Boiler and Pressure Vessel Code, Section VIII, Division 1, and the Welding Research Council Bulletin WRC 107. Combined stresses that result from internal pressure and external loads on nozzles are calculated and compared to the allowable stresses for HSLA-100 steel. Results confirm that the shell and nozzle components are adequately designed for a static pressure of 830 psi plus the maximum expected external loads. Shell stresses at the shell-to-nozzle interface, produced by external loads on the nozzles, were less than 700 psi. The maximum combined stress resulting from the internal pressure plus external loads was 17,384 psi, which is significantly less than the allowable stress of 42,375 psi for HSLA-100 steel.
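
    For a back-of-the-envelope sense of the membrane stress such a static pressure produces, the thin-wall spherical-shell formula sigma = p*R/(2*t) is a quick check. The sketch below uses the 830 psi pressure and 8 ft diameter from the abstract together with an assumed wall thickness; it is a textbook estimate only, not the ASME Code evaluation reported above.

      # Thin-wall spherical shell membrane stress: sigma = p * R / (2 * t)
      p_psi = 830.0        # quasi-static internal pressure from the abstract, psi
      radius_in = 48.0     # 8 ft diameter vessel -> 48 in radius
      t_in = 1.25          # wall thickness, in (assumed value for illustration only)

      sigma_psi = p_psi * radius_in / (2.0 * t_in)
      print(f"thin-wall membrane stress ~ {sigma_psi:,.0f} psi")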

  5. Theta phase precession and phase selectivity: a cognitive device description of neural coding

    NASA Astrophysics Data System (ADS)

    Zalay, Osbert C.; Bardakjian, Berj L.

    2009-06-01

    Information in neural systems is carried by way of phase and rate codes. Neuronal signals are processed through transformative biophysical mechanisms at the cellular and network levels. Neural coding transformations can be represented mathematically in a device called the cognitive rhythm generator (CRG). Incoming signals to the CRG are parsed through a bank of neuronal modes that orchestrate proportional, integrative and derivative transformations associated with neural coding. Mode outputs are then mixed through static nonlinearities to encode (spatio) temporal phase relationships. The static nonlinear outputs feed and modulate a ring device (limit cycle) encoding output dynamics. Small coupled CRG networks were created to investigate coding functionality associated with neuronal phase preference and theta precession in the hippocampus. Phase selectivity was found to be dependent on mode shape and polarity, while phase precession was a product of modal mixing (i.e. changes in the relative contribution or amplitude of mode outputs resulted in shifting phase preference). Nonlinear system identification was implemented to help validate the model and explain response characteristics associated with modal mixing; in particular, principal dynamic modes experimentally derived from a hippocampal neuron were inserted into a CRG and the neuron's dynamic response was successfully cloned. From our results, small CRG networks possessing disynaptic feedforward inhibition in combination with feedforward excitation exhibited frequency-dependent inhibitory-to-excitatory and excitatory-to-inhibitory transitions that were similar to transitions seen in a single CRG with quadratic modal mixing. This suggests nonlinear modal mixing to be a coding manifestation of the effect of network connectivity in shaping system dynamic behavior. We hypothesize that circuits containing disynaptic feedforward inhibition in the nervous system may be candidates for interpreting upstream rate codes to guide downstream processes such as phase precession, because of their demonstrated frequency-selective properties.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dokhane, A.; Canepa, S.; Ferroukhi, H.

    For stability analyses of the Swiss operating Boiling-Water-Reactors (BWRs), the methodology employed and validated so far at the Paul Scherrer Institute (PSI) was based on the RAMONA-3 code with a hybrid upstream static lattice/core analysis approach using CASMO-4 and PRESTO-2. More recently, steps were undertaken towards a new methodology based on the SIMULATE-3K (S3K) code for the dynamical analyses, combined with the CMSYS system relying on the CASMO/SIMULATE-3 suite of codes, which was established at PSI to serve as a framework for the development and validation of reference core models of all the Swiss reactors and operated cycles. This paper presents a first validation of the new methodology on the basis of a benchmark recently organised by a Swiss utility and including the participation of several international organisations with various codes/methods. In parallel, a transition from CASMO-4E (C4E) to CASMO-5M (C5M) as basis for the CMSYS core models was also recently initiated at PSI. Consequently, it was considered adequate to address the impact of this transition both for the steady-state core analyses as well as for the stability calculations, and to achieve thereby an integral approach for the validation of the new S3K methodology. Therefore, a comparative assessment of C4E versus C5M is also presented in this paper, with particular emphasis on the void coefficients and their impact on the downstream stability analysis results. (authors)

  7. Multiscale Static Analysis of Notched and Unnotched Laminates Using the Generalized Method of Cells

    NASA Technical Reports Server (NTRS)

    Naghipour Ghezeljeh, Paria; Arnold, Steven M.; Pineda, Evan J.; Stier, Bertram; Hansen, Lucas; Bednarcyk, Brett A.; Waas, Anthony M.

    2016-01-01

    The generalized method of cells (GMC) is demonstrated to be a viable micromechanics tool for predicting the deformation and failure response of laminated composites, with and without notches, subjected to tensile and compressive static loading. Given the axial [0], transverse [90], and shear [+45/-45] response of a carbon/epoxy (IM7/977-3) system, the unnotched and notched behavior of three multidirectional layups (Layup 1: [0,45,90,-45]_2S, Layup 2: [0,60,0]_3S, and Layup 3: [30,60,90,-30,-60]_2S) are predicted under both tensile and compressive static loading. Matrix nonlinearity is modeled in two ways. The first assumes all nonlinearity is due to anisotropic progressive damage of the matrix only, which is modeled using the multiaxial mixed-mode continuum damage model (MMCDM) within GMC. The second utilizes matrix plasticity coupled with brittle final failure based on the maximum principal strain criterion to account for matrix nonlinearity and failure within the Finite Element Analysis--Micromechanics Analysis Code (FEAMAC) software multiscale framework. Both MMCDM and plasticity models incorporate brittle strain- and stress-based failure criteria for the fiber. Upon satisfaction of these criteria, the fiber properties are immediately reduced to a nominal value. The constitutive response for each constituent (fiber and matrix) is characterized using a combination of vendor data and the axial, transverse, and shear responses of unnotched laminates. Then, the capability of the multiscale methodology is assessed by performing blind predictions of the response of the aforementioned notched and unnotched composite laminates under tensile and compressive loading. Tabulated data along with the detailed results (i.e., stress-strain curves as well as damage evolution states at various ratios of strain to failure) for all laminates are presented.

  8. Time Dependent Data Mining in RAVEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cogliati, Joshua Joseph; Chen, Jun; Patel, Japan Ketan

    RAVEN is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. The goal of this type of analysis is to understand the response of such systems, in particular with respect to their probabilistic behavior, and to understand their predictability and its drivers, or lack thereof. Data mining capabilities are the cornerstone of such deep learning of system responses. For this reason, static data mining capabilities were added last fiscal year (FY 15). In real applications dealing with complex multi-scale, multi-physics systems, it seems natural that, during transients, the relevance of the different scales and physics would evolve over time. For these reasons the data mining capabilities have been extended to allow their application over time. This report describes the newly implemented RAVEN capabilities, together with several simple analytical tests that explain their application and highlight the proper implementation. The report concludes with the application of the newly implemented capabilities to the analysis of a simulation performed with the Bison code.
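
    To make the idea of time-dependent data mining concrete, the sketch below applies an ordinary static clustering step independently at each time slice of a set of synthetic transient responses and tracks how well the clusters match the underlying behaviours as the transient evolves. It is illustrative only: it does not use RAVEN, its input syntax, or the Bison results mentioned above, and the clustering routine (scikit-learn's KMeans) is simply a convenient stand-in.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      n_samples, n_steps = 60, 20
      t = np.linspace(0.0, 10.0, n_steps)

      # Synthetic transient responses: two behaviours that only separate later in the transient.
      group = rng.integers(0, 2, n_samples)
      responses = (np.outer(np.ones(n_samples), np.sin(t))
                   + np.outer(group, 0.1 * t)
                   + rng.normal(0.0, 0.05, (n_samples, n_steps)))

      # "Time-dependent data mining": run a static clustering step independently at each time slice.
      for k, time in enumerate(t):
          labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(responses[:, [k]])
          agreement = max(np.mean(labels == group), np.mean(labels != group))
          print(f"t = {time:5.2f}  cluster/behaviour agreement = {agreement:.2f}")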

  9. Cold flow testing of the Space Shuttle Main Engine high pressure fuel turbine model

    NASA Technical Reports Server (NTRS)

    Hudson, Susan T.; Gaddis, Stephen W.; Johnson, P. D.; Boynton, James L.

    1991-01-01

    In order to experimentally determine the performance of the Space Shuttle Main Engine (SSME) High Pressure Fuel Turbopump (HPFTP) turbine, a 'cold' air flow turbine test program was established at NASA's Marshall Space Flight Center. As part of this test program, a baseline test of Rocketdyne's HPFTP turbine has been completed. The turbine performance and turbine diagnostics such as airfoil surface static pressure distributions, static pressure drops through the turbine, and exit swirl angles were investigated at the turbine design point, over its operating range, and at extreme off-design points. The data was compared to pretest predictions with good results. The test data has been used to improve meanline prediction codes and is now being used to validate various three-dimensional codes. The data will also be scaled to engine conditions and used to improve the SSME steady-state performance model.

  10. Applied Computational Transonic Aerodynamics,

    DTIC Science & Technology

    1982-08-01

    contributions. Considering first the body integral (2.95), we now have the situation that, with the effect of the boundary layer represented, e.g. through ... effects, (3) static aeroelastic distortion, (4) up to three interfering bodies of nacelle or store type, and (5) an improved method of treating ... tip. To date, no modeling of nacelle or store pylons has been included in this code. In the NLR code [64], the effect of (finite) bodies and wing

  11. Sequence-dependent modelling of local DNA bending phenomena: curvature prediction and vibrational analysis.

    PubMed

    Vlahovicek, K; Munteanu, M G; Pongor, S

    1999-01-01

    Bending is a local conformational micropolymorphism of DNA in which the original B-DNA structure is only distorted but not extensively modified. Bending can be predicted by simple static geometry models as well as by a recently developed elastic model that incorporates sequence-dependent anisotropic bendability (SDAB). The SDAB model qualitatively explains phenomena including the affinity of protein binding, kinking, as well as sequence-dependent vibrational properties of DNA. The vibrational properties of DNA segments can be studied by finite element analysis of a model subjected to an initial bending moment. The frequency spectrum is obtained by applying Fourier analysis to the displacement values in the time domain. This analysis shows that the spectrum of the bending vibrations depends quite sensitively on the sequence; for example, the spectrum of a curved sequence is characteristically different from the spectrum of straight sequence motifs of identical basepair composition. Curvature distributions are genome-specific, and pronounced differences are found between protein-coding and regulatory regions, respectively; that is, sites of extreme curvature and/or bendability are less frequent in protein-coding regions. A WWW server is set up for the prediction of curvature and generation of 3D models from DNA sequences (http://www.icgeb.trieste.it/dna).
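
    The spectral step described above (Fourier analysis of a displacement time series to obtain the vibration spectrum) can be sketched in a few lines. The signal below is synthetic, with two invented frequency components standing in for bending modes; it is not output of the finite element model in the paper.

      import numpy as np

      # Synthetic displacement record standing in for a bending-vibration time series
      # (two invented modal frequencies plus noise; not finite element output).
      dt = 1.0e-12                                   # time step, s
      t = np.arange(0.0, 2.0e-9, dt)
      rng = np.random.default_rng(0)
      signal = (1.0 * np.sin(2 * np.pi * 5.0e9 * t)
                + 0.4 * np.sin(2 * np.pi * 12.0e9 * t)
                + 0.05 * rng.normal(size=t.size))

      # Fourier analysis of the displacement values gives the vibration spectrum.
      spectrum = np.abs(np.fft.rfft(signal))
      freqs = np.fft.rfftfreq(t.size, d=dt)

      for idx in np.argsort(spectrum)[-2:][::-1]:    # report the two dominant peaks
          print(f"peak at {freqs[idx] / 1e9:5.1f} GHz, relative amplitude {spectrum[idx]:.0f}")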

  12. N-MODY: a code for collisionless N-body simulations in modified Newtonian dynamics.

    NASA Astrophysics Data System (ADS)

    Londrillo, P.; Nipoti, C.

    We describe the numerical code N-MODY, a parallel particle-mesh code for collisionless N-body simulations in modified Newtonian dynamics (MOND). N-MODY is based on a numerical potential solver in spherical coordinates that solves the non-linear MOND field equation, and is ideally suited to simulate isolated stellar systems. N-MODY can be used also to compute the MOND potential of arbitrary static density distributions. A few applications of N-MODY indicate that some astrophysically relevant dynamical processes are profoundly different in MOND and in Newtonian gravity with dark matter.

  13. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated in the form of a constrained optimization problem. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process: one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role since it is needed for static, eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest development in the parallel-vector equation solver, PVSOLVE, into the widely popular finite-element production code SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.

  14. Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainties or randomness also occurs in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
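
    As a concrete picture of what simulating input uncertainties and computing a probabilistic response means, the sketch below runs a plain Monte Carlo propagation through a textbook cantilever tip-deflection formula: the modulus and load are sampled from invented distributions and the probability of exceeding an invented deflection limit is estimated. It is not one of the PSAM codes, which use far more sophisticated probabilistic methods and finite element models.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Random inputs (invented distributions): modulus of elasticity and tip load.
      E = rng.normal(200e9, 10e9, n)        # Pa
      P = rng.normal(5.0e3, 750.0, n)       # N
      L, I = 1.0, 8.0e-7                    # beam length (m) and second moment of area (m^4)

      # Response: cantilever tip deflection, delta = P * L**3 / (3 * E * I)
      delta = P * L**3 / (3.0 * E * I)

      limit = 12.0e-3                       # allowable tip deflection, m (invented)
      print(f"mean tip deflection        : {delta.mean() * 1e3:.2f} mm")
      print(f"P(deflection exceeds limit): {np.mean(delta > limit):.4f}")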

  15. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference, whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with only a slight increase in encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
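
    To illustrate the first half of the pipeline, the sketch below builds a background model as a simple temporal mean over a few synthetic frames and then classifies each block of the current frame by its residual energy against that model, which is roughly the decision that drives the choice between background-oriented and ordinary prediction. It illustrates the modeling/classification idea only, with invented frames and thresholds; it implements none of the BMAP predictions or any AVC machinery.

      import numpy as np

      rng = np.random.default_rng(0)
      H, W, B = 64, 64, 16

      # Synthetic surveillance-style frames: a fixed background plus a small moving object.
      background = rng.integers(0, 256, (H, W)).astype(float)
      frames = []
      for k in range(8):
          frame = background + rng.normal(0.0, 2.0, (H, W))     # camera noise
          frame[8 + 4 * k:24 + 4 * k, 8:24] = 255.0              # bright object moving downward
          frames.append(frame)

      model = np.mean(frames, axis=0)      # background model: simple temporal mean

      # Classify each block of the current frame by its residual energy against the model.
      current = frames[-1]
      labels = []
      for by in range(0, H, B):
          for bx in range(0, W, B):
              diff = current[by:by + B, bx:bx + B] - model[by:by + B, bx:bx + B]
              labels.append("background" if np.mean(diff ** 2) < 50.0 else "foreground/hybrid")

      print(f"{labels.count('background')} background blocks, "
            f"{labels.count('foreground/hybrid')} foreground/hybrid blocks")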

  16. Galileo spacecraft modal test and evaluation of testing techniques

    NASA Technical Reports Server (NTRS)

    Chen, J.-C.

    1984-01-01

    The structural configuration, modal test requirements, and pre-test activities involved in modeling the expected dynamic environment and responses of the Galileo spacecraft are discussed. The probe will be Shuttle-launched in 1986 and will gather data on the Jupiter system. Loads analyses for the 5300 lb spacecraft were performed with the NASTRAN code and covered 10,000 static degrees of freedom and 1600 mass degrees of freedom. A modal analysis will be used to verify the predictions for natural frequencies, mode shapes, orthogonality checks, residual mass, modal damping and forces, and generalized forces. Verification of the validity of considering only 70 natural modes in the numerical simulation is being performed by examining the forcing functions of the analysis. The analysis led to requirements that 162 channels of accelerometer data and 118 channels of strain gage data be recorded during shaker tests to reveal areas where design changes will be needed to eliminate vibration peaks.

  17. 17 CFR 232.11 - Definition of terms used in part 232.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., PDF, and static graphic files. Such code may be in binary (machine language) or in script form... Act means the Trust Indenture Act of 1939. Unofficial PDF copy. The term unofficial PDF copy means an...

  18. NYU Ada/Ed User’s Guide - Version 1.4 for VAX/VMS Systems.

    DTIC Science & Technology

    1984-07-01

    Executable file). The file contains the intermediate code from the compiler, bound together with the necessary compilation units from libraries requested by the program... 3.5.4 bounds in an integer type definition must be of some integer type; 3.5.7 Expect static expression for digits; 3.5.7 Expect integer expression for digits; 3.5.7 Invalid digits value in real type declaration; 3.5.7 Expect static expression for delta; 3.5.9 Expression for delta must be of some real type

  19. Current Development Status of an Integrated Tool for Modeling Quasi-static Deformation in the Solid Earth

    NASA Astrophysics Data System (ADS)

    Williams, C. A.; Dicaprio, C.; Simons, M.

    2003-12-01

    With the advent of projects such as the Plate Boundary Observatory and future InSAR missions, spatially dense geodetic data of high quality will provide an increasingly detailed picture of the movement of the earth's surface. To interpret such information, powerful and easily accessible modeling tools are required. We are presently developing such a tool that we feel will meet many of the needs for evaluating quasi-static earth deformation. As a starting point, we begin with a modified version of the finite element code TECTON, which has been specifically designed to solve tectonic problems involving faulting and viscoelastic/plastic earth behavior. As our first priority, we are integrating the code into the GeoFramework, which is an extension of the Python-based Pyre modeling framework. The goal of this framework is to provide simplified user interfaces for powerful modeling codes, to provide easy access to utilities such as meshers and visualization tools, and to provide a tight integration between different modeling tools so they can interact with each other. The initial integration of the code into this framework is essentially complete, and a more thorough integration, where Python-based drivers control the entire solution, will be completed in the near future. We have an evolving set of priorities that we expect to solidify as we receive more input from the modeling community. Current priorities include the development of linear and quadratic tetrahedral elements, the development of a parallelized version of the code using the PETSc libraries, the addition of more complex rheologies, realistic fault friction models, adaptive time stepping, and spherical geometries. In this presentation we describe current progress toward our various priorities, briefly describe the structure of the code within the GeoFramework, and demonstrate some sample applications.

  20. Limit analysis of hollow spheres or spheroids with Hill orthotropic matrix

    NASA Astrophysics Data System (ADS)

    Pastor, Franck; Pastor, Joseph; Kondo, Djimedo

    2012-03-01

    Recent theoretical studies in the literature are concerned with hollow sphere or spheroid (confocal) problems with an orthotropic Hill-type matrix. They have been developed in the framework of the kinematical approach of limit analysis by using very simple trial velocity fields. The present Note provides, through numerical upper and lower bounds, a rigorous assessment of the approximate criteria derived in these theoretical works. To this end, existing static 3D codes for a von Mises matrix have been easily extended to the orthotropic case. Conversely, instead of a non-obvious extension of the existing kinematic codes, a new original mixed approach has been elaborated on the basis of the plane strain structure formulation earlier developed by F. Pastor (2007). Indeed, such a formulation does not need the expressions of the unit dissipated powers. Interestingly, it delivers a numerical code that is better conditioned and notably more rapid than the previous one, while preserving the rigorous upper bound character of the corresponding numerical results. The efficiency of the whole approach is first demonstrated through comparisons of the results to the analytical upper bounds of Benzerga and Besson (2001) or Monchiet et al. (2008) in the case of spherical voids in the Hill matrix. Moreover, we provide upper and lower bound results for the hollow spheroid with the Hill matrix, which are compared to those of Monchiet et al. (2008).

  1. Spectroscopic analysis of Cepheid variables with 2D radiation-hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Vasilyev, Valeriy

    2018-06-01

    The analysis of the chemical enrichment history of dwarf galaxies allows constraints to be derived on their formation and evolution. In this context, Cepheids play a very important role, as these periodically variable stars provide a means to obtain accurate distances. In addition, the chemical composition of Cepheids can provide a strong constraint on the chemical evolution of the system. Standard spectroscopic analysis of Cepheids is based on using one-dimensional (1D) hydrostatic model atmospheres, with convection parametrised using the mixing-length theory. However, this quasi-static approach has not been theoretically validated. In my talk, I will discuss the validity of the quasi-static approximation in spectroscopy of short-period Cepheids. I will show the results obtained using a 2D time-dependent envelope model of a pulsating star computed with the radiation-hydrodynamics code CO5BOLD. I will then describe the impact of the new models on the spectroscopic diagnostics of the effective temperature, surface gravity, microturbulent velocity, and metallicity. One of the interesting findings of this work is that 1D model atmospheres provide unbiased estimates of stellar parameters and abundances of Cepheid variables for certain phases of their pulsations. Convective inhomogeneities, however, also introduce biases. I will then discuss how these results can be used in a wider parameter space of pulsating stars and present an outlook for future studies.

  2. Combining Static Analysis and Model Checking for Software Analysis

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2003-01-01

    We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which point the partial order information is safe and the whole state space is explored.
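
    Schematically, the alternation described above can be written as a loop that exchanges facts between the two analyses until nothing changes. In the sketch below both analyses are stubbed with hard-coded facts purely to show the control structure and the fixed-point test; it is not the authors' tool or any real alias or partial-order computation.

      def static_analysis(aliases: frozenset) -> frozenset:
          """Stub: compute partial-order info, refined by alias facts from the model checker."""
          base = frozenset({"independent(t1.x, t2.y)"})
          extra = {"independent(t1.z, t2.z)"} if "no_alias(p, q)" in aliases else frozenset()
          return base | extra

      def model_check(partial_order: frozenset) -> frozenset:
          """Stub: explore the (reduced) state space and report alias facts found on the way."""
          return frozenset({"no_alias(p, q)"}) if partial_order else frozenset()

      # Alternate the two analyses until a fixed point is reached.
      aliases, partial_order = frozenset(), frozenset()
      while True:
          new_partial_order = static_analysis(aliases)
          new_aliases = model_check(new_partial_order)
          if (new_partial_order, new_aliases) == (partial_order, aliases):
              break
          partial_order, aliases = new_partial_order, new_aliases

      print("fixed point:", sorted(partial_order), sorted(aliases))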

  3. The Pan-STARRS PS1 Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Magnier, E.

    The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It also is responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and to occasionally use the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented from the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.

  4. Towards a Certified Lightweight Array Bound Checker for Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pichardie, David

    2009-01-01

    Dynamic array bound checks are crucial elements for the security of a Java Virtual Machine. These dynamic checks are however expensive, and several static analysis techniques have been proposed to eliminate explicit bounds checks. Such analyses require advanced numerical and symbolic manipulations that 1) penalize bytecode loading or dynamic compilation, and 2) complicate the trusted computing base. Following the Foundational Proof Carrying Code methodology, our goal is to provide a lightweight bytecode verifier for eliminating array bound checks that is both efficient and trustable. In this work, we define a generic relational program analysis for an imperative, stack-oriented bytecode language with procedures, arrays and global variables, and instantiate it with polyhedra as the relational abstract domain. The analysis has automatic inference of loop invariants and method pre-/post-conditions, and efficient checking of analysis results by a simple checker. Invariants, which can be large, can be specialized for proving a safety policy using an automatic pruning technique which reduces their size. The result of the analysis can be checked efficiently by annotating the program with parts of the invariant together with certificates of polyhedral inclusions. The resulting checker is sufficiently simple to be entirely certified within the Coq proof assistant for a simple fragment of the Java bytecode language. During the talk, we will also report on our ongoing effort to scale this approach for the full sequential JVM.
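
    The payoff of such an analysis is the ability to discharge a bound check statically from an inferred loop invariant. The toy sketch below shows the shape of that reasoning with plain intervals rather than polyhedra, and with the program reduced to its loop bounds; the bytecode front end, invariant inference, certificates, and Coq-certified checker of the paper are far beyond this illustration.

      def check_loop_accesses(array_length: int, start: int, stop: int) -> bool:
          """Prove statically that a[i] is in bounds for every i in range(start, stop)."""
          if start >= stop:                              # empty loop: trivially safe
              return True
          index_interval = (start, stop - 1)             # invariant: start <= i <= stop - 1
          return 0 <= index_interval[0] and index_interval[1] < array_length

      # for i in range(0, n): use a[i]  -- safe whenever the array has length n
      print(check_loop_accesses(array_length=10, start=0, stop=10))   # True: check removable
      print(check_loop_accesses(array_length=10, start=0, stop=11))   # False: keep the check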

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamek, Julian; Daverio, David; Durrer, Ruth

    We present a new N-body code, gevolution, for the evolution of large scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation, which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large scale structure in a Universe with massive neutrinos where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models and going beyond the usually adopted quasi-static approximation. Our code is publicly available.

  6. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of such things as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
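
    The contrast between a global polynomial fit and local panel interpolation is easy to reproduce on synthetic data. The sketch below fits an invented chordwise pressure distribution, containing a sharp suction-peak-like feature, with a fixed-degree Chebyshev series and with piecewise-linear interpolation, and reports the worst-case error of each; it is only an illustration of the representation issue, not the wing model or data of the paper.

      import numpy as np
      from numpy.polynomial import chebyshev as cheb

      # Invented chordwise pressure distribution with a sharp, suction-peak-like feature.
      x = np.linspace(-1.0, 1.0, 41)
      cp = -1.2 * np.exp(-30.0 * (x + 0.6) ** 2) + 0.3 * x

      # Global Chebyshev fit of fixed degree versus local piecewise-linear (panel) interpolation.
      coefs = cheb.chebfit(x, cp, deg=8)
      x_fine = np.linspace(-1.0, 1.0, 401)
      cp_cheb = cheb.chebval(x_fine, coefs)
      cp_panel = np.interp(x_fine, x, cp)

      cp_true = -1.2 * np.exp(-30.0 * (x_fine + 0.6) ** 2) + 0.3 * x_fine
      print(f"max |error|, degree-8 Chebyshev fit : {np.max(np.abs(cp_cheb - cp_true)):.3f}")
      print(f"max |error|, piecewise-linear panels: {np.max(np.abs(cp_panel - cp_true)):.3f}")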

  7. Multiple Antenna Implementation System (MAntIS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, M.D.; Batchelor, D.B.; Jaeger, E.F.

    1993-01-01

    The MAntIS code was developed as an aid to the design of radio frequency (RF) antennas for fusion applications. The code solves for the electromagnetic fields in three dimensions near the antenna structure with a realistic plasma load. Fourier analysis is used in the two dimensions that are tangential to the plasma surface and backwall. The third dimension is handled analytically in a vacuum region with a general impedance match at the plasma-vacuum interface. The impedance tensor is calculated for a slab plasma using the ORION-1D code with all three electric field components included and warm plasma corrections. The code permits the modeling of complicated antenna structures by superposing currents that flow on the surfaces of rectangular parallelepipeds. Specified current elements have feeders that continuously connect the current flowing from the ends of the strap to the feeders. The elements may have an arbitrary orientation with respect to the static magnetic field. Currents are permitted to vary along the length of the current strap and feeders. Parameters that describe this current variation can be adjusted to approximately satisfy boundary conditions on the current elements. The methods used in MAntIS and results for a primary loop antenna design are presented.

  8. Static Enforcement of Timing Policies Using Code Certification

    DTIC Science & Technology

    2006-08-07

    Only report front matter and table-of-contents fragments are available for this record; the listed contents include Lilt examples for recursive Fibonacci, iterative Fibonacci, and list reversal.

  9. Hypersonic and Unsteady Flow Science Issues for Explosively Formed Penetrators

    DTIC Science & Technology

    2006-08-01

    Only abstract fragments are available for this record: an initial assessment of flow chemistry and an initial stability analysis were completed for penetrators undergoing real-time dynamic deformation; static boundary conditions cannot be used in the CFD codes for such problems; and interfacing is one approach to coupling with hydrocodes.

  10. ROOT.NET: Using ROOT from .NET languages like C# and F#

    NASA Astrophysics Data System (ADS)

    Watts, G.

    2012-12-01

    ROOT.NET provides an interface between Microsoft's Common Language Runtime (CLR) and .NET technology and the ubiquitous particle physics analysis tool, ROOT. ROOT.NET automatically generates a series of efficient wrappers around the ROOT API. Unlike pyROOT, these wrappers are statically typed and so are highly efficient as compared to the Python wrappers. The connection to .NET means that one gains access to the full series of languages developed for the CLR including functional languages like F# (based on OCaml). Many features that make ROOT objects work well in the .NET world are added (properties, IEnumerable interface, LINQ compatibility, etc.). Dynamic languages based on the CLR can be used as well, of course (Python, for example). Additionally it is now possible to access ROOT objects that are unknown to the translation tool. This poster will describe the techniques used to effect this translation, along with performance comparisons, and examples. All described source code is posted on the open source site CodePlex.

  11. Comparisons of a Three-Dimensional, Full Navier Stokes Computer Model with High Mach Number Combuster Test Data

    NASA Technical Reports Server (NTRS)

    Watkins, William B.

    1990-01-01

    Comparisons between scramjet combustor data and a three-dimensional full Navier-Stokes calculation have been made to verify and substantiate computational fluid dynamics (CFD) codes and application procedures. High Mach number scramjet combustor development will rely heavily on CFD applications to provide wind tunnel-equivalent data of quality sufficient to design, build, and fly hypersonic aircraft. Therefore, detailed comparisons between CFD results and test data are imperative. An experimental case is presented for which combustor wall static pressures were measured and flow-field interferograms were obtained. A computer model of the experiment was constructed, and counterpart parameters are compared with experiment. The experiment involved a subscale combustor designed and fabricated for the National Aero-Space Plane Program and tested in the Calspan Corporation 96-inch hypersonic shock tunnel. The combustor inlet ramp was inclined at a 20-degree angle to the shock tunnel nozzle axis, and the resulting combustor entrance flow conditions simulated freestream M=10. The combustor body and cowl walls were instrumented with static pressure transducers, and the combustor lateral walls contained windows through which flowfield holographic interferograms were obtained. The CFD calculation involved a three-dimensional time-averaged full Navier-Stokes code applied to the axial flow segment containing fuel injection and combustion. The full Navier-Stokes approach allowed for mixed supersonic and subsonic flow, downstream-upstream communication in subsonic flow regions, and effects of adverse pressure gradients. The code included hydrogen-air chemistry in the combustor segment, which begins near fuel injection and continues through the combustor exhaust. The combustor ramp and inlet segments on the combustor lateral centerline were modelled as two-dimensional. Comparisons shown include calculated versus measured wall static pressures as functions of the axial flow coordinate, and calculated path-averaged density contours versus a holographic interferogram.

  12. An Analysis of Ripple and Error Fields Induced by a Blanket in the CFETR

    NASA Astrophysics Data System (ADS)

    Yu, Guanying; Liu, Xufeng; Liu, Songlin

    2016-10-01

    The Chinese Fusion Engineering Tokamak Reactor (CFETR) is an important intermediate device between ITER and DEMO. The Water Cooled Ceramic Breeder (WCCB) blanket, whose structural material is mainly Reduced Activation Ferritic/Martensitic (RAFM) steel, is one of the candidate conceptual blanket designs. The ripple and error fields induced by the RAFM steel in the WCCB blanket are evaluated with static magnetic analysis in the ANSYS code. A significant additional magnetic field is produced by the blanket, and it leads to an increased ripple field. The maximum ripple along the separatrix line reaches 0.53%, which is higher than the acceptable design value of 0.5%. In addition, when one blanket module is taken out for heating purposes, the resulting error field is calculated to be well outside the requirement. supported by National Natural Science Foundation of China (No. 11175207) and the National Magnetic Confinement Fusion Program of China (No. 2013GB108004)

  13. Comparison of fundamental natural period of masonry and reinforced concrete buildings retrieved from experimental campaigns performed in Italy, Greece and Spain

    NASA Astrophysics Data System (ADS)

    Nigro, Antonella; Ponzo, Felice C.; Ditommaso, Rocco; Auletta, Gianluca; Iacovino, Chiara; Nigro, Domenico S.; Soupios, Pantelis; García-Fernández, Mariano; Jimenez, Maria-Jose

    2017-04-01

    The aim of this study is the experimental estimation of the dynamic characteristics of existing buildings and the comparison of the related fundamental natural periods of masonry and reinforced concrete buildings located in Basilicata (Italy), Madrid (Spain) and Crete (Greece). Several experimental campaigns on different kinds of structures all over the world have been performed in recent years with the aim of proposing simplified relationships to evaluate the fundamental period of buildings. Most of the formulas retrieved from experimental analyses provide vibration periods smaller than those suggested by the Italian Seismic Code (NTC2008) and the European Seismic Code (EC8). It is known that the fundamental period of a structure plays a key role in the correct estimation of the spectral acceleration for seismic static analyses and in detecting possible resonance phenomena with the foundation soil. Usually, simplified approaches dictate the use of safety factors greater than those related to in-depth dynamic linear and nonlinear analyses, with the aim of covering any unexpected uncertainties. The fundamental period calculated with the simplified formula given by both NTC2008 and EC8 is higher than the fundamental period measured on the investigated structures in Italy, Spain and Greece. The consequence is that the spectral acceleration adopted in the seismic static analysis may be significantly different from the real spectral acceleration. This approach could produce a decrease in the safety factors obtained using linear seismic static analyses. Based on numerical and experimental results, and in order to confirm the results proposed in this work, the authors suggest increasing the number of numerical and experimental tests, considering also the effects of non-structural components and soil during small, medium and strong motion earthquakes. Acknowledgements This study was partially funded by the Italian Department of Civil Protection within the project DPC-RELUIS 2016 - RS4 ''Seismic observatory of structures and health monitoring'' and by the "Centre of Integrated Geomorphology for the Mediterranean Area - CGIAM" within the Framework Agreement with the University of Basilicata "Study, Research and Experimentation in the Field of Analysis and Monitoring of Seismic Vulnerability of Strategic and Relevant Buildings for the purposes of Civil Protection and Development of Innovative Strategies of Seismic Reinforcement".

  14. Closed-form Static Analysis with Inertia Relief and Displacement-Dependent Loads Using a MSC/NASTRAN DMAP Alter

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.

    1995-01-01

    Solving for the displacements of free-free coupled systems acted upon by static loads is commonly performed throughout the aerospace industry. Many times, these problems are solved using static analysis with inertia relief. This solution technique allows for a free-free static analysis by balancing the applied loads with inertia loads generated by the applied loads. For some engineering applications, the displacements of the free-free coupled system induce additional static loads. Hence, the applied loads are equal to the original loads plus displacement-dependent loads. Solving for the final displacements of such systems is commonly performed using iterative solution techniques. Unfortunately, these techniques can be time-consuming and labor-intensive. Since the coupled system equations for free-free systems with displacement-dependent loads can be written in closed-form, it is advantageous to solve for the displacements in this manner. Implementing closed-form equations in static analysis with inertia relief is analogous to implementing transfer functions in dynamic analysis. Using a MSC/NASTRAN DMAP Alter, displacement-dependent loads have been included in static analysis with inertia relief. Such an Alter has been used successfully to solve efficiently a common aerospace problem typically solved using an iterative technique.
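
    A simplified schematic of the closed-form idea (not the DMAP Alter itself): with inertia relief the applied load is first balanced by rigid-body inertia loads, and if the displacement-dependent part of the load is assumed linear in the displacements, say A u, the balanced system can be solved directly instead of iteratively,

    $$ a_r = \left(\Phi_r^{T} M \Phi_r\right)^{-1}\Phi_r^{T} P, \qquad \bar{P} = P - M\,\Phi_r\,a_r, \qquad \left(K - A\right)u = \bar{P}, $$

    where $P$ is the applied load, $M$ the mass matrix, $K$ the stiffness matrix, $\Phi_r$ the rigid-body modes, and $A$ a load-dependence matrix introduced here only for illustration.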

  15. An Investigation of the Behavior of Vertical Piles in Cohesive Soils Subjected to Repetitive Lateral Loads.

    DTIC Science & Technology

    1988-02-01

    Abstract not available; the extracted text contains only fragments of a table of static and cyclic lateral pile load test results and report documentation-page boilerplate.

  16. Validation and Verification of LADEE Models and Software

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen

    2013-01-01

    The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a Model-Based Software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight software requirements are prototyped and refined using the simulated models. After the model is shown to work as desired in this simulation framework, C-code software is automatically generated from the models. The generated software is then tested in real-time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational science models are used to characterize the spacecraft simulation. A lightweight set of formal methods analysis, static analysis, formal inspection, and code coverage analyses is applied to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.

  17. Demonstration of Vibrational Braille Code Display Using Large Displacement Micro-Electro-Mechanical Systems Actuators

    NASA Astrophysics Data System (ADS)

    Watanabe, Junpei; Ishikawa, Hiroaki; Arouette, Xavier; Matsumoto, Yasuaki; Miki, Norihisa

    2012-06-01

    In this paper, we present a vibrational Braille code display with large-displacement micro-electro-mechanical systems (MEMS) actuator arrays. Tactile receptors are more sensitive to vibrational stimuli than to static ones. Therefore, when each cell of the Braille code vibrates at optimal frequencies, subjects can recognize the codes more efficiently. We fabricated a vibrational Braille code display that used actuators consisting of piezoelectric actuators and a hydraulic displacement amplification mechanism (HDAM) as cells. The HDAM that encapsulated incompressible liquids in microchambers with two flexible polymer membranes could amplify the displacement of the MEMS actuator. We investigated the voltage required for subjects to recognize Braille codes when each cell, i.e., the large-displacement MEMS actuator, vibrated at various frequencies. Lower voltages were required at vibration frequencies higher than 50 Hz than at vibration frequencies lower than 50 Hz, which verified that the proposed vibrational Braille code display is efficient by successfully exploiting the characteristics of human tactile receptors.

  18. Pythran: enabling static optimization of scientific Python programs

    NASA Astrophysics Data System (ADS)

    Guelton, Serge; Brunet, Pierrick; Amini, Mehdi; Merlini, Adrien; Corbillon, Xavier; Raynaud, Alan

    2015-01-01

    Pythran is an open source static compiler that turns modules written in a subset of the Python language into native ones. Assuming that scientific modules do not rely much on the dynamic features of the language, it trades them for powerful, possibly inter-procedural, optimizations. These include detection of pure functions, temporary allocation removal, constant folding, Numpy ufunc fusion and parallelization, explicit thread-level parallelism through OpenMP annotations, false variable polymorphism pruning, and automatic vector instruction generation such as AVX or SSE. In addition to these compilation steps, Pythran provides a C++ runtime library that leverages the C++ STL to provide generic containers, and the Numeric Template Toolbox for Numpy support. It takes advantage of modern C++11 features such as variadic templates, type inference, move semantics and perfect forwarding, as well as classical idioms such as expression templates. Unlike the Cython approach, Pythran input code remains compatible with the Python interpreter. Output code is generally as efficient as the annotated Cython equivalent, if not more so, but without the loss of backward compatibility.
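
    As a small illustration of the compatibility point (the kernel and its signature are made-up examples, not from the paper), a pure-NumPy function can be handed to Pythran unchanged, with only an export comment added:

```python
#pythran export pairwise_dist(float64[:,:], float64[:,:])
import numpy as np

def pairwise_dist(a, b):
    """Euclidean distances between two point sets; plain NumPy, still runs in CPython."""
    return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1))
```

    Compiling the file with the pythran command produces a native extension module, while the same source keeps working under the stock interpreter.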

  19. Stability analysis of a reinforced carbon carbon shell

    NASA Technical Reports Server (NTRS)

    Agan, W. E.; Jordan, B. M.

    1977-01-01

    This paper presents the development of a stability analysis for the nose cap of the NASA Space Shuttle Orbiter. Stability is evaluated by the differential stiffness analysis of the NASTRAN finite-element computer code, addressing those nonstandard characteristics in the nose cap such as nonuniform curvature, asymmetrical and nonuniform loads, support fixity, and various combinations of membrane and bending stresses. A full-sized nose cap, thinner than production, was statically tested and stability analyzed. The failing load level correlated to within 30%. The region and mode of buckling that occurred during test was accurately predicted by analysis. The criterion for predicting instability is based on the behavior of the nonlinear deflections. The deflections are nonlinear elastic in that the stresses are well within the elastic range of the material, but the geometry-load relationship produces nonlinear deflections. The load-deflection relationship is well defined by differential stiffness analysis up to the zero-slope portion of the curve, the point of neutral stability or where the shell 'snaps through' just prior to general instability.

  20. Architectural Visualization of C/C++ Source Code for Program Comprehension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panas, T; Epperly, T W; Quinlan, D

    2006-09-01

    Structural and behavioral visualization of large-scale legacy systems to aid program comprehension is still a major challenge. The challenge is even greater when applications are implemented in flexible and expressive languages such as C and C++. In this paper, we consider visualization of static and dynamic aspects of large-scale scientific C/C++ applications. For our investigation, we reuse and integrate specialized analysis and visualization tools. Furthermore, we present a novel layout algorithm that permits a compressive architectural view of a large-scale software system. Our layout is unique in that it allows traditional program visualizations, i.e., graph structures, to be seen in relation to the application's file structure.

  1. A numerical study of the thermal stability of low-lying coronal loops

    NASA Technical Reports Server (NTRS)

    Klimchuk, J. A.; Antiochos, S. K.; Mariska, J. T.

    1986-01-01

    The nonlinear evolution of loops that are subjected to a variety of small but finite perturbations was studied. Only the low-lying loops are considered. The analysis was performed numerically using a one-dimensional hydrodynamical model developed at the Naval Research Laboratory. The computer codes solve the time-dependent equations for mass, momentum, and energy transport. The primary interest is the active region filaments, hence a geometry appropriate to those structures was considered. The static solutions were subjected to a moderate sized perturbation and allowed to evolve. The results suggest that both hot and cool loops of the geometry considered are thermally stable against amplitude perturbations of all kinds.

  2. Cross-view gait recognition using joint Bayesian

    NASA Astrophysics Data System (ADS)

    Li, Chao; Sun, Shouqian; Chen, Xiaoyu; Min, Xin

    2017-07-01

    Human gait, as a soft biometric, helps to recognize people by the way they walk. To further improve the recognition performance under cross-view conditions, we propose Joint Bayesian to model the view variance. We evaluated our proposed method on the largest population (OULP) dataset, which makes our results statistically reliable. As a result, we confirmed that our proposed method significantly outperformed state-of-the-art approaches for both identification and verification tasks. Finally, a sensitivity analysis on the number of training subjects was conducted; we found that Joint Bayesian could achieve competitive results even with a small subset of training subjects (100 subjects). For further comparison, experimental results, learning models, and test codes are available.

  3. Shock and Static Compression of Nitrobenzene

    NASA Astrophysics Data System (ADS)

    Kozu, Naoshi; Arai, Mitsuru; Tamura, Masamitsu; Fujihisa, Hiroshi; Aoki, Katsutoshi; Yoshida, Masatake

    2000-08-01

    The Hugoniot and the static compression curve (isotherm) were investigated using explosive plane wave generators and diamond anvil cells, respectively. The Hugoniot obtained from the shock experiments is represented by two linear segments: Us = 2.52 + 1.23 up (0.8

  4. Simulations to study the static polarization limit for RHIC lattice

    NASA Astrophysics Data System (ADS)

    Duan, Zhe; Qin, Qing

    2016-01-01

    A study of spin dynamics based on simulations with the Polymorphic Tracking Code (PTC) is reported, exploring the dependence of the static polarization limit on various beam parameters and lattice settings for a practical RHIC lattice. It is shown that the behavior of the static polarization limit is dominantly affected by the vertical motion, while the effect of beam-beam interaction is small. In addition, the “nonresonant beam polarization” observed and studied in the lattice-independent model is also observed in this lattice-dependent model. This simulation study therefore gives insight into the polarization evolution at fixed beam energies that is not available from simple spin tracking. Supported by the U.S. Department of Energy (DE-AC02-98CH10886), Hundred-Talent Program (Chinese Academy of Sciences), and National Natural Science Foundation of China (11105164)

  5. Using transonic small disturbance theory for predicting the aeroelastic stability of a flexible wind-tunnel model

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Bennett, Robert M.

    1990-01-01

    The CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code, developed at the NASA - Langley Research Center, is applied to the Active Flexible Wing (AFW) wind tunnel model for prediction of the model's transonic aeroelastic behavior. Static aeroelastic solutions using CAP-TSD are computed. Dynamic (flutter) analyses are then performed as perturbations about the static aeroelastic deformations of the AFW. The accuracy of the static aeroelastic procedure is investigated by comparing analytical results to those from previous AFW wind tunnel experiments. Dynamic results are presented in the form of root loci at different Mach numbers for a heavy gas and air. The resultant flutter boundaries for both gases are also presented. The effects of viscous damping and angle-of-attack, on the flutter boundary in air, are presented as well.

  6. A Comprehensive Structural Dynamic Analysis Approach for Multi Mission Earth Entry Vehicle (MMEEV) Development

    NASA Technical Reports Server (NTRS)

    Perino, Scott; Bayandor, Javid; Siddens, Aaron

    2012-01-01

    The anticipated NASA Mars Sample Return Mission (MSR) requires a simple and reliable method by which to return collected Martian samples to Earth for scientific analysis. The Multi-Mission Earth Entry Vehicle (MMEEV) is NASA's proposed solution to this MSR requirement. Key aspects of the MMEEV are its reliable and passive operation, energy-absorbing foam-composite structure, and modular impact sphere (IS) design. To aid in the development of an EEV design that can be modified for various mission requirements, two fully parametric finite element models were developed. The first model was developed in an explicit finite element code and was designed to evaluate the impact response of the vehicle and payload during the final stage of the vehicle's return to Earth. The second model was developed in an explicit code and was designed to evaluate the static and dynamic structural response of the vehicle during launch and reentry. In contrast to most other FE models, which are built through a Graphical User Interface (GUI) pre-processor, the current model was developed using a coding technique that allows the analyst to quickly change nearly all aspects of the model, including geometric dimensions, material properties, load and boundary conditions, mesh properties, and analysis controls. Using the developed design tool, a full range of proposed designs can quickly be analyzed numerically, and thus the design trade space for the EEV can be fully understood. An engineer can then quickly reach the best design for a specific mission and also adapt and optimize the general design for different missions.

  7. Subsonic Performance of Ejector Systems

    NASA Astrophysics Data System (ADS)

    Weil, Samuel

    Combined cycle engines that pair scramjets with turbojets or rockets can provide efficient hypersonic flight. Ejectors have the potential to increase the thrust and efficiency of combined cycle engines near static conditions. A computer code was developed to support the design of a small-scale, turbine-based combined cycle demonstrator with an ejector, built around a commercially available turbojet engine. This code was used to analyze the performance of an ejector system built around a micro-turbojet. With the use of a simple ejector, net thrust increases as large as 20% over the base engine were predicted. Additionally, the specific fuel consumption was lowered by 10%. Increasing the secondary-to-primary area ratio of the ejector leads to significant improvements in static thrust, specific fuel consumption (SFC), and propulsive efficiency. Further ejector performance improvements can be achieved by using a diffuser. Ejector performance drops off rapidly with increasing Mach number. The ejector has lower thrust and higher SFC than the turbojet core at Mach numbers above 0.2. When the nozzle chokes, a significant drop in ejector performance is seen. When a diffuser is used, higher Mach numbers lead to choking in the mixer and a shock in the nozzle, causing a significant decrease in ejector performance. Evaluation of different turbojets shows that ejector performance depends significantly on the properties of the turbojet. Static thrust and SFC improvements can be achieved with increasing ejector area for all engines, but the size of the increase and the change in performance at higher Mach numbers depend heavily on the turbojet. The use of an ejector in a turbine-based combined cycle configuration also increases performance at static conditions, with a thrust increase of 5% and an SFC decrease of 5% for the tested configuration.

  8. Software Certification - Coding, Code, and Coders

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  9. Minimal Increase Network Coding for Dynamic Networks.

    PubMed

    Zhang, Guoyin; Fan, Xu; Wu, Yanxia

    2016-01-01

    Because of the mobility, computing power and changeable topology of dynamic networks, it is difficult for random linear network coding (RLNC) designed for static networks to satisfy the requirements of dynamic networks. To alleviate this problem, a minimal increase network coding (MINC) algorithm is proposed. By identifying the nonzero elements of an encoding vector, it selects blocks to be encoded on the basis of the relationship between the nonzero elements, which controls changes in the degrees of the blocks; the encoding time in a dynamic network is thereby shortened. Simulation results show that, compared with existing encoding algorithms, the MINC algorithm provides reduced computational complexity of encoding and an increased probability of delivery.
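
    For context, a toy random linear network coding encoder over GF(2) (XOR arithmetic); this sketches plain RLNC for comparison, not the MINC block-selection heuristic described above.

```python
import random

def encode(blocks):
    """Return one coded packet: (binary coefficient vector, XOR of the selected blocks)."""
    coeffs = [random.randint(0, 1) for _ in blocks]
    if not any(coeffs):
        coeffs[0] = 1                              # avoid the useless all-zero combination
    payload = bytes(len(blocks[0]))
    for c, b in zip(coeffs, blocks):
        if c:
            payload = bytes(x ^ y for x, y in zip(payload, b))
    return coeffs, payload

# Example: three equal-length source blocks produce one coded packet.
print(encode([b"abcd", b"efgh", b"ijkl"]))
```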

  10. Minimal Increase Network Coding for Dynamic Networks

    PubMed Central

    Wu, Yanxia

    2016-01-01

    Because of the mobility, computing power and changeable topology of dynamic networks, it is difficult for random linear network coding (RLNC) designed for static networks to satisfy the requirements of dynamic networks. To alleviate this problem, a minimal increase network coding (MINC) algorithm is proposed. By identifying the nonzero elements of an encoding vector, it selects blocks to be encoded on the basis of the relationship between the nonzero elements, which controls changes in the degrees of the blocks; the encoding time in a dynamic network is thereby shortened. Simulation results show that, compared with existing encoding algorithms, the MINC algorithm provides reduced computational complexity of encoding and an increased probability of delivery. PMID:26867211

  11. MSC/NASTRAN DMAP Alter Used for Closed-Form Static Analysis With Inertia Relief and Displacement-Dependent Loads

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Solving for the displacements of free-free coupled systems acted upon by static loads is a common task in the aerospace industry. Often, these problems are solved by static analysis with inertia relief. This technique allows for a free-free static analysis by balancing the applied loads with the inertia loads generated by the applied loads. For some engineering applications, the displacements of the free-free coupled system induce additional static loads. Hence, the applied loads are equal to the original loads plus the displacement-dependent loads. A launch vehicle being acted upon by an aerodynamic loading can have such applied loads. The final displacements of such systems are commonly determined with iterative solution techniques. Unfortunately, these techniques can be time consuming and labor intensive. Because the coupled system equations for free-free systems with displacement-dependent loads can be written in closed form, it is advantageous to solve for the displacements in this manner. Implementing closed-form equations in static analysis with inertia relief is analogous to implementing transfer functions in dynamic analysis. An MSC/NASTRAN (MacNeal-Schwendler Corporation/NASA Structural Analysis) DMAP (Direct Matrix Abstraction Program) Alter was used to include displacement-dependent loads in static analysis with inertia relief. It efficiently solved a common aerospace problem that typically has been solved with an iterative technique.

  12. Ethics in Psychotherapy and Counseling: A Practical Guide. Second Edition.

    ERIC Educational Resources Information Center

    Pope, Kenneth S.; Vasquez, Melba J. T.

    Although they may be reflected in professional guidelines, formal standards, or law, ethics are not static codes. They are an active process by which the individual therapist or counselor struggles with the sometimes bewildering, always unique constellation of questions, responsibilities, contexts, and competing demands of helping another person.…

  13. Static analysis techniques for semiautomatic synthesis of message passing software skeletons

    DOE PAGES

    Sottile, Matthew; Dagit, Jason; Zhang, Deli; ...

    2015-06-29

    The design of high-performance computing architectures demands performance analysis of large-scale parallel applications to derive various parameters concerning hardware design and software development. The process of performance analysis and benchmarking an application can be done in several ways with varying degrees of fidelity. One of the most cost-effective ways is to do a coarse-grained study of large-scale parallel applications through the use of program skeletons. The concept of a “program skeleton” that we discuss in this article is an abstracted program that is derived from a larger program where source code that is determined to be irrelevant is removed for the purposes of the skeleton. In this work, we develop a semiautomatic approach for extracting program skeletons based on compiler program analysis. Finally, we demonstrate correctness of our skeleton extraction process by comparing details from communication traces, as well as show the performance speedup of using skeletons by running simulations in the SST/macro simulator.

  14. Designing and maintaining an effective chargemaster.

    PubMed

    Abbey, D C

    2001-03-01

    The chargemaster is the central repository of charges and associated coding information used to develop claims. But this simple description belies the chargemaster's true complexity. The chargemaster's role in the coding process differs from department to department, and not all codes provided on a claim form are necessarily included in the chargemaster, as codes for complex services may need to be developed and reviewed by coding staff. In addition, with the rise of managed care, the chargemaster increasingly is being used to track utilization of supplies and services. To ensure that the chargemaster performs all of its functions effectively, hospitals should appoint a chargemaster coordinator, supported by a chargemaster review team, to oversee the design and maintenance of the chargemaster. Important design issues that should be considered include the principle of "form follows function," static versus dynamic coding, how modifiers should be treated, how charges should be developed, how to incorporate physician fee schedules into the chargemaster, the interface between the chargemaster and cost reports, and how to include statistical information for tracking utilization.

  15. Simulation study of the ionizing front in the critical ionization velocity phenomenon

    NASA Technical Reports Server (NTRS)

    Machida, S.; Goertz, C. K.; Lu, G.

    1988-01-01

    The simulation of the critical ionization velocity for a neutral gas cloud moving across the static magnetic field is presented. A low-beta plasma is studied, using a two and a half-dimensional electrostatic code linked with the Plasma and Neutral Interaction Code (Goertz and Machida, 1987). The physics of the ionizing front and the instabilities which occur there are discussed. Results are presented from four numerical runs designed so that the effects of the charge separation field can be distinguished from the wave heating.

  16. N-MODY: A Code for Collisionless N-body Simulations in Modified Newtonian Dynamics

    NASA Astrophysics Data System (ADS)

    Londrillo, Pasquale; Nipoti, Carlo

    2011-02-01

    N-MODY is a parallel particle-mesh code for collisionless N-body simulations in modified Newtonian dynamics (MOND). N-MODY is based on a numerical potential solver in spherical coordinates that solves the non-linear MOND field equation, and is ideally suited to simulate isolated stellar systems. N-MODY can be used also to compute the MOND potential of arbitrary static density distributions. A few applications of N-MODY indicate that some astrophysically relevant dynamical processes are profoundly different in MOND and in Newtonian gravity with dark matter.
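
    For reference, the non-linear MOND field equation addressed by such a potential solver is usually written in the AQUAL form

    $$ \nabla \cdot \left[ \mu\!\left(\frac{|\nabla\Phi|}{a_0}\right)\nabla\Phi \right] = 4\pi G\,\rho, $$

    where the interpolating function satisfies $\mu(x)\to 1$ for $x\gg 1$ (Newtonian regime) and $\mu(x)\to x$ for $x\ll 1$ (deep-MOND regime); whether N-MODY adopts this exact formulation is not stated in the record above.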

  17. Study of beam optics and beam halo by integrated modeling of negative ion beams from plasma meniscus formation to beam acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyamoto, K.; Okuda, S.; Hatayama, A.

    2013-01-14

    To understand the physical mechanism of the beam halo formation in negative ion beams, a two-dimensional particle-in-cell code for simulating the trajectories of negative ions created via surface production has been developed. The simulation code reproduces a beam halo observed in an actual negative ion beam. The negative ions extracted from the periphery of the plasma meniscus (an electro-static lens in a source plasma) are over-focused in the extractor due to large curvature of the meniscus.

  18. Subsonic Maneuvering Effectiveness of High Performance Aircraft Which Employ Quasi-Static Shape Change Devices

    NASA Technical Reports Server (NTRS)

    Montgomery, Raymond C.; Scott, Michael A.; Weston, Robert P.

    1998-01-01

    This paper represents an initial study on the use of quasi-static shape change devices in aircraft maneuvering. The macroscopic effects and requirements for these devices in flight control are the focus of this study. Groups of devices are postulated to replace the conventional leading-edge flap (LEF) and the all-moving wing tip (AMT) on the tailless LMTAS-ICE (Lockheed Martin Tactical Aircraft Systems - Innovative Control Effectors) configuration. The maximum quasi-static shape changes are 13.8% and 7.7% of the wing section thickness for the LEF and AMT replacement devices, respectively. A Computational Fluid Dynamics (CFD) panel code is used to determine the control effectiveness of groups of these devices. A preliminary design of a wings-leveler autopilot is presented. An initial evaluation at Mach 0.6 and 15,000 ft altitude is made through batch simulation. Results show that small-disturbance stability is achieved; however, an increase in maximum distortion is needed to statically offset five degrees of sideslip. This conclusion applies only to the specific device groups studied, encouraging future research on optimal device placement.

  19. Interaction of two-dimensional transverse jet with a supersonic mainstream

    NASA Technical Reports Server (NTRS)

    Kraemer, G. O.; Tiwari, S. N.

    1983-01-01

    The interaction of a two dimensional sonic jet injected transversely into a confined main flow was studied. The main flow consisted of air at a Mach number of 2.9. The effects of varying the jet parameters on the flow field were examined using surface pressure and composition data. Also, the downstream flow field was examined using static pressure, pitot pressure, and composition profile data. The jet parameters varied were gapwidth, jet static pressure, and injectant species of either helium or nitrogen. The values of the jet parameters used were 0.039, 0.056, and 0.109 cm for the gapwidth and 5, 10, and 20 for the jet to mainstream static pressure ratios. The features of the flow field produced by the mixing and interaction of the jet with the mainstream were related to the jet momentum. The data were used to demonstrate the validity of an existing two dimensional elliptic flow code.

  20. Kameleon Live: An Interactive Cloud Based Analysis and Visualization Platform for Space Weather Researchers

    NASA Astrophysics Data System (ADS)

    Pembroke, A. D.; Colbert, J. A.

    2015-12-01

    The Community Coordinated Modeling Center (CCMC) provides hosting for many of the simulations used by the space weather community of scientists, educators, and forecasters. CCMC users may submit model runs through the Runs on Request system, which produces static visualizations of model output in the browser, while further analysis may be performed off-line via Kameleon, CCMC's cross-language access and interpolation library. Off-line analysis may be suitable for power-users, but storage and coding requirements present a barrier to entry for non-experts. Moreover, a lack of a consistent framework for analysis hinders reproducibility of scientific findings. To that end, we have developed Kameleon Live, a cloud based interactive analysis and visualization platform. Kameleon Live allows users to create scientific studies built around selected runs from the Runs on Request database, perform analysis on those runs, collaborate with other users, and disseminate their findings among the space weather community. In addition to showcasing these novel collaborative analysis features, we invite feedback from CCMC users as we seek to advance and improve on the new platform.

  1. High-Content Optical Codes for Protecting Rapid Diagnostic Tests from Counterfeiting.

    PubMed

    Gökçe, Onur; Mercandetti, Cristina; Delamarche, Emmanuel

    2018-06-19

    Warnings and reports on counterfeit diagnostic devices are released several times a year by regulators and public health agencies. Unfortunately, mishandling, altering, and counterfeiting point-of-care diagnostics (POCDs) and rapid diagnostic tests (RDTs) is lucrative, relatively simple and can lead to devastating consequences. Here, we demonstrate how to implement optical security codes in silicon- and nitrocellulose-based flow paths for device authentication using a smartphone. The codes are created by inkjet spotting inks directly on nitrocellulose or on micropillars. Codes containing up to 32 elements per mm^2 and 8 colors can encode as many as 10^45 combinations. Codes on silicon micropillars can be erased by setting a continuous flow path across the entire array of code elements or, for nitrocellulose, by simply wicking a liquid across the code. Static or labile code elements can further be formed on nitrocellulose to create a hidden code using poly(ethylene glycol) (PEG) or glycerol additives to the inks. More advanced codes having a specific deletion sequence can also be created in silicon microfluidic devices using an array of passive routing nodes, which activate in a particular, programmable sequence. Such codes are simple to fabricate, easy to view, and efficient in coding information; they can be ideally used in combination with information on a package to protect diagnostic devices from counterfeiting.
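
    As a rough plausibility check on the quoted capacity (the element count of roughly 50 is an assumption made here, not a figure from the paper): with 8 colors per element, a code of n independent elements encodes

    $$ 8^{n} \ge 10^{45} \quad\Longleftrightarrow\quad n \ge \frac{45}{\log_{10} 8} \approx 50, $$

    i.e. about 1.5 mm^2 of code area at 32 elements per mm^2.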

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wayman, E. N.; Sclavounos, P. D.; Butterfield, S.

    This article presents a collaborative research program that the Massachusetts Institute of Technology (MIT) and the National Renewable Energy Laboratory (NREL) have undertaken to develop innovative and cost-effective floating and mooring systems for offshore wind turbines in water depths of 10-200 m. Methods for the coupled structural, hydrodynamic, and aerodynamic analysis of floating wind turbine systems are presented in the frequency domain. This analysis was conducted by coupling the aerodynamics and structural dynamics code FAST [4] developed at NREL with the wave load and response simulation code WAMIT (Wave Analysis at MIT) [15] developed at MIT. Analysis tools were developed to consider coupled interactions between the wind turbine and the floating system. These include the gyroscopic loads of the wind turbine rotor on the tower and floater, the aerodynamic damping introduced by the wind turbine rotor, the hydrodynamic damping introduced by wave-body interactions, and the hydrodynamic forces caused by wave excitation. Analyses were conducted for two floater concepts coupled with the NREL 5-MW Offshore Baseline wind turbine in water depths of 10-200 m: the MIT/NREL Shallow Drafted Barge (SDB) and the MIT/NREL Tension Leg Platform (TLP). These concepts were chosen to represent two different methods of achieving stability to identify differences in performance and cost of the different stability methods. The static and dynamic analyses of these structures evaluate the systems' responses to wave excitation at a range of frequencies, the systems' natural frequencies, and the standard deviations of the systems' motions in each degree of freedom in various wind and wave environments. This article explores the effects of coupling the wind turbine with the floating platform, the effects of water depth, and the effects of wind speed on the systems' performance. An economic feasibility analysis of the two concepts was also performed. Key cost components included the material and construction costs of the buoy; material and installation costs of the tethers, mooring lines, and anchor technologies; costs of transporting and installing the system at the chosen site; and the cost of mounting the wind turbine to the platform. The two systems were evaluated based on their static and dynamic performance and the total system installed cost. Both systems demonstrated acceptable motions, and have estimated costs of $1.4-$1.8 million, not including the cost of the wind turbine, the power electronics, or the electrical transmission.

  3. Assessment of current AASHTO LRFD methods for static pile capacity analysis in Rhode Island soils.

    DOT National Transportation Integrated Search

    2013-07-01

    This report presents an assessment of current AASHTO LRFD methods for static pile capacity analysis in Rhode : Island soils. Current static capacity methods and associated resistance factors are based on pile load test data in sands : and clays. Some...

  4. On the Power of Abstract Interpretation

    NASA Technical Reports Server (NTRS)

    Reddy, Uday S.; Kamin, Samuel N.

    1991-01-01

    Increasingly sophisticated applications of static analysis place an increased burden on the reliability of the analysis techniques. Often, the failure of the analysis technique to detect some information may mean that the time or space complexity of the generated code would be altered. Thus, it is important to precisely characterize the power of static analysis techniques. We follow the approach of Sekar et al., who studied the power of strictness analysis techniques. Their result can be summarized by saying 'strictness analysis is perfect up to variations in constants.' In other words, strictness analysis is as good as it could be, short of actually distinguishing between concrete values. We use this approach to characterize a broad class of analysis techniques based on abstract interpretation including, but not limited to, strictness analysis. For the first-order case, we consider abstract interpretations where the abstract domain for data values is totally ordered. This condition is satisfied by Mycroft's strictness analysis, that of Sekar et al., and Wadler's analysis of list-strictness. For such abstract interpretations, we show that the analysis is complete in the sense that, short of actually distinguishing between concrete values with the same abstraction, it gives the best possible information. We further generalize these results to typed lambda calculus with pairs and higher-order functions. Note that products and function spaces over totally ordered domains are not totally ordered. In fact, the notion of completeness used in the first-order case fails if product domains or function spaces are added. We formulate a weaker notion of completeness based on observability of values. Two values (including pairs and functions) are considered indistinguishable if their observable components are indistinguishable. We show that abstract interpretation of typed lambda calculus programs is complete up to this notion of indistinguishability. We use denotationally-oriented arguments instead of the detailed operational arguments used by Sekar et al.; hence, our proofs are much simpler. They should also be useful for future improvements.
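
    A tiny illustration of Mycroft-style strictness analysis over the two-point abstract domain, where 0 means "certainly diverges" and 1 means "may terminate"; the example functions are made up here, not taken from the paper.

```python
# Abstract AND/OR over the two-point domain {0, 1}.
AND, OR = min, max

def f_abs(x, y):               # abstraction of f(x, y) = x + y
    return AND(x, y)

def g_abs(x, y):               # abstraction of g(x, y) = if x > 0 then y else 42
    return AND(x, OR(y, 1))    # the condition must terminate; either branch may be taken

def strict_in_first(f):
    return f(0, 1) == 0        # feeding "divergence" into argument 1 forces divergence

print(strict_in_first(f_abs), strict_in_first(g_abs))   # True True: both strict in x
print(f_abs(1, 0), g_abs(1, 0))                          # 0 vs 1: f is strict in y, g is not
```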

  5. Seismic Safety Of Simple Masonry Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guadagnuolo, Mariateresa; Faella, Giuseppe

    2008-07-08

    Several masonry buildings comply with the rules for simple buildings provided by seismic codes. For these buildings, explicit safety verifications are not compulsory if specific code rules are fulfilled. In fact, it is assumed that their fulfilment ensures a suitable seismic behaviour of buildings and thus adequate safety under earthquakes. Italian and European seismic codes differ in the requirements for simple masonry buildings, mostly concerning the building typology, the building geometry and the acceleration at the site. Obviously, a wide percentage of buildings assumed simple by codes should satisfy the numerical safety verification, so that no confusion or uncertainty arises for designers who must use the codes. This paper aims at evaluating the seismic response of some simple unreinforced masonry buildings that comply with the provisions of the new Italian seismic code. Two-story buildings, having different geometry, are analysed, and results from nonlinear static analyses performed by varying the acceleration at the site are presented and discussed. Indications on the congruence between code rules and the results of numerical analyses performed according to the code itself are supplied and, in this context, the obtained results can provide a contribution to improving the seismic code requirements.

  6. Support for Systematic Code Reviews with the SCRUB Tool

    NASA Technical Reports Server (NTRS)

    Holzmann, Gerald J.

    2010-01-01

    SCRUB is a code review tool that supports both large, team-based software development efforts (e.g., for mission software) as well as individual tasks. The tool was developed at JPL to support a new, streamlined code review process that combines human-generated review reports with program-generated review reports from a customizable range of state-of-the-art source code analyzers. The leading commercial tools include Codesonar, Coverity, and Klocwork, each of which can achieve a reasonably low rate of false positives in the warnings that they generate. The time required to analyze code with these tools can vary greatly. In each case, however, the tools produce results that would be difficult to realize with human code inspections alone. There is little overlap in the results produced by the different analyzers, and each analyzer used generally increases the effectiveness of the overall effort. The SCRUB tool allows all reports to be accessed through a single, uniform interface that facilitates browsing code and reports. Improvements over existing software include significant simplification and leveraging of a range of commercial, static source code analyzers in a single, uniform framework. The tool runs as a small stand-alone application, avoiding the security problems related to tools based on Web browsers. A developer or reviewer, for instance, must have already obtained access rights to a code base before that code can be browsed and reviewed with the SCRUB tool. The tool cannot open any files or folders to which the user does not already have access. This means that the tool does not need to enforce or administer any additional security policies. The analysis results presented through the SCRUB tool's user interface are always computed off-line, given that, especially for larger projects, this computation can take longer than appropriate for interactive tool use. The recommended code review process that is supported by the SCRUB tool consists of three phases: Code Review, Developer Response, and Closeout Resolution. In the Code Review phase, all tool-based analysis reports are generated, and specific comments from expert code reviewers are entered into the SCRUB tool. In the second phase, Developer Response, the developer is asked to respond to each comment and tool report that was produced, either agreeing or disagreeing to provide a fix that addresses the issue that was raised. In the third phase, Closeout Resolution, all disagreements are discussed in a meeting of all parties involved, and a resolution is made for each of them. The first two phases generally take one week each, and the third phase is concluded in a single closeout meeting.

  7. Detection Of Malware Collusion With Static Dependence Analysis On Inter-App Communication

    DTIC Science & Technology

    2016-12-08

    Only report documentation-page fragments are available for this record: a final technical report (Virginia Tech, December 2016) on detecting malware collusion with static dependence analysis on inter-app communication, under contract FA8750-15-2-0076; listed subject terms are malware collusion, inter-app communication, and static dependence analysis.

  8. Advanced electromagnetic methods for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; El-Sharawy, El-Budawy; Hashemi-Yeganeh, Shahrokh; Aberle, James T.; Birtcher, Craig R.

    1991-01-01

    The Advanced Helicopter Electromagnetics is centered on issues that advance technology related to helicopter electromagnetics. Progress was made on three major topics: composite materials; precipitation static corona discharge; and antenna technology. In composite materials, the research has focused on the measurements of their electrical properties, and the modeling of material discontinuities and their effect on the radiation pattern of antennas mounted on or near material surfaces. The electrical properties were used to model antenna performance when mounted on composite materials. Since helicopter platforms include several antenna systems at VHF and UHF bands, measuring techniques are being explored that can be used to measure the properties at these bands. The effort on corona discharge and precipitation static was directed toward the development of a new two dimensional Voltage Finite Difference Time Domain computer program. Results indicate the feasibility of using potentials for simulating electromagnetic problems in the cases where potentials become primary sources. In antenna technology the focus was on Polarization Diverse Conformal Microstrip Antennas, Cavity Backed Slot Antennas, and Varactor Tuned Circular Patch Antennas. Numerical codes were developed for the analysis of two probe fed rectangular and circular microstrip patch antennas fed by resistive and reactive power divider networks.

  9. Nonlinear Pressurization and Modal Analysis Procedure for Dynamic Modeling of Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Smalley, Kurt B.; Tinker, Michael L.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    An introduction and set of guidelines for finite element dynamic modeling of nonrigidized inflatable structures is provided. A two-step approach is presented, involving 1) nonlinear static pressurization of the structure and updating of the stiffness matrix and 2) linear normal modes analysis using the updated stiffness. Advantages of this approach are that it provides physical realism in modeling of pressure stiffening, and it maintains the analytical convenience of a standard linear eigensolution once the stiffness has been modified. Demonstration of the approach is accomplished through the creation and test verification of an inflated cylinder model using a large commercial finite element code. Good frequency and mode shape comparisons are obtained with test data and previous modeling efforts, verifying the accuracy of the technique. Problems encountered in the application of the approach, as well as their solutions, are discussed in detail.
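
    A toy analogue of the two-step approach (not the paper's finite element model): a pressurized membrane is idealized as a taut string whose transverse stiffness comes entirely from the pretension produced in step 1, and step 2 is a standard linear eigensolution with that updated stiffness. All parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import eigh

n, length, rho = 50, 1.0, 0.1            # interior nodes, string length (m), mass/length (kg/m)
h = length / (n + 1)
tension = 200.0                           # step 1 result: static pretension (N), assumed

# Stress (geometric) stiffness of the pretensioned string and a lumped mass matrix.
K = (tension / h) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
M = rho * h * np.eye(n)

lam, _ = eigh(K, M)                       # step 2: normal modes with the updated stiffness
freqs = np.sqrt(lam) / (2 * np.pi)
print(freqs[:3])                          # natural frequencies rise with the pretension
```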

  10. A Huygens Surface Approach to Antenna Implementation in Near-Field Radar Imaging System Simulations

    DTIC Science & Technology

    2015-08-01

    Only abstract fragments are available for this record: the model consists of an ultra-wideband, forward-looking radar imaging system, equipped with a multi-static antenna array and mounted on a ...; a section on Huygens surface implementation details notes that the NAFDTD code implements the excitation waveform as a short, ultra-wideband ...

  11. Turbulent Mixing Chemistry in Disks

    NASA Astrophysics Data System (ADS)

    Semenov, D.; Wiebe, D.

    2006-11-01

    A gas-grain chemical model with surface reactions and 1D/2D turbulent mixing is available for protoplanetary disks and molecular clouds. The current version is based on the updated UMIST'95 database with gas-grain interactions (accretion, desorption, photoevaporation, etc.) and a modified rate equation approach to surface chemistry (see also the abstract for the static chemistry code).

  12. QR Codes in the Library: "It's Not Your Mother's Barcode!"

    ERIC Educational Resources Information Center

    Dobbs, Cheri

    2011-01-01

    Barcode scanning has become more than just fun. Now libraries and businesses are leveraging barcode technology as an innovative tool to market their products and ideas. Developed and popularized in Japan, these Quick Response (QR) or two-dimensional barcodes allow marketers to provide interactive content in an otherwise static environment. In this…

  13. Doubly differential star-16-QAM for fast wavelength switching coherent optical packet transceiver.

    PubMed

    Liu, Fan; Lin, Yi; Walsh, Anthony J; Yu, Yonglin; Barry, Liam P

    2018-04-02

    A coherent optical packet transceiver based on doubly differential star 16-ary quadrature amplitude modulation (DD-star-16-QAM) is presented for spectrally and energy efficient reconfigurable networks. The coding and decoding processes for this new modulation format are presented; simulations and experiments are then performed to investigate the performance of DD-star-16-QAM in static and dynamic scenarios. The static results show that the influence of frequency offset (FO) can be cancelled out by doubly differential (DD) coding and that the correction range is limited only by the electronic bandwidth of the receivers. In the dynamic scenario with a time-varying FO and linewidth, DD-star-16-QAM can overcome the time-varying FO, and the switching time of around 70 ns is determined by the time it takes the dynamic linewidth to reach the requisite level. This format can thus achieve a shorter waiting time after switching tunable lasers than the commonly used square-16-QAM, in which the transmission performance is limited by the frequency transients after the wavelength switch.
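
    A noise-free, phase-only demonstration of why doubly differential coding cancels a constant frequency offset; this toy uses QPSK phases rather than the paper's star-16-QAM constellation, and the offset values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
sym = rng.integers(0, 4, 1000)                     # data symbols 0..3
d = sym * np.pi / 2                                # data phases
a = np.cumsum(d)                                   # first differential accumulation
theta = np.cumsum(a)                               # second differential accumulation
k = np.arange(sym.size)
r = np.exp(1j * (theta + 2 * np.pi * 0.03 * k + 1.0))   # received with FO and carrier phase

y = r[1:] * np.conj(r[:-1])                        # first difference: FO becomes a constant phase
z = y[1:] * np.conj(y[:-1])                        # second difference: that constant cancels
sym_hat = np.round(np.angle(z) / (np.pi / 2)).astype(int) % 4
print(np.array_equal(sym_hat, sym[2:]))            # True: data recovered despite the FO
```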

  14. A foundational methodology for determining system static complexity using notional lunar oxygen production processes

    NASA Astrophysics Data System (ADS)

    Long, Nicholas James

    This thesis serves to develop a preliminary foundational methodology for evaluating the static complexity of future lunar oxygen production systems when extensive information is not yet available about the various systems under consideration. Evaluating static complexity, as part of an overall system complexity analysis, is an important consideration in ultimately selecting a process to be used in a lunar base. When system complexity is higher, there is generally an overall increase in risk, which could impact the safety of astronauts and the economic performance of the mission. To evaluate static complexity in lunar oxygen production, static complexity is simplified and decomposed into its essential components. First, three essential dimensions of static complexity are investigated, including interconnective complexity, strength of connections, and complexity in variety. Then a set of methods is developed with which to separately evaluate each dimension. Q-connectivity analysis is proposed as a means to evaluate interconnective complexity and strength of connections. The law of requisite variety originating from cybernetic theory is suggested to interpret complexity in variety. Secondly, a means to aggregate the results of each analysis is proposed to create a holistic measurement of static complexity using the Simple Multi-Attribute Rating Technique (SMART). Each method of static complexity analysis and the aggregation technique are demonstrated using notional data for four lunar oxygen production processes.
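
    As a purely notional illustration of the final aggregation step (the dimension scores and weights below are placeholders, not values from the thesis), a SMART-style combination can be read as a normalized weighted sum of the three dimension scores:

      def smart_aggregate(scores, weights):
          # Weighted sum of dimension scores; weights are normalized to sum to one.
          total_w = float(sum(weights.values()))
          return sum(scores[k] * weights[k] / total_w for k in scores)

      scores  = {"interconnective": 0.62, "strength": 0.40, "variety": 0.75}   # hypothetical scores
      weights = {"interconnective": 40.0, "strength": 25.0, "variety": 35.0}   # hypothetical weights
      static_complexity_index = smart_aggregate(scores, weights)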

  15. ODECS -- A computer code for the optimal design of S.I. engine control strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arsie, I.; Pianese, C.; Rizzo, G.

    1996-09-01

    The computer code ODECS (Optimal Design of Engine Control Strategies) for the design of Spark Ignition engine control strategies is presented. This code has been developed starting from the authors' activity in this field, drawing on some original contributions on engine stochastic optimization and dynamical models. The code has a modular structure and is composed of a user interface for the definition, execution, and analysis of different computations performed with 4 independent modules. These modules allow the following calculations: (1) definition of the engine mathematical model from steady-state experimental data; (2) engine cycle test trajectory corresponding to a vehicle transient simulation test such as the ECE15 or FTP drive test schedule; (3) evaluation of the optimal engine control maps with a steady-state approach; (4) engine dynamic cycle simulation and optimization of static control maps and/or dynamic compensation strategies, taking into account dynamical effects due to the unsteady fluxes of air and fuel and the influence of combustion chamber wall thermal inertia on fuel consumption and emissions. Moreover, in the last two modules it is possible to account for errors generated by non-deterministic behavior of sensors and actuators and the related influence on global engine performance, and to compute robust strategies that are less sensitive to stochastic effects. In the paper the four modules are described together with significant results corresponding to the simulation and the calculation of optimal control strategies for dynamic transient tests.

  16. The effect of incidence angle on the overall three-dimensional aerodynamic performance of a classical annular airfoil cascade

    NASA Technical Reports Server (NTRS)

    Bergsten, D. E.; Fleeter, S.

    1983-01-01

    To be of quantitative value to the designer and analyst, it is necessary to experimentally verify the flow modeling and the numerics inherent in calculation codes being developed to predict the three dimensional flow through turbomachine blade rows. This experimental verification requires that predicted flow fields be correlated with three dimensional data obtained in experiments which model the fundamental phenomena existing in the flow passages of modern turbomachines. The Purdue Annular Cascade Facility was designed specifically to provide these required three dimensional data. The overall three dimensional aerodynamic performance of an instrumented classical airfoil cascade was determined over a range of incidence angle values. This was accomplished utilizing a fully automated exit flow data acquisition and analysis system. The mean wake data, acquired at two downstream axial locations, were analyzed to determine the effect of incidence angle, the three dimensionality of the cascade exit flow field, and the similarity of the wake profiles. The hub, mean, and tip chordwise airfoil surface static pressure distributions determined at each incidence angle are correlated with predictions from the MERIDL and TSONIC computer codes.

  17. Multidisciplinary design optimization of aircraft wing structures with aeroelastic and aeroservoelastic constraints

    NASA Astrophysics Data System (ADS)

    Jung, Sang-Young

    Design procedures for aircraft wing structures with control surfaces are presented using multidisciplinary design optimization. Several disciplines such as stress analysis, structural vibration, aerodynamics, and controls are considered simultaneously and combined for design optimization. Vibration data and aerodynamic data including those in the transonic regime are calculated by existing codes. Flutter analyses are performed using those data. A flutter suppression method is studied using control laws in the closed-loop flutter equation. For the design optimization, optimization techniques such as approximation, design variable linking, temporary constraint deletion, and optimality criteria are used. Sensitivity derivatives of stresses and displacements for static loads, natural frequency, flutter characteristics, and control characteristics with respect to design variables are calculated for an approximate optimization. The objective function is the structural weight. The design variables are the section properties of the structural elements and the control gain factors. Existing multidisciplinary optimization codes (ASTROS* and MSC/NASTRAN) are used to perform single and multiple constraint optimizations of fully built up finite element wing structures. Three benchmark wing models are developed and/or modified for this purpose. The models are tested extensively.

  18. A generalized one-dimensional computer code for turbomachinery cooling passage flow calculations

    NASA Technical Reports Server (NTRS)

    Kumar, Ganesh N.; Roelke, Richard J.; Meitner, Peter L.

    1989-01-01

    A generalized one-dimensional computer code for analyzing the flow and heat transfer in turbomachinery cooling passages was developed. This code is capable of handling rotating cooling passages with turbulators, 180 degree turns, pin fins, finned passages, by-pass flows, tip cap impingement flows, and flow branching. The code is an extension of a one-dimensional code developed by P. Meitner. In the subject code, correlations for both heat transfer coefficient and pressure loss computations were developed to model each of the above-mentioned types of coolant passages. The code has the capability of independently computing the friction factor and heat transfer coefficient on each side of a rectangular passage. Either the mass flow at the inlet to the channel or the exit plane pressure can be specified. For a specified inlet total temperature, inlet total pressure, and exit static pressure, the code computes the flow rates through the main branch and the subbranches, and the flow through the tip cap for impingement cooling, in addition to computing the coolant pressure, temperature, and heat transfer coefficient distribution in each coolant flow branch. Predictions from the subject code for both nonrotating and rotating passages agree well with experimental data. The code was used to analyze the cooling passage of a research cooled radial rotor.
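
    The correlations actually implemented in the code are not reproduced in the abstract; the snippet below is only a generic example of the segment-level evaluation a one-dimensional cooling-passage solver performs, using the widely published Blasius and Dittus-Boelter correlations rather than the code's own.

      def segment_coefficients(re, pr, k_fluid, d_hyd):
          # Return (friction factor, heat-transfer coefficient) for turbulent duct flow.
          f = 0.316 * re ** -0.25                    # Blasius smooth-duct friction factor
          nu = 0.023 * re ** 0.8 * pr ** 0.4         # Dittus-Boelter Nusselt number (heating)
          h = nu * k_fluid / d_hyd                   # heat-transfer coefficient, W/m^2-K
          return f, h

      # illustrative segment: Re = 5e4, Pr = 0.7, k = 0.045 W/m-K, hydraulic diameter 3 mm
      f_seg, h_seg = segment_coefficients(re=5.0e4, pr=0.7, k_fluid=0.045, d_hyd=0.003)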

  19. A numerical code for a three-dimensional magnetospheric MHD equilibrium model

    NASA Technical Reports Server (NTRS)

    Voigt, G.-H.

    1992-01-01

    Two-dimensional and three-dimensional MHD equilibrium models were begun for Earth's magnetosphere. The original proposal was motivated by realizing that global, purely data-based models of Earth's magnetosphere are inadequate for studying the underlying plasma physical principles according to which the magnetosphere evolves on the quasi-static convection time scale. Complex numerical grid generation schemes were established for a 3-D Poisson solver, and a robust Grad-Shafranov solver was coded for high-beta MHD equilibria. The effects of both the magnetopause geometry and the boundary conditions on the magnetotail current distribution were then calculated.

  20. Estimating Equivalency of Explosives Through A Thermochemical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maienschein, J L

    2002-07-08

    The Cheetah thermochemical computer code provides an accurate method for estimating the TNT equivalency of any explosive, evaluated either with respect to peak pressure or the quasi-static pressure at long time in a confined volume. Cheetah calculates the detonation energy and heat of combustion for virtually any explosive (pure or formulation). Comparing the detonation energy for an explosive with that of TNT allows estimation of the TNT equivalency with respect to peak pressure, while comparison of the heat of combustion allows estimation of TNT equivalency with respect to quasi-static pressure. We discuss the methodology, present results for many explosives, and show comparisons with equivalency data from other sources.
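
    The equivalency arithmetic described above reduces to a ratio of energies per unit mass; the sketch below uses hypothetical energy values, not Cheetah outputs.

      def tnt_equivalency(energy_explosive, energy_tnt):
          # Equivalency = explosive's energy metric divided by the same metric for TNT.
          return energy_explosive / energy_tnt

      # peak-pressure equivalency from detonation energies (kJ/g, hypothetical values)
      peak_equiv = tnt_equivalency(6.2, 4.6)
      # quasi-static equivalency from heats of combustion (kJ/g, hypothetical values)
      quasi_static_equiv = tnt_equivalency(11.0, 14.5)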

  1. Substructure analysis using NICE/SPAR and applications of force to linear and nonlinear structures. [spacecraft masts

    NASA Technical Reports Server (NTRS)

    Razzaq, Zia; Prasad, Venkatesh; Darbhamulla, Siva Prasad; Bhati, Ravinder; Lin, Cai

    1987-01-01

    Parallel computing studies are presented for a variety of structural analysis problems. Included are the substructure planar analysis of rectangular panels with and without a hole, the static analysis of a space mast using NICE/SPAR and FORCE, and the substructure analysis of plane rigid-jointed frames using FORCE. The computations are carried out on the Flex/32 MultiComputer using one to eighteen processors. The NICE/SPAR runstream samples are documented for the panel problem. For the substructure analysis of plane frames, a computer program is developed to demonstrate the effectiveness of a substructuring technique when FORCE is used. Ongoing research activities for an elasto-plastic stability analysis problem using FORCE, and for stability analysis of the focus problem using NICE/SPAR, are briefly summarized. Speedup curves for the panel, mast, and frame problems provide a basic understanding of the effectiveness of the parallel computing procedures utilized or developed, within the domain of the parameters considered. Although the speedup curves obtained exhibit various levels of computational efficiency, they clearly demonstrate the excellent promise which parallel computing holds for structural analysis problems. Source code is given for the elasto-plastic stability problem and the FORCE program.

  2. Visual saliency in MPEG-4 AVC video stream

    NASA Astrophysics Data System (ADS)

    Ammar, M.; Mitrea, M.; Hasnaoui, M.; Le Callet, P.

    2015-03-01

    Visual saliency maps have already proved their efficiency in a large variety of image/video communication applications, ranging from selective compression and channel coding to watermarking. Such saliency maps are generally based on different visual characteristics (like color, intensity, orientation, motion, ...) computed from the pixel representation of the visual content. This paper resumes and extends our previous work devoted to the definition of a saliency map extracted solely from the MPEG-4 AVC stream syntax elements. The MPEG-4 AVC saliency map thus defined is a fusion of static and dynamic maps. The static saliency map is in its turn a combination of intensity, color and orientation feature maps. Regardless of the particular way in which all these elementary maps are computed, the fusion technique allowing their combination plays a critical role in the final result and is the object of the proposed study. A total of 48 fusion formulas (6 for combining static features and, for each of them, 8 to combine static with dynamic features) are investigated. The performance of the obtained maps is evaluated on a public database organized at IRCCyN, by computing two objective metrics: the Kullback-Leibler divergence and the area under curve.
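
    By way of illustration only (this is not claimed to be one of the 48 formulas evaluated in the paper), a typical fusion pipeline normalizes each feature map, combines the static features with a weighted sum, and then fuses static and dynamic maps, here with a pixel-wise maximum:

      import numpy as np

      def normalize(m):
          # Rescale a feature map to [0, 1]; constant maps collapse to zero.
          m = m.astype(float)
          span = m.max() - m.min()
          return (m - m.min()) / span if span > 0 else np.zeros_like(m)

      def fuse(intensity, color, orientation, motion, w=(1/3, 1/3, 1/3)):
          static = w[0]*normalize(intensity) + w[1]*normalize(color) + w[2]*normalize(orientation)
          return np.maximum(normalize(static), normalize(motion))   # static/dynamic fusion

      # four random stand-in feature maps, 64 x 64 pixels
      maps = [np.random.rand(64, 64) for _ in range(4)]
      saliency = fuse(*maps)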

  3. A Comparison of Spectral Element and Finite Difference Methods Using Statically Refined Nonconforming Grids for the MHD Island Coalescence Instability Problem

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.

    2009-04-01

    A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the problem of the MHD island coalescence instability in two dimensions. The island coalescence instability is a fundamental MHD process that can produce sharp current layers and subsequent reconnection and heating in a high-Lundquist-number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Due to the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them while maintaining accuracy. The output of the spectral-element static adaptive refinement simulations is compared with simulations using a finite difference method on the same refinement grids, and both methods are compared with pseudo-spectral simulations on uniform grids as baselines. It is shown that, with statically refined grids roughly scaling linearly with effective resolution, spectral-element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.

  4. Numerical Assessment of Rockbursting.

    DTIC Science & Technology

    1987-05-27

    static equilibrium, nonlinear elasticity, strain-softening material, unstable propagation of pre-existing cracks, and finally surface... structure of LINOS, which is common to most of the large finite element codes, the library of element and material subroutines can be easily expanded... material model subroutines, are tested by comparing finite element results with analytical or numerical results derived for hypo-elastic and

  5. 76 FR 28131 - Federal Motor Vehicle Safety Standards; Motorcycle Helmets

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-13

    ..., this final rule sets a quasi-static load application rate for the helmet retention system; revises the... Analysis and Conclusion e. Quasi-Static Retention Test f. Helmet Conditioning Tolerances g. Other... it as a quasi-static test, instead of a static test. Specifying the application rate will aid...

  6. Computational Fluid Dynamics Analysis Method Developed for Rocket-Based Combined Cycle Engine Inlet

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Renewed interest in hypersonic propulsion systems has led to research programs investigating combined cycle engines that are designed to operate efficiently across the flight regime. The Rocket-Based Combined Cycle Engine is a propulsion system under development at the NASA Lewis Research Center. This engine integrates a high specific impulse, low thrust-to-weight, airbreathing engine with a low-impulse, high thrust-to-weight rocket. From takeoff to Mach 2.5, the engine operates as an air-augmented rocket. At Mach 2.5, the engine becomes a dual-mode ramjet; and beyond Mach 8, the rocket is turned back on. One Rocket-Based Combined Cycle Engine variation known as the "Strut-Jet" concept is being investigated jointly by NASA Lewis, the U.S. Air Force, Gencorp Aerojet, General Applied Science Labs (GASL), and Lockheed Martin Corporation. Work thus far has included wind tunnel experiments and computational fluid dynamics (CFD) investigations with the NPARC code. The CFD method was initiated by modeling the geometry of the Strut-Jet with the GRIDGEN structured grid generator. Grids representing a subscale inlet model and the full-scale demonstrator geometry were constructed. These grids modeled one-half of the symmetric inlet flow path, including the precompression plate, diverter, center duct, side duct, and combustor. After the grid generation, full Navier-Stokes flow simulations were conducted with the NPARC Navier-Stokes code. The Chien low-Reynolds-number k-e turbulence model was employed to simulate the high-speed turbulent flow. Finally, the CFD solutions were postprocessed with a Fortran code. This code provided wall static pressure distributions, pitot pressure distributions, mass flow rates, and internal drag. These results were compared with experimental data from a subscale inlet test for code validation; then they were used to help evaluate the demonstrator engine net thrust.

  7. Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language

    NASA Astrophysics Data System (ADS)

    Heaphy, R. T.; Burke, M. P.; Love, J. T.

    2015-12-01

    Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems have limitations in their ability to integrate with modern software and hardware and to leverage parallel computing, which have left voids in optimization, pre-, and post-processing tools. Advances in technology and in our scientific understanding of environmental processes over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premier tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert the code to a modern, widely accepted, open-source, high-performance computing (HPC) language; and (2) convert model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times while using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications, and improvement of execution times by incorporating an intelligent network change detection tool, are currently underway, and preliminary results will be presented.
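
    A minimal sketch of the speed-up strategy described above: a plain time-stepping loop of the kind found in hydrologic routing, compiled with Numba's just-in-time decorator. The linear-reservoir model here is a hypothetical stand-in, not an HSPF routine.

      import numpy as np
      from numba import njit

      @njit
      def linear_reservoir(inflow, k, dt):
          # Explicit time stepping; Numba compiles this loop to machine code on first call.
          storage = 0.0
          outflow = np.empty(inflow.size)
          for t in range(inflow.size):
              storage += (inflow[t] - k * storage) * dt
              outflow[t] = k * storage
          return outflow

      q = linear_reservoir(np.random.rand(100_000), 0.05, 1.0)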

  8. Hypothesis: the risk of childhood leukemia is related to combinations of power-frequency and static magnetic fields.

    PubMed

    Bowman, J D; Thomas, D C; London, S J; Peters, J M

    1995-01-01

    We present a hypothesis that the risk of childhood leukemia is related to exposure to specific combinations of static and extremely-low-frequency (ELF) magnetic fields. Laboratory data from calcium efflux and diatom mobility experiments were used with the gyromagnetic equation to predict combinations of 60 Hz and static magnetic fields hypothesized to enhance leukemia risk. The laboratory data predicted 19 bands of the static field magnitude with a bandwidth of 9.1 microT that, together with 60 Hz magnetic fields, are expected to have biological activity. We then assessed the association between this exposure metric and childhood leukemia using data from a case-control study in Los Angeles County. ELF and static magnetic fields were measured in the bedrooms of 124 cases determined from a tumor registry and 99 controls drawn from friends and random digit dialing. Among these subjects, 26 cases and 20 controls were exposed to static magnetic fields lying in the predicted bands of biological activity centered at 38.0 microT and 50.6 microT. Although no association was found for childhood leukemia in relation to measured ELF or static magnetic fields alone, an increasing trend of leukemia risk with measured ELF fields was found for subjects within these static field bands (P for trend = 0.041). The odds ratio (OR) was 3.3 [95% confidence interval (CI) = 0.4-30.5] for subjects exposed to static fields within the derived bands and to ELF magnetic field above 0.30 microT (compared to subjects exposed to static fields outside the bands and ELF magnetic fields below 0.07 microT). When the 60 Hz magnetic fields were assessed according to the Wertheimer-Leeper code for wiring configurations, leukemia risks were again greater with the hypothesized exposure conditions (OR = 9.2 for very high current configurations within the static field bands; 95% CI = 1.3-64.6). Although the risk estimates are based on limited magnetic field measurements for a small number of subjects, these findings suggest that the risk of childhood leukemia may be related to the combined effects of the static and ELF magnetic fields. Further tests of the hypothesis are proposed.

  9. Handling the satellite inter-frequency biases in triple-frequency observations

    NASA Astrophysics Data System (ADS)

    Zhao, Lewen; Ye, Shirong; Song, Jia

    2017-04-01

    The new generation of GNSS satellites, including BDS, Galileo, modernized GPS, and GLONASS, transmit navigation data at more frequencies. Multi-frequency signals open new prospects for precise positioning, but the satellite code and phase inter-frequency biases (IFB) induced by the third frequency need to be handled. Satellite code IFB can be corrected using products estimated by different strategies, but the theoretical and numerical compatibility of these methods needs to be proved. Furthermore, a new type of phase IFB, which changes with the relative sun-spacecraft-earth geometry, has been observed. It is necessary to investigate the cause and possible impacts of this time-variant phase IFB (TIFB). Therefore, we present a systematic analysis to illustrate the relation between satellite clocks and phase TIFB, and compare the handling strategies for code and phase IFB in triple-frequency positioning. First, the un-differenced L1/L2 satellite clock corrections considering the hardware delays are derived, and the IFB introduced into the triple-frequency PPP model by the dual-frequency satellite clocks is detailed. The analysis shows that the estimated satellite clocks actually contain the time-variant phase hardware delays, which can be compensated in L1/L2 ionosphere-free combinations. However, the time-variant hardware delays will lead to TIFB if the third frequency is used. Then, the methods used to correct the code and phase IFB are discussed. Standard point positioning (SPP) and precise point positioning (PPP) using BDS observations are carried out to validate the improvement of different IFB correction strategies. Experiments show that code IFB derived from DCB or from the geometry-free and ionosphere-free combination agree to within 0.3 ns for all satellites. Positioning results and error distributions with the two different code IFB correction strategies show similar tendencies, which demonstrates their substitutability. The original and wavelet-filtered phase TIFB long-term series show significant periodic characteristics for most GEO and IGSO satellites, with magnitudes varying between -5 cm and 5 cm. Finally, BDS L1/L3 kinematic PPP is conducted with code IFB corrected using DCB combinations and TIFB corrected using the filtered series. Results show that the IFB-corrected L1/L3 PPP can achieve convergence and positioning accuracy comparable to L1/L2 combinations in static and kinematic modes.

  10. Examination of Buckling Behavior of Thin-Walled Al-Mg-Si Alloy Extrusions

    NASA Astrophysics Data System (ADS)

    Vazdirvanidis, Athanasios; Koumarioti, Ioanna; Pantazopoulos, George; Rikos, Andreas; Toulfatzis, Anagnostis; Kostazos, Protesilaos; Manolakos, Dimitrios

    To achieve a combination of improved crash tolerance and maximum strength in aluminium automotive extrusions, a research program was carried out. The main objective was to study the buckling behavior of AA6063 alloy thin-walled square tubes under axial quasi-static load after various artificial aging treatments. Variables included the cooling rate after solid solution treatment, the duration of the 1st stage of artificial aging, and the time and temperature of the 2nd stage of artificial aging. Metallography and tensile testing were employed to develop deeper knowledge of the effect of the aging process parameters. FEM analysis with the computer code LS-DYNA was additionally applied for deformation mode investigation and crashworthiness prediction. Results showed that data from actual compression tests and numerical modeling were in considerable agreement.

  11. Analysis of Test Case Computations and Experiments for the First Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Heeg, Jennifer; Wieseman, Carol D.; Chwalowski, Pawel

    2013-01-01

    This paper compares computational and experimental data from the Aeroelastic Prediction Workshop (AePW) held in April 2012. This workshop was designed as a series of technical interchange meetings to assess the state of the art of computational methods for predicting unsteady flowfields and static and dynamic aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques to simulate aeroelastic problems and to identify computational and experimental areas needing additional research and development. Three subject configurations were chosen from existing wind-tunnel data sets where there is pertinent experimental data available for comparison. Participant researchers analyzed one or more of the subject configurations, and results from all of these computations were compared at the workshop.

  12. Planning, creating and documenting a NASTRAN finite element model of a modern helicopter

    NASA Technical Reports Server (NTRS)

    Gabal, R.; Reed, D.; Ricks, R.; Kesack, W.

    1985-01-01

    Mathematical models based on the finite element method of structural analysis as embodied in the NASTRAN computer code are widely used by the helicopter industry to calculate static internal loads and vibration of airframe structure. The internal loads are routinely used for sizing structural members. The vibration predictions are not yet relied on during design. NASA's Langley Research Center sponsored a program to conduct an application of the finite element method with emphasis on predicting structural vibration. The Army/Boeing CH-47D helicopter was used as the modeling subject. The objective was to engender the needed trust in vibration predictions using these models and establish a body of modeling guides which would enable confident future prediction of airframe vibration as part of the regular design process.

  13. Simulated Data for High Temperature Composite Design

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2006-01-01

    The paper describes an effective formal method that can be used to simulate design properties for composites, inclusive of all the effects that influence those properties. This simulation method integrates computer codes that include composite micromechanics, composite macromechanics, laminate theory, structural analysis, and a multi-factor interaction model. Demonstration of the method includes sample examples of static, thermal, and fracture reliability for a unidirectional metal matrix composite, as well as rupture strength and fatigue strength for a high temperature superalloy. Typical results obtained for a unidirectional composite show that the thermal properties are more sensitive to internal local damage, that the longitudinal properties degrade slowly with temperature, and that the transverse and shear properties degrade rapidly with temperature, as do the rupture strength and fatigue strength of superalloys.

  14. Static and dynamic stability analysis of the space shuttle vehicle-orbiter

    NASA Technical Reports Server (NTRS)

    Chyu, W. J.; Cavin, R. K.; Erickson, L. L.

    1978-01-01

    The longitudinal static and dynamic stability of a Space Shuttle Vehicle-Orbiter (SSV Orbiter) model is analyzed using the FLEXSTAB computer program. Nonlinear effects are accounted for by application of a correction technique in the FLEXSTAB system; the technique incorporates experimental force and pressure data into the linear aerodynamic theory. A flexible Orbiter model is treated in the static stability analysis for the flight conditions of Mach number 0.9 for rectilinear flight (1 g) and for a pull-up maneuver (2.5 g) at an altitude of 15.24 km. Static stability parameters and structural deformations of the Orbiter are calculated at trim conditions for the dynamic stability analysis, and the characteristics of damping in pitch are investigated for a Mach number range of 0.3 to 1.2. The calculated results for both the static and dynamic stabilities are compared with the available experimental data.

  15. Theoretical research and experimental validation of quasi-static load spectra on bogie frame structures of high-speed trains

    NASA Astrophysics Data System (ADS)

    Zhu, Ning; Sun, Shou-Guang; Li, Qiang; Zou, Hua

    2014-12-01

    One of the major problems in structural fatigue life analysis is establishing structural load spectra under actual operating conditions. This study conducts theoretical research and experimental validation of quasi-static load spectra on bogie frame structures of high-speed trains. The quasi-static load series that correspond to quasi-static deformation modes are identified according to the structural form and bearing conditions of high-speed train bogie frames. Moreover, a force-measuring frame is designed and manufactured based on the quasi-static load series. The load decoupling model of the quasi-static load series is then established via calibration tests. Quasi-static load-time histories are obtained, through online tests and decoupling analysis, for the intermediate range of the Beijing-Shanghai dedicated passenger line. The damage consistency calibration of the quasi-static discrete load spectra is performed according to a damage consistency criterion and a genetic algorithm. The calibrated damage corresponding to the quasi-static discrete load spectra satisfies the safety requirements of bogie frames.

  16. Study on Design of High Efficiency and Light Weight Composite Propeller Blade for a Regional Turboprop Aircraft

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Lee, Kyungsun

    2013-03-01

    In this study, aerodynamic and structural design of the composite propeller blade for a regional turboprop aircraft is performed. The thin, wide-chord propeller blade of a high speed turboprop aircraft should have proper strength and stiffness to carry various kinds of loads such as high aerodynamic bending and twisting moments and centrifugal forces. Therefore, a skin-spar-foam sandwich structure using high strength and stiffness carbon/epoxy composite materials is used to improve the lightness. A specific design procedure is proposed in this work as follows: firstly, the aerodynamic configuration design, which is acceptable for the design requirements, is carried out using an in-house code developed by the authors; secondly, the structural design loads are determined through the aerodynamic load case analysis; thirdly, the spar flange and the skin are preliminarily sized by consideration of major bending moments and shear forces using both the netting rule and the rule of mixtures; and finally, the stress analysis is performed to confirm the structural safety and stability using the commercial finite element analysis code MSC.NASTRAN/PATRAN. Furthermore, additional analysis is performed to confirm the structural safety against bird strike impact on the blade during flight operation using the commercial code ANSYS. To realize the proposed propeller design, prototype blades are manufactured by the following procedure: the carbon/epoxy composite fabric prepregs are laid up for skin and spar on a mold using the hand lay-up method and consolidated at the proper temperature and vacuum in an oven. To finalize the structural design, a full-scale static structural test is performed under simulated aerodynamic loads using the 3-point loading method. From the experimental results, it is found that the designed blade has good structural integrity, and the measured results agree well with the analytical results.

  17. Monte Carlo Analysis of the Battery-Type High Temperature Gas Cooled Reactor

    NASA Astrophysics Data System (ADS)

    Grodzki, Marcin; Darnowski, Piotr; Niewiński, Grzegorz

    2017-12-01

    The paper presents a neutronic analysis of the battery-type 20 MWth high-temperature gas cooled reactor. The developed reactor model is based on publicly available data for an 'early design' variant of the U-battery. The investigated core is a battery-type small modular reactor: a graphite-moderated, uranium-fueled, prismatic, helium-cooled high-temperature gas cooled reactor with a graphite reflector. Two alternative core designs were investigated. The first has a central reflector and 30x4 prismatic fuel blocks, and the second has no central reflector and 37x4 blocks. The SERPENT Monte Carlo reactor physics code, with ENDF and JEFF nuclear data libraries, was applied. Several static criticality calculations for the nuclear design were performed and compared with available reference results. The analysis covered single-assembly models and full-core simulations for two geometry models: homogeneous and heterogeneous (explicit). A sensitivity analysis of the reflector graphite density was performed. Acceptable agreement between the calculations and the reference design was obtained. All calculations were performed for the fresh core state.

  18. Three Dimensional Compressible Turbulent Flow Computations for a Diffusing S-Duct With/Without Vortex Generators

    NASA Technical Reports Server (NTRS)

    Cho, Soo-Yong; Greber, Isaac

    1994-01-01

    Numerical investigations on a diffusing S-duct with/without vortex generators and a straight duct with vortex generators are presented. The investigation consists of solving the full three-dimensional unsteady compressible mass averaged Navier-Stokes equations. An implicit finite volume lower-upper time marching code (RPLUS3D) has been employed and modified. A three-dimensional Baldwin-Lomax turbulence model has been modified in conjunction with the flow physics. A model for the analysis of vortex generators in a fully viscous subsonic internal flow is evaluated. A vortical structure for modeling the shed vortex is used as a source term in the computational domain. The injected vortex paths in the straight duct are compared with analyses by two kinds of prediction models. The flow structure produced by the vortex generators is investigated along the duct. Computed results of the flow in a circular diffusing S-duct provide an understanding of the flow structure within a typical engine inlet system. These are compared with the experimental wall static-pressure distributions, static- and total-pressure fields, and secondary velocity profiles. Additionally, boundary layer thickness, skin friction values, and velocity profiles in wall coordinates are presented. In order to investigate the effect of vortex generators, various vortex strengths are examined in this study. The total-pressure recovery and distortion coefficients are obtained at the exit of the S-duct. The numerical results clearly depict the interaction between the low velocity flow caused by the flow separation and the injected vortices.

  19. MOSHFIT: algorithms for occlusion-tolerant mean shape and rigid motion from 3D movement data.

    PubMed

    Mitchelson, Joel R

    2013-09-03

    This work addresses the use of 3D point data to measure rigid motions in the presence of occlusion and without reference to a prior model of relative point locations. This is a problem arising where cluster-based measurement techniques are used (e.g. for measuring limb movements) and no static calibration trial is available. The same problem arises when performing the task known as roving capture, in which a mobile 3D movement analysis system is moved through a volume with static markers in unknown locations and the ego-motion of the system is required in order to understand biomechanical activity in the environment. To provide a solution for both of these applications, the new concept of a visibility graph is introduced and combined with a generalised Procrustes method adapted from ones used by the biological shape statistics and computer graphics communities. Recent results on shape space manifolds are applied to show sufficient conditions for convergence to a unique solution. Algorithm source code is available and referenced here. Processing speed and rate of convergence are demonstrated using simulated data. Positional and angular accuracy are shown to be equivalent to approaches which require full calibration, to within a small fraction of input resolution. Typical processing times for sub-micron convergence are found to be fractions of a second, so the method is suitable for workflows where there may be time pressure, such as in sports science and clinical analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.
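
    The core single-pair building block of such Procrustes-style estimation is the least-squares rigid alignment of two corresponding point sets; the sketch below is the standard SVD (Kabsch) solution, shown for orientation only and not taken from the MOSHFIT source.

      import numpy as np

      def rigid_align(P, Q):
          # Return rotation R and translation t minimizing sum ||R @ p_i + t - q_i||^2
          # for corresponding N x 3 point arrays P and Q.
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cp).T @ (Q - cq)                      # 3 x 3 cross-covariance matrix
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
          R = Vt.T @ D @ U.T
          t = cq - R @ cp
          return R, t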

  20. Static aeroelastic analysis and tailoring of a single-element racing car wing

    NASA Astrophysics Data System (ADS)

    Sadd, Christopher James

    This thesis presents the research from an Engineering Doctorate research programme in collaboration with Reynard Motorsport Ltd, a manufacturer of racing cars. Racing car wing design has traditionally considered structures to be rigid. However, structures are never perfectly rigid and the interaction between aerodynamic loading and structural flexibility has a direct impact on aerodynamic performance. This interaction is often referred to as static aeroelasticity and the focus of this research has been the development of a computational static aeroelastic analysis method to improve the design of a single-element racing car wing. A static aeroelastic analysis method has been developed by coupling a Reynolds-Averaged Navier-Stokes CFD analysis method with a Finite Element structural analysis method using an iterative scheme. Development of this method has included assessment of CFD and Finite Element analysis methods and development of data transfer and mesh deflection methods. Experimental testing was also completed to further assess the computational analyses. The computational and experimental results show a good correlation and these studies have also shown that a Navier-Stokes static aeroelastic analysis of an isolated wing can be performed at an acceptable computational cost. The static aeroelastic analysis tool was used to assess methods of tailoring the structural flexibility of the wing to increase its aerodynamic performance. These tailoring methods were then used to produce two final wing designs to increase downforce and reduce drag respectively. At the average operating dynamic pressure of the racing car, the computational analysis predicts that the downforce-increasing wing has a downforce coefficient of C_L = -1.377 in comparison to C_L = -1.265 for the original wing. The computational analysis predicts that the drag-reducing wing has a drag coefficient of C_D = 0.115 in comparison to C_D = 0.143 for the original wing.

  1. Proceedings of the First NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)

    2009-01-01

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

  2. Results of application of automatic computation of static corrections on data from the South Banat Terrain

    NASA Astrophysics Data System (ADS)

    Milojević, Slavka; Stojanovic, Vojislav

    2017-04-01

    Due to the continuous development of seismic acquisition and processing methods, increasing the signal-to-noise ratio is an ever-present goal. The correct application of the latest software solutions improves processing results and justifies their development. Correct computation and application of static corrections is one of the most important tasks in pre-processing, and this phase is of great importance for further processing steps. Static corrections are applied to seismic data to compensate for the effects of irregular topography, differences between the elevations of source and receiver points relative to the datum, the low-velocity near-surface layer (weathering correction), or any other factors that influence the spatial and temporal position of seismic traces. The refraction statics method is the most common method for computation of static corrections. It is successful both in resolving long-period statics problems and in determining differences in statics caused by abrupt lateral changes in velocity in the near-surface layer. XtremeGeo Flatirons is a program whose main purpose is the computation of static corrections through a refraction statics method; it allows the application of the following procedures: picking of first arrivals, checking of geometry, multiple methods for analysis and modelling of statics, analysis of refractor anisotropy, and tomography (Eikonal tomography). The exploration area is located on the southern edge of the Pannonian Plain, in a plain area with altitudes of 50 to 195 meters. The largest part of the exploration area covers Deliblato Sands, where the geological structure of the terrain and the large differences in altitude significantly affect the calculation of static corrections. The Flatirons software has powerful visualization and statistical analysis tools, which contribute to a significantly more accurate assessment of the geometry of the near-surface layers and therefore to more accurately computed static corrections.
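
    For orientation, the simplest member of the family of corrections discussed above is an elevation static: a time shift that moves a source or receiver from the surface to the datum through a replacement velocity. The sketch below uses hypothetical numbers and is not the refraction-statics algorithm implemented in Flatirons.

      def elevation_static(elev_m, datum_m, v_repl_m_s, uphole_time_s=0.0):
          # Static correction in seconds for one station (positive values remove delay).
          return (elev_m - datum_m) / v_repl_m_s - uphole_time_s

      # one trace: source at 182 m, receiver at 165 m, datum at 100 m, 2000 m/s replacement velocity
      total_static_s = elevation_static(182.0, 100.0, 2000.0) + elevation_static(165.0, 100.0, 2000.0)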

  3. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM), which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering into the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.

  4. Quasi-static MHD processes in earth's magnetosphere

    NASA Technical Reports Server (NTRS)

    Voigt, Gerd-Hannes

    1988-01-01

    An attempt is made to use the MHD equilibrium theory to describe the global magnetic field configuration of earth's magnetosphere and its time evolution under the influence of magnetospheric convection. To circumvent the difficulties inherent in today's MHD codes, use is made of a restriction to slowly time-dependent convection processes with convective velocities well below the typical Alfven speed. This restriction leads to a quasi-static MHD theory. The two-dimensional theory is outlined, and it is shown how sequences of two-dimensional equilibria evolve into a steady state configuration that is likely to become tearing mode unstable. It is then concluded that magnetospheric substorms occur periodically in earth's magnetosphere, thus being an integral part of the entire convection cycle.

  5. Second order tensor finite element

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Fly, J.; Berry, C.; Tworzydlo, W.; Vadaketh, S.; Bass, J.

    1990-01-01

    The results of a research and software development effort are presented for the finite element modeling of the static and dynamic behavior of anisotropic materials, with emphasis on single crystal alloys. Various versions of two dimensional and three dimensional hybrid finite elements were implemented and compared with displacement-based elements. Both static and dynamic cases are considered. The hybrid elements developed in the project were incorporated into the SPAR finite element code. In an extension of the first phase of the project, optimization of experimental tests for anisotropic materials was addressed. In particular, the problem of calculating material properties from tensile tests and of calculating stresses from strain measurements were considered. For both cases, numerical procedures and software for the optimization of strain gauge and material axes orientation were developed.

  6. Static Chemistry in Disks or Clouds

    NASA Astrophysics Data System (ADS)

    Semenov, D.; Wiebe, D.

    2006-11-01

    This FORTRAN77 code can be used to model static, time-dependent chemistry in the ISM and in circumstellar disks. The current version is based on the OSU'06 gas-grain astrochemical network with all updates to the reaction rates, and includes surface chemistry from Hasegawa & Herbst (1993) and Hasegawa, Herbst, and Leung (1992). Surface chemistry can be modeled either with the standard rate equation approach or with a modified rate equation approach (useful in disks). Gas-grain interactions include sticking of neutral molecules to grains, dissociative recombination of ions on grains, as well as thermal, UV, X-ray, and CRP-induced desorption of frozen species. An advanced X-ray chemistry and 3 grain sizes with a power-law size distribution are also included. A deuterium extension to this chemical model is available.
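
    As a toy illustration of the rate-equation approach (two processes only, with made-up rate coefficients, rather than the full OSU'06 network), the gas-phase and ice abundances of a single species under accretion onto grains and thermal desorption form a small ODE system:

      from scipy.integrate import solve_ivp

      k_acc, k_des = 1.0e-9, 1.0e-12          # s^-1, illustrative accretion/desorption rates

      def rhs(t, y):
          n_gas, n_ice = y
          return [-k_acc * n_gas + k_des * n_ice,    # gas lost to grains, returned by desorption
                   k_acc * n_gas - k_des * n_ice]    # ice gained from gas, lost by desorption

      sol = solve_ivp(rhs, (0.0, 1.0e13), [1.0e-4, 0.0], method="LSODA", rtol=1e-8)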

  7. A charging study of ACTS using NASCAP

    NASA Technical Reports Server (NTRS)

    Herr, Joel L.

    1991-01-01

    The NASA Charging Analyzer Program (NASCAP) computer code is a three-dimensional finite element charging code designed to analyze spacecraft charging in the magnetosphere. Because of the characteristics of this problem, NASCAP can use a quasi-static approach to provide a spacecraft designer with an understanding of how a specific spacecraft will interact with a geomagnetic substorm. The results of the simulation can help designers evaluate the probability and location of arc discharges of charged surfaces on the spacecraft. A charging study of NASA's Advanced Communication Technology Satellite (ACTS) using NASCAP is reported. The results show that the ACTS metalized multilayer insulating blanket design should provide good electrostatic discharge control.

  8. Demonstration of Weight-Four Parity Measurements in the Surface Code Architecture.

    PubMed

    Takita, Maika; Córcoles, A D; Magesan, Easwar; Abdo, Baleegh; Brink, Markus; Cross, Andrew; Chow, Jerry M; Gambetta, Jay M

    2016-11-18

    We present parity measurements on a five-qubit lattice with connectivity amenable to the surface code quantum error correction architecture. Using all-microwave controls of superconducting qubits coupled via resonators, we encode the parities of four data qubit states in either the X or the Z basis. Given the connectivity of the lattice, we perform a full characterization of the static Z interactions within the set of five qubits, as well as dynamical Z interactions brought along by single- and two-qubit microwave drives. The parity measurements are significantly improved by modifying the microwave two-qubit gates to dynamically remove nonideal Z errors.

  9. Principles of Billing for Diagnostic Ultrasound in the Office and Operating Room.

    PubMed

    Grasu, Beatrice L; Wolock, Bruce S; Sedgley, Matthew D; Murphy, Michael S

    2018-05-08

    Ultrasound is becoming more prevalent as physicians gain comfort in its diagnostic and therapeutic uses. It allows for both static and dynamic evaluation of conditions and assists in therapeutic injections of joints and tendons. Proper technique is necessary for successful use of this modality. Appropriate coding for physician reimbursement is required. We discuss common wrist and hand pathology for which ultrasound may be useful as an adjunct to diagnosis and treatment and provide an overview of technique and reimbursement codes when using ultrasound in a variety of situations. Copyright © 2018 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  10. Flexible configuration-interaction shell-model many-body solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Calvin W.; Ormand, W. Erich; McElvain, Kenneth S.

    BIGSTICK is a flexible configuration-interaction, open-source shell-model code for the many-fermion problem in a shell model (occupation representation) framework. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm, one can compute transition probability distributions and decompose wave functions into components defined by group theory.
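
    For readers unfamiliar with the method, the sketch below is a bare-bones Lanczos iteration for the lowest eigenvalue of a symmetric matrix, the kind of algorithm referred to above; it is illustrative only and omits the reorthogonalization and restarting that production codes such as BIGSTICK rely on.

      import numpy as np

      def lanczos_lowest(H, m=30, seed=0):
          # Build an m x m tridiagonal matrix whose lowest eigenvalue approximates
          # the lowest eigenvalue of the symmetric matrix H, using only H @ v products.
          n = H.shape[0]
          rng = np.random.default_rng(seed)
          v = rng.standard_normal(n)
          v /= np.linalg.norm(v)
          v_prev, beta = np.zeros(n), 0.0
          alphas, betas = [], []
          for _ in range(min(m, n)):
              w = H @ v - beta * v_prev
              alpha = float(v @ w)
              w -= alpha * v
              beta = float(np.linalg.norm(w))
              alphas.append(alpha)
              betas.append(beta)
              if beta < 1e-12:           # invariant subspace found; stop early
                  break
              v_prev, v = v, w / beta
          T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
          return np.linalg.eigvalsh(T)[0]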

  11. SnoMAP: Pioneering the Path for Clinical Coding to Improve Patient Care.

    PubMed

    Lawley, Michael; Truran, Donna; Hansen, David; Good, Norm; Staib, Andrew; Sullivan, Clair

    2017-01-01

    The increasing demand for healthcare and the static resources available necessitate data driven improvements in healthcare at large scale. The SnoMAP tool was rapidly developed to provide an automated solution that transforms and maps clinician-entered data to provide data which is fit for both administrative and clinical purposes. Accuracy of data mapping was maintained.

  12. Sawja: Static Analysis Workshop for Java

    NASA Astrophysics Data System (ADS)

    Hubert, Laurent; Barré, Nicolas; Besson, Frédéric; Demange, Delphine; Jensen, Thomas; Monfort, Vincent; Pichardie, David; Turpin, Tiphaine

    Static analysis is a powerful technique for automatic verification of programs but raises major engineering challenges when developing a full-fledged analyzer for a realistic language such as Java. Efficiency and precision of such a tool rely partly on low level components which only depend on the syntactic structure of the language and therefore should not be redesigned for each implementation of a new static analysis. This paper describes the Sawja library: a static analysis workshop fully compliant with Java 6 which provides OCaml modules for efficiently manipulating Java bytecode programs. We present the main features of the library, including i) efficient functional data-structures for representing a program with implicit sharing and lazy parsing, ii) an intermediate stack-less representation, and iii) fast computation and manipulation of complete programs. We provide experimental evaluations of the different features with respect to time, memory and precision.

  13. Static-stress analysis of dual-axis confinement vessel

    NASA Astrophysics Data System (ADS)

    Bultman, D. H.

    1992-11-01

    This study evaluates the static-pressure containment capability of a 6-ft-diameter, spherical vessel, made of HSLA-100 steel, to be used for high-explosive (HE) containment. The confinement vessel is designed for use with the Dual-Axis Radiographic Hydrotest Facility (DARHT) being developed at Los Alamos National Laboratory. Two sets of openings in the vessel are covered with x-ray transparent covers to allow radiographic imaging of an explosion as it occurs inside the vessel. The confinement vessel is analyzed as a pressure vessel based on the ASME Boiler and Pressure Vessel Code, Section 8, Division 1, and the Welding Research Council Bulletin, WRC-107. Combined stresses resulting from internal pressure and external loads on nozzles are calculated and compared with the allowable stresses for HSLA-100 steel. Results confirm that the shell and nozzles of the confinement vessel are adequately designed to safely contain the maximum residual pressure of 1675 psi that would result from an HE charge of 24.2 kg detonated in a vacuum. Shell stresses at the shell-to-nozzle interface, produced from external loads on the nozzles, were less than 400 psi. The maximum combined stress resulting from the internal pressure plus external loads was 16,070 psi, which is less than half the allowable stress of 42,375 psi for HSLA-100 steel.
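
    A back-of-envelope membrane-stress check of the kind this analysis formalizes is the thin-wall sphere formula sigma = p*r/(2*t). The internal pressure and allowable stress below are taken from the abstract; the wall thickness is a hypothetical value used only for illustration.

      def sphere_membrane_stress(p_psi, radius_in, thickness_in):
          # Thin-wall spherical shell membrane stress under internal pressure.
          return p_psi * radius_in / (2.0 * thickness_in)

      sigma = sphere_membrane_stress(p_psi=1675.0, radius_in=36.0, thickness_in=2.0)   # thickness assumed
      within_allowable = sigma < 42375.0   # compare with the HSLA-100 allowable stress from the abstract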

  14. Effect of Ankle Range of Motion (ROM) and Lower-Extremity Muscle Strength on Static Balance Control Ability in Young Adults: A Regression Analysis

    PubMed Central

    Kim, Seong-Gil

    2018-01-01

    Background The purpose of this study was to investigate the effect of ankle ROM and lower-extremity muscle strength on static balance control ability in young adults. Material/Methods This study was conducted with 65 young adults, but 10 young adults dropped out during the measurement, so 55 young adults (male: 19, female: 36) completed the study. Postural sway (length and velocity) was measured with eyes open and closed, and ankle ROM (AROM and PROM of dorsiflexion and plantarflexion) and lower-extremity muscle strength (flexor and extensor of hip, knee, and ankle joint) were measured. Pearson correlation coefficient was used to examine the correlation between variables and static balance ability. Simple linear regression analysis and multiple linear regression analysis were used to examine the effect of variables on static balance ability. Results In correlation analysis, plantarflexion ROM (AROM and PROM) and lower-extremity muscle strength (except hip extensor) were significantly correlated with postural sway (p<0.05). In simple correlation analysis, all variables that passed the correlation analysis procedure had significant influence (p<0.05). In multiple linear regression analysis, plantar flexion PROM with eyes open significantly influenced sway length (B=0.681) and sway velocity (B=0.011). Conclusions Lower-extremity muscle strength and ankle plantarflexion ROM influenced static balance control ability, with ankle plantarflexion PROM showing the greatest influence. Therefore, both contractile structures and non-contractile structures should be of interest when considering static balance control ability improvement. PMID:29760375
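
    A minimal sketch of the simple and multiple linear regression machinery referred to above (Python/NumPy); the arrays are synthetic stand-ins for the measured ROM, strength and sway values, not the study's data:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 55
      plantarflex_prom = rng.normal(50, 5, n)    # deg, synthetic
      ankle_strength = rng.normal(25, 4, n)      # kg, synthetic
      sway_length = 40 - 0.3 * plantarflex_prom - 0.2 * ankle_strength + rng.normal(0, 2, n)

      # Simple linear regression: one predictor (intercept and slope B via least squares).
      X1 = np.column_stack([np.ones(n), plantarflex_prom])
      b_simple, *_ = np.linalg.lstsq(X1, sway_length, rcond=None)

      # Multiple linear regression: both predictors together.
      X2 = np.column_stack([np.ones(n), plantarflex_prom, ankle_strength])
      b_multi, *_ = np.linalg.lstsq(X2, sway_length, rcond=None)

      # Pearson correlation between one predictor and the outcome.
      r = np.corrcoef(plantarflex_prom, sway_length)[0, 1]
      print("simple B:", b_simple[1], "multiple Bs:", b_multi[1:], "Pearson r:", r)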

  15. Effect of Ankle Range of Motion (ROM) and Lower-Extremity Muscle Strength on Static Balance Control Ability in Young Adults: A Regression Analysis.

    PubMed

    Kim, Seong-Gil; Kim, Wan-Soo

    2018-05-15

    BACKGROUND The purpose of this study was to investigate the effect of ankle ROM and lower-extremity muscle strength on static balance control ability in young adults. MATERIAL AND METHODS This study was conducted with 65 young adults, but 10 young adults dropped out during the measurement, so 55 young adults (male: 19, female: 36) completed the study. Postural sway (length and velocity) was measured with eyes open and closed, and ankle ROM (AROM and PROM of dorsiflexion and plantarflexion) and lower-extremity muscle strength (flexor and extensor of hip, knee, and ankle joint) were measured. Pearson correlation coefficient was used to examine the correlation between variables and static balance ability. Simple linear regression analysis and multiple linear regression analysis were used to examine the effect of variables on static balance ability. RESULTS In correlation analysis, plantarflexion ROM (AROM and PROM) and lower-extremity muscle strength (except hip extensor) were significantly correlated with postural sway (p<0.05). In simple correlation analysis, all variables that passed the correlation analysis procedure had significant influence (p<0.05). In multiple linear regression analysis, plantar flexion PROM with eyes open significantly influenced sway length (B=0.681) and sway velocity (B=0.011). CONCLUSIONS Lower-extremity muscle strength and ankle plantarflexion ROM influenced static balance control ability, with ankle plantarflexion PROM showing the greatest influence. Therefore, both contractile structures and non-contractile structures should be of interest when considering static balance control ability improvement.

  16. WARP3D-Release 10.8: Dynamic Nonlinear Analysis of Solids using a Preconditioned Conjugate Gradient Software Architecture

    NASA Technical Reports Server (NTRS)

    Koppenhoefer, Kyle C.; Gullerud, Arne S.; Ruggieri, Claudio; Dodds, Robert H., Jr.; Healy, Brian E.

    1998-01-01

    This report describes theoretical background material and commands necessary to use the WARP3D finite element code. WARP3D is under continuing development as a research code for the solution of very large-scale, 3-D solid models subjected to static and dynamic loads. Specific features in the code oriented toward the investigation of ductile fracture in metals include a robust finite strain formulation, a general J-integral computation facility (with inertia, face loading), an element extinction facility to model crack growth, nonlinear material models including viscoplastic effects, and the Gurson-Tvergaard dilatant plasticity model for void growth. The nonlinear, dynamic equilibrium equations are solved using an incremental-iterative, implicit formulation with full Newton iterations to eliminate residual nodal forces. The history integration of the nonlinear equations of motion is accomplished with Newmark's beta method. A central feature of WARP3D involves the use of a linear-preconditioned conjugate gradient (LPCG) solver implemented in an element-by-element format to replace a conventional direct linear equation solver. This software architecture dramatically reduces both the memory requirements and CPU time for very large, nonlinear solid models since formation of the assembled (dynamic) stiffness matrix is avoided. Analyses thus exhibit the numerical stability for large time (load) steps provided by the implicit formulation coupled with the low memory requirements characteristic of an explicit code. In addition to the much lower memory requirements of the LPCG solver, the CPU time required for solution of the linear equations during each Newton iteration is generally one-half or less of the CPU time required for a traditional direct solver. All other computational aspects of the code (element stiffnesses, element strains, stress updating, element internal forces) are implemented in the element-by-element, blocked architecture. This greatly improves vectorization of the code on uni-processor hardware and enables straightforward parallel-vector processing of element blocks on multi-processor hardware.
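
    To make the solution strategy concrete, the following heavily simplified Python/NumPy sketch performs one implicit Newmark-beta step for a linear system and solves the effective equations with a diagonally (Jacobi) preconditioned conjugate gradient rather than a direct factorization; the toy matrices and all names are illustrative and do not represent WARP3D's internals:

      import numpy as np

      def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
          """Jacobi-preconditioned conjugate gradient for A x = b (A symmetric positive definite)."""
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv_diag * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv_diag * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      def newmark_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
          """One implicit Newmark-beta step for M a + C v + K u = f (linear case)."""
          A_eff = M / (beta * dt**2) + gamma / (beta * dt) * C + K
          b_eff = (f_next
                   + M @ (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
                   + C @ (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                          + dt * (gamma / (2 * beta) - 1) * a))
          u_next = pcg(A_eff, b_eff, 1.0 / np.diag(A_eff))
          a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
          v_next = v + dt * ((1 - gamma) * a + gamma * a_next)
          return u_next, v_next, a_next

      # Tiny 2-DOF demo with made-up mass, damping and stiffness matrices.
      M = np.diag([2.0, 1.0])
      K = np.array([[4.0e4, -2.0e4], [-2.0e4, 2.0e4]])
      C = 0.01 * K
      u = v = a = np.zeros(2)
      u, v, a = newmark_step(M, C, K, np.array([0.0, 100.0]), u, v, a, dt=1e-3)
      print(u)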

  17. Handheld laser scanner automatic registration based on random coding

    NASA Astrophysics Data System (ADS)

    He, Lei; Yu, Chun-ping; Wang, Li

    2011-06-01

    Current research on laser scanners focuses mainly on static measurement. Little use has been made of dynamic measurement, which is appropriate for a wider range of problems and situations. In particular, a traditional laser scanner must be kept stable to scan and to measure coordinate transformation parameters between different stations. To make scanning measurement intelligent and rapid, this paper develops a new registration algorithm for a handheld laser scanner based on the position of coded targets, which realizes dynamic measurement with a handheld laser scanner without additional complex work. The double camera on the laser scanner photographs artificial target points, designed by random coding, to obtain their three-dimensional coordinates. A set of matched points is then found among the control points to realize the orientation of the scanner via a least-squares common-points transformation. After that, the double camera can directly measure the laser point cloud on the surface of the object and obtain the point cloud data in a unified coordinate system. The paper makes three major contributions. Firstly, a laser scanner based on binocular vision is designed with a double camera and one laser head; with these, real-time orientation of the laser scanner is realized and efficiency is improved. Secondly, a coding marker is introduced to solve the data matching problem, and a random coding method is proposed; compared with other coding methods, the marker is simple to match and avoids shading the object. Finally, a recognition method for the coded markers is proposed that uses distance recognition and is more efficient. The method presented here can be used widely in measurements of objects from small to very large, such as vehicles and airplanes, strengthening intelligence and efficiency. Theoretical analysis and experimental results demonstrate that the proposed method realizes dynamic measurement with a handheld laser scanner and that the method is reasonable and efficient.
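
    The least-squares common-points transformation used for the orientation step is essentially the classic rigid-body fit between matched target coordinates; a minimal Python/NumPy sketch (SVD/Kabsch-style, with synthetic points, not the authors' implementation):

      import numpy as np

      def fit_rigid_transform(src, dst):
          """Least-squares rotation R and translation t such that R @ src_i + t ~= dst_i.

          src, dst: (N, 3) arrays of matched control-point coordinates.
          """
          c_src = src.mean(axis=0)
          c_dst = dst.mean(axis=0)
          H = (src - c_src).T @ (dst - c_dst)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:      # guard against a reflection
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = c_dst - R @ c_src
          return R, t

      # Matched coded-target coordinates seen from two scanner stations (synthetic example).
      rng = np.random.default_rng(2)
      pts_a = rng.uniform(0, 1, (6, 3))
      true_R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
      if np.linalg.det(true_R) < 0:
          true_R[:, 0] *= -1
      pts_b = pts_a @ true_R.T + np.array([0.5, -0.2, 1.0])
      R, t = fit_rigid_transform(pts_a, pts_b)
      print(np.allclose(pts_a @ R.T + t, pts_b, atol=1e-8))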

  18. Classic-Ada(TM)

    NASA Technical Reports Server (NTRS)

    Valley, Lois

    1989-01-01

    The SPS product, Classic-Ada, is a software tool that supports object-oriented Ada programming with powerful inheritance and dynamic binding. Object-Oriented Design (OOD) is an easy, natural development paradigm, but it is not supported by Ada. Following the DOD Ada mandate, SPS developed Classic-Ada to provide a tool which supports OOD and implements code in Ada. It consists of a design language, a code generator and a toolset. As a design language, Classic-Ada supports the object-oriented principles of information hiding, data abstraction, dynamic binding, and inheritance. It also supports natural reuse and incremental development through inheritance and code factoring, and allows Ada and Classic-Ada, as well as dynamic binding and static binding, to be mixed in the same program. Only nine new constructs were added to Ada to provide object-oriented design capabilities. The Classic-Ada code generator translates user application code into fully compliant, ready-to-run, standard Ada. The Classic-Ada toolset is fully supported by SPS and consists of an object generator, a builder, a dictionary manager, and a reporter. Demonstrations of Classic-Ada and the Classic-Ada Browser were given at the workshop.

  19. Simulation of hydrostatic water level measuring system for pressure vessels with the ATHLET-code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampel, R.; Vandreier, B.; Kaestner, W.

    1996-11-01

    The static and dynamic behavior of measuring systems determines the value indicated by the measuring systems in relation to the true operating conditions. This paper demonstrates the necessity of including the behavior of measuring systems in accident analysis with the thermohydraulic code ATHLET (developed by GRS Germany), using the example of hydrostatic water level measurement for horizontal steam generators in an NPP (VVER). The modelling of a comparison vessel for the level measuring system with high sensitivity and a limited range of measurement (narrow-range level measuring system) using ATHLET components, and the checking of the function of the module, were realized. A good correspondence (maximal deviation 3%) between the measured and calculated narrow-range water level was obtained for a post-calculation of a measured operational transient in an NPP (VVER). The research carried out was sponsored by the Federal Ministry for Research and Technology within the project "Basic research of process and system behaviour of NPP, control technique for accident management" (Project number 150 0855/7) and the project RS 978. The research work appertains to the theoretical and experimental work of the institute "Institut fuer Prozesstechnik, Prozessautomatisierung und Messtechnik (IPM)" for accident analysis and accident management.
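
    The underlying principle of a hydrostatic (differential-pressure) level measurement is simply h = dp / (rho * g); a tiny Python sketch with illustrative numbers (not plant data):

      # Hydrostatic level measurement: the differential pressure between a reference
      # (comparison) column and the vessel tapping indicates the collapsed water level.
      RHO_WATER = 958.0   # kg/m^3, saturated water near 100 C -- illustrative value
      G = 9.81            # m/s^2

      def level_from_dp(delta_p_pa, rho=RHO_WATER, g=G):
          """Indicated level (m) from a measured differential pressure (Pa)."""
          return delta_p_pa / (rho * g)

      print(f"{level_from_dp(18_000):.2f} m of water column for 18 kPa")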

  20. Structural Analysis Peer Review for the Static Display of the Orbiter Atlantis at the Kennedy Space Center Visitors Center

    NASA Technical Reports Server (NTRS)

    Minute, Stephen A.

    2013-01-01

    Mr. Christopher Miller with the Kennedy Space Center (KSC) NASA Safety & Mission Assurance (S&MA) office requested the NASA Engineering and Safety Center's (NESC) technical support on March 15, 2012, to review and make recommendations on the structural analysis being performed for the Orbiter Atlantis static display at the KSC Visitor Center. The principal focus of the assessment was to review the engineering firm's structural analysis for lifting and aligning the orbiter and its static display configuration.

  1. Parametric optimization and design validation based on finite element analysis of hybrid socket adapter for transfemoral prosthetic knee.

    PubMed

    Kumar, Neelesh

    2014-10-01

    Finite element analysis has been universally employed for stress and strain analysis in lower-extremity prosthetics. The socket adapter was the principal subject of interest due to its importance in deciding the knee motion range. This article focuses on the static and dynamic stress analysis of the hybrid adapter designed by the authors. A standard mechanical design validation approach using the von Mises criterion was followed. Four materials were considered for the analysis, namely carbon fiber, oil-filled nylon, Al-6061, and mild steel. The paper analyses the static and dynamic stresses on the designed hybrid adapter, which incorporates features of conventional male and female socket adapters. The finite element analysis was carried out for different possible angles of knee flexion, simulating static and dynamic gait situations. Research was carried out on available socket adapter designs. The mechanical design of the hybrid adapter was conceptualized and a CAD model was generated using Inventor modelling software. Static and dynamic stress analyses were carried out on the different materials for optimization. The finite element analysis was carried out in Autodesk Inventor Professional Ver. 2011. The peak value of von Mises stress occurred in the neck region of the adapter and in the lower face region at the rod eye-adapter junction in the static and dynamic analyses, respectively. Oil-filled nylon was found to be the best material among the four with respect to strength, weight, and cost. Research investigations on newer materials for the development of improved prostheses will immensely benefit amputees. The study analyses the static and dynamic stresses on the knee joint adapter to identify a better material for the hybrid adapter design. © The International Society for Prosthetics and Orthotics 2013.
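
    For reference, the von Mises equivalent stress used as the validation criterion can be computed directly from the six stress components; a small illustrative Python helper (not the authors' Inventor workflow):

      import math

      def von_mises(sx, sy, sz, txy, tyz, tzx):
          """Von Mises equivalent stress from the six Cauchy stress components."""
          return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                           + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

      # Example: a uniaxial 100 MPa stress gives exactly 100 MPa equivalent stress.
      print(von_mises(100e6, 0, 0, 0, 0, 0))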

  2. LEWICE3D/GlennHT Particle Analysis of the Honeywell Al502 Low Pressure Compressor

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.; Rigby, David L.

    2015-01-01

    A flow and ice particle trajectory analysis was performed for the booster of the Honeywell AL502 engine. The analysis focused on two closely related conditions, one of which produced a rollback and another which did not roll back during testing in the Propulsion Systems Lab at NASA Glenn Research Center. The flow analysis was generated using the NASA Glenn GlennHT flow solver and the particle analysis was generated using the NASA Glenn LEWICE3D v3.56 ice accretion software. The flow and particle analysis used a 3D steady flow, mixing plane approach to model the transport of flow and particles through the engine. The inflow conditions for the rollback case were: airspeed, 145 m/s; static pressure, 33,373 Pa; static temperature, 253.3 K. The inflow conditions for the non-rollback case were: airspeed, 153 m/s; static pressure, 34,252 Pa; static temperature, 260.1 K. Both cases were subjected to an ice particle cloud with a median volume diameter of 24 microns, an ice water content of 2.0 g/m3 and a relative humidity of 100 percent. The most significant difference between the rollback and non-rollback conditions was the inflow static temperature, which was 6.8 K higher for the non-rollback case.

  3. Learning Enterprise Malware Triage from Automatic Dynamic Analysis

    DTIC Science & Technology

    2013-03-01

    Kolter and Maloof n-gram method, Dube’s malware target recognition (MaTR) static method performs significantly more accurately at the 95% confidence...from the static method as in Kolter and Maloof. The MIST approach with behavior sequences 9 allows researchers to tailor the level of analysis to the...citations, none publish work that implements it. Only Kolter and Maloof use nearly as long gram structures, although that research uses static grams rather

  4. Inlet flowfield investigation. Part 2: Computation of the flow about a supercruise forebody at supersonic speeds

    NASA Technical Reports Server (NTRS)

    Paynter, G. C.; Salemann, V.; Strom, E. E. I.

    1984-01-01

    A numerical procedure which solves the parabolized Navier-Stokes (PNS) equations on a body-fitted mesh was used to compute the flow about the forebody of an advanced tactical supercruise fighter configuration in an effort to explore the use of a PNS method for the design of supersonic cruise forebody geometries. Forebody flow fields were computed at Mach numbers of 1.5, 2.0, and 2.5, and at angles of attack of 0 deg, 4 deg, and 8 deg at each Mach number. Computed results are presented at several body stations and include contour plots of Mach number, total pressure, upwash angle, sidewash angle and cross-plane velocity. The computational analysis procedure was found reliable for evaluating forebody flow fields of advanced aircraft configurations for flight conditions where the vortex shed from the wing leading edge is not a dominant flow phenomenon. Static pressure distributions and boundary layer profiles on the forebody and wing were surveyed in a wind tunnel test, and the analytical results are compared to the data. The current status of the parabolized flow field code is described along with desirable improvements in the code.

  5. SU (2) lattice gauge theory simulations on Fermi GPUs

    NASA Astrophysics Data System (ADS)

    Cardoso, Nuno; Bicudo, Pedro

    2011-05-01

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi architectures) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., an implementation that exploits the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we have achieved excellent performance, about 200× the speed of one CPU in single precision, around 110 Gflops/s. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2× slower) than single precision computations.
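
    To illustrate the kind of observable involved, the sketch below computes the mean plaquette of a small "hot-start" SU(2) lattice in plain Python/NumPy; it is a pedagogical stand-in only and bears no relation to the authors' CUDA implementation or performance figures:

      import numpy as np

      def random_su2(rng):
          """Random SU(2) matrix from a unit quaternion (a0 + i a.sigma)."""
          a = rng.standard_normal(4)
          a /= np.linalg.norm(a)
          return np.array([[a[0] + 1j * a[3], a[2] + 1j * a[1]],
                           [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

      L, dims = 4, 4
      rng = np.random.default_rng(3)
      # links[t, x, y, z, mu] is the SU(2) link in direction mu at each site ("hot" start).
      links = np.array([[random_su2(rng) for _ in range(dims)]
                        for _ in range(L ** dims)]).reshape((L,) * dims + (dims, 2, 2))

      def shift(site, mu):
          nxt = list(site)
          nxt[mu] = (nxt[mu] + 1) % L
          return tuple(nxt)

      total, count = 0.0, 0
      for site in np.ndindex(*(L,) * dims):
          for mu in range(dims):
              for nu in range(mu + 1, dims):
                  u = (links[site + (mu,)] @ links[shift(site, mu) + (nu,)]
                       @ links[shift(site, nu) + (mu,)].conj().T @ links[site + (nu,)].conj().T)
                  total += 0.5 * np.trace(u).real
                  count += 1
      print("mean plaquette (hot start, should be near 0):", total / count)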

  6. Static Strength of Adhesively-bonded Woven Fabric Kenaf Composite Plates

    NASA Astrophysics Data System (ADS)

    Hilton, Ahmad; Lee, Sim Yee; Supar, Khairi

    2017-06-01

    Natural fibers are potentially useful as reinforcing materials, combined with an epoxy resin matrix system to form materials of superior specific strength (or stiffness) known as composite materials. The advantages of natural fibers such as kenaf are that they are renewable, less hazardous during fabrication and handling, and relatively cheap compared to synthetic fibers. The aim of the current work is to conduct a parametric study on the static strength of adhesively bonded woven fabric kenaf composite plates. Composite panels were fabricated using hand lay-up techniques, with variation of stacking sequence, overlap length, joint type and lay-up type as identified in the testing series. Quasi-static testing was carried out using mechanical testing following a code of practice. Load-displacement profiles were analyzed to study the structural response prior to ultimate failure. It was found that the cross-ply lay-up demonstrates better static strength than its quasi-isotropic counterpart due to the larger volume of 0° plies in the cross-ply lay-up. Larger overlap length gives better joint strength, as expected; however, this introduces a weight penalty in the joined structure. Most samples showed failures within the adhesive region, known as cohesive failure modes; however, a few samples demonstrated interface failure. Good correlations of the parametric study were found and are discussed in the respective section.

  7. A precise vertical network: Establishing new orthometric heights with static surveys in Florida tidal marshes

    USGS Publications Warehouse

    Raabe, E.A.; Stumpf, R.P.; Marth, N.J.; Shrestha, R.L.

    1996-01-01

    Elevation differences on the order of 10 cm within Florida's marsh system influence major variations in tidal flooding and in the associated plant communities. This low elevation gradient combined with sea level fluctuation of 5 to 10 cm over decadal and longer periods can generate significant alteration and erosion of marsh habitats along the Gulf Coast. Knowledge of precise and accurate elevations in the marsh is critical to the efficient monitoring and management of these habitats. Global positioning system (GPS) technology was employed to establish six new orthometric heights along the Gulf Coast from which kinematic surveys into the marsh interior are conducted. The vertical accuracy achieved using GPS technology was evaluated using two networks with 16 vertical and nine horizontal NGS published high accuracy positions. New positions were occupied near St. Marks National Wildlife Refuge and along the coastline of Levy County and Citrus County. Static surveys were conducted using four Ashtech dual frequency P-code receivers for 45-minute sessions and a data logging rate of 10 seconds. Network vector lengths ranged from 4 to 64 km and, including redundant baselines, totaled over 100 vectors. Analysis includes use of the GEOID93 model with a least squares network adjustment and reference to the National Geodetic Reference System (NGRS). The static surveys show high internal consistency and the desired centimeter-level accuracy is achieved for the local network. Uncertainties for the newly established vertical positions range from 0.8 cm to 1.8 cm at the 95% confidence level. These new positions provide sufficient vertical accuracy to achieve the project objectives of tying marsh surface elevations to long-term water level gauges recording sea level fluctuations along the coast.
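
    The least-squares network adjustment step can be illustrated with a toy height-difference network (Python/NumPy); the station names, observations and weights below are invented for illustration and are not the Florida network data:

      import numpy as np

      # Toy network: unknown heights of B and C relative to a fixed bench mark A (0 m).
      # Each observation is (from, to, observed height difference in m, sigma in m).
      obs = [("A", "B", 1.203, 0.010),
             ("B", "C", 0.587, 0.010),
             ("A", "C", 1.801, 0.015),
             ("C", "B", -0.592, 0.012)]   # redundant baseline

      unknowns = {"B": 0, "C": 1}
      A = np.zeros((len(obs), len(unknowns)))
      dh_obs = np.zeros(len(obs))
      w = np.zeros(len(obs))
      for i, (frm, to, dh, sigma) in enumerate(obs):
          if frm in unknowns:
              A[i, unknowns[frm]] = -1.0
          if to in unknowns:
              A[i, unknowns[to]] = 1.0
          dh_obs[i] = dh                  # the fixed station A contributes 0 m here
          w[i] = 1.0 / sigma ** 2

      # Weighted least-squares solution and formal standard deviations.
      N = A.T @ (w[:, None] * A)
      x = np.linalg.solve(N, A.T @ (w * dh_obs))
      cov = np.linalg.inv(N)
      for name, j in unknowns.items():
          print(f"H_{name} = {x[j]:.3f} m  +/- {np.sqrt(cov[j, j]):.3f} m")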

  8. Static and quasi-static analysis of lobed-pumpkin balloon

    NASA Astrophysics Data System (ADS)

    Nakashino, Kyoichi; Sasaki, Makoto; Hashimoto, Satoshi; Saito, Yoshitaka; Izutsu, Naoki

    The present study is motivated by the need to improve the design methodology for a super pressure balloon with the 3-D gore design concept, currently being developed at the Scientific Balloon Center of ISAS/JAXA. The distinctive feature of the 3-D gore design is that the balloon film has excess material not only in the circumferential direction but also in the meridional direction; the meridional excess is gained by attaching the film boundaries to the corresponding tendons of a shorter length with a controlled shortening rate. The resulting balloon shape is a pumpkin-like shape with large bulges formed between adjacent tendons. The balloon film, when fully inflated, develops wrinkles in the circumferential direction over its entire region, so that the stresses in the film are limited to a small amount of uniaxial tension in the circumferential direction while the high meridional loads are carried by reinforced tendons. Naturally, the amount of wrinkling in the film is dominated by the shortening rate between the film boundaries and the tendon curve. In the 3-D gore design, as a consequence, the shortening rate becomes a fundamental design parameter along with the geometric parameters of the gore. In view of this, we have carried out a series of numerical studies of the lobed-pumpkin balloon with varying gore geometry as well as with varying shortening rate. The numerical simulations were carried out with a nonlinear finite element code incorporating the wrinkling effect. Numerical results show that there is a threshold value for the shortening rate beyond which the stresses in the balloon film increase disproportionately. We have also carried out quasi-static simulations of the inflation process of the lobed-pumpkin balloon, and have obtained asymmetric deformations when the balloon films are in a uniaxial tension state.

  9. Development of a unified constitutive model for an isotropic nickel base superalloy Rene 80

    NASA Technical Reports Server (NTRS)

    Ramaswamy, V. G.; Vanstone, R. H.; Laflen, J. H.; Stouffer, D. C.

    1988-01-01

    Accurate analysis of stress-strain behavior is of critical importance in the evaluation of life capabilities of hot section turbine engine components such as turbine blades and vanes. The constitutive equations used in the finite element analysis of such components must be capable of modeling a variety of complex behavior exhibited at high temperatures by cast superalloys. The classical separation of plasticity and creep employed in most of the finite element codes in use today is known to be deficient in modeling elevated temperature time dependent phenomena. Rate dependent, unified constitutive theories can overcome many of these difficulties. A new unified constitutive theory was developed to model the high temperature, time dependent behavior of Rene' 80 which is a cast turbine blade and vane nickel base superalloy. Considerations in model development included the cyclic softening behavior of Rene' 80, rate independence at lower temperatures and the development of a new model for static recovery.

  10. The MHOST finite element program: 3-D inelastic analysis methods for hot section components. Volume 1: Theoretical manual

    NASA Technical Reports Server (NTRS)

    Nakazawa, Shohei

    1991-01-01

    Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for the efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for the quasi-static, transient dynamic and buckling analyses. The global-local analysis procedure referred to as subelement refinement is developed in the framework of the mixed iterative solution, the details of which are presented. The numerically integrated isoparametric elements implemented in the framework are discussed. Methods to filter certain parts of strain and project the element discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for linear and nonlinear equations included in the MHOST program.

  11. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    NASA Technical Reports Server (NTRS)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc and MD Nastran. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc and MD Nastran was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.
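
    At the heart of any VCCT implementation is the virtual crack closure estimate of the energy release rate from nodal forces and relative displacements behind the crack tip; a hedged mode I sketch in Python (illustrative names and numbers only):

      def vcct_mode_I(F_z, delta_w, b, delta_a):
          """Mode I energy release rate from the virtual crack closure technique.

          F_z      -- nodal force normal to the crack plane at the crack tip (N)
          delta_w  -- relative opening displacement of the node pair behind the tip (m)
          b        -- width associated with the crack-front node (m)
          delta_a  -- element length ahead of the crack tip (m)
          """
          return F_z * delta_w / (2.0 * b * delta_a)

      # Example with made-up numbers: G_I in J/m^2.
      print(vcct_mode_I(F_z=12.0, delta_w=2.0e-5, b=5.0e-3, delta_a=1.0e-3))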

  12. Computational analysis of unmanned aerial vehicle (UAV)

    NASA Astrophysics Data System (ADS)

    Abudarag, Sakhr; Yagoub, Rashid; Elfatih, Hassan; Filipovic, Zoran

    2017-01-01

    A computational analysis has been performed to verify the aerodynamic properties of an Unmanned Aerial Vehicle (UAV). The UAV-SUST has been designed and fabricated at the Department of Aeronautical Engineering at Sudan University of Science and Technology in order to meet the specifications required for surveillance and reconnaissance missions. It is classified as a medium range and medium endurance UAV. A commercial CFD solver is used to simulate steady and unsteady aerodynamic characteristics of the entire UAV. In addition to the Lift Coefficient (CL), Drag Coefficient (CD), Pitching Moment Coefficient (CM) and Yawing Moment Coefficient (CN), the pressure and velocity contours are illustrated. The aerodynamic parameters show very good agreement with the design considerations at angles of attack ranging from zero to 26 degrees. Moreover, the visualization of the velocity field and static pressure contours indicates satisfactory agreement with the proposed design. Turbulence is predicted with the k-ω SST turbulence model within the computational fluid dynamics code.

  13. Investigation of a Macromechanical Approach to Analyzing Triaxially-Braided Polymer Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Blinzler, Brina J.; Binienda, Wieslaw K.

    2010-01-01

    A macro level finite element-based model has been developed to simulate the mechanical and impact response of triaxially-braided polymer matrix composites. In the analytical model, the triaxial braid architecture is simulated by using four parallel shell elements, each of which is modeled as a laminated composite. The commercial transient dynamic finite element code LS-DYNA is used to conduct the simulations, and a continuum damage mechanics model internal to LS-DYNA is used as the material constitutive model. The material stiffness and strength values required for the constitutive model are determined based on coupon level tests on the braided composite. Simulations of quasi-static coupon tests of a representative braided composite are conducted. Varying the strength values that are input to the material model is found to have a significant influence on the effective material response predicted by the finite element analysis, sometimes in ways that at first glance appear non-intuitive. A parametric study involving the input strength parameters provides guidance on how the analysis model can be improved.

  14. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed at each time step in sequence to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic field profile for the static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
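
    The correction-factor approach described (factors precomputed from 3D field solutions at several static liner radii, then interpolated during the 1D run) can be sketched in a few lines of Python/NumPy; the table values below are placeholders, not results from ANSYS Maxwell 3D:

      import numpy as np

      # Correction factors precomputed offline with a 3D field solver at a few static
      # liner radii (placeholder values for illustration only).
      radius_table = np.array([0.02, 0.04, 0.06, 0.08, 0.10])   # m
      factor_table = np.array([0.82, 0.88, 0.92, 0.95, 0.97])   # dimensionless

      def corrected_field(b_1d, radius):
          """Apply the interpolated 2D/3D correction factor to the 1D field estimate."""
          return b_1d * np.interp(radius, radius_table, factor_table)

      print(corrected_field(b_1d=12.0, radius=0.05))   # tesla, illustrative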

  15. Static, Dynamic, and Fatigue Analysis of the Mechanical System of Ultrasonic Scanner for Inservice Inspection of Research Reactors

    NASA Astrophysics Data System (ADS)

    Awwaluddin, Muhammad; Kristedjo, K.; Handono, Khairul; Ahmad, H.

    2018-02-01

    This analysis is conducted to determine the effects of static and dynamic loads on the structure of the mechanical system of an ultrasonic scanner, i.e., the arm, column, and connection systems, for inservice inspection of research reactors. The analysis is performed using the finite element method with a 520 N static load. The correction factor used for dynamic loads is the Gerber mean stress correction (stress life). The results of the analysis show that the maximum equivalent von Mises stress is 1.3698E8 Pa for static loading and the maximum equivalent alternating stress is 1.4758E7 Pa for dynamic loading. These values are below the upper limit allowed according to the ASTM A240 standard, i.e., 2.05E8 Pa. The fatigue life analysis results show at least 1E6 cycles, so it can be concluded that the structure is in the high life cycle category.
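
    The Gerber mean-stress correction converts an alternating/mean stress pair into an equivalent fully reversed stress amplitude, sigma_ar = sigma_a / (1 - (sigma_m / sigma_u)^2); a small Python sketch with illustrative values (the mean stress and ultimate strength below are assumed, not taken from the analysis or from ASTM A240):

      def gerber_equivalent_alternating(sigma_a, sigma_m, sigma_u):
          """Equivalent fully reversed stress amplitude per the Gerber parabola."""
          return sigma_a / (1.0 - (sigma_m / sigma_u) ** 2)

      # Illustrative values in Pa; only sigma_a matches the abstract, the rest are assumed.
      print(gerber_equivalent_alternating(sigma_a=1.4758e7, sigma_m=5.0e7, sigma_u=5.15e8))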

  16. A simplified method in comparison with comprehensive interaction incremental dynamic analysis to assess seismic performance of jacket-type offshore platforms

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.

    2015-12-01

    The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. Therefore, in this paper, a simplified method is proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOP) in regimes ranging from near-elastic response to global collapse. The simplified method exploits the good agreement between the static pushover (SPO) curve and the summarized comprehensive interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome the challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. An existing JTOP in the Persian Gulf is then presented to illustrate the procedure, and finally a comparison is made between the simplified method and the CI-IDA results. The simplified method is very informative and practical for current engineering purposes. It is able to predict seismic performance from elasticity to global dynamic instability with reasonable accuracy and little computational effort.

  17. The NASA/industry Design Analysis Methods for Vibrations (DAMVIBS) program: Boeing Helicopters airframe finite element modeling

    NASA Technical Reports Server (NTRS)

    Gabel, R.; Lang, P.; Reed, D.

    1993-01-01

    Mathematical models based on the finite element method of structural analysis, as embodied in the NASTRAN computer code, are routinely used by the helicopter industry to calculate airframe static internal loads used for sizing structural members. Historically, less reliance has been placed on the vibration predictions based on these models. Beginning in the early 1980's, NASA's Langley Research Center initiated an industry-wide program with the objective of engendering the needed trust in vibration predictions using these models and establishing a body of modeling guides which would enable confident future prediction of airframe vibration as part of the regular design process. Emphasis in this paper is placed on the successful modeling of the Army/Boeing CH-47D which showed reasonable correlation with test data. A principal finding indicates that improved dynamic analysis requires greater attention to detail and perhaps a finer mesh, especially in the mass distribution, than the usual stress model. Post-program modeling efforts show improved correlation, placing key modal frequencies in the b/rev range within 4 percent of the test frequencies.

  18. Structural/aerodynamic Blade Analyzer (SAB) User's Guide, Version 1.0

    NASA Technical Reports Server (NTRS)

    Morel, M. R.

    1994-01-01

    The structural/aerodynamic blade (SAB) analyzer provides an automated tool for the static-deflection analysis of turbomachinery blades with aerodynamic and rotational loads. A structural code calculates a deflected blade shape using aerodynamic loads input. An aerodynamic solver computes aerodynamic loads using deflected blade shape input. The two programs are iterated automatically until deflections converge. Currently, SAB version 1.0 is interfaced with MSC/NASTRAN to perform the structural analysis and PROP3D to perform the aerodynamic analysis. This document serves as a guide for the operation of the SAB system with specific emphasis on its use at NASA Lewis Research Center (LeRC). This guide consists of six chapters: an introduction which gives a summary of SAB; SAB's methodology, component files, links, and interfaces; input/output file structure; setup and execution of the SAB files on the Cray computers; hints and tips to advise the user; and an example problem demonstrating the SAB process. In addition, four appendices are presented to define the different computer programs used within the SAB analyzer and describe the required input decks.
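
    The automated iteration between the structural and aerodynamic solvers is a simple fixed-point loop: compute loads for the current shape, compute deflections for those loads, and repeat until the deflections stop changing. A schematic Python sketch follows; aero_loads and structural_deflection are placeholders standing in for PROP3D and MSC/NASTRAN runs, not real interfaces:

      import numpy as np

      def aero_loads(shape):
          """Placeholder for the aerodynamic solver: loads for a given blade shape."""
          return 1000.0 * (1.0 + 0.05 * shape)          # N, synthetic response

      def structural_deflection(loads):
          """Placeholder for the structural solver: deflections for given loads."""
          return 1.0e-4 * loads                          # m, synthetic compliance

      def coupled_static_deflection(n_points=10, tol=1e-8, max_iter=50):
          shape = np.zeros(n_points)                     # start from the undeflected blade
          for it in range(max_iter):
              loads = aero_loads(shape)
              new_shape = structural_deflection(loads)
              if np.max(np.abs(new_shape - shape)) < tol:
                  return new_shape, it + 1
              shape = new_shape
          raise RuntimeError("aero/structural iteration did not converge")

      shape, iters = coupled_static_deflection()
      print(f"converged in {iters} iterations, tip deflection {shape[-1]:.4f} m")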

  19. Overview of the DAEDALOS project

    NASA Astrophysics Data System (ADS)

    Bisagni, Chiara

    2015-10-01

    The "Dynamics in Aircraft Engineering Design and Analysis for Light Optimized Structures" (DAEDALOS) project aimed to develop methods and procedures to determine dynamic loads by considering the effects of dynamic buckling, material damping and mechanical hysteresis during aircraft service. Advanced analysis and design principles were assessed with the scope of partly removing the uncertainty and the conservatism of today's design and certification procedures. To reach these objectives a DAEDALOS aircraft model representing a mid-size business jet was developed. Analysis and in-depth investigation of the dynamic response were carried out on full finite element models and on hybrid models. Material damping was experimentally evaluated, and different methods for damping evaluation were developed, implemented in finite element codes and experimentally validated. They include a strain energy method, a quasi-linear viscoelastic material model, and a generalized Maxwell viscous material damping. Panels and shells representative of typical components of the DAEDALOS aircraft model were experimentally tested subjected to static as well as dynamic loads. Composite and metallic components of the aircraft model were investigated to evaluate the benefit in terms of weight saving.

  20. Large Angle Transient Dynamics (LATDYN) user's manual

    NASA Technical Reports Server (NTRS)

    Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.

    1991-01-01

    A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.

  1. High velocity impact on composite link of aircraft wing flap mechanism

    NASA Astrophysics Data System (ADS)

    Heimbs, Sebastian; Lang, Holger; Havar, Tamas

    2012-12-01

    This paper describes the numerical investigation of the mechanical behaviour of a structural component of an aircraft wing flap support impacted by a wheel rim fragment. The support link made of composite materials was modelled in the commercial finite element code Abaqus/Explicit, incorporating intralaminar and interlaminar failure modes by adequate material models and cohesive interfaces. Validation studies were performed step by step using quasi-static tensile test data and low velocity impact test data. Finally, high velocity impact simulations with a metallic rim fragment were performed for several load cases involving different impact angles, impactor rotation and pre-stress. The numerical rim release analysis turned out to be an efficient approach in the development process of such composite structures and for the identification of structural damage and worst case impact loading scenarios.

  2. Comparative Study on Code-based Linear Evaluation of an Existing RC Building Damaged during 1998 Adana-Ceyhan Earthquake

    NASA Astrophysics Data System (ADS)

    Toprak, A. Emre; Gülay, F. Gülten; Ruge, Peter

    2008-07-01

    Determination of the seismic performance of existing buildings has become one of the key topics in structural analysis after recent earthquakes (i.e., the Izmit and Duzce Earthquakes in 1999, the Kobe Earthquake in 1995 and the Northridge Earthquake in 1994). Considering the need for precise assessment tools to determine seismic performance levels, most earthquake-prone countries try to include performance-based assessment in their seismic codes. Recently, the Turkish Earthquake Code 2007 (TEC'07), which was put into effect in March 2007, also introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study is performed on the code-based seismic assessment of RC buildings with linear static methods of analysis, using a selected existing RC building. The basic principles of the seismic performance evaluation procedures for existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedure is then applied to a real case-study building that was exposed to the 1998 Adana-Ceyhan Earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetrical in the y direction, and it does not have any significant structural irregularities. The rectangular plan dimensions are 16.40 m×7.80 m = 127.90 m2, with five spans in the x direction and two spans in the y direction. It was reported that the building had been moderately damaged during the 1998 earthquake and that a retrofitting process, adding shear walls to the system, was suggested by the authorities. The computations show that linear methods of analysis using either Eurocode 8 or TEC'07 independently produce similar performance levels of collapse for the critical storey of the structure. The computed base shear value according to Eurocode 8 is much higher than required by the Turkish Earthquake Code, even though the selected ground conditions represent the same characteristics. The main reason is that the ordinate of the horizontal elastic response spectrum in Eurocode 8 is increased by the soil factor. In the TEC'07 force-based linear assessment, the seismic demands at cross-sections are checked against residual moment capacities, whereas the chord rotations of primary ductile elements must be checked for the Eurocode safety verifications. On the other hand, the demand curvatures from the linear methods of analysis of Eurocode 8 and TEC'07 are almost similar.
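
    The base-shear difference discussed above follows directly from the shape of the design spectrum: in Eurocode 8 the spectral ordinate is scaled by the soil factor S, and the lateral-force base shear is F_b = S_d(T1) * m * lambda. A hedged Python sketch of that calculation (only the constant-acceleration plateau is handled, and the spectrum parameters and mass are illustrative, not the case-study building's values):

      def design_spectrum_plateau(a_g, S, q, beta=0.2):
          """Eurocode 8 design spectrum ordinate on the constant-acceleration plateau.

          S_d = a_g * S * 2.5 / q, but not below beta * a_g; only the branch
          T_B <= T1 <= T_C is handled in this simplified sketch.
          """
          return max(a_g * S * 2.5 / q, beta * a_g)

      def base_shear(a_g, S, q, mass, lam=0.85):
          """Lateral force method base shear F_b = S_d(T1) * m * lambda."""
          return design_spectrum_plateau(a_g, S, q) * mass * lam

      # Illustrative numbers: PGA 0.28 g, soil factor S = 1.2, behaviour factor q = 3,
      # total mass 800 t. None of these values are taken from the paper.
      g = 9.81
      print(f"F_b ~ {base_shear(0.28 * g, 1.2, 3.0, 800e3) / 1e3:.0f} kN")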

  3. A COMPARATIVE STUDY OF REAL-TIME AND STATIC ULTRASONOGRAPHY DIAGNOSES FOR THE INCIDENTAL DETECTION OF DIFFUSE THYROID DISEASE.

    PubMed

    Kim, Dong Wook

    2015-08-01

    The aim of this study was to compare the diagnostic accuracy of real-time and static ultrasonography (US) for the incidental detection of diffuse thyroid disease (DTD). In 118 consecutive patients, a single radiologist performed real-time US before thyroidectomy. For static US, the same radiologist retrospectively investigated the sonographic findings on a picture-archiving and communication system after 3 months. The diagnostic categories of both real-time and static US diagnoses were determined based on the number of abnormal findings, and the diagnostic indices were calculated by a receiver operating characteristic (ROC) curve analysis using the histopathologic results as the reference standard. Histopathologic results included normal thyroid (n = 77), Hashimoto thyroiditis (n = 11), non-Hashimoto lymphocytic thyroiditis (n = 29), and diffuse hyperplasia (n = 1). Normal thyroid and DTD showed significant differences in echogenicity, echotexture, glandular margin, and vascularity on both real-time and static US. There was a positive correlation between US categories and histopathologic results in both real-time and static US. The highest diagnostic indices were obtained when the cutoff criteria of real-time and static US diagnoses were chosen as indeterminate and suspicious for DTD, respectively. The ROC curve analysis showed that real-time US was superior to static US in diagnostic accuracy. Both real-time and static US may be helpful for the detection of incidental DTD, but real-time US is superior to static US for detecting incidental DTD.
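
    A minimal sketch of an ROC-based comparison of two diagnostic readings (Python with scikit-learn); the labels and ordinal category scores below are synthetic, not the study data:

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(4)
      n = 118
      has_dtd = rng.integers(0, 2, n)                       # 1 = diffuse thyroid disease
      # Ordinal US categories (e.g. 0 = normal ... 3 = suspicious), noisier for the static reading.
      real_time_score = has_dtd * 2 + rng.integers(0, 2, n)
      static_score = has_dtd * 2 + rng.integers(0, 3, n) - 1

      for name, score in [("real-time US", real_time_score), ("static US", static_score)]:
          fpr, tpr, thresholds = roc_curve(has_dtd, score)
          print(f"{name}: AUC = {roc_auc_score(has_dtd, score):.3f}, "
                f"cutoffs evaluated = {len(thresholds)}")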

  4. Stability of the Einstein static universe in open cosmological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canonico, Rosangela; Parisi, Luca; INFN, Sezione di Napoli, GC di Salerno, Via Ponte Don Melillo, I-84081 Baronissi

    2010-09-15

    The stability properties of the Einstein static solution of general relativity are altered when corrective terms arising from modification of the underlying gravitational theory appear in the cosmological equations. In this paper the existence and stability of static solutions are considered in the framework of two recently proposed quantum gravity models. The previously known analysis of the Einstein static solutions in the semiclassical regime of loop quantum cosmology with modifications to the gravitational sector is extended to open cosmological models where a static neutrally stable solution is found. A similar analysis is also performed in the framework of Horava-Lifshitz gravity under detailed balance and projectability conditions. In the case of open cosmological models the two solutions found can be either unstable or neutrally stable according to the admitted values of the parameters.

  5. A Change Impact Analysis to Characterize Evolving Program Behaviors

    NASA Technical Reports Server (NTRS)

    Rungta, Neha Shyam; Person, Suzette; Branchaud, Joshua

    2012-01-01

    Change impact analysis techniques estimate the potential effects of changes made to software. Directed Incremental Symbolic Execution (DiSE) is an intraprocedural technique for characterizing the impact of software changes on program behaviors. DiSE first estimates the impact of the changes on the source code using program slicing techniques, and then uses the impact sets to guide symbolic execution to generate path conditions that characterize impacted program behaviors. DiSE, however, cannot reason about the flow of impact between methods and will fail to generate path conditions for certain impacted program behaviors. In this work, we present iDiSE, an extension to DiSE that performs an interprocedural analysis. iDiSE combines static and dynamic calling context information to efficiently generate impacted program behaviors across calling contexts. Information about impacted program behaviors is useful for testing, verification, and debugging of evolving programs. We present a case study of our implementation of the iDiSE algorithm to demonstrate its efficiency at computing impacted program behaviors. Traditional notions of coverage are insufficient for characterizing the testing efforts used to validate evolving program behaviors because they do not take into account the impact of changes to the code. In this work we present novel definitions of impacted coverage metrics that are useful for evaluating the testing effort required to test evolving programs. We then describe how the notions of impacted coverage can be used to configure techniques such as DiSE and iDiSE in order to support regression-testing-related tasks. We also discuss how DiSE and iDiSE can be configured for debugging, i.e., finding the root cause of errors introduced by changes made to the code. In our empirical evaluation we demonstrate that the configurations of DiSE and iDiSE can be used to support various software maintenance tasks.

  6. The Neglect of Monotone Comparative Statics Methods

    ERIC Educational Resources Information Center

    Tremblay, Carol Horton; Tremblay, Victor J.

    2010-01-01

    Monotone methods enable comparative static analysis without the restrictive assumptions of the implicit-function theorem. Ease of use and flexibility in solving comparative static and game-theory problems have made monotone methods popular in the economics literature and in graduate courses, but they are still absent from undergraduate…

  7. Concomitant Zn-Cd and Pb retention in a carbonated fluvio-glacial deposit under both static and dynamic conditions.

    PubMed

    Lassabatere, Laurent; Spadini, Lorenzo; Delolme, Cécile; Février, Laureline; Galvez Cloutier, Rosa; Winiarski, Thierry

    2007-11-01

    The chemical and physical processes involved in the retention of 10^-2 M Zn, Pb and Cd in a calcareous medium were studied under saturated dynamic (column) and static (batch) conditions. Retention in the columns decreased in the order Pb > Cd ≈ Zn. In the batch experiments, the same order was observed for contact times of less than 40 h; beyond that, Pb > Cd > Zn. Stronger Pb retention is in accordance with the lower solubility of Pb carbonates. However, the equality of retained Zn and Cd does not fit the solubility constants of the carbonated solids. SEM analysis revealed that heavy metals and calcareous particles are associated. Pb precipitated as individualized Zn-Cd-Ca-free carbonated crystallites. All the heavy metals were also found to be associated with calcareous particles, without any change in their porosity, pointing to a surface/lattice diffusion-controlled substitution process. Zn and Cd were always found in concomitance, whereas Pb was fixed separately at the particle circumferences. The Phreeqc 2.12 interactive code was used to model the experimental data on the following basis: flow fractionation in the columns, precipitation of Pb as cerussite linked to kinetically controlled calcite dissolution, and heavy metal sorption onto proton exchanging sites (presumably surface complexation onto the calcite surface). This model simulates exchanges of metals with surface protons, pH buffering and the prevention of early Zn and Cd precipitation. Both modeling and SEM analysis show a probable significant decrease of calcite dissolution along with its contamination with metals.

  8. MicroRNA Expression Profiles in Cultured Human Fibroblasts in Space

    NASA Technical Reports Server (NTRS)

    Wu, Honglu; Lu, Tao; Jeevarajan, John; Rohde, Larry; Zhang, Ye

    2014-01-01

    Microgravity, or an altered gravity environment from the static 1g, has been shown to influence global gene expression patterns and protein levels in living organisms. However, it is unclear how these changes in gene and protein expressions are related to each other or are related to other factors regulating such changes. A different class of RNA, the small non-coding microRNA (miRNA), can have a broad effect on gene expression networks by mainly inhibiting the translation process. Previously, we investigated changes in the expression of miRNA and related genes under simulated microgravity conditions on the ground using the NASA invented bioreactor. In comparison to static 1 g, simulated microgravity altered a number of miRNAs in human lymphoblastoid cells. Pathway analysis with the altered miRNAs and RNA expressions revealed differential involvement of cell communication and catalytic activity, as well as immune response signaling and NGF activation of NF-kB pathways under simulated microgravity condition. The network analysis also identified several projected networks with c- Rel, ETS1 and Ubiquitin C as key factors. In a flight experiment on the International Space Station (ISS), we will investigate the effects of actual spaceflight on miRNA expressions in nondividing human fibroblast cells in mostly G1 phase of the cell cycle. A fibroblast is a type of cell that synthesizes the extracellular matrix and collagen, the structural framework for tissues, and plays a critical role in wound healing and other functions. In addition to miRNA expressions, we will investigate the effects of spaceflight on the cellular response to DNA damages from bleomycin treatment.

  9. Expert Design Advisor

    DTIC Science & Technology

    1990-10-01

    to economic, technological, spatial or logistic concerns, or involve training, man-machine interfaces, or integration into existing systems. Once the...probabilistic reasoning, mixed analysis- and simulation-oriented, mixed computation- and communication-oriented, nonpreemptive static priority...scheduling base, nonrandomized, preemptive static priority scheduling base, randomized, simulation-oriented, and static scheduling base. The selection of both

  10. Duration Dependent Effect of Static Stretching on Quadriceps and Hamstring Muscle Force.

    PubMed

    Alizadeh Ebadi, Leyla; Çetin, Ebru

    2018-03-13

    The aim of this study was to determine the acute effect of static stretching on hamstring and quadriceps muscles' isokinetic strength when applied for various durations to elite athletes, to investigate the effect of different static stretching durations on isokinetic strength, and finally to determine the optimal stretching duration. Fifteen elite male athletes from two different sport branches (10 football and five basketball) participated in this study. The experimental protocol was designed as 17 repetitive static stretching exercises for the hamstring and quadriceps muscle groups according to the indicated experimental protocols: (A) 5 min jogging; (B) 5 min jogging followed by 15 s static stretching; (C) 5 min jogging followed by 30 s static stretching; (D) 5 min jogging followed by static stretching for 45 s. Immediately after each protocol, an isokinetic strength test consisting of five repetitions at 60°/s speed and 20 repetitions at 180°/s speed was recorded for the right leg by the Isomed 2000 device. The Friedman variance analysis test was employed for data analysis. According to the analyses, it was observed that 5 min jogging and 15 s stretching exercises increased the isokinetic strength, whereas 30 and 45 s stretching exercises caused a decrease.
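
    The repeated-measures comparison across the four protocols corresponds to a Friedman test; a minimal Python/SciPy sketch (the torque values are synthetic placeholders, not the study's measurements):

      import numpy as np
      from scipy.stats import friedmanchisquare

      rng = np.random.default_rng(5)
      n_athletes = 15
      baseline = rng.normal(220, 20, n_athletes)            # Nm, synthetic peak torque

      # Peak torque after each protocol: jogging only, +15 s, +30 s, +45 s static stretching.
      protocol_a = baseline + rng.normal(2, 5, n_athletes)
      protocol_b = baseline + rng.normal(4, 5, n_athletes)
      protocol_c = baseline + rng.normal(-6, 5, n_athletes)
      protocol_d = baseline + rng.normal(-9, 5, n_athletes)

      stat, p = friedmanchisquare(protocol_a, protocol_b, protocol_c, protocol_d)
      print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")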

  11. Duration Dependent Effect of Static Stretching on Quadriceps and Hamstring Muscle Force

    PubMed Central

    Çetin, Ebru

    2018-01-01

    The aim of this study was to determine the acute effect of static stretching on hamstring and quadriceps muscles’ isokinetic strength when applied for various durations to elite athletes, to investigate the effect of different static stretching durations on isokinetic strength, and finally to determine the optimal stretching duration. Fifteen elite male athletes from two different sport branches (10 football and five basketball) participated in this study. The experimental protocol was designed as 17 repetitive static stretching exercises for the hamstring and quadriceps muscle groups according to the indicated experimental protocols: (A) 5 min jogging; (B) 5 min jogging followed by 15 s static stretching; (C) 5 min jogging followed by 30 s static stretching; (D) 5 min jogging followed by static stretching for 45 s. Immediately after each protocol, an isokinetic strength test consisting of five repetitions at 60°/s speed and 20 repetitions at 180°/s speed was recorded for the right leg by the Isomed 2000 device. The Friedman variance analysis test was employed for data analysis. According to the analyses, it was observed that 5 min jogging and 15 s stretching exercises increased the isokinetic strength, whereas 30 and 45 s stretching exercises caused a decrease.

  12. Collaborative Research: Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsouleas, Thomas; Decyk, Viktor

    Final Report for grant DE-FG02-06ER54888, "Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models," Viktor K. Decyk, University of California, Los Angeles, CA 90095-1547. The primary goal of this collaborative proposal was to modify the code QuickPIC and apply it to study the long-time stability of beam propagation in low density electron clouds present in circular accelerators. The UCLA contribution to this collaborative proposal was in supporting the development of the pipelining scheme for the QuickPIC code, which extended the parallel scaling of this code by two orders of magnitude. The USC work described here was the PhD research of Ms. Bing Feng, lead author of reference [2] below, who performed the research at USC under the guidance of the PI Tom Katsouleas and in collaboration with Dr. Decyk. The QuickPIC code [1] is a multi-scale Particle-in-Cell (PIC) code. The outer 3D code contains a beam which propagates through a long region of plasma and evolves slowly. The plasma response to this beam is modeled by slices of a 2D plasma code. This plasma response is then fed back to the beam code, and the process repeats. The pipelining is based on the observation that once the beam has passed a 2D slice, the slice's response can be fed back to the beam immediately, without waiting for the beam to pass all the other slices. Thus independent blocks of 2D slices from different time steps can be running simultaneously. The major difficulty arose when particles at the block edges needed to communicate with other blocks. Two versions of the pipelining scheme were developed, one for the full quasi-static code and the other for the basic quasi-static code used by this e-cloud proposal. Details of the pipelining scheme were published in [2]. The new version of QuickPIC was able to run with more than 1,000 processors, and was successfully applied in modeling e-clouds by our collaborators in this proposal [3-8]. Jean-Luc Vay at Lawrence Berkeley National Lab later implemented a similar basic quasi-static scheme, including pipelining, in the code WARP [9] and found good to very good quantitative agreement between the two codes in modeling e-clouds. References: [1] C. Huang, V. K. Decyk, C. Ren, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and T. Katsouleas, "QUICKPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas," J. Computational Phys. 217, 658 (2006). [2] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys. 228, 5430 (2009). [3] C. Huang, V. K. Decyk, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., B. Feng, T. Katsouleas, J. Vieira, and L. O. Silva, "QUICKPIC: A highly efficient fully parallelized PIC code for plasma-based acceleration," Proc. of the SciDAC 2006 Conf., Denver, Colorado, June 2006 [Journal of Physics: Conference Series, W. M. Tang, Editor, vol. 46, Institute of Physics, Bristol and Philadelphia, 2006], p. 190. [4] B. Feng, C. Huang, V. Decyk, W. B. Mori, T. Katsouleas, and P. Muggli, "Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm," Proc. 12th Workshop on Advanced Accelerator Concepts, Lake Geneva, WI, July 2006, p. 201 [AIP Conf. Proceedings, vol. 877, Melville, NY, 2006]. [5] B. Feng, P. Muggli, T. Katsouleas, V. Decyk, C. Huang, and W. Mori, "Long Time Electron Cloud Instability Simulation Using QuickPIC with Pipelining Algorithm," Proc. of the 2007 Particle Accelerator Conference, Albuquerque, NM, June 2007, p. 3615. [6] B. Feng, C. Huang, V. Decyk, W. B. Mori, G. H. Hoffstaetter, P. Muggli, and T. Katsouleas, "Simulation of Electron Cloud Effects on Electron Beam at ERL with Pipelined QuickPIC," Proc. 13th Workshop on Advanced Accelerator Concepts, Santa Cruz, CA, July-August 2008, p. 340 [AIP Conf. Proceedings, vol. 1086, Melville, NY, 2008]. [7] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys. 228, 5430 (2009). [8] C. Huang, W. An, V. K. Decyk, W. Lu, W. B. Mori, F. S. Tsung, M. Tzoufras, S. Morshed, T. Antonsen, B. Feng, T. Katsouleas, R. A. Fonseca, S. F. Martins, J. Vieira, L. O. Silva, E. Esarey, C. G. R. Geddes, W. P. Leemans, E. Cormier-Michel, J.-L. Vay, D. L. Bruhwiler, B. Cowan, J. R. Cary, and K. Paul, "Recent results and future challenges for large scale particle-in-cell simulations of plasma-based accelerator concepts," Proc. of the SciDAC 2009 Conf., San Diego, CA, June 2009 [Journal of Physics: Conference Series, vol. 180, Institute of Physics, Bristol and Philadelphia, 2009], p. 012005. [9] J.-L. Vay, C. M. Celata, M. A. Furman, G. Penn, M. Venturini, D. P. Grote, and K. G. Sonnad, "Update on Electron-Cloud Simulations Using the Package WARP-POSINST," Proc. of the 2009 Particle Accelerator Conference PAC09, Vancouver, Canada, June 2009, paper FR5RFP078.

  13. Electro-Thermal-Mechanical Simulation Capability Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D

    This is the Final Report for LDRD 04-ERD-086, 'Electro-Thermal-Mechanical Simulation Capability'. The accomplishments are well documented in five peer-reviewed publications and six conference presentations and hence will not be detailed here. The purpose of this LDRD was to research and develop numerical algorithms for three-dimensional (3D) Electro-Thermal-Mechanical simulations. LLNL has long been a world leader in the area of computational mechanics, and recently several mechanics codes have become 'multiphysics' codes with the addition of fluid dynamics, heat transfer, and chemistry. However, these multiphysics codes do not incorporate the electromagnetics that is required for a coupled Electro-Thermal-Mechanical (ETM) simulation. There are numerous applications for an ETM simulation capability, such as explosively-driven magnetic flux compressors, electromagnetic launchers, inductive heating and mixing of metals, and MEMS. A robust ETM simulation capability will enable LLNL physicists and engineers to better support current DOE programs, and will prepare LLNL for some very exciting long-term DoD opportunities. We define a coupled Electro-Thermal-Mechanical (ETM) simulation as a simulation that solves, in a self-consistent manner, the equations of electromagnetics (primarily statics and diffusion), heat transfer (primarily conduction), and non-linear mechanics (elastic-plastic deformation, and contact with friction). There is no existing parallel 3D code for simulating ETM systems at LLNL or elsewhere. While there are numerous magnetohydrodynamic codes, these codes are designed for astrophysics, magnetic fusion energy, laser-plasma interaction, etc., and do not attempt to accurately model electromagnetically driven solid mechanics. This project responds to the Engineering R&D Focus Areas of Simulation and Energy Manipulation, and addresses the specific problem of Electro-Thermal-Mechanical simulation for design and analysis of energy manipulation systems such as magnetic flux compression generators and railguns. This project complements ongoing DNT projects that have an experimental emphasis. Our research efforts have been encapsulated in the Diablo and ALE3D simulation codes. This new ETM capability already has both internal and external users, and has spawned additional research in plasma railgun technology. By developing this capability Engineering has become a world leader in ETM design, analysis, and simulation. This research has positioned LLNL to be able to compete for new business opportunities with the DoD in the area of railgun design. We currently have a three-year $1.5M project with the Office of Naval Research to apply our ETM simulation capability to railgun bore life issues, and we expect to be a key player in the railgun community.

  14. Slope Stability Analysis In Seismic Areas Of The Northern Apennines (Italy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo Presti, D.; Fontana, T.; Marchetti, D.

    2008-07-08

    Several research works have been published on slope stability in northern Tuscany (central Italy), particularly in the seismic areas of Garfagnana and Lunigiana (Lucca and Massa-Carrara districts), aimed at analysing slope stability under static and dynamic conditions and at mapping the landslide hazard. In addition, in situ and laboratory investigations are available for the study area, thanks to the activities undertaken by the Tuscany Seismic Survey. Based on this large body of information, the co-seismic stability of a few idealized slope profiles has been analysed by means of the limit equilibrium method (LEM, pseudo-static) and Newmark sliding block analysis (pseudo-dynamic). The analysis results gave indications about the most appropriate seismic coefficient to be used in pseudo-static analysis after establishing an allowable permanent displacement. These indications are commented on in the light of the Italian and European prescriptions for seismic stability analysis with the pseudo-static approach. The stability conditions obtained from the previous analyses could be used to define microzonation criteria for the study area.
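
    As a rough illustration of the pseudo-dynamic side of the abstract above, the sketch below integrates a simplified rigid-block Newmark analysis: permanent displacement accumulates only while the ground acceleration exceeds an assumed yield acceleration. The acceleration record, yield value, and time step are placeholders, not the paper's data.

      # Minimal Newmark sliding-block sketch (an assumed simplification, not the
      # paper's analysis): permanent displacement is the double integral of ground
      # acceleration in excess of the yield acceleration of the sliding mass.
      import numpy as np

      def newmark_displacement(accel, dt, a_yield):
          """accel: ground acceleration history (m/s^2), dt: time step (s),
          a_yield: yield acceleration (m/s^2). Returns permanent displacement (m)."""
          v = 0.0          # relative sliding velocity
          d = 0.0          # accumulated permanent displacement
          for a in accel:
              # the block accelerates relative to the slope only while a > a_yield,
              # and decelerates (until it sticks again) otherwise
              rel_acc = a - a_yield if (a > a_yield or v > 0.0) else 0.0
              v = max(v + rel_acc * dt, 0.0)
              d += v * dt
          return d

      # Hypothetical record: a 0.3 g sine pulse against a 0.15 g yield acceleration.
      t = np.arange(0.0, 2.0, 0.005)
      accel = 0.3 * 9.81 * np.sin(2 * np.pi * 1.0 * t)
      print(f"permanent displacement: {newmark_displacement(accel, 0.005, 0.15 * 9.81):.3f} m")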

  15. TORO II: A finite element computer program for nonlinear quasi-static problems in electromagnetics: Part 2, User's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gartling, D.K.

    User instructions are given for the finite element, electromagnetics program, TORO II. The theoretical background and numerical methods used in the program are documented in SAND95-2472. The present document also describes a number of example problems that have been analyzed with the code and provides sample input files for typical simulations. 20 refs., 34 figs., 3 tabs.

  16. Enhanced quasi-static particle-in-cell simulation of electron cloud instabilities in circular accelerators

    NASA Astrophysics Data System (ADS)

    Feng, Bing

    Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are easily available for parallel computations. However, it is not straightforward to increase the effective speed of the simulation by running the same problem size on an increasing number of processors, because there is a limit to the domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm applied to the fully parallelized particle-in-cell code QuickPIC is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes the job allocation on the processors in parallel computing. With this novel algorithm, it is possible to use on the order of 10^2 processors, and to expand the scale and the speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement from the pipelining algorithm, the fidelity of QuickPIC is enhanced by adding two physics models, the beam space charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied with an eye toward guiding the design of the upgrade and code validation. Moderate emittance growth is observed for the proposed five-fold increase in bunch population, but the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth. The enhanced QuickPIC is then used to simulate the electron cloud effect on the electron beam in the Cornell Energy Recovery Linac (ERL), motivated by the extremely small emittance and high peak currents anticipated in that machine. A tune shift is discovered in the simulation; however, emittance growth of the electron beam in the electron cloud is not observed for ERL parameters.

  17. Functional differentiation of macaque visual temporal cortical neurons using a parametric action space.

    PubMed

    Vangeneugden, Joris; Pollick, Frank; Vogels, Rufin

    2009-03-01

    Neurons in the rostral superior temporal sulcus (STS) are responsive to displays of body movements. We employed a parametric action space to determine how similarities among actions are represented by visual temporal neurons and how form and motion information contributes to their responses. The stimulus space consisted of a stick-plus-point-light figure performing arm actions and their blends. Multidimensional scaling showed that the responses of temporal neurons represented the ordinal similarity between these actions. Further tests distinguished neurons responding equally strongly to static presentations and to actions ("snapshot" neurons) from those responding much less strongly to static presentations, but responding well when motion was present ("motion" neurons). The "motion" neurons were predominantly found in the upper bank/fundus of the STS, and "snapshot" neurons in the lower bank of the STS and inferior temporal convexity. Most "motion" neurons showed strong response modulation during the course of an action, thus responding to action kinematics. "Motion" neurons displayed a greater average selectivity for these simple arm actions than did "snapshot" neurons. We suggest that the "motion" neurons code for visual kinematics, whereas the "snapshot" neurons code for form/posture, and that both can contribute to action recognition, in agreement with computational models of action recognition.

  18. Teaching strategies for using projected images to develop conceptual understanding: Exploring discussion practices in computer simulation and static image-based lessons

    NASA Astrophysics Data System (ADS)

    Price, Norman T.

    The availability and sophistication of visual display images, such as simulations, for use in science classrooms have increased exponentially; however, it can be difficult for teachers to use these images to encourage and engage active student thinking. There is a need to describe flexible discussion strategies that use visual media to engage active thinking. This mixed methods study analyzes teacher behavior in lessons using visual media about the particulate model of matter that were taught by three experienced middle school teachers. Each teacher taught one half of their students with lessons using static overheads and the other half with lessons using a projected dynamic simulation. The quantitative analysis of pre-post data found significant gain differences between the two image mode conditions, suggesting that the students who were assigned to the simulation condition learned more than students who were assigned to the overhead condition. Open coding was used to identify a set of eight image-based teaching strategies that teachers were using with visual displays. Fixed codes for this set of image-based discussion strategies were then developed and used to analyze video and transcripts of whole-class discussions from 12 lessons. The image-based discussion strategies were refined over time in a set of three in-depth 2x2 comparative case studies of two teachers teaching one lesson topic with two image display modes. The comparative case study data suggest that the simulation mode may have offered greater affordances than the overhead mode for planning and enacting discussions. The 12 discussions were also coded for overall teacher-student interaction patterns, such as presentation, IRE, and IRF. When teachers moved during a lesson from using no image to using either image mode, some teachers were observed asking more questions when the image was displayed while others asked many fewer questions. The changes in teacher-student interaction patterns suggest that teachers vary in whether they treat the displayed image as a "tool-for-telling" or a "tool-for-asking." The study attempts to provide new descriptions of strategies teachers use to orchestrate image-based discussions designed to promote student engagement and reasoning in lessons with conceptual goals.

  19. Auditory motion-specific mechanisms in the primate brain

    PubMed Central

    Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.

    2017-01-01

    This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038

  20. Does Intellectual Property Restrict Output? An Analysis of Pharmaceutical Markets*

    PubMed Central

    Lakdawalla, Darius; Philipson, Tomas

    2013-01-01

    Standard normative analysis of intellectual property focuses on the balance between incentives for research and the static welfare costs of reduced price-competition from monopoly. However, static welfare loss from patents is not universal. While patents restrict price competition, they may also provide static welfare benefits by improving incentives for marketing, which is a form of non-price competition. We show theoretically how stronger marketing incentives mitigate, and can even offset, the static costs of monopoly pricing. Empirical analysis in the pharmaceutical industry context suggests that, in the short-run, patent expirations reduce consumer welfare as a result of decreased marketing effort. In the long-run, patent expirations do benefit consumers, but by 30% less than would be implied by the reduction in price alone. The social value of monopoly marketing to consumers alone is roughly on par with its costs to firms. PMID:25221349

  1. Does Intellectual Property Restrict Output? An Analysis of Pharmaceutical Markets.

    PubMed

    Lakdawalla, Darius; Philipson, Tomas

    2012-02-01

    Standard normative analysis of intellectual property focuses on the balance between incentives for research and the static welfare costs of reduced price-competition from monopoly. However, static welfare loss from patents is not universal. While patents restrict price competition, they may also provide static welfare benefits by improving incentives for marketing, which is a form of non-price competition. We show theoretically how stronger marketing incentives mitigate, and can even offset, the static costs of monopoly pricing. Empirical analysis in the pharmaceutical industry context suggests that, in the short-run, patent expirations reduce consumer welfare as a result of decreased marketing effort. In the long-run, patent expirations do benefit consumers, but by 30% less than would be implied by the reduction in price alone. The social value of monopoly marketing to consumers alone is roughly on par with its costs to firms.

  2. Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis

    NASA Technical Reports Server (NTRS)

    Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.

    2017-01-01

    This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
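
    PRECiSA's denotational semantics and proof certificates are not reproduced here, but the flavor of a forward round-off error bound can be sketched with the standard model fl(x op y) = (x op y)(1 + e), |e| <= u, where u is the unit roundoff. The helper functions below are illustrative assumptions, not the tool's algorithm.

      # Minimal sketch (not PRECiSA itself) of accumulating a forward round-off
      # error bound for a small expression using the standard floating-point model.
      import sys

      U = sys.float_info.epsilon / 2  # unit roundoff for binary64

      def add(x, ex, y, ey):
          """Return (value, error bound) of x + y given input error bounds ex, ey."""
          v = x + y
          return v, ex + ey + abs(v) * U

      def mul(x, ex, y, ey):
          v = x * y
          # propagate input errors, then add the new rounding error |v|*U
          return v, abs(x) * ey + abs(y) * ex + ex * ey + abs(v) * U

      # Bound the round-off error of (a + b) * c for exact inputs a, b, c.
      a, b, c = 0.1, 0.2, 3.0
      s, es = add(a, 0.0, b, 0.0)
      p, ep = mul(s, es, c, 0.0)
      print(f"(a + b) * c = {p}, round-off error bound = {ep:.3e}")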

  3. 14 CFR 23.785 - Seats, berths, litters, safety belts, and shoulder harnesses.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... combination of structural analysis and static load tests to limit load; or (3) Static load tests to ultimate... OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY... resulting from the ultimate static load factors prescribed in § 23.561(b)(2) of this part. Each occupant...

  4. 14 CFR 23.785 - Seats, berths, litters, safety belts, and shoulder harnesses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... combination of structural analysis and static load tests to limit load; or (3) Static load tests to ultimate... OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY... resulting from the ultimate static load factors prescribed in § 23.561(b)(2) of this part. Each occupant...

  5. 14 CFR 23.785 - Seats, berths, litters, safety belts, and shoulder harnesses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... combination of structural analysis and static load tests to limit load; or (3) Static load tests to ultimate... OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY... resulting from the ultimate static load factors prescribed in § 23.561(b)(2) of this part. Each occupant...

  6. 14 CFR 23.785 - Seats, berths, litters, safety belts, and shoulder harnesses.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... combination of structural analysis and static load tests to limit load; or (3) Static load tests to ultimate... OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY... resulting from the ultimate static load factors prescribed in § 23.561(b)(2) of this part. Each occupant...

  7. 14 CFR 23.785 - Seats, berths, litters, safety belts, and shoulder harnesses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... combination of structural analysis and static load tests to limit load; or (3) Static load tests to ultimate... OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY... resulting from the ultimate static load factors prescribed in § 23.561(b)(2) of this part. Each occupant...

  8. Finite element structural redesign by large admissible perturbations

    NASA Technical Reports Server (NTRS)

    Bernitsas, Michael M.; Beyko, E.; Rim, C. W.; Alzahabi, B.

    1991-01-01

    In structural redesign, two structural states are involved; the baseline (known) State S1 with unacceptable performance, and the objective (unknown) State S2 with given performance specifications. The difference between the two states in performance and design variables may be as high as 100 percent or more depending on the scale of the structure. A Perturbation Approach to Redesign (PAR) is presented to relate any two structural states S1 and S2 that are modeled by the same finite element model and represented by different values of the design variables. General perturbation equations are derived expressing implicitly the natural frequencies, dynamic modes, static deflections, static stresses, Euler buckling loads, and buckling modes of the objective S2 in terms of its performance specifications, and S1 data and Finite Element Analysis (FEA) results. Large Admissible Perturbation (LEAP) algorithms are implemented in code RESTRUCT to define the objective S2 incrementally without trial and error by postprocessing FEA results of S1 with no additional FEAs. Systematic numerical applications in redesign of a 10 element 48 degree of freedom (dof) beam, a 104 element 192 dof offshore tower, a 64 element 216 dof plate, and a 144 element 896 dof cylindrical shell show the accuracy, efficiency, and potential of PAR to find an objective state that may differ 100 percent from the baseline design.
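
    The general perturbation equations of the abstract are not reproduced here, but the kind of relation a redesign increment builds on can be sketched with first-order eigenvalue perturbation for K phi = lambda M phi: d_lambda ≈ phi^T (dK - lambda dM) phi / (phi^T M phi). The matrices below are arbitrary placeholders, and this is an assumed simplification rather than the RESTRUCT/LEAP formulation.

      # First-order eigenvalue perturbation sketch for a generalized eigenproblem
      # K phi = lambda M phi; the stiffness/mass increments are hypothetical.
      import numpy as np
      from scipy.linalg import eigh

      K = np.array([[4.0, -2.0], [-2.0, 3.0]])   # baseline stiffness
      M = np.diag([2.0, 1.0])                    # baseline mass
      lams, phis = eigh(K, M)
      lam, phi = lams[0], phis[:, 0]             # lowest mode of the baseline

      dK = np.array([[0.2, 0.0], [0.0, 0.1]])    # hypothetical redesign increment
      dM = np.zeros((2, 2))

      d_lam = phi @ (dK - lam * dM) @ phi / (phi @ M @ phi)
      exact = eigh(K + dK, M + dM, eigvals_only=True)[0]
      print(f"predicted lambda = {lam + d_lam:.4f}, exact = {exact:.4f}")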

  9. Critical evaluation of reverse engineering tool Imagix 4D!

    PubMed

    Yadav, Rashmi; Patel, Ravindra; Kothari, Abhay

    2016-01-01

    Legacy code is difficult to comprehend. Various commercial reengineering tools are available; each has its own working style, inherent capabilities, and shortcomings. The focus of the available tools is on visualizing static behavior, not dynamic behavior. This makes the work of people engaged in software product maintenance, code understanding, and reengineering/reverse engineering difficult, and it creates the need for a comprehensive reengineering/reverse engineering tool. We found Imagix 4D useful because it generates a wide range of pictorial representations in the form of flow charts, flow graphs, class diagrams, metrics and, to a partial extent, dynamic visualizations. We evaluated Imagix 4D with the help of a case study involving a few samples of source code. The behavior of the tool was analyzed on multiple small codes and on a large code, the gcc C parser. The large-code evaluation was performed to uncover dead code, unstructured code, and the effect of not including required files at the preprocessing level. The decision density and complexity metrics that Imagix 4D prepares for a large code were found useful in judging how much reengineering is required. At the same time, Imagix 4D showed limitations in dynamic visualization, flow chart separation for large code, and parsing loops. The outcome of the evaluation will eventually help in upgrading Imagix 4D and highlights the need for full-featured tools in the area of software reengineering/reverse engineering. It will also help the research community, especially those interested in building software reengineering tools.
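
    Imagix 4D's metric definitions are proprietary and are not reproduced here; as a hedged illustration of what a decision-density style metric measures, the toy sketch below counts branching constructs per non-empty line in C-like source using a crude regular expression.

      # Toy sketch of a "decision density" style metric (decision points per
      # non-empty line of C-like source). This is an assumed, simplified
      # definition; it is not how Imagix 4D computes its metrics.
      import re

      DECISION_PATTERN = re.compile(r"\b(?:if|for|while|case)\b|&&|\|\|")

      def decision_density(source: str) -> float:
          lines = [ln for ln in source.splitlines() if ln.strip()]
          decisions = len(DECISION_PATTERN.findall(source))
          return decisions / len(lines) if lines else 0.0

      sample = """
      int classify(int x) {
          if (x > 0 && x < 10) return 1;
          for (int i = 0; i < x; i++) {
              if (i % 2) x--;
          }
          return 0;
      }
      """
      print(f"decision density = {decision_density(sample):.2f} decisions/line")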

  10. Passive infrared ice detection for helicopter applications

    NASA Technical Reports Server (NTRS)

    Dershowitz, Adam L.; Hansman, R. John, Jr.

    1990-01-01

    A technique is proposed to remotely detect rotor icing on helicopters by using passive IR thermometry to detect the warming caused by latent heat release as supercooled water freezes. During icing, the ice accretion region will be warmer than the uniced trailing edge, resulting in a characteristic chordwise temperature profile. Preliminary tests were conducted on a static model in the NASA Icing Research Tunnel for a variety of wet (glaze) and dry (rime) ice conditions. The chordwise temperature profiles were confirmed by observation with an IR thermal video system and thermocouple observations. The IR observations were consistent with predictions of the LEWICE ice accretion code, which was used to extrapolate the observations to rotor icing conditions. Based on the static observations, the passive IR ice detection technique appears promising; however, further testing on rotating blades is required.

  11. Static impedance behavior of programmable metallization cells

    NASA Astrophysics Data System (ADS)

    Rajabi, S.; Saremi, M.; Barnaby, H. J.; Edwards, A.; Kozicki, M. N.; Mitkova, M.; Mahalanabis, D.; Gonzalez-Velo, Y.; Mahmud, A.

    2015-04-01

    Programmable metallization cell (PMC) devices work by growing and dissolving a conducting metallic bridge across a chalcogenide glass (ChG) solid electrolyte, which changes the resistance of the cell. PMC operation relies on the incorporation of metal ions in the ChG films via photo-doping to lower the off-state resistance and stabilize resistive switching, and on the subsequent transport of these ions by electric fields induced by an externally applied bias. In this paper, the static on- and off-state resistance of a PMC device composed of a layered (Ag-rich/Ag-poor) Ge30Se70 ChG film with an active Ag electrode and an inert Ni electrode is characterized and modeled using a three-dimensional simulation code. Calibrating the model to experimental data enables the extraction of device parameters such as material bandgaps, workfunctions, density of states, carrier mobilities, dielectric constants, and affinities.

  12. Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Go, Susie

    2006-01-01

    A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk-driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exist. The approach is scalable, allowing inclusion of additional information as detailed data become available. The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.
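
    The SAFE tool itself is not reproduced here, but the top-down Monte Carlo idea (sampling subsystem failures, phase by phase, from heritage-based probabilities) can be sketched as below. The phase names and probabilities are invented placeholders.

      # Toy Monte Carlo sketch (not the SAFE tool) of top-down launch reliability:
      # each subsystem gets an assumed per-phase failure probability, and the
      # loss-of-mission probability is estimated by sampling many missions.
      import random

      PHASES = {
          "ascent":   {"engine": 0.005, "avionics": 0.001, "structures": 0.0005},
          "on_orbit": {"engine": 0.001, "avionics": 0.002, "structures": 0.0001},
          "entry":    {"engine": 0.000, "avionics": 0.001, "structures": 0.0020},
      }

      def simulate_mission(rng):
          for phase, subsystems in PHASES.items():
              for name, p_fail in subsystems.items():
                  if rng.random() < p_fail:
                      return False  # loss of mission in this phase
          return True

      rng = random.Random(42)
      n = 100_000
      successes = sum(simulate_mission(rng) for _ in range(n))
      print(f"estimated mission reliability: {successes / n:.4f}")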

  13. Application of artificial neural networks to the design optimization of aerospace structural components

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Patnaik, Surya N.; Murthy, Pappu L. N.

    1993-01-01

    The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated by using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network with the code NETS. Optimum designs for new design conditions were predicted by using the trained network. Neural net prediction of optimum designs was found to be satisfactory for most of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.
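
    The NETS code is not reproduced here; the sketch below shows the same idea with scikit-learn: train a small neural network on input/output pairs of design conditions and optimum sizing variables produced by an optimizer, then predict the optimum for a new condition. The single-variable model and all numbers are synthetic assumptions.

      # Surrogate-training sketch (scikit-learn, not the NETS code): fit a small
      # neural network to synthetic (load -> optimum sizing) pairs standing in for
      # optimization-generated design data, then query it for a new condition.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      loads = rng.uniform(1.0, 10.0, size=(200, 1))                               # design condition
      areas = 0.8 * loads + 0.05 * loads**2 + rng.normal(0, 0.05, loads.shape)    # "optimum" sizing

      net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
      net.fit(loads, areas.ravel())

      new_load = np.array([[6.5]])
      print(f"predicted optimum sizing for load 6.5: {net.predict(new_load)[0]:.3f}")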

  14. Optimum Design of Aerospace Structural Components Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Berke, L.; Patnaik, S. N.; Murthy, P. L. N.

    1993-01-01

    The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires a trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network using the code NETS. Optimum designs for new design conditions were predicted using the trained network. Neural net prediction of optimum designs was found to be satisfactory for the majority of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.

  15. Performance comparison of AV1, HEVC, and JVET video codecs on 360 (spherical) video

    NASA Astrophysics Data System (ADS)

    Topiwala, Pankaj; Dai, Wei; Krishnan, Madhu; Abbas, Adeel; Doshi, Sandeep; Newman, David

    2017-09-01

    This paper compares the coding efficiency performance on 360 video of three software codecs: (a) the AV1 video codec from the Alliance for Open Media (AOM); (b) the HEVC Reference Software HM; and (c) the JVET JEM Reference SW. Note that 360 video is especially challenging content, in that one codes at full resolution globally but typically views locally (in a viewport), which magnifies errors. These are tested in two different projection formats, ERP and RSP, to check consistency. Performance is tabulated for 1-pass encoding on two fronts: (1) objective performance based on end-to-end (E2E) metrics such as SPSNR-NN and WS-PSNR, currently developed in the JVET committee; and (2) informal subjective assessment of static viewports. Constant quality encoding is performed with all three codecs for an unbiased comparison of the core coding tools. Our general conclusion is that under constant quality coding, AV1 underperforms HEVC, which underperforms JVET. We also test with rate control, where AV1 currently underperforms the open source x265 HEVC codec. Objective and visual evidence is provided.
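
    As a hedged illustration of one of the end-to-end metrics mentioned above, the sketch below computes a WS-PSNR-style score for equirectangular frames using cosine-of-latitude weights. It is a simplified restatement of the weighting idea, not the JVET 360Lib reference implementation, and the test frames are random placeholders.

      # Sketch of a WS-PSNR-style computation for equirectangular (ERP) frames:
      # each row is weighted by the cosine of its latitude so the over-sampled
      # poles do not dominate the error. Simplified illustration only.
      import numpy as np

      def ws_psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
          h, w = ref.shape
          j = np.arange(h)
          weights = np.cos((j + 0.5 - h / 2) * np.pi / h)[:, None] * np.ones((1, w))
          wmse = np.sum(weights * (ref.astype(float) - test.astype(float)) ** 2) / np.sum(weights)
          return 10 * np.log10(max_val ** 2 / wmse)

      ref = np.random.default_rng(1).integers(0, 256, (180, 360))
      test = np.clip(ref + np.random.default_rng(2).normal(0, 3, ref.shape), 0, 255)
      print(f"WS-PSNR = {ws_psnr(ref, test):.2f} dB")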

  16. An Optimization Code for Nonlinear Transient Problems of a Large Scale Multidisciplinary Mathematical Model

    NASA Astrophysics Data System (ADS)

    Takasaki, Koichi

    This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost, and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System) which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).

  17. Results of an Advanced Fan Stage Operating Over a Wide Range of Speed and Bypass Ratio. Part 2; Comparison of CFD and Experimental Results

    NASA Technical Reports Server (NTRS)

    Celestina, Mark L.; Suder, Kenneth L.; Kulkarni, Sameer

    2010-01-01

    NASA and GE teamed to design and build a 57-percent engine-scaled fan stage for a Mach 4 variable cycle turbofan/ramjet engine for access to space with multipoint operations. This fan stage was tested in NASA's transonic compressor facility. The objectives of this test were to assess the aerodynamic and aeromechanical performance and operability characteristics of the fan stage over the entire range of engine operation, including: 1) sea level static take-off; 2) transition over large swings in fan bypass ratio; 3) transition from turbofan to ramjet; and 4) fan wind-milling operation at high Mach flight conditions. This paper focuses on an assessment of APNASA, a multistage turbomachinery analysis code developed by NASA, to predict the fan stage performance and operability over a wide range of speeds (37 to 100 percent) and bypass ratios.

  18. Structural analysis of wind turbine rotors for NSF-NASA Mod-0 wind power system

    NASA Technical Reports Server (NTRS)

    Spera, D. A.

    1976-01-01

    Preliminary estimates are presented of vibratory loads and stresses in hingeless and teetering rotors for the proposed NSF-NASA Mod-0 wind power system. Preliminary blade design utilizes a tapered tubular aluminum spar which supports nonstructural aluminum ribs and skin and is joined to the rotor hub by a steel shank tube. Stresses in the shank of the blade are calculated for static, rated, and overload operating conditions. Blade vibrations were limited to the fundamental flapping modes, which were elastic cantilever bending for hingeless rotor blades and rigid-body rotation for teetering rotor blades. The MOSTAB-C computer code was used to calculate aerodynamic and mechanical loads. The teetering rotor has substantial advantages over the hingeless rotor with respect to shank stresses, fatigue life, and tower loading. The hingeless rotor analyzed does not appear to be structurally stable during overloads.

  19. McEliece PKC Calculator

    NASA Astrophysics Data System (ADS)

    Marek, Repka

    2015-01-01

    The original McEliece PKC proposal is interesting thanks to its resistance against all known attacks, even those using quantum cryptanalysis, in an IND-CCA2 secure conversion. Here we present a generic implementation of the original McEliece PKC proposal, which provides test vectors (for all important intermediate results) and in which a measurement tool for side-channel analysis is employed. To the best of our knowledge, this is the first such implementation. This Calculator is valuable for implementation optimization, for further investigation of the properties of McEliece/Niederreiter-like PKCs, and also for teaching. Thanks to that, one can, for example, examine the side-channel vulnerability of a certain implementation, or one can find out and test particular parameters of the cryptosystem in order to make them appropriate for an efficient hardware implementation. This implementation is available [1] in executable binary format and as a static C++ library, as well as in the form of source code, for Linux and Windows operating systems.

  20. Effect of Microscopic Damage Events on Static and Ballistic Impact Strength of Triaxial Braid Composites

    NASA Technical Reports Server (NTRS)

    Littell, Justin D.; Binienda, Wieslaw K.; Arnold, William A.; Roberts, Gary D.; Goldberg, Robert K.

    2008-01-01

    In previous work, the ballistic impact resistance of triaxial braided carbon/epoxy composites made with large flat tows (12k and 24k) was examined by impacting 2 X 2 X 0.125" composite panels with gelatin projectiles. Several high strength, intermediate modulus carbon fibers were used in combination with both untoughened and toughened matrix materials. A wide range of penetration thresholds were measured for the various fiber/matrix combinations. However, there was no clear relationship between the penetration threshold and the properties of the constituents. During some of these experiments high speed cameras were used to view the failure process, and full-field strain measurements were made to determine the strain at the onset of failure. However, these experiments provided only limited insight into the microscopic failure processes responsible for the wide range of impact resistance observed. In order to investigate potential microscopic failure processes in more detail, quasi-static tests were performed in tension, compression, and shear. Full-field strain measurement techniques were used to identify local regions of high strain resulting from microscopic failures. Microscopic failure events near the specimen surface, such as splitting of fiber bundles in surface plies, were easily identified. Subsurface damage, such as fiber fracture or fiber bundle splitting, could be identified by its effect on in-plane surface strains. Subsurface delamination could be detected as an out-of-plane deflection at the surface. Using this data, failure criteria could be established at the fiber tow level for use in analysis. An analytical formulation was developed to allow the microscopic failure criteria to be used in place of macroscopic properties as input to simulations performed using the commercial explicit finite element code, LS-DYNA. The test methods developed to investigate microscopic failure will be presented along with methods for determining local failure criteria that can be used in analysis. Results of simulations performed using LS-DYNA will be presented to illustrate the capabilities and limitations for simulating failure during quasi-static deformation and during ballistic impact of large unit cell size triaxial braid composites.

  1. Evaluation of Relationship between Trunk Muscle Endurance and Static Balance in Male Students

    PubMed Central

    Barati, Amirhossein; SafarCherati, Afsaneh; Aghayari, Azar; Azizi, Faeze; Abbasi, Hamed

    2013-01-01

    Purpose Fatigue of the trunk muscles contributes to spinal instability during strenuous and prolonged physical tasks and therefore may lead to injury; however, from a performance perspective, the relation between core muscle endurance and optimal balance control is not well known. The purpose of this study was to examine the relationship between trunk muscle endurance and static balance. Methods Fifty male students residing in a Tehran university dormitory (age 23.9±2.4 years, height 173.0±4.5 cm, weight 70.7±6.3 kg) took part in the study. Trunk muscle endurance was assessed using the Sørensen test of trunk extensor endurance, the trunk flexor endurance test, and the side bridge endurance test, and static balance was measured using the single-limb stance test. A multiple linear regression analysis was applied to test whether the trunk muscle endurance measures significantly predicted static balance. Results There were positive correlations between static balance level and trunk flexor, extensor and lateral endurance measures (Pearson correlation test, r=0.80 and P<0.001; r=0.71 and P<0.001; r=0.84 and P<0.001, respectively). According to the multiple regression analysis for variables predicting static balance, the linear combination of trunk muscle endurance measures was significantly related to static balance (F(3,46) = 66.60, P<0.001). Endurance of the trunk flexor, extensor and lateral muscles was significantly associated with static balance level. The regression model which included these factors had a sample multiple correlation coefficient of 0.902, indicating that approximately 81% of the variance of static balance is explained by the model. Conclusion There is a significant relationship between trunk muscle endurance and static balance. PMID:24800004
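
    The multiple linear regression reported above can be sketched as follows with synthetic data standing in for the study's measurements; the coefficients and R^2 value here are illustrative only.

      # Sketch of the multiple linear regression idea: predict single-limb-stance
      # balance time from trunk flexor, extensor, and lateral endurance times.
      # All data below are synthetic placeholders, not the study's measurements.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)
      n = 50
      flexor = rng.normal(120, 30, n)     # endurance time, s
      extensor = rng.normal(110, 25, n)   # endurance time, s
      lateral = rng.normal(70, 20, n)     # endurance time, s
      balance = 0.2 * flexor + 0.15 * extensor + 0.3 * lateral + rng.normal(0, 5, n)

      X = np.column_stack([flexor, extensor, lateral])
      model = LinearRegression().fit(X, balance)
      print("coefficients:", np.round(model.coef_, 3))
      print(f"R^2 = {model.score(X, balance):.3f}")   # analogous to the reported 0.81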

  2. The HIBEAM Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    2000-02-01

    HIBEAM is a 2 1/2D particle-in-cell (PIC) simulation code developed in the late 1990's in the Heavy-Ion Fusion research program at Lawrence Berkeley National Laboratory. The major purpose of HIBEAM is to simulate the transverse (i.e., X-Y) dynamics of a space-charge-dominated, non-relativistic heavy-ion beam being transported in a static accelerator focusing lattice. HIBEAM has been used to study beam combining systems, effective dynamic apertures in electrostatic quadrupole lattices, and emittance growth due to transverse misalignments. At present, HIBEAM runs on the CRAY vector machines (C90 and J90's) at NERSC, although it would be relatively simple to port the code to UNIX workstations so long as IMSL math routines were available.

  3. The measurement of boundary layers on a compressor blade in cascade. Volume 2: Data tables

    NASA Technical Reports Server (NTRS)

    Zierke, William C.; Deutsch, Steven

    1989-01-01

    Measurements were made of the boundary layers and wakes about a highly loaded, double-circular-arc compressor blade in cascade. These laser Doppler velocimetry measurements have yielded a very detailed and precise data base with which to test the application of viscous computational codes to turbomachinery. In order to test the computational codes at off-design conditions, the data have been acquired at a chord Reynolds number of 500,000 and at three incidence angles. Average values and 95 percent confidence bands were tabulated for the velocity, local turbulence intensity, skewness, kurtosis, and percent backflow. Tables also exist for the blade static-pressure distributions and boundary layer velocity profiles reconstructed to account for the normal pressure gradient.

  4. Time-marching transonic flutter solutions including angle-of-attack effects

    NASA Technical Reports Server (NTRS)

    Edwards, J. W.; Bennett, R. M.; Whitlow, W., Jr.; Seidel, D. A.

    1982-01-01

    Transonic aeroelastic solutions based upon the transonic small perturbation potential equation were studied. Time-marching transient solutions of plunging and pitching airfoils were analyzed using a complex exponential modal identification technique, and seven alternative integration techniques for the structural equations were evaluated. The HYTRAN2 code was used to determine transonic flutter boundaries versus Mach number and angle-of-attack for NACA 64A010 and MBB A-3 airfoils. In the code, a monotone differencing method, which eliminates leading edge expansion shocks, is used to solve the potential equation. When the effect of static pitching moment upon the angle-of-attack is included, the MBB A-3 airfoil can have multiple flutter speeds at a given Mach number.

  5. A comparison of dynamic and static economic models of uneven-aged stand management

    Treesearch

    Robert G. Haight

    1985-01-01

    Numerical techniques have been used to compute the discrete-time sequence of residual diameter distributions that maximize the present net worth (PNW) of harvestable volume from an uneven-aged stand. Results contradicted optimal steady-state diameter distributions determined with static analysis. In this paper, optimality conditions for solutions to dynamic and static...

  6. Development of wide-angle 2D light scattering static cytometry

    NASA Astrophysics Data System (ADS)

    Xie, Linyan; Liu, Qiao; Shao, Changshun; Su, Xuantao

    2016-10-01

    We have recently developed a 2D light scattering static cytometer for cellular analysis in a label-free manner, which measures side scatter (SSC) light in the polar angular range from 79 to 101 degrees. Compared with conventional flow cytometry, our cytometric technique requires no fluorescent labeling of the cells, and static cytometry measurements can be performed without flow control. In this paper we present an improved label-free static cytometer that can obtain 2D light scattering patterns in a wider angular range. By illuminating static microspheres on a chip with a scanning optical fiber, wide-angle 2D light scattering patterns of single standard microspheres with a mean diameter of 3.87 μm are obtained. The 2D patterns of 3.87 μm microspheres contain both large-angle forward scatter (FSC) and SSC light in the polar angular range from approximately 40 to 100 degrees. Experimental 2D patterns of 3.87 μm microspheres are in good agreement with Mie theory simulations. The wide-angle light scattering measurements may provide better resolution for particle analysis than the SSC measurements alone. Two-dimensional light scattering patterns of HL-60 human acute leukemia cells are obtained using our static cytometer. Compared with SSC 2D light scattering patterns, wide-angle 2D patterns contain richer information about the HL-60 cells. Obtaining 2D light scattering patterns over a wide angular range could help enhance the capabilities of our label-free static cytometry for cell analysis.

  7. Design and Implementation of Decoy Enhanced Dynamic Virtualization Networks

    DTIC Science & Technology

    2016-12-12

    Final report covering 07/01/2015 to 08/31/2016. Major Goals: The relatively static configurations of networks and

  8. On the Stefan Problem with Volumetric Energy Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Crepeau; Ali Siahpush; Blaine Spotten

    2009-11-01

    This paper presents results of solid-liquid phase change, driven by volumetric energy generation, in a vertical cylinder. We show excellent agreement between a quasi-static, approximate analytical solution valid for Stefan numbers less than one, and a computational model solved using the CFD code FLUENT®. A computational study also shows the effect that the volumetric energy generation has on both the mushy zone thickness and convection in the melt during phase change.

  9. Numerical Modeling of Sliding Stability of RCC dam

    NASA Astrophysics Data System (ADS)

    Mughieda, O.; Hazirbaba, K.; Bani-Hani, K.; Daoud, W.

    2017-06-01

    Stability and stress analyses are the most important elements that require rigorous consideration in the design of a dam structure. Stability of dams against sliding is crucial because the substantial horizontal load must be resisted safely by mobilizing adequate shearing forces along the base of the dam foundation. In the current research, the static sliding stability of a roller-compacted-concrete (RCC) dam was modelled using the finite element method to investigate the stability against sliding. A commercially available finite element software package (SAP 2000) was used to analyze stresses in the body of the dam and the foundation. A linear finite element static analysis was performed in which linear plane strain isoparametric four-node elements were used for modelling the dam-foundation system. The analysis was carried out assuming that no slip occurs at the interface between the dam and the foundation. The usual static loading condition was applied for the static analysis. The greatest tension was found to develop in the rock adjacent to the toe of the upstream slope. The factor of safety against sliding along the entire base of the dam was found to be greater than 1 (FS>1) for static loading conditions.
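
    The finite element model itself is not reproduced here, but the conventional shear-friction check that such a sliding-stability assessment ultimately reduces to can be sketched as FS = (c*A + (W - U)*tan(phi)) / H. This generic textbook form and the numbers below are assumptions, not the paper's SAP 2000 results.

      # Conventional shear-friction check against sliding along a dam base
      # (generic textbook formula, used here for illustration only):
      #   FS = (c*A + (W - U) * tan(phi)) / H
      import math

      def sliding_fs(cohesion_kpa, base_area_m2, weight_kn, uplift_kn,
                     friction_angle_deg, horizontal_load_kn):
          resisting = cohesion_kpa * base_area_m2 + \
              (weight_kn - uplift_kn) * math.tan(math.radians(friction_angle_deg))
          return resisting / horizontal_load_kn

      # Hypothetical numbers for a small gravity section (per metre of dam length).
      fs = sliding_fs(cohesion_kpa=100.0, base_area_m2=40.0, weight_kn=18000.0,
                      uplift_kn=6000.0, friction_angle_deg=45.0,
                      horizontal_load_kn=8000.0)
      print(f"factor of safety against sliding: {fs:.2f}")   # FS > 1 means stable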

  10. Spatial coding of eye movements relative to perceived earth and head orientations during static roll tilt

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Paloski, W. H.; Reschke, M. F.

    1998-01-01

    The purpose of this study was to examine the spatial coding of eye movements during static roll tilt (up to +/-45 degrees) relative to perceived earth and head orientations. Binocular videographic recordings obtained in darkness from eight subjects allowed us to quantify the mean deviations in gaze trajectories along both horizontal and vertical coordinates relative to the true earth and head orientations. We found that both variability and curvature of gaze trajectories increased with roll tilt. The trajectories of eye movements made along the perceived earth-horizontal (PEH) were more accurate than movements along the perceived head-horizontal (PHH). The trajectories of both PEH and PHH saccades tended to deviate in the same direction as the head tilt. The deviations in gaze trajectories along the perceived earth-vertical (PEV) and perceived head-vertical (PHV) were both similar to the PHH orientation, except that saccades along the PEV deviated in the opposite direction relative to the head tilt. The magnitude of deviations along the PEV, PHH, and PHV corresponded to perceptual overestimations of roll tilt obtained from verbal reports. Both PEV gaze trajectories and perceptual estimates of tilt orientation were different following clockwise rather than counterclockwise tilt rotation; however, the PEH gaze trajectories were less affected by the direction of tilt rotation. Our results suggest that errors in gaze trajectories along PEV and perceived head orientations increase during roll tilt in a similar way to perceptual errors of tilt orientation. Although PEH and PEV gaze trajectories became nonorthogonal during roll tilt, we conclude that the spatial coding of eye movements during roll tilt is overall more accurate for the perceived earth reference frame than for the perceived head reference frame.

  11. Blasim: A computational tool to assess ice impact damage on engine blades

    NASA Astrophysics Data System (ADS)

    Reddy, E. S.; Abumeri, G. H.; Chamis, C. C.

    1993-04-01

    A portable computer code called BLASIM was developed at NASA LeRC to assess ice impact damage on aircraft engine blades. In addition to ice impact analyses, the code also contains static, dynamic, resonance margin, and supersonic flutter analysis capabilities. Solid, hollow, superhybrid, and composite blades are supported. An optional preprocessor (input generator) was also developed to interactively generate input for BLASIM. The blade geometry can be defined using a series of airfoils at discrete input stations or by a finite element grid. The code employs a coarse, fixed finite element mesh containing triangular plate finite elements to minimize program execution time. The ice piece is modeled as an equivalent spherical object that has a high velocity opposite that of the aircraft and parallel to the engine axis. For local impact damage assessment, the impact load is considered as a distributed force acting over a region around the impact point. The average radial strain of the finite elements along the leading edge is used as a measure of the local damage. To estimate damage at the blade root, the impact is treated as an impulse and a combined stress failure criterion is employed. Parametric studies of local and root ice impact damage, and post-impact dynamics, are discussed for solid and composite blades.

  12. SU(2) lattice gauge theory simulations on Fermi GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoso, Nuno, E-mail: nunocardoso@cftp.ist.utl.p; Bicudo, Pedro, E-mail: bicudo@ist.utl.p

    2011-05-10

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi architectures) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., an implementation that exploits the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we have achieved an excellent performance of about 200 times the speed of one CPU in single precision, around 110 Gflops/s. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2x slower) than single precision computations.

  13. Archer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atzeni, Simone; Ahn, Dong; Gopalakrishnan, Ganesh

    2017-01-12

    Archer is built on top of the LLVM/Clang compilers that support OpenMP. It applies static and dynamic analysis techniques to detect data races in OpenMP programs while incurring very low runtime and memory overhead. Static analyses identify data-race-free OpenMP regions and exclude them from runtime analysis, which is performed by the ThreadSanitizer included in LLVM/Clang.

  14. Static Analysis of Programming Exercises: Fairness, Usefulness and a Method for Application

    ERIC Educational Resources Information Center

    Nutbrown, Stephen; Higgins, Colin

    2016-01-01

    This article explores the suitability of static analysis techniques based on the abstract syntax tree (AST) for the automated assessment of early/mid degree level programming. Focus is on fairness, timeliness and consistency of grades and feedback. Following investigation into manual marking practices, including a survey of markers, the assessment…
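
    As a hedged illustration of the kind of AST-based static analysis the article discusses, the sketch below uses Python's ast module to extract a few simple metrics from a submitted exercise; the metric names and the sample submission are hypothetical, not taken from the article.

      import ast
      import textwrap

      def exercise_metrics(source: str) -> dict:
          """Collect a few AST-derived metrics that a marking rubric might weight."""
          tree = ast.parse(source)
          funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]

          def depth(node, d=0):
              children = list(ast.iter_child_nodes(node))
              return d if not children else max(depth(c, d + 1) for c in children)

          return {
              "num_functions": len(funcs),
              "documented_functions": sum(1 for f in funcs if ast.get_docstring(f)),
              "max_nesting_depth": depth(tree),
              "uses_loops": any(isinstance(n, (ast.For, ast.While)) for n in ast.walk(tree)),
          }

      student_code = textwrap.dedent('''
          def mean(xs):
              """Return the arithmetic mean of xs."""
              total = 0
              for x in xs:
                  total += x
              return total / len(xs)
      ''')
      print(exercise_metrics(student_code))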

  15. An Economical Method for Static Headspace Enrichment for Arson Analysis

    ERIC Educational Resources Information Center

    Olesen, Bjorn

    2010-01-01

    Static headspace analysis of accelerants from suspected arsons is accomplished by placing an arson sample in a sealed container with a carbon strip suspended above the sample. The sample is heated, cooled to room temperature, and then the organic components are extracted from the carbon strip with carbon disulfide followed by gas chromatography…

  16. Pressure Mapping and Efficiency Analysis of an EPPLER 857 Hydrokinetic Turbine

    NASA Astrophysics Data System (ADS)

    Clark, Tristan

    A conceptual energy ship is presented to provide renewable energy. The ship, driven by the wind, drags a hydrokinetic turbine through the water. The power generated is used to run electrolysis on board, taking the resultant hydrogen back to shore to be used as an energy source. The basin efficiency (power/(thrust*velocity)) of the Hydrokinetic Turbine (HTK) plays a vital role in this process. In order to extract the maximum allowable power from the flow, the blades need to be optimized. The structural analysis of the blade is important, as the blade will undergo high pressure loads from the water. A procedure for analysis of a preliminary Hydrokinetic Turbine blade design is developed. The blade was designed by a non-optimized Blade Element Momentum Theory (BEMT) code. Six simulations were run, with varying mesh resolution, turbulence models, and flow region size. The procedure provides a detailed explanation of the entire process, from geometry and mesh generation to post-processing analysis tools. The efficiency results from the simulations are used to study the mesh resolution, flow region size, and turbulence models. The results are compared to the BEMT model design targets. Static pressure maps are created that can be used for structural analysis of the blades.
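
    The basin efficiency defined above is a one-line calculation; the sketch below spells it out, with purely illustrative numbers rather than values from the study.

      def basin_efficiency(power_w, thrust_n, velocity_m_s):
          """Basin efficiency as defined in the abstract: extracted power divided by
          the towing power expended against the turbine's thrust (thrust * velocity)."""
          return power_w / (thrust_n * velocity_m_s)

      # Illustrative numbers only (not from the study):
      print(basin_efficiency(power_w=50e3, thrust_n=12e3, velocity_m_s=6.0))   # ~0.69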

  17. Determination of Stability and Control Derivatives using Computational Fluid Dynamics and Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Green, Lawrence L.; Montgomery, Raymond C.; Raney, David L.

    1999-01-01

    With the recent interest in novel control effectors there is a need to determine the stability and control derivatives of new aircraft configurations early in the design process. These derivatives are central to most control law design methods and would allow the determination of closed-loop control performance of the vehicle. Early determination of the static and dynamic behavior of an aircraft may permit significant improvement in configuration weight, cost, stealth, and performance through multidisciplinary design. The classical method of determining static stability and control derivatives - constructing and testing wind tunnel models - is expensive and requires a long lead time for the resultant data. Wind tunnel tests are also limited to the preselected control effectors of the model. To overcome these shortcomings, computational fluid dynamics (CFD) solvers are augmented via automatic differentiation, to directly calculate the stability and control derivatives. The CFD forces and moments are differentiated with respect to angle of attack, angle of sideslip, and aircraft shape parameters to form these derivatives. A subset of static stability and control derivatives of a tailless aircraft concept have been computed by two differentiated inviscid CFD codes and verified for accuracy with central finite-difference approximations and favorable comparisons to a simulation database.
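
    The abstract notes that the differentiated CFD derivatives were verified against central finite-difference approximations. The sketch below shows that check on a toy pitching-moment model standing in for a CFD evaluation; the model and its constants are illustrative assumptions, not the paper's codes.

      def pitching_moment(alpha_rad):
          """Toy stand-in for a CFD evaluation of the pitching-moment coefficient Cm;
          in the paper this would be a full (automatically differentiated) flow solution."""
          cm0, cm_alpha = 0.02, -0.85          # illustrative constants
          return cm0 + cm_alpha * alpha_rad + 0.1 * alpha_rad**3

      def central_difference(f, x, h=1e-4):
          """Second-order central finite difference, the verification check cited in the abstract."""
          return (f(x + h) - f(x - h)) / (2.0 * h)

      alpha = 0.05                             # radians
      print("dCm/dalpha ~", central_difference(pitching_moment, alpha))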

  18. Heat transfer, velocity-temperature correlation, and turbulent shear stress from Navier-Stokes computations of shock wave/turbulent boundary layer interaction flows

    NASA Technical Reports Server (NTRS)

    Wang, C. R.; Hingst, W. R.; Porro, A. R.

    1991-01-01

    The properties of 2-D shock wave/turbulent boundary layer interaction flows were calculated by using a compressible turbulent Navier-Stokes numerical computational code. Interaction flows caused by oblique shock wave impingement on the turbulent boundary layer flow were considered. The oblique shock waves were induced with shock generators at angles of attack less than 10 degs in supersonic flows. The surface temperatures were kept at near-adiabatic (ratio of wall static temperature to free stream total temperature) and cold wall (ratio of wall static temperature to free stream total temperature) conditions. The computational results were studied for the surface heat transfer, velocity temperature correlation, and turbulent shear stress in the interaction flow fields. Comparisons of the computational results with existing measurements indicated that (1) the surface heat transfer rates and surface pressures could be correlated with Holden's relationship, (2) the mean flow streamwise velocity components and static temperatures could be correlated with Crocco's relationship if flow separation did not occur, and (3) the Baldwin-Lomax turbulence model should be modified for turbulent shear stress computations in the interaction flows.
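
    Item (2) above refers to Crocco's velocity-temperature correlation. The sketch below evaluates a common textbook form of the Crocco-Busemann relation (which may differ in detail from the form used in the study); the wall and edge conditions are illustrative, not values from the measurements.

      def crocco_busemann_T(u_over_ue, T_wall, T_edge, M_edge, gamma=1.4, r=0.89):
          """Static temperature from a textbook Crocco-Busemann relation:
          T = T_w + (T_aw - T_w)*(u/u_e) - (T_aw - T_e)*(u/u_e)**2,
          with adiabatic-wall temperature T_aw = T_e*(1 + r*(gamma - 1)/2*M_e**2)."""
          T_aw = T_edge * (1.0 + r * 0.5 * (gamma - 1.0) * M_edge**2)
          return T_wall + (T_aw - T_wall) * u_over_ue - (T_aw - T_edge) * u_over_ue**2

      # Illustrative boundary-layer profile at M_e = 3 (numbers are not from the paper):
      for u_frac in (0.0, 0.5, 1.0):
          print(u_frac, round(crocco_busemann_T(u_frac, T_wall=600.0, T_edge=220.0, M_edge=3.0), 1))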

  19. Cost-Effectiveness Analysis of Family Planning Services Offered by Mobile Clinics versus Static Clinics in Assiut, Egypt.

    PubMed

    Al-Attar, Ghada S T; Bishai, David; El-Gibaly, Omaima

    2017-03-01

    Cost-effectiveness studies of family planning (FP) services are very valuable in providing evidence-based data for decision makers in Egypt. Cost data came from record reviews for all 15 mobile clinics and a matched set of 15 static clinics, and from interviews with staff members of the selected clinics at Assiut Governorate. Effectiveness measures included couple-years of protection (CYP) and FP visits. Incremental cost-effectiveness ratios (ICER) and sensitivity analyses were calculated. Mobile clinics cost more per facility and produced more CYPs but had fewer FP visits. Sensitivity analysis using the total costs, CYP and FP visits of mobile and static clinics showed that variations in CYP altered the ICER from $2 to $6 per CYP. Mobile clinics, with their high emphasis on IUDs, offer a reasonable cost-effectiveness of $4.46 per additional CYP compared to static clinics. The ability of mobile clinics to reach more vulnerable women and to offer more long-acting methods might affect a policy decision between these options. Static clinics should consider whether emphasizing IUDs may make their services more cost-effective.
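
    The incremental cost-effectiveness ratio used above is simply the extra cost divided by the extra effect. The sketch below computes it for hypothetical clinic totals chosen only so that the ratio reproduces the reported $4.46 per additional CYP; the individual cost and CYP figures are not from the paper.

      def icer(cost_new, cost_old, effect_new, effect_old):
          """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
          return (cost_new - cost_old) / (effect_new - effect_old)

      # Hypothetical annual totals for mobile vs. static clinics:
      print(round(icer(cost_new=30_000, cost_old=21_000, effect_new=3_300, effect_old=1_282), 2))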

  20. Comparative Study on Code-based Linear Evaluation of an Existing RC Building Damaged during 1998 Adana-Ceyhan Earthquake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toprak, A. Emre; Guelay, F. Guelten; Ruge, Peter

    2008-07-08

    Determination of the seismic performance of existing buildings has become one of the key concepts in structural analysis after recent earthquakes (e.g. the Izmit and Duzce Earthquakes in 1999, the Kobe Earthquake in 1995 and the Northridge Earthquake in 1994). Considering the need for precise assessment tools to determine the seismic performance level, most earthquake-prone countries try to include performance-based assessment in their seismic codes. Recently, the Turkish Earthquake Code 2007 (TEC'07), which was put into effect in March 2007, also introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study is performed on the code-based seismic assessment of RC buildings with linear static methods of analysis, selecting an existing RC building. The basic principles of the seismic performance evaluation procedures for existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedure is then applied to a real case-study building which was exposed to the 1998 Adana-Ceyhan Earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetrical in the y direction, with no significant structural irregularities. The rectangular plan dimensions are 16.40 m x 7.80 m = 127.92 m², with five spans in the x direction and two spans in the y direction. It was reported that the building had been moderately damaged during the 1998 earthquake and a retrofitting process, adding shear walls to the system, was suggested by the authorities. The computations show that performing the linear methods of analysis according to either Eurocode 8 or TEC'07 independently produces similar performance levels of collapse for the critical storey of the structure. The computed base shear value according to Eurocode 8 is much higher than the requirement of the Turkish Earthquake Code, even though the selected ground conditions represent the same characteristics. The main reason is that the ordinate of the horizontal elastic response spectrum for Eurocode 8 is increased by the soil factor. In the TEC'07 force-based linear assessment, the seismic demands at cross-sections are checked against residual moment capacities, whereas the chord rotations of primary ductile elements must be checked for the Eurocode safety verifications. On the other hand, the demand curvatures from the linear methods of analysis of Eurocode 8 and TEC'07 are very similar.

  1. Combining Acceleration and Displacement Dependent Modal Frequency Responses Using an MSC/NASTRAN DMAP Alter

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.

    1996-01-01

    Solving for dynamic responses of free-free launch vehicle/spacecraft systems acted upon by buffeting winds is commonly performed throughout the aerospace industry. Due to the unpredictable nature of this wind loading event, these problems are typically solved using frequency response random analysis techniques. To generate dynamic responses for spacecraft with statically-indeterminate interfaces, spacecraft contractors prefer to develop models which have response transformation matrices developed for mode acceleration data recovery. This method transforms spacecraft boundary accelerations and displacements into internal responses. Unfortunately, standard MSC/NASTRAN modal frequency response solution sequences cannot be used to combine acceleration- and displacement-dependent responses required for spacecraft mode acceleration data recovery. External user-written computer codes can be used with MSC/NASTRAN output to perform such combinations, but these methods can be labor and computer resource intensive. Taking advantage of the analytical and computer resource efficiencies inherent within MSC/NASTRAN, a DMAP Alter has been developed to combine acceleration- and displacement-dependent modal frequency responses for performing spacecraft mode acceleration data recovery. The Alter has been used successfully to efficiently solve a common aerospace buffeting wind analysis.

  2. Local buckling and crippling of composite stiffener sections

    NASA Technical Reports Server (NTRS)

    Bonanni, David L.; Johnson, Eric R.; Starnes, James H., Jr.

    1988-01-01

    Local buckling, postbuckling, and crippling (failure) of channel, zee, and I- and J-section stiffeners made of AS4/3502 graphite-epoxy unidirectional tape are studied by experiment and analysis. Thirty-six stiffener specimens were tested statically to failure in axial compression as intermediate length columns. Web width is 1.25 inches for all specimens, and the flange width-to-thickness ratio ranges from 7 to 28 for the specimens tested. The radius of the stiffener corners is either 0.125 or 0.250 inches. A sixteen-ply orthotropic layup, an eight-ply quasi-isotropic layup, and a sixteen-ply quasi-isotropic layup are examined. Geometrically nonlinear analyses of five specimens were performed with the STAGS finite element code. Analytical results are compared to experimental data. Inplane stresses from STAGS are used to conduct a plane stress failure analysis of these specimens. Also, the development of interlaminar stress equations from equilibrium for classical laminated plate theory is presented. An algorithm to compute high order displacement derivatives required by these equations based on the Discrete Fourier Transform (DFT) is discussed.
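
    The DFT-based derivative computation mentioned at the end of the abstract can be illustrated compactly: for a periodic sample, differentiation becomes multiplication by powers of the wavenumber in Fourier space. The sketch below is a generic spectral-derivative routine, not the authors' algorithm, and assumes periodicity of the sampled field.

      import numpy as np

      def dft_derivative(w, length, order=1):
          """Spectral (DFT-based) derivative of a periodic sample w(x) on [0, length):
          differentiation becomes multiplication by (i*k)**order in Fourier space."""
          n = w.size
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)    # angular wavenumbers
          return np.fft.ifft((1j * k) ** order * np.fft.fft(w)).real

      # Check against an analytic case: d^3/dx^3 sin(2*pi*x/L) = -(2*pi/L)**3 * cos(2*pi*x/L)
      L = 2.0
      x = np.linspace(0.0, L, 64, endpoint=False)
      w = np.sin(2.0 * np.pi * x / L)
      err = dft_derivative(w, L, order=3) + (2.0 * np.pi / L) ** 3 * np.cos(2.0 * np.pi * x / L)
      print("max error:", np.max(np.abs(err)))                 # ~1e-13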

  3. Fatigue based design and analysis of wheel hub for Student formula car by Simulation Approach

    NASA Astrophysics Data System (ADS)

    Gowtham, V.; Ranganathan, A. S.; Satish, S.; Alexis, S. John; Siva kumar, S.

    2016-09-01

    In the existing design of the wheel hub used for Student Formula cars, the brake disc cannot be removed easily since the disc is mounted between the knuckle and the hub. In case of a bend or any other damage to the disc, replacement of the disc becomes difficult. Further, using an OEM hub and knuckle intended for commercial vehicles increases the unsprung mass, which should be avoided in Student Formula cars to improve performance. In this design the above-mentioned difficulties have been overcome by redesigning the hub so that the brake disc can be removed by removing only the wheel and the caliper, and the hub also has reduced weight compared with the existing OEM hub. A CAD model was developed based on the required fatigue life cycles. The forces acting on the hub were calculated and a linear static structural analysis was performed on the wheel hub for three different materials using the ANSYS finite element code V16.2. The theoretical fatigue strength was compared with the stress obtained from the structural analysis for each material.

  4. Dynamic and static fatigue behavior of sintered silicon nitrides

    NASA Technical Reports Server (NTRS)

    Chang, J.; Khandelwal, P.; Heitman, P. W.

    1987-01-01

    The dynamic and static fatigue behavior of Kyocera SN220M sintered silicon nitride at 1000 C was studied. Fractographic analysis of the material failing in dynamic fatigue revealed the presence of slow crack growth (SCG) at stressing rates below 41 MPa/min. Under conditions of static fatigue this material also displayed SCG at stresses below 345 MPa. SCG appears to be controlled by microcracking of the grain boundaries. The crack velocity exponent (n) determined from both dynamic and static fatigue tests ranged from 11 to 16.

  5. Parametric Instability of Static Shafts-Disk System Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Wahab, A. M.; Rasid, Z. A.; Abu, A.

    2017-10-01

    The parametric instability condition is an important consideration in the design process as it can cause failure in machine elements. In this study, parametric instability behaviour was studied for a simple shaft and disk system subjected to axial load under pinned-pinned boundary conditions. The shaft was modelled based on Nelson’s beam model, which considers translational and rotary inertias, transverse shear deformation and the torsional effect. Floquet’s method was used to estimate the solution of the Mathieu equation. Finite element codes were developed in MATLAB to establish the instability chart. The effect of an additional disk mass on the stability chart was investigated for pinned-pinned boundary conditions. Numerical results and illustrative examples are given. It is found that the additional disk mass decreases the instability region under static conditions. The location of the disk also has a significant effect on the instability region of the shaft.
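
    A minimal version of the Floquet check described above, applied to the scalar Mathieu equation x'' + (delta + eps*cos t)x = 0 rather than the full finite element shaft-disk model, is sketched below; the parameter values are illustrative only.

      import numpy as np
      from scipy.integrate import solve_ivp

      def mathieu_stable(delta, eps, period=2.0 * np.pi):
          """Floquet check for x'' + (delta + eps*cos t)*x = 0: build the monodromy
          matrix over one forcing period; for this undamped system the motion is
          bounded (stable) when |trace| <= 2."""
          def rhs(t, y):
              return [y[1], -(delta + eps * np.cos(t)) * y[0]]

          cols = []
          for y0 in ([1.0, 0.0], [0.0, 1.0]):        # two independent initial conditions
              sol = solve_ivp(rhs, (0.0, period), y0, rtol=1e-9, atol=1e-12)
              cols.append(sol.y[:, -1])
          monodromy = np.column_stack(cols)
          return abs(np.trace(monodromy)) <= 2.0

      # Inside the first instability tongue (near delta = 1/4) vs. a stable point:
      print(mathieu_stable(0.25, 0.2), mathieu_stable(0.60, 0.2))   # False True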

  6. Real-time filtering and detection of dynamics for compression of HDTV

    NASA Technical Reports Server (NTRS)

    Sauer, Ken D.; Bauer, Peter

    1991-01-01

    The preprocessing of video sequences for data compression is discussed. The end goal associated with this is a compression system for HDTV capable of transmitting perceptually lossless sequences at under one bit per pixel. Two subtopics were emphasized to prepare the video signal for more efficient coding: (1) nonlinear filtering to remove noise and shape the signal spectrum to take advantage of insensitivities of human viewers; and (2) segmentation of each frame into temporally dynamic/static regions for conditional frame replenishment. The latter technique operates best under the assumption that the sequence can be modelled as a superposition of active foreground and static background. The considerations were restricted to monochrome data, since it was expected to use the standard luminance/chrominance decomposition, which concentrates most of the bandwidth requirements in the luminance. Similar methods may be applied to the two chrominance signals.
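
    A hedged sketch of the second subtopic, segmenting each frame into dynamic and static regions for conditional frame replenishment, is given below using simple block-wise frame differencing; the block size and threshold are arbitrary assumptions, not the paper's algorithm.

      import numpy as np

      def dynamic_mask(prev_frame, curr_frame, block=8, threshold=12.0):
          """Label block x block tiles of a monochrome frame as dynamic (True) or static
          (False) by thresholding the mean absolute difference against the previous frame;
          static tiles could then be skipped by conditional frame replenishment."""
          diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
          h, w = diff.shape
          hb, wb = h // block, w // block
          tiles = diff[:hb * block, :wb * block].reshape(hb, block, wb, block)
          return tiles.mean(axis=(1, 3)) > threshold

      rng = np.random.default_rng(1)
      prev = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
      curr = prev.copy()
      curr[16:32, 16:32] = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)   # "moving" region
      mask = dynamic_mask(prev, curr)
      print(int(mask.sum()), "of", mask.size, "tiles flagged dynamic")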

  7. Development and Positioning Accuracy Assessment of Single-Frequency Precise Point Positioning Algorithms by Combining GPS Code-Pseudorange Measurements with Real-Time SSR Corrections

    PubMed Central

    Kim, Miso; Park, Kwan-Dong

    2017-01-01

    We have developed a suite of real-time precise point positioning programs to process GPS pseudorange observables, and validated their performance through static and kinematic positioning tests. To correct inaccurate broadcast orbits and clocks, and account for signal delays occurring from the ionosphere and troposphere, we applied State Space Representation (SSR) error corrections provided by the Seoul Broadcasting System (SBS) in South Korea. Site displacements due to solid earth tide loading are also considered for the purpose of improving the positioning accuracy, particularly in the height direction. When the developed algorithm was tested under static positioning, Kalman-filtered solutions produced a root-mean-square error (RMSE) of 0.32 and 0.40 m in the horizontal and vertical directions, respectively. For the moving platform, the RMSE was found to be 0.53 and 0.69 m in the horizontal and vertical directions. PMID:28598403

  8. Effective Use of Multimedia Presentations to Maximize Learning within High School Science Classrooms

    ERIC Educational Resources Information Center

    Rapp, Eric

    2013-01-01

    This research used an evidenced-based experimental 2 x 2 factorial design General Linear Model with Repeated Measures Analysis of Covariance (RMANCOVA). For this analysis, time served as the within-subjects factor while treatment group (i.e., static and signaling, dynamic and signaling, static without signaling, and dynamic without signaling)…

  9. Correcting (18)F-fluoride PET static scan measurements of skeletal plasma clearance for tracer efflux from bone.

    PubMed

    Siddique, Musib; Frost, Michelle L; Moore, Amelia E B; Fogelman, Ignac; Blake, Glen M

    2014-03-01

    The aim of the study was to examine whether (18)F-fluoride PET ((18)F-PET) static scan measurements of bone plasma clearance (Ki) can be corrected for tracer efflux from bone from the time of injection. The efflux of tracer from bone mineral to plasma was described by a first-order rate constant kloss. A modified Patlak analysis was applied to 60-min dynamic (18)F-PET scans of the spine and hip acquired during trials on the bone anabolic agent teriparatide to find the best-fit values of kloss at the lumbar spine, total hip and femoral shaft. The resulting values of kloss were used to extrapolate the modified Patlak plots to 120 min after injection and derive a sequence of static scan estimates of Ki at 4-min intervals that were compared with the Patlak Ki values from the 60-min dynamic scans. A comparison was made with the results of the standard static scan analysis, which assumes kloss=0. The best-fit values of kloss for the spine and hip regions of interest averaged 0.006/min and did not change when patients were treated with teriparatide. Static scan values of Ki calculated using the modified analysis with kloss=0.006/min were independent of time between 10 and 120 min after injection and were in close agreement with findings from the dynamic scans. In contrast, by 2 h after injection the static scan Ki values calculated using the standard analysis underestimated the dynamic scan results by 20%. Using a modified analysis that corrects for F efflux from bone, estimates of Ki from static PET scans can be corrected for time up to 2 h after injection. This simplified approach may obviate the need to perform dynamic scans and hence shorten the scanning procedure for the patient and reduce the cost of studies. It also enables reliable estimates of Ki to be obtained from multiple skeletal sites with a single injection of tracer.
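
    The static-scan estimate described above amounts to dividing the late bone uptake value by an efflux-discounted integral of the plasma input function. The sketch below shows one common way of writing such an estimate; the exact formulation and the synthetic input function are assumptions and may differ from the paper's.

      import numpy as np

      def static_ki(t, cp, c_bone_T, T, v0=0.0, k_loss=0.0):
          """Static-scan estimate of plasma clearance Ki from one late uptake value
          c_bone_T at time T, given the plasma input function cp(t).  The factor
          exp(-k_loss*(T - tau)) discounts plasma exposure for first-order tracer
          efflux from bone; the paper's exact expression may differ."""
          mask = t <= T
          tt, cc = t[mask], cp[mask]
          integrand = cc * np.exp(-k_loss * (T - tt))
          exposure = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(tt))
          return (c_bone_T - v0 * cc[-1]) / exposure

      # Synthetic illustration: a decaying input function and a 90-min static value.
      t = np.linspace(0.0, 120.0, 241)                 # minutes
      cp = 10.0 * np.exp(-t / 30.0)                    # arbitrary units
      print(static_ki(t, cp, c_bone_T=3.5, T=90.0, v0=0.2, k_loss=0.006))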

  10. The Gremlin Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-26

    The Gremlin software package is a performance analysis approach targeted to support the Co-Design process for future systems. It consists of a series of modules that can be used to alter a machine's behavior with the goal of emulating future machine properties. The modules can be divided into several classes; the most significant ones are detailed below. PowGre is a series of modules that help explore the power consumption properties of applications and determine the impact of power constraints on applications. Most of them use low-level processor interfaces to directly control voltage and frequency settings as well as per-node, per-socket, or memory power bounds. MemGre are memory Gremlins and implement a new performance analysis technique that captures the application's effective use of the storage capacity of different levels of the memory hierarchy as well as the bandwidth between adjacent levels. The approach models various memory components as resources and measures how much of each resource the application uses from the application's own perspective. To the application a given amount of a resource is "used" if not having this amount will degrade the application's performance. This is in contrast to the hardware-centric perspective that considers "use" as any hardware action that utilizes the resource, even if it has no effect on performance. ResGre are Gremlins that use fault injection techniques to emulate higher fault rates than currently present in today's systems. Faults can be injected through various means, including network interposition, static analysis and code modification, or direct application notification. ResGre also includes patches to previously released LLNL codes that can counteract and react to injected failures.

  11. A method for the geometrically nonlinear analysis of compressively loaded prismatic composite structures

    NASA Technical Reports Server (NTRS)

    Stoll, Frederick; Gurdal, Zafer; Starnes, James H., Jr.

    1991-01-01

    A method was developed for the geometrically nonlinear analysis of the static response of thin-walled stiffened composite structures loaded in uniaxial or biaxial compression. The method is applicable to arbitrary prismatic configurations composed of linked plate strips, such as stiffened panels and thin-walled columns. The longitudinal ends of the structure are assumed to be simply supported, and geometric shape imperfections can be modeled. The method can predict the nonlinear phenomena of postbuckling strength and imperfection sensitivity which are exhibited by some buckling-dominated structures. The method is computer-based and is semi-analytic in nature, making it computationally economical in comparison to finite element methods. The method uses a perturbation approach based on the use of a series of buckling mode shapes to represent displacement contributions associated with nonlinear response. Displacement contributions which are of second order in the modal amplitudes are incorporated in addition to the buckling mode shapes. The principle of virtual work is applied using a finite basis of buckling modes, and terms through the third order in the modal amplitudes are retained. A set of cubic nonlinear algebraic equations is obtained, from which approximate equilibrium solutions are determined. Buckling mode shapes for the general class of structure are obtained using the VIPASA analysis code within the PASCO stiffened-panel design code. Thus, subject to some additional restrictions in loading and plate anisotropy, structures which can be modeled with respect to buckling behavior by VIPASA can be analyzed with respect to nonlinear response using the new method. Results obtained using the method are compared with both experimental and analytical results in the literature. The configurations investigated include several different unstiffened and blade-stiffened panel configurations, featuring both homogeneous, isotropic materials and laminated composite materials.

  12. Experimental and numerical results for a generic axisymmetric single-engine afterbody with tails at transonic speeds

    NASA Technical Reports Server (NTRS)

    Burley, J. R., II; Carlson, J. R.; Henderson, W. P.

    1986-01-01

    Static pressure measurements were made on the afterbody, nozzle and tails of a generic single-engine axisymmetric fighter configuration. Data were recorded at Mach numbers of 0.6, 0.9, and 1.2. NPR was varied from 1.0 to 8.0 and angle of attack was varied from -3 deg. to 9 deg. Experimental data were compared with numerical results from two state-of-the-art computer codes.

  13. Status and future of the 3D MAFIA group of codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebeling, F.; Klatt, R.; Krawzcyk, F.

    1988-12-01

    The group of fully three dimensional computer codes for solving Maxwell's equations for a wide range of applications, MAFIA, is already well established. Extensive comparisons with measurements have demonstrated the accuracy of the computations. A large number of components have been designed for accelerators, such as kicker magnets, non-cylindrical cavities, ferrite loaded cavities, vacuum chambers with slots and transitions, etc. The latest additions to the system include a new static solver that can calculate 3D magneto- and electrostatic fields, and a self consistent version of the 2D-BCI that solves the field equations and the equations of motion in parallel. Work on new eddy current modules has started, which will allow treatment of laminated and/or solid iron cores excited by low frequency currents. Based on our experience with the present releases 1 and 2, we have started a complete revision of the whole user interface and data structure, which will make the codes even more user-friendly and flexible.

  14. Steady-State Computation of Constant Rotational Rate Dynamic Stability Derivatives

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Green, Lawrence L.

    2000-01-01

    Dynamic stability derivatives are essential to predicting the open and closed loop performance, stability, and controllability of aircraft. Computational determination of constant-rate dynamic stability derivatives (derivatives of aircraft forces and moments with respect to constant rotational rates) is currently performed indirectly with finite differencing of multiple time-accurate computational fluid dynamics solutions. Typical time-accurate solutions require excessive amounts of computational time to complete. Formulating Navier-Stokes (N-S) equations in a rotating noninertial reference frame and applying an automatic differentiation tool to the modified code has the potential for directly computing these derivatives with a single, much faster steady-state calculation. The ability to rapidly determine static and dynamic stability derivatives by computational methods can benefit multidisciplinary design methodologies and reduce dependency on wind tunnel measurements. The CFL3D thin-layer N-S computational fluid dynamics code was modified for this study to allow calculations on complex three-dimensional configurations with constant rotation rate components in all three axes. These CFL3D modifications also have direct application to rotorcraft and turbomachinery analyses. The modified CFL3D steady-state calculation is a new capability that showed excellent agreement with results calculated by a similar formulation. The application of automatic differentiation to CFL3D allows the static stability and body-axis rate derivatives to be calculated quickly and exactly.

  15. The First Static and Dynamic Analysis of 3-D Printed Sintered Ceramics for Body Armor Applications

    DTIC Science & Technology

    2016-09-01

    This report evaluates sintered alumina tiles produced by a 3-D printing methodology and examines their static and quasi-static parameters (including density). [Only front matter survives in the remainder of the available record; its figure captions mention an experimental setup for recording fracture and a rod projectile.]

  16. Numerical modeling of the fracture process in a three-unit all-ceramic fixed partial denture.

    PubMed

    Kou, Wen; Kou, Shaoquan; Liu, Hongyuan; Sjögren, Göran

    2007-08-01

    The main objectives were to examine the fracture mechanism and process of a ceramic fixed partial denture (FPD) framework under simulated mechanical loading using a recently developed numerical modeling code, the R-T(2D) code, and also to evaluate the suitability of the R-T(2D) code as a tool for this purpose. Using the recently developed R-T(2D) code, the fracture mechanism and process of a three-unit yttria-tetragonal zirconia polycrystal ceramic (Y-TZP) FPD framework were simulated under static loading. In addition, the fracture pattern obtained using the numerical simulation was compared with the fracture pattern obtained in a previous laboratory test. The result revealed that the framework fracture pattern obtained using the numerical simulation agreed with that observed in the previous laboratory test. The quasi-photoelastic stress fringe pattern and acoustic emission showed that the fracture mechanism was tensile failure and that the crack started at the lower boundary of the framework. The fracture process could be followed both step by step and within each step. Based on the findings in the current study, the R-T(2D) code seems suitable for use as a complement to other tests and clinical observations in studying stress distribution, fracture mechanisms and fracture processes in ceramic FPD frameworks.

  17. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1994-01-01

    LSENS has been developed for solving complex, homogeneous, gas-phase, chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision and, in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis which provides the relationships between the predictions of a kinetics model and the input parameters of the problem. LSENS provides for efficient and accurate chemical kinetics computations and includes sensitivity analysis for a variety of problems, including nonisothermal conditions. LSENS replaces the previous NASA general chemical kinetics codes GCKP and GCKP84. LSENS is designed for flexibility, convenience and computational efficiency. A variety of chemical reaction models can be considered. The models include static system, steady one-dimensional inviscid flow, reaction behind an incident shock wave including boundary layer correction, and the perfectly stirred (highly backmixed) reactor. In addition, computations of equilibrium properties can be performed for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static problems LSENS computes sensitivity coefficients with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of each chemical reaction. To integrate the ODEs describing chemical kinetics problems, LSENS uses the packaged code LSODE, the Livermore Solver for Ordinary Differential Equations, because it has been shown to be the most efficient and accurate code for solving such problems. The sensitivity analysis computations use the decoupled direct method, as implemented by Dunker and modified by Radhakrishnan.
This method has shown greater efficiency and stability with equal or better accuracy than other methods of sensitivity analysis. LSENS is written in FORTRAN 77 with the exception of the NAMELIST extensions used for input. While this makes the code fairly machine independent, execution times on IBM PC compatibles would be unacceptable to most users. LSENS has been successfully implemented on a Sun4 running SunOS and a DEC VAX running VMS. With minor modifications, it should also be easily implemented on other platforms with FORTRAN compilers which support NAMELIST input. LSENS required 4Mb of RAM under SunOS 4.1.1 and 3.4Mb of RAM under VMS 5.5.1. The standard distribution medium for LSENS is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 1600 BPI 9-track magnetic tape or a TK50 tape cartridge in DEC VAX BACKUP format. Alternate distribution media and formats are available upon request. LSENS was developed in 1992.
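
    As a toy illustration of the kind of sensitivity coefficient LSENS reports, the sketch below integrates a two-step first-order mechanism and estimates the sensitivity of one species to a rate coefficient. LSENS itself uses LSODE and the decoupled direct method; the brute-force finite difference here is only a stand-in, and the mechanism and rate values are invented for the example.

      from scipy.integrate import solve_ivp

      def chain_kinetics(t_end, k1, k2, y0=(1.0, 0.0, 0.0)):
          """Toy A -> B -> C first-order mechanism (a stand-in for a full reaction mechanism)."""
          def rhs(t, y):
              a, b, c = y
              return [-k1 * a, k1 * a - k2 * b, k2 * b]
          return solve_ivp(rhs, (0.0, t_end), y0, rtol=1e-10, atol=1e-12).y[:, -1]

      def sensitivity_dB_dk1(t_end, k1, k2, rel=1e-6):
          """Sensitivity of [B](t_end) to the rate coefficient k1.  LSENS obtains such
          coefficients with the decoupled direct method; the brute-force finite
          difference here is only an illustration."""
          h = rel * k1
          b_plus = chain_kinetics(t_end, k1 + h, k2)[1]
          b_minus = chain_kinetics(t_end, k1 - h, k2)[1]
          return (b_plus - b_minus) / (2.0 * h)

      print("d[B]/dk1 at t = 2:", sensitivity_dB_dk1(t_end=2.0, k1=1.0, k2=0.5))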

  18. In-flight measurement of the National Oceanic and Atmospheric Administration (NOAA)-10 static Earth sensor error

    NASA Technical Reports Server (NTRS)

    Harvie, E.; Filla, O.; Baker, D.

    1993-01-01

    Analysis performed in the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) measures error in the static Earth sensor onboard the National Oceanic and Atmospheric Administration (NOAA)-10 spacecraft using flight data. Errors are computed as the difference between Earth sensor pitch and roll angle telemetry and reference pitch and roll attitude histories propagated by gyros. The flight data error determination illustrates the effect on horizon sensing of systematic variation in the Earth infrared (IR) horizon radiance with latitude and season, as well as the effect of anomalies in the global IR radiance. Results of the analysis provide a comparison between static Earth sensor flight performance and that of scanning Earth sensors studied previously in the GSFC/FDD. The results also provide a baseline for evaluating various models of the static Earth sensor. Representative days from the NOAA-10 mission indicate the extent of uniformity and consistency over time of the global IR horizon. A unique aspect of the NOAA-10 analysis is the correlation of flight data errors with independent radiometric measurements of stratospheric temperature. The determination of the NOAA-10 static Earth sensor error contributes to realistic performance expectations for missions to be equipped with similar sensors.

  19. A Comprehensive High Performance Predictive Tool for Fusion Liquid Metal Hydromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Peter; Chhabra, Rupanshi; Munipalli, Ramakanth

    In the Phase I SBIR project, HyPerComp and Texcel initiated the development of two induction-based MHD codes as a predictive tool for fusion hydro-magnetics. The newly-developed codes overcome the deficiency of other MHD codes based on the quasi-static approximation by defining a more general mathematical model that utilizes the induced magnetic field rather than the electric potential as the main electromagnetic variable. The UCLA code is a finite-difference staggered-mesh code that serves as a supplementary tool to the massively-parallel finite-volume code developed by HyPerComp. As there are no suitable experimental data under blanket-relevant conditions for code validation, code-to-code comparisons and comparisons against analytical solutions were successfully performed for three selected test cases: (1) lid-driven MHD flow, (2) flow in a rectangular duct in a transverse magnetic field, and (3) unsteady finite magnetic Reynolds number flow in a rectangular enclosure. The performed tests suggest that the developed codes are accurate and robust. Further work will focus on enhancing the code capabilities towards higher flow parameters and faster computations. At the conclusion of the current Phase-II Project we have completed the preliminary validation efforts in performing unsteady mixed-convection MHD flows (against the limited data currently available in the literature), and demonstrated flow behavior in large 3D channels including important geometrical features. Code enhancements such as periodic boundary conditions and unmatched mesh structures are also ready. As proposed, we have built upon these strengths and explored a much increased range of Grashof numbers and Hartmann numbers under various flow conditions, ranging from flows in a rectangular duct to prototypic blanket modules and liquid metal PFC. Parametric studies, numerical and physical model improvements to expand the scope of simulations, code demonstration, and continued validation activities have also been completed.

  20. Analysis of Piping Systems for Life Extension of Heavy Water Plants in India

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Rajesh K.; Soni, R.S.; Kushwaha, H.S.

    Heavy water production in India has achieved many milestones in the past. Two of the successfully running heavy water plants are on the verge of completing their design life in the near future. One of these two plants, situated at Kota, is a hydrogen sulfide based plant and the other one, at Tuticorin, is an ammonia-based plant. Various exercises have been planned with the aim of assessing the fatigue usage of the various components of these plants in order to extend their life. Considering the process parameters and the past history of plant performance, critical piping systems and equipment are identified. Analyses have been carried out for these critical piping systems for mainly two kinds of loading, namely sustained loads and thermal expansion loads. Static analysis has been carried out to find the induced stress levels due to sustained as well as thermal expansion loading as per the design code ANSI B31.3. Due consideration has been given to the design corrosion allowance while evaluating the stresses due to sustained loads. At the locations where the induced stresses (S_L) due to the sustained loads exceed the allowable limits (S_h), exercises have been carried out considering a reduced corrosion allowance value. This strategy is adopted in view of the fact that the thickness measurements carried out at site at various critical locations show a very low rate of corrosion. It has been possible to qualify the system with reduced corrosion allowance values; however, it is recommended to keep such locations under periodic monitoring. The strategy adopted for the thermal expansion loading analysis is to qualify the system as per the code allowable value (S_a). If the stresses are more than the allowable value, credit for the liberal allowable value suggested in the code, i.e., the addition of the term (S_h - S_L) to the term 0.25 S_h, has been taken. However, if at any location the thermal stress is found to be high, fatigue analysis has been carried out. This is done using the provisions of ASME Code Section VIII, Div. 2 by evaluating the cumulative fatigue usage factor. Results of these exercises reveal that the piping systems of both of these plants are in a very healthy state. Based on these exercises, it has been concluded that the life of the plants can be safely extended further with enhanced in-service inspection provisions. (authors)
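
    The two checks described above can be written out explicitly. The sketch below encodes the sustained-load check and a B31.3-style displacement-stress allowable with the liberal (S_h - S_L) credit mentioned in the abstract; the stress values are hypothetical and the expressions are a simplified reading of the code, not a substitute for it.

      def sustained_ok(S_L, S_h):
          """Sustained-load check from the abstract: induced stress S_L must not exceed
          the basic allowable stress S_h at temperature."""
          return S_L <= S_h

      def expansion_allowable(S_c, S_h, S_L=None, f=1.0):
          """Displacement-stress-range allowable in the spirit of ANSI/ASME B31.3:
          S_A = f*(1.25*S_c + 0.25*S_h).  With the 'liberal' provision mentioned in
          the abstract, the unused sustained margin (S_h - S_L) is added as well."""
          base = 1.25 * S_c + 0.25 * S_h
          if S_L is not None:
              base += max(S_h - S_L, 0.0)
          return f * base

      # Hypothetical stresses in MPa (not values from the plants):
      S_c, S_h, S_L, S_E = 138.0, 130.0, 95.0, 210.0
      print("sustained OK:", sustained_ok(S_L, S_h))
      print("expansion OK:", S_E <= expansion_allowable(S_c, S_h, S_L))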

  1. Validation of US3D for Capsule Aerodynamics using 05-CA Wind Tunnel Test Data

    NASA Technical Reports Server (NTRS)

    Schwing, Alan

    2012-01-01

    Several comparisons of computational fluid dynamics to wind tunnel test data are shown for the purpose of code validation. The wind tunnel test, 05-CA, uses a 7.66% model of NASA's Multi-Purpose Crew Vehicle in the 11-foot test section of the Ames Unitary Plan Wind tunnel. A variety of freestream conditions over four Mach numbers and three angles of attack are considered. Test data comparisons include time-averaged integrated forces and moments, time-averaged static pressure ports on the surface, and Strouhal Number. The applicability of the US3D code to subsonic and transonic flow over a bluff body is assessed on a comprehensive data set. With close comparison, this work validates US3D for highly separated flows similar to those examined here.

  2. The ASSERT Virtual Machine Kernel: Support for Preservation of Temporal Properties

    NASA Astrophysics Data System (ADS)

    Zamorano, J.; de la Puente, J. A.; Pulido, J. A.; Urueña

    2008-08-01

    A new approach to building embedded real-time software has been developed in the ASSERT project. One of its key elements is the concept of a virtual machine preserving the non-functional properties of the system, and especially real-time properties, all the way down from high-level design models to executable code. The paper describes one instance of the virtual machine concept that provides support for the preservation of temporal properties both at the source code level (by accepting only "legal" entities, i.e. software components with statically analysable real-time behaviour) and at run time (by monitoring the temporal behaviour of the system). The virtual machine has been validated on several pilot projects carried out by aerospace companies in the framework of the ASSERT project.

  3. Static Behavior of Chalcogenide Based Programmable Metallization Cells

    NASA Astrophysics Data System (ADS)

    Rajabi, Saba

    Nonvolatile memory (NVM) technologies have been an integral part of electronic systems for the past 30 years. The ideal non-volatile memory has minimal physical size, energy usage, and cost while having maximal speed, capacity, retention time, and radiation hardness. A promising candidate for next-generation memory is ion-conducting bridging RAM, referred to as the programmable metallization cell (PMC), conductive bridge RAM (CBRAM), or electrochemical metallization memory (ECM), which is likely to surpass flash memory in all the ideal memory characteristics. A comprehensive physics-based model is needed to completely understand PMC operation and assist in design optimization. To advance the PMC modeling effort, this thesis presents a precise physical model parameterizing the materials associated with both the ion-rich and ion-poor layers of the PMC's solid electrolyte, so that it captures the static electrical behavior of the PMC in both its low-resistance on-state (LRS) and high-resistance off-state (HRS). The experimental data are measured from a chalcogenide glass PMC designed and manufactured at ASU. The static on- and off-state resistances of a PMC device composed of a layered (Ag-rich/Ag-poor) Ge30Se70 ChG film are characterized and modeled using a three-dimensional simulation code written in the Silvaco Atlas finite element analysis software. Calibrating the model to experimental data enables the extraction of device parameters such as material bandgaps, workfunctions, density of states, carrier mobilities, dielectric constants, and affinities. The sensitivity of the modeled PMC's HRS and LRS impedance behavior to variations in the extracted material parameters is examined. The resulting accurate set of material parameters for both the Ag-rich and Ag-poor ChG systems, together with verification of the effect of process variation on the electrical characteristics, enables greater fidelity in PMC device simulation, which significantly enhances our ability to understand the underlying physics of ChG-based resistive switching memory.

  4. Time-Averaged Velocity, Temperature and Density Surveys of Supersonic Free Jets

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.; Mielke, Amy F.

    2005-01-01

    A spectrally resolved molecular Rayleigh scattering technique was used to simultaneously measure the axial component of velocity U, static temperature T, and density p in unheated free jets at Mach numbers M = 0.6, 0.95, 1.4 and 1.8. The latter two conditions were achieved using contoured convergent-divergent nozzles. A narrow line-width continuous wave laser was passed through the jet plumes and molecular scattered light from a small region on the beam was collected and analyzed using a Fabry-Perot interferometer. The air density at the probe volume was determined by monitoring the intensity variation of the scattered light using photo-multiplier tubes. The Fabry-Perot interferometer was operated in the imaging mode, whereby the fringe formed at the image plane was captured by a cooled CCD camera. Special attention was given to removing dust particles from the plume and to providing adequate vibration isolation for the optical components. The velocity profiles from various operating conditions were compared with those measured by a Pitot tube. An excellent comparison within 5 m/s demonstrated the maturity of the technique. Temperature was measured least accurately, within 10 K, while density was measured within 1% uncertainty. The survey data consisted of centerline variations and radial profiles of time-averaged U, T and p. The static temperature and density values were used to determine static pressure variations inside the jet. The data provided a comparative study of jet growth rates with increasing Mach number. The current work is part of a data-base development project for Computational Fluid Dynamics and Aeroacoustics codes that endeavor to predict noise characteristics of high speed jets. A limited amount of far field noise spectra from the same jets are also presented. Finally, a direct experimental validation was obtained for the Crocco-Busemann equation, which is commonly used to predict temperature and density profiles from known velocity profiles. Data presented in this paper are available in ASCII format upon request.

  5. MASCOT - MATLAB Stability and Control Toolbox

    NASA Technical Reports Server (NTRS)

    Kenny, Sean; Crespo, Luis

    2011-01-01

    MASCOT software was created to provide the conceptual aircraft designer accurate predictions of air vehicle stability and control characteristics. The code takes as input mass property data in the form of an inertia tensor, aerodynamic loading data, and propulsion (i.e. thrust) loading data. Using fundamental non-linear equations of motion, MASCOT then calculates vehicle trim and static stability data for any desired flight condition. Common predefined flight conditions are included. The predefined flight conditions include six horizontal and six landing rotation conditions with varying options for engine out, crosswind and sideslip, plus three takeoff rotation conditions. Results are displayed through a unique graphical interface developed to provide stability and control information to the conceptual design engineers using a qualitative scale indicating whether the vehicle has acceptable, marginal, or unacceptable static stability characteristics. This software allows the user to prescribe the vehicle's CG location, mass, and inertia tensor so that any loading configuration between empty weight and maximum take-off weight can be analyzed. The required geometric and aerodynamic data as well as mass and inertia properties may be entered directly, passed through data files, or come from external programs such as Vehicle Sketch Pad (VSP). The current version of MASCOT has been tested with VSP used to compute the required data, which is then passed directly into the program. In VSP, the vehicle geometry is created and manipulated. The aerodynamic coefficients, stability and control derivatives, are calculated using VorLax, which is now available directly within VSP. MASCOT has been written exclusively using the technical computing language MATLAB . This innovation is able to bridge the gap between low-fidelity conceptual design and higher-fidelity stability and control analysis. This new tool enables the conceptual design engineer to include detailed static stability and trim constraints in the conceptual design loop. The unique graphical interface developed for this tool presents the stability data in a format that is understandable by the conceptual designer, yet also provides the detailed quantitative results if desired.

  6. The research of conformal optical design

    NASA Astrophysics Data System (ADS)

    Li, Lin; Li, Yan; Huang, Yi-fan; Du, Bao-lin

    2009-07-01

    Conformal optical domes are characterized by more elongated external optical surfaces that are optimized to minimize drag, increase missile velocity and extend operational range. The outer surfaces of conformal domes typically deviate greatly from spherical surface descriptions, so the inherent asymmetry of conformal surfaces leads to variations in the aberration content presented to the optical sensor as it is gimbaled across the field of regard, which degrades the sensor's ability to properly image targets of interest and thus undermines the overall system performance. Consequently, the aerodynamic advantages of conformal domes cannot be realized in practical systems unless dynamic aberration correction techniques are developed to restore adequate optical imaging capabilities. Up to now, many optical correction solutions have been researched in conformal optical design, including static aberration corrections and dynamic aberration corrections. This paper has three parts. Firstly, the combination of static and dynamic aberration correction is introduced. A system for correcting the optical aberration created by a conformal dome has an outer surface and an inner surface. The optimization of the inner surface is regarded as the static aberration correction; in addition, a deformable mirror is placed at the position of the secondary mirror in the two-mirror all-reflective imaging system, which is the dynamic aberration correction. Secondly, the use of appropriate surface types is very important in conformal dome design. Better performing optical systems can result from surface types with adequate degrees of freedom to describe the proper corrector shape. Two surface types and the methods of using them are described, including Zernike polynomial surfaces used in corrector elements and user-defined surfaces used for the deformable mirror (DM). Finally, the adaptive optics (AO) correction is presented. In order to correct the dynamic residual aberration in conformal optical design, the SPGD optimization algorithm is operated at each zoom position to calculate the optimized surface shape of the MEMS DM. Communication between MATLAB and Code V, established via the ActiveX technique, is used in the simulation analysis.
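
    The SPGD (stochastic parallel gradient descent) step mentioned for the deformable mirror can be sketched in a few lines: all actuator commands are perturbed at once, the change in an image-quality metric is measured, and the commands are nudged along the estimated gradient. The quadratic metric and all parameter values below are stand-ins, not the paper's setup.

      import numpy as np

      def spgd_minimize(metric, u0, gain=2.0, perturb=0.05, iters=500, seed=0):
          """Stochastic parallel gradient descent: all control channels are perturbed
          simultaneously by a random +/- pattern, the change in the quality metric is
          measured, and the controls are nudged against the metric increase."""
          rng = np.random.default_rng(seed)
          u = np.array(u0, dtype=float)
          for _ in range(iters):
              delta = perturb * rng.choice([-1.0, 1.0], size=u.size)
              dJ = metric(u + delta) - metric(u - delta)
              u -= gain * dJ * delta
          return u

      # Toy stand-in for "residual wavefront error as a function of DM actuator commands":
      target = np.array([0.8, -0.3, 0.5, 0.1])

      def wavefront_error(u):
          return float(np.sum((u - target) ** 2))

      u_opt = spgd_minimize(wavefront_error, u0=np.zeros(4))
      print(np.round(u_opt, 3), "metric:", round(wavefront_error(u_opt), 6))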

  7. Effect of wettability on two-phase quasi-static displacement: Validation of two pore scale modeling approaches

    NASA Astrophysics Data System (ADS)

    Verma, Rahul; Icardi, Matteo; Prodanović, Maša

    2018-05-01

    Understanding of pore-scale physics for multiphase flow in porous media is essential for accurate description of various flow phenomena. In particular, capillarity and wettability strongly influence capillary pressure-saturation and relative permeability relationships. Wettability is quantified by the contact angle of the fluid-fluid interface at the pore walls. In this work we focus on the non-trivial interface equilibria in the presence of non-neutral wetting and complex geometries. We quantify the accuracy of a volume-of-fluid (VOF) formulation, implemented in a popular open-source computational fluid dynamics code, compared with a new formulation of a level set (LS) method, specifically developed for quasi-static capillarity-dominated displacement. The methods are tested in rhomboidal packings of spheres for a range of contact angles and for different rhomboidal configurations, and the accuracy is evaluated against the semi-analytical solutions obtained by Mason and Morrow (1994). While the VOF method is implemented in a general purpose code that solves the full Navier-Stokes (NS) dynamics in a finite volume formulation, with additional terms to model surface tension, the LS method is optimized for the quasi-static case and is, therefore, less computationally expensive. To overcome the shortcomings of the finite volume NS-VOF system for low capillary number flows, and its computational cost, we introduce an overdamped dynamics and a local time stepping to speed up the convergence to the steady state for every given imposed pressure gradient (and therefore saturation condition). Despite these modifications, the methods fundamentally differ in the way they capture the interface, as well as in the number of equations solved and in the way the mean curvature (or equivalently capillary pressure) is computed. This work is intended to provide a rigorous validation study and gives important indications of the errors committed by these methods when solving more complex geometries and dynamics, where many sources of error usually interplay.

  8. Static strain and vibration characteristics of a metal semimonocoque helicopter tail cone of moderate size

    NASA Technical Reports Server (NTRS)

    Bielawa, Richard L.; Hefner, Rachel E.; Castagna, Andre

    1991-01-01

    The results are presented of an analytic and experimental research program involving a Sikorsky S-55 helicopter tail cone directed ultimately to the improved structural analysis of airframe substructures typical of moderate sized helicopters of metal semimonocoque construction. Experimental static strain and dynamic shake-testing measurements are presented. Correlation studies of each of these tests with a PC-based finite element analysis (COSMOS/M) are described. The tests included static loadings at the end of the tail cone supported in the cantilever configuration as well as vibrational shake-testing in both the cantilever and free-free configurations.

  9. Investigating the Magnetic Interaction with Geomag and Tracker Video Analysis: Static Equilibrium and Anharmonic Dynamics

    ERIC Educational Resources Information Center

    Onorato, P.; Mascheretti, P.; DeAmbrosis, A.

    2012-01-01

    In this paper, we describe how simple experiments, realizable with easily found and low-cost materials, allow students to quantitatively explore the magnetic interaction with the help of an Open Source Physics tool, the Tracker Video Analysis software. The static equilibrium of a "column" of permanent magnets is carefully investigated by…

  10. Dead zone analysis of ECAL barrel modules under static and dynamic load

    NASA Astrophysics Data System (ADS)

    Pierre-Emile, T.; Anduze, M.

    2018-03-01

    In the context of the ILD project, studies of the impact of environmental loads on the Electromagnetic CALorimeter (ECAL) have been initiated. The ECAL part considered is the barrel, which consists of several independent modules mounted on the Hadronic CALorimeter barrel (HCAL), itself mounted on the cryostat coil and the yoke. Estimating the gap required between ECAL modules is fundamental to defining the assembly steps and avoiding mechanical contact over the barrel lifetime. At the same time, it has to be done with consideration of dead-space reduction and detector hermeticity optimization. Several Finite Element Analyses (FEA) with static and dynamic loads have been performed in order to correctly define the minimum values for those gaps. Because the project site is in Japan, seismic analyses were carried out in addition to the static ones. This article shows results of these analyses performed with the Finite Element Method (FEM) in ANSYS. First results show the impact of the HCAL design on the ECAL module motion under static load. A second study dedicated to the seismic approach on a larger model (including yoke and cryostat) gives additional results on earthquake consequences.

  11. Predicting System Accidents with Model Analysis During Hybrid Simulation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land D.; Throop, David R.

    2002-01-01

    Standard discrete event simulation is commonly used to identify system bottlenecks and starving and blocking conditions in resources and services. The CONFIG hybrid discrete/continuous simulation tool can simulate such conditions in combination with inputs external to the simulation. This provides a means for evaluating the vulnerability to system accidents of a system's design, operating procedures, and control software. System accidents are brought about by complex unexpected interactions among multiple system failures, faulty or misleading sensor data, and inappropriate responses of human operators or software. The flows of resource and product materials play a central role in the hazardous situations that may arise in fluid transport and processing systems. We describe the capabilities of CONFIG for simulation-time linear circuit analysis of fluid flows in the context of model-based hazard analysis. We focus on how CONFIG simulates the static stresses in systems of flow. Unlike other flow-related properties, static stresses (or static potentials) cannot be represented by a set of state equations. The distribution of static stresses is dependent on the specific history of operations performed on a system. We discuss the use of this type of information in hazard analysis of system designs.

  12. Static air-gap eccentricity fault diagnosis using rotor slot harmonics in line neutral voltage of three-phase squirrel cage induction motor

    NASA Astrophysics Data System (ADS)

    Oumaamar, Mohamed El Kamel; Maouche, Yassine; Boucherma, Mohamed; Khezzar, Abdelmalek

    2017-02-01

    The mixed eccentricity fault in squirrel cage induction motors has been thoroughly investigated. However, only a few papers have addressed the pure static eccentricity fault, and those authors focused on the RSH harmonics present in the stator current. The main objective of this paper is to present an alternative method based on the analysis of the line neutral voltage, taken between the supply and stator neutrals, in order to detect static air-gap eccentricity, and to classify all RSH harmonics in the line neutral voltage. The model of the squirrel cage induction machine relies on the rotor geometry and winding layout. The developed model is used to analyze the impact of pure static air-gap eccentricity by predicting the related frequencies in the line neutral voltage spectrum. The results show that the line neutral voltage spectrum is more sensitive to the static air-gap eccentricity fault than the stator current spectrum. The theoretical analysis and simulated results are confirmed by experiments.
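
    To give a sense of where such signature components fall in the spectrum, the sketch below evaluates the commonly cited rotor-slot-harmonic frequency expression for a squirrel cage machine, f = f_s * [k*N_r*(1-s)/p ± nu]. The classification of the specific components visible in the line neutral voltage is developed in the paper itself, so the formula, harmonic orders, and machine parameters here are only indicative.

      # Commonly cited rotor-slot-harmonic (RSH) frequencies for a squirrel cage
      # induction machine (indicative only; the paper derives the specific
      # components appearing in the line neutral voltage spectrum).
      def rsh_frequencies(f_supply, n_rotor_bars, pole_pairs, slip,
                          k_max=2, nu_orders=(1, 3, 5)):
          freqs = set()
          for k in range(1, k_max + 1):
              base = k * n_rotor_bars * (1.0 - slip) / pole_pairs
              for nu in nu_orders:
                  freqs.add(abs(f_supply * (base + nu)))
                  freqs.add(abs(f_supply * (base - nu)))
          return sorted(freqs)

      # Example: 50 Hz supply, 28 rotor bars, 2 pole pairs, 3% slip (made-up machine)
      print(rsh_frequencies(50.0, 28, 2, 0.03))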

  13. Failure mechanics in low-velocity impacts on thin composite plates

    NASA Technical Reports Server (NTRS)

    Elber, W.

    1983-01-01

    Eight-ply quasi-isotropic composite plates of Thornel 300 graphite in Narmco 5208 epoxy resin (T300/5208) were tested to establish the degree of equivalence between low-velocity impact and static testing. Both the deformation and failure mechanics under impact were representable by static indentation tests. Under low-velocity impacts such as tool drops, the dominant deformation mode of the plates was the first, or static, mode. Higher modes are excited on contact, but they decay significantly by the time the first-mode load reaches a maximum. The delamination patterns were observed by X-ray analysis. The areas of maximum delamination coincided with the areas of highest peel stresses. The extent of delamination was similar for static and impact tests. Fiber failure damage was established by tensile tests on small fiber bundles obtained by deplying test specimens. The onset of fiber damage was in internal plies near the lower surface of the plates. The distribution and amount of fiber damage were similar for impact and static tests.

  14. Statistical Analysis of Online Eye and Face-tracking Applications in Marketing

    NASA Astrophysics Data System (ADS)

    Liu, Xuan

    Eye-tracking and face-tracking technology have been widely adopted to study viewers' attention and emotional response. In this dissertation, we apply these two technologies to investigate effective online content designed to attract and direct attention and engage viewers' emotional responses. In the first part of the dissertation, we conduct a series of experiments that use eye-tracking technology to explore how online models' facial cues affect users' attention on static e-commerce websites. The joint effects of two facial cues, gaze direction and facial expression, on attention are estimated by Bayesian ANOVA, allowing various distributional assumptions. We also consider the similarities and differences in the effects of facial cues between American and Chinese consumers. This study offers insights on how to attract and retain customers' attention for advertisers that use static advertisements on various websites or ad networks. In the second part of the dissertation, we conduct a face-tracking study where we investigate the relation between experiment participants' emotional responses while watching comedy movie trailers and their intentions to watch the actual movies. Viewers' facial expressions are collected in real time and converted to emotional responses with algorithms based on a facial coding system. To analyze the data, we propose a joint modeling method that links viewers' longitudinal emotion measurements and their watching intentions. This research provides recommendations to filmmakers on how to improve the effectiveness of movie trailers and how to boost audiences' desire to watch the movies.

  15. Measurement and prediction of the thermomechanical response of shape memory alloy hybrid composite beams

    NASA Astrophysics Data System (ADS)

    Davis, Brian; Turner, Travis L.; Seelecke, Stefan

    2005-05-01

    Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons within the beam structures. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical analysis/design tool. The experimental investigation is refined by a more thorough test procedure and incorporation of higher fidelity measurements such as infrared thermography and projection moire interferometry. The numerical results are produced by a recently commercialized version of the constitutive model as implemented in ABAQUS and are refined by incorporation of additional measured parameters such as geometric imperfection. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. The results demonstrate the effectiveness of SMAHC structures in controlling static and dynamic responses by adaptive stiffening. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.

  16. Measurement and Prediction of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams

    NASA Technical Reports Server (NTRS)

    Davis, Brian; Turner, Travis L.; Seelecke, Stefan

    2005-01-01

    Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons within the beam structures. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical analysis/design tool. The experimental investigation is refined by a more thorough test procedure and incorporation of higher fidelity measurements such as infrared thermography and projection moire interferometry. The numerical results are produced by a recently commercialized version of the constitutive model as implemented in ABAQUS and are refined by incorporation of additional measured parameters such as geometric imperfection. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. The results demonstrate the effectiveness of SMAHC structures in controlling static and dynamic responses by adaptive stiffening. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.

  17. SSME Turbopump Turbine Computations

    NASA Technical Reports Server (NTRS)

    Jorgenson, P. G. E.

    1985-01-01

    A two-dimensional viscous code was developed to be used in the prediction of the flow in the SSME high-pressure turbopump blade passages. The rotor viscous code (RVC) employs a four-step Runge-Kutta scheme to solve the two-dimensional, thin-layer Navier-Stokes equations. The Baldwin-Lomax eddy-viscosity model is used for the turbulent flow calculations. A viable method was developed to use the relative exit conditions from an upstream blade row as the inlet conditions to the next blade row. The blade loading diagrams are compared with the meridional values obtained from an in-house quasi-three-dimensional inviscid code. Periodic boundary conditions are imposed on a body-fitted C-grid computed using the GRAPE (GRids about Airfoils using Poisson's Equation) code. Total pressure, total temperature, and flow angle are specified at the inlet. The upstream-running Riemann invariant is extrapolated from the interior. Static pressure is specified at the exit such that mass flow is conserved from blade row to blade row, and the conservative variables are extrapolated from the interior. For viscous flows the no-slip condition is imposed at the wall. The normal momentum equation gives the pressure at the wall. The density at the wall is obtained from the wall total temperature.

  18. CTViz: A tool for the visualization of transport in nanocomposites.

    PubMed

    Beach, Benjamin; Brown, Joshua; Tarlton, Taylor; Derosa, Pedro A

    2016-05-01

    A visualization tool (CTViz) for charge transport processes in 3-D hybrid materials (nanocomposites) was developed, inspired by the need for a graphical application to assist in code debugging and data presentation of an existing in-house code. As the simulation code grew, troubleshooting problems grew increasingly difficult without an effective way to visualize 3-D samples and charge transport in those samples. CTViz is able to produce publication and presentation quality visuals of the simulation box, as well as static and animated visuals of the paths of individual carriers through the sample. CTViz was designed to provide a high degree of flexibility in the visualization of the data. A feature that characterizes this tool is the use of shade and transparency levels to highlight important details in the morphology or in the transport paths by hiding or dimming elements of little relevance to the current view. This is fundamental for the visualization of 3-D systems with complex structures. The code presented here provides these required capabilities, but has gone beyond the original design and could be used as is or easily adapted for the visualization of other particulate transport where transport occurs on discrete paths. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Static analysis of the hull plate using the finite element method

    NASA Astrophysics Data System (ADS)

    Ion, A.

    2015-11-01

    This paper presents the static analysis at two levels of a container ship's construction: the first level is at the girder / hull plate, and the second level covers the entire strength hull of the vessel. The article describes the work for the static analysis of a hull plate. We use the software package ANSYS Mechanical 14.5. The program is run on a computer with four Intel Xeon X5260 CPU processors at 3.33 GHz and 32 GB of installed memory. In terms of software, the shared memory parallel version of ANSYS refers to running ANSYS across multiple cores on an SMP system. The distributed memory parallel version of ANSYS (Distributed ANSYS) refers to running ANSYS across multiple processors on SMP or DMP systems.

  20. An analysis of the effects of aeroelasticity on static longitudinal stability and control of a swept-wing airplane

    NASA Technical Reports Server (NTRS)

    Skoog, Richard B

    1957-01-01

    A theoretical analysis has been made of the effects of aeroelasticity on the static longitudinal stability and elevator angle required for balance of an airplane. The analysis is based on the familiar stability equation expressing the contribution of wing and tail to longitudinal stability. Effects of wing, tail, and fuselage flexibility are considered. Calculated effects are shown for a swept-wing bomber of relatively high flexibility.

  1. MSC products for the simulation of tire behavior

    NASA Technical Reports Server (NTRS)

    Muskivitch, John C.

    1995-01-01

    The modeling of tires and the simulation of tire behavior are complex problems. The MacNeal-Schwendler Corporation (MSC) has a number of finite element analysis products that can be used to address the complexities of tire modeling and simulation. While there are many similarities between the products, each product has a number of capabilities that uniquely enable it to be used for a specific aspect of tire behavior. This paper discusses the following programs: (1) MSC/NASTRAN - general purpose finite element program for linear and nonlinear static and dynamic analysis; (2) MSC/ABAQUS - nonlinear statics and dynamics finite element program; (3) MSC/PATRAN AFEA (Advanced Finite Element Analysis) - general purpose finite element program with a subset of linear and nonlinear static and dynamic analysis capabilities with an integrated version of MSC/PATRAN for pre- and post-processing; and (4) MSC/DYTRAN - nonlinear explicit transient dynamics finite element program.

  2. ScreenRecorder: A Utility for Creating Screenshot Video Using Only Original Equipment Manufacturer (OEM) Software on Microsoft Windows Systems

    DTIC Science & Technology

    2015-01-01

    class within Microsoft Visual Studio. It has been tested on and is compatible with Microsoft Vista, 7, and 8 and Visual Studio Express 2008...the ScreenRecorder utility assumes a basic understanding of compiling and running C++ code within Microsoft Visual Studio. This report does not...of Microsoft Visual Studio, the ScreenRecorder utility was developed as a C++ class that can be compiled as a library (static or dynamic) to be

  3. Excising das All: Evolving Maxwell waves beyond Scri

    NASA Technical Reports Server (NTRS)

    vanMeter, James R.; Fiske, David R.; Misner, Charles W.

    2006-01-01

    We study the numerical propagation of waves through future null infinity in a conformally compactified spacetime. We introduce an artificial cosmological constant, which allows us some control over the causal structure near null infinity. We exploit this freedom to ensure that all light cones are tilted outward in a region near null infinity, which allows us to impose excision-style boundary conditions in our finite difference code. In this preliminary study we consider electromagnetic waves propagating in a static, conformally compactified spacetime.

  4. Summary of experimental heat-transfer results from the turbine hot section facility

    NASA Technical Reports Server (NTRS)

    Gladden, Herbert J.; Yeh, Fredrick C.

    1993-01-01

    Experimental data from the turbine Hot Section Facility are presented and discussed. These data include full-coverage film-cooled airfoil results as well as special instrumentation results obtained at simulated real engine conditions. Local measurements of airfoil wall temperature, airfoil gas-path static-pressure distribution, and local heat-transfer coefficient distributions are presented and discussed. In addition, measured gas and coolant temperatures and pressures are presented. These data are also compared with analyses from Euler and boundary-layer codes.

  5. Air Intakes for High Speed Vehicles (Prises d’Air pour Vehicules a Grande Vitesse)

    DTIC Science & Technology

    1991-09-01

    The Working Group wishes to express its sincere thanks to those contributors...a number of test cases for which rather detailed experimental data were available...larger group. 3.3.6.3 CFD TECHNIQUES: This test case was attempted...by six different research groups, using seven different codes, as noted. Figs. 3.6.2 and 3.6.3 show a comparison of the computed and experimental static pressure distributions on the ramp and cowl of the intake. Experimental data is shown

  6. Endwall flows and blading design for axial flow compressors

    NASA Astrophysics Data System (ADS)

    Robinson, Christopher J.

    Literature relevant to blading design in the endwall region is reviewed, and important three dimensional flow phenomena occurring in embedded stages of axial compressors are described. A low speed axial flow four stage compressor rig is described and bladings studied are detailed: two conventional and two with end bends. The application of a three dimensional Navier-Stokes solver to the bladings' stators, to assess the effectiveness of the code, is reported. Calculation results of exit whirl angles, losses, and surface static pressures are compared with experiment.

  7. Pile Driving Analysis for Pile Design and Quality Assurance

    DOT National Transportation Integrated Search

    2017-08-01

    Driven piles are commonly used in foundation engineering. The most accurate measurement of pile capacity is achieved from measurements made during static load tests. Static load tests, however, may be too expensive for certain projects. In these case...

  8. Real-Time Precise Point Positioning (RTPPP) with raw observations and its application in real-time regional ionospheric VTEC modeling

    NASA Astrophysics Data System (ADS)

    Liu, Teng; Zhang, Baocheng; Yuan, Yunbin; Li, Min

    2018-01-01

    Precise Point Positioning (PPP) is an absolute positioning technology mainly used in post-processing. With the continuously increasing demand for real-time high-precision applications in positioning, timing, retrieval of atmospheric parameters, etc., Real-Time PPP (RTPPP) and its applications have drawn more and more research attention in recent years. This study focuses on the models, algorithms and ionospheric applications of RTPPP on the basis of raw observations, in which high-precision slant ionospheric delays are estimated among others in real time. For this purpose, a robust processing strategy for multi-station RTPPP with raw observations has been proposed and realized, in which real-time data streams and State Space Representation (SSR) satellite orbit and clock corrections are used. With the RTPPP-derived slant ionospheric delays from a regional network, a real-time regional ionospheric Vertical Total Electron Content (VTEC) modeling method is proposed based on Adjusted Spherical Harmonic Functions and a Moving-Window Filter. SSR satellite orbit and clock corrections from different IGS analysis centers are evaluated. Ten globally distributed real-time stations are used to evaluate the positioning performances of the proposed RTPPP algorithms in both static and kinematic modes. RMS values of positioning errors in static/kinematic mode are 5.2/15.5, 4.7/17.4 and 12.8/46.6 mm for the north, east and up components, respectively. Real-time slant ionospheric delays from RTPPP are compared with those from the traditional Carrier-to-Code Leveling (CCL) method, in terms of function model, formal precision and between-receiver differences over a short baseline. Results show that slant ionospheric delays from RTPPP are more precise and have a much better convergence performance than those from the CCL method in real-time processing. Thirty real-time stations from the Asia-Pacific Reference Frame network are used to model the ionospheric VTECs over Australia in real time, with slant ionospheric delays from both RTPPP and CCL methods for comparison. RMS of the VTEC differences between the RTPPP/CCL method and CODE final products is 0.91/1.09 TECU, and RMS of the VTEC differences between the RTPPP and CCL methods is 0.67 TECU. Slant Total Electron Contents retrieved from different VTEC models are also validated with epoch-differenced Geometry-Free combinations of dual-frequency phase observations, and mean RMS values are 2.14, 2.33 and 2.07 TECU for the RTPPP method, CCL method and CODE final products, respectively. This shows the superiority of RTPPP-derived slant ionospheric delays in real-time ionospheric VTEC modeling.
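
    For context, the carrier-to-code leveling (CCL) baseline mentioned above can be outlined in a few lines: the geometry-free code combination carries the absolute ionospheric delay but is noisy, while the geometry-free phase combination is smooth but ambiguous, so the phase is leveled to the code over a continuous arc. The sketch below shows that outline for dual-frequency GPS observables; differential code biases, cycle slips, and elevation weighting are ignored, so it is not the processing chain used in the paper.

      # Simplified carrier-to-code leveling (CCL) of slant TEC for one phase arc
      # (differential code/phase biases and cycle slips are ignored).
      import numpy as np

      F1, F2 = 1575.42e6, 1227.60e6      # GPS L1/L2 carrier frequencies (Hz)
      K = 40.3e16                        # ionospheric constant: delay = K*TEC(TECU)/f^2

      def slant_tec_ccl(P1, P2, L1_m, L2_m):
          """P1, P2: code pseudoranges (m); L1_m, L2_m: carrier phases in meters."""
          gf_code = np.asarray(P2) - np.asarray(P1)       # geometry-free code
          gf_phase = np.asarray(L1_m) - np.asarray(L2_m)  # geometry-free phase (ambiguous)
          level = np.mean(gf_code - gf_phase)             # leveling constant over the arc
          coeff = K * (1.0 / F2**2 - 1.0 / F1**2)         # meters per TECU
          return (gf_phase + level) / coeff               # slant TEC in TECU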

  9. Speckle temporal stability in XAO coronagraphic images. II. Refine model for quasi-static speckle temporal evolution for VLT/SPHERE

    NASA Astrophysics Data System (ADS)

    Martinez, P.; Kasper, M.; Costille, A.; Sauvage, J. F.; Dohlen, K.; Puget, P.; Beuzit, J. L.

    2013-06-01

    Context. Observing sequences have shown that the major noise source limitation in high-contrast imaging is the presence of quasi-static speckles. The timescale on which quasi-static speckles evolve is determined by various factors, including mechanical and thermal deformations. Aims: Understanding these time-variable instrumental speckles and, especially, their interaction with other aberrations, referred to as the pinning effect, is paramount for the search for faint stellar companions. The temporal evolution of quasi-static speckles is, for instance, required for quantifying the gain expected when using angular differential imaging (ADI) and for determining the interval over which speckle nulling techniques must be carried out. Methods: Following an early analysis of a time series of adaptively corrected, coronagraphic images obtained under laboratory conditions with the high-order test bench (HOT) at ESO Headquarters, we confirm our results with new measurements carried out with the SPHERE instrument during its final test phase in Europe. The analysis of the residual speckle pattern in both direct and differential coronagraphic images enables the characterization of the temporal stability of quasi-static speckles. Data were obtained in a thermally actively controlled environment reproducing realistic conditions encountered at the telescope. Results: The temporal evolution of the quasi-static wavefront error exhibits a linear power law, which can be used to model quasi-static speckle evolution in the context of forthcoming high-contrast imaging instruments, with implications for instrumentation (design, observing strategies, data reduction). Such a model can be used, for instance, to derive the timescale on which non-common path aberrations must be sensed and corrected. We found in our data that the quasi-static wavefront error increases at a rate of ~0.7 Å per minute.

  10. Correction of static axial alignment in children with knee varus or valgus deformities through guided growth: Does it also correct dynamic frontal plane moments during walking?

    PubMed

    Böhm, Harald; Stief, Felix; Sander, Klaus; Hösl, Matthias; Döderlein, Leonhard

    2015-09-01

    Malaligned knees are predisposed to the development and progression of unicompartmental degenerations because of the excessive load placed on one side of the knee. Therefore, guided growth in skeletally immature patients is recommended. Indications for correction of varus/valgus deformities are based on static weight-bearing radiographs. However, the dynamic knee abduction moment during walking showed only a weak correlation to malalignment determined by static radiographs. Therefore, the aim of the study was to measure the effects of guided growth on the normalization of frontal plane knee joint moments during walking. Fifteen legs of 8 patients (11-15 years) with idiopathic axial varus or valgus malalignment were analyzed. Sixteen typically developed peers served as controls. Instrumented gait analysis and clinical assessment were performed the day before implantation and explantation of eight-plates. Correlations between the static mechanical tibiofemoral axis angle (MAA) and dynamic frontal plane knee joint moments, and between their changes due to guided growth, were computed. The changes in dynamic knee moment in the frontal plane following guided growth showed a high and significant correlation to the changes in static MAA (R=0.97, p<0.001). Contrary to the correlation of the changes, there was no correlation between static and dynamic measures in either session. In consequence, two patients who had a natural knee moment before treatment showed a more pathological one after treatment. In conclusion, the changes in the dynamic load situation during walking can be predicted from the changes in static alignment. If pre-surgical gait analysis reveals a natural load situation, despite a static varus or valgus deformity, the intervention must be critically discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Instance Analysis for the Error of Three-pivot Pressure Transducer Static Balancing Method for Hydraulic Turbine Runner

    NASA Astrophysics Data System (ADS)

    Weng, Hanli; Li, Youping

    2017-04-01

    The working principle, process device and test procedure of runner static balancing test method by weighting with three-pivot pressure transducers are introduced in this paper. Based on an actual instance of a V hydraulic turbine runner, the error and sensitivity of the three-pivot pressure transducer static balancing method are analysed. Suggestions about improving the accuracy and the application of the method are also proposed.
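
    The statics behind the three-pivot weighing scheme reduce to a force and moment balance: the three measured pivot loads give the runner weight, and moments about the two in-plane axes give the centre-of-gravity offset from the rotation axis. The sketch below shows only that balance; the pivot coordinates and load values are made up for illustration and are unrelated to the instance analysed in the paper.

      # Force/moment balance for a body weighed on three pivots: total weight and
      # in-plane centre-of-gravity position from the three measured loads
      # (illustrative only; coordinates and loads are made up).
      import numpy as np

      def cg_from_three_pivots(pivots_xy, loads):
          """pivots_xy: (3, 2) pivot coordinates [m]; loads: (3,) measured forces [N]."""
          loads = np.asarray(loads, dtype=float)
          pivots = np.asarray(pivots_xy, dtype=float)
          w = loads.sum()                            # total weight
          xg = (loads * pivots[:, 0]).sum() / w      # moment balance about the y-axis
          yg = (loads * pivots[:, 1]).sum() / w      # moment balance about the x-axis
          return w, xg, yg

      print(cg_from_three_pivots([(1.0, 0.0), (-0.5, 0.866), (-0.5, -0.866)],
                                 [3300.0, 3400.0, 3300.0]))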

  12. STATIC AND KINETIC SITE-SPECIFIC PROTEIN-DNA PHOTOCROSSLINKING: ANALYSIS OF BACTERIAL TRANSCRIPTION INITIATION COMPLEXES

    PubMed Central

    Naryshkin, Nikolai; Druzhinin, Sergei; Revyakin, Andrei; Kim, Younggyu; Mekler, Vladimir; Ebright, Richard H.

    2009-01-01

    Static site-specific protein-DNA photocrosslinking permits identification of protein-DNA interactions within multiprotein-DNA complexes. Kinetic site-specific protein-DNA photocrosslinking--involving rapid-quench-flow mixing and pulsed-laser irradiation--permits elucidation of pathways and kinetics of formation of protein-DNA interactions within multiprotein-DNA complexes. We present detailed protocols for application of static and kinetic site-specific protein-DNA photocrosslinking to bacterial transcription initiation complexes. PMID:19378179

  13. Seals Research at Texas A&M University

    NASA Technical Reports Server (NTRS)

    Morrison, Gerald L.

    1991-01-01

    The Turbomachinery Laboratory at Texas A&M has been providing experimental data and computational codes for the design of seals for many years. The program began with the development of a Halon-based seal test rig. This facility provided information about the effective stiffness and damping in whirling seals. The Halon effectively simulated cryogenic fluids. Another test facility was developed (using air as the working fluid) where the stiffness and damping matrices can be determined. This data was used to develop bulk flow models of the seal's effect upon rotating machinery; in conjunction with this research, a bulk flow model for calculation of performance and rotordynamic coefficients of annular pressure seals of arbitrary non-uniform clearance for barotropic fluids such as LH2, LOX, LN2, and CH4 was developed. This program is very efficient (fast) and converges for very large eccentricities. Currently, work is being performed on a bulk flow analysis of the effects of the impeller-shroud interaction upon the stability of pumps. The data was used along with data from other researchers to develop an empirical leakage prediction code for MSFC. Presently, the flow fields inside labyrinth and annular seals are being studied in detail. An advanced 3-D Doppler anemometer system is being used to measure the mean velocity and the entire Reynolds stress tensor distribution throughout the seals. Concentric and statically eccentric seals were studied; presently, whirling seals are being studied. The data obtained are providing valuable information about the flow phenomena occurring inside the seals, as well as a data base for comparison with numerical predictions and for turbulence model development. A finite difference computer code was developed for solving the Reynolds-averaged Navier-Stokes equations inside labyrinth seals. A multi-scale k-epsilon turbulence model is currently being evaluated. A new seal geometry was designed and patented using a computer code. A large-scale, 2-D seal flow visualization facility is also being developed.

  14. Computational and experimental investigation of two-dimensional scramjet inlets and hypersonic flow over a sharp flat plate

    NASA Astrophysics Data System (ADS)

    Messitt, Donald G.

    1999-11-01

    The WIND code was employed to compute the hypersonic flow in the shock wave boundary layer merged region near the leading edge of a sharp flat plate. Solutions were obtained at Mach numbers from 9.86 to 15.0 and free stream Reynolds numbers of 3,467 to 346,700 in^-1 (1.365 · 10^5 to 1.365 · 10^7 m^-1) for perfect gas conditions. The numerical results indicated a merged shock wave and viscous layer near the leading edge. The merged region grew in size with increasing free stream Mach number, proportional to M∞^2/Re∞. Profiles of the static pressure in the merged region indicated a strong normal pressure gradient (∂p/∂y). The normal pressure gradient has been neglected in previous analyses which used the boundary layer equations. The shock wave near the leading edge was thick, as has been experimentally observed. Computed shock wave locations and surface pressures agreed well within experimental error for values of the rarefaction parameter χ/M∞^2 < 0.3. A preliminary analysis using kinetic theory indicated that rarefied flow effects became important above this value. In particular, the WIND solution agreed well in the transition region between the merged flow, which was predicted well by the theory of Li and Nagamatsu, and the downstream region where the strong interaction theory applied. Additional computations with the NPARC code, WIND's predecessor, demonstrated the ability of the code to compute hypersonic inlet flows at free stream Mach numbers up to 20. Good qualitative agreement with measured pressure data indicated that the code captured the important physical features of the shock wave - boundary layer interactions. The computed surface and pitot pressures fell within the combined experimental and numerical error bounds for most points. The calculations demonstrated the need for extremely fine grids when computing hypersonic interaction flows.

  15. Integrated analysis on static/dynamic aeroelasticity of curved panels based on a modified local piston theory

    NASA Astrophysics Data System (ADS)

    Yang, Zhichun; Zhou, Jian; Gu, Yingsong

    2014-10-01

    A flow field modified local piston theory, which is applied to the integrated analysis on static/dynamic aeroelastic behaviors of curved panels, is proposed in this paper. The local flow field parameters used in the modification are obtained by CFD technique which has the advantage to simulate the steady flow field accurately. This flow field modified local piston theory for aerodynamic loading is applied to the analysis of static aeroelastic deformation and flutter stabilities of curved panels in hypersonic flow. In addition, comparisons are made between results obtained by using the present method and curvature modified method. It shows that when the curvature of the curved panel is relatively small, the static aeroelastic deformations and flutter stability boundaries obtained by these two methods have little difference, while for curved panels with larger curvatures, the static aeroelastic deformation obtained by the present method is larger and the flutter stability boundary is smaller compared with those obtained by the curvature modified method, and the discrepancy increases with the increasing of curvature of panels. Therefore, the existing curvature modified method is non-conservative compared to the proposed flow field modified method based on the consideration of hypersonic flight vehicle safety, and the proposed flow field modified local piston theory for curved panels enlarges the application range of piston theory.
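
    For orientation, classical first-order piston theory relates the unsteady surface pressure to the local downwash, delta_p = rho*a*(dw/dt + U*dw/dx); a "local" piston theory replaces the freestream density, sound speed, and velocity with values extracted from a steady CFD solution at the panel surface. The sketch below shows only that first-order substitution with made-up numbers; the modification actually used in the paper may include additional terms.

      # First-order local piston theory: unsteady pressure increment from the local
      # downwash, with rho, a, U taken from a steady CFD solution at the surface.
      def piston_pressure(rho_local, a_local, u_local, dw_dt, dw_dx):
          return rho_local * a_local * (dw_dt + u_local * dw_dx)

      # Example with made-up local flow quantities and panel motion derivatives
      print(piston_pressure(rho_local=0.4, a_local=320.0, u_local=1800.0,
                            dw_dt=0.05, dw_dx=0.002))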

  16. Some studies on the use of NASTRAN for nuclear power plant structural analysis and design

    NASA Technical Reports Server (NTRS)

    Setlur, A. V.; Valathur, M.

    1973-01-01

    Studies made on the use of NASTRAN for nuclear power plant analysis and design are presented. These studies indicate that NASTRAN could be effectively used for static, dynamic and special purpose problems encountered in the design of such plants. Normal mode capability of NASTRAN is extended through a post-processor program to handle seismic analysis. Static and dynamic substructuring is discussed. Extension of NASTRAN to include the needs in the civil engineering industry is discussed.

  17. Computational Investigation of the Aerodynamic Effects on Fluidic Thrust Vectoring

    NASA Technical Reports Server (NTRS)

    Deere, K. A.

    2000-01-01

    A computational investigation of the aerodynamic effects on fluidic thrust vectoring has been conducted. Three-dimensional simulations of a two-dimensional, convergent-divergent (2DCD) nozzle with fluidic injection for pitch vector control were run with the computational fluid dynamics code PAB using turbulence closure and linear Reynolds stress modeling. Simulations were computed with static freestream conditions (M=0.05) and at Mach numbers from M=0.3 to 1.2, with scheduled nozzle pressure ratios (from 3.6 to 7.2) and secondary to primary total pressure ratios of p(sub t,s)/p(sub t,p)=0.6 and 1.0. Results indicate that the freestream flow decreases vectoring performance and thrust efficiency compared with static (wind-off) conditions. The aerodynamic penalty to thrust vector angle ranged from 1.5 degrees at a nozzle pressure ratio of 6 with M=0.9 freestream conditions to 2.9 degrees at a nozzle pressure ratio of 5.2 with M=0.7 freestream conditions, compared to the same nozzle pressure ratios with static freestream conditions. The aerodynamic penalty to thrust ratio decreased from 4 percent to 0.8 percent as nozzle pressure ratio increased from 3.6 to 7.2. As expected, the freestream flow had little influence on discharge coefficient.

  18. Mental imagery. Effects on static balance and attentional demands of the elderly.

    PubMed

    Hamel, M F; Lajoie, Yves

    2005-06-01

    Several studies have demonstrated the effectiveness of mental imagery in improving motor performance. However, no research has studied the effectiveness of such a technique on static balance in the elderly. This study evaluated the efficiency of a mental imagery technique, aimed at improving static balance by reducing postural oscillations and attentional demands in the elderly. Twenty subjects aged 65 to 90 years old, divided into two groups (8 in Control group and 12 in Experimental group) participated in the study. The experimental participants underwent daily mental imagery training for a period of six weeks. Antero-posterior and lateral oscillations, reaction times during the use of the double-task paradigm were measured, and the Berg Balance Scale, Activities-specific Balance Confidence Scale, and VMIQ questionnaire were answered during both pre-test and post-test. Attentional demands and postural oscillations (antero-posterior) decreased significantly in the group with mental imagery training compared with those of the Control group. Subjects in the mental imagery group became significantly better in their aptitudes to generate clear vivid mental images, as indicated by the VMIQ questionnaire, whereas no significant difference was observed for the Activities-specific Balance Confidence Scale or Berg Scale. The results support psychoneuromuscular and motor coding theories associated with mental imagery.

  19. Deformation behavior of welded steel sandwich panels under quasi-static loading

    DOT National Transportation Integrated Search

    2011-03-16

    This paper summarizes basic research (i.e., testing and analysis) : conducted to examine the deformation behavior of flat-welded : steel sandwich panels under two types of quasi-static loading: : (1) uniaxial compression; and (2) bending through an i...

  20. Commuter rail seat testing and analysis of facing seats

    DOT National Transportation Integrated Search

    2003-12-01

    Tests have been conducted on the Bombardier back-to-back commuter rail car seat in a facing-seat configuration to evaluate its performance under static and dynamic loading conditions. Quasi-static tests have been conducted to establish the load defle...

  1. The static evolution of the new Italian code of medical ethics.

    PubMed

    Montanari Vergallo, G; Busardò, F P; Zaami, S; Marinelli, E

    2016-01-01

    Eight years after the last revision, in May 2014 the Italian code of medical ethics was updated. Here, the Authors examine the reform in the light of the increasing difficulties of the medical profession arising from the severity of the Italian law Courts. The most significant aspects of this new code are, firstly, the patient's freedom of self-determination and, secondly, risk prevention through the disclosure of errors and adverse events. However, in both areas the reform seems to be less effective if we compare the ethical codes of France, the United Kingdom and the United States. In particular, the code's failure to take into consideration quality standards and scientific evidence, which should guide doctors in their clinical practice, is to say the least questionable. Since these are the most significant changes in the new code, it seems inevitable to conclude that the 2014 edition is essentially in line with previous versions. Now more than ever it is necessary that medical ethics acknowledges that medicine, society and medical jurisprudence have changed and that doctors must be given new rules in order to protect both patients' rights and the dignity of the profession. The physician's right to refuse to perform treatment at odds with his own clinical beliefs cannot be the only means to safeguard the dignity of the profession. A clear boundary must also be established between medicine and professionalism, as well as the criteria for determining the scientific evidence that physicians must follow. This has not been done in the Italian code of ethics, despite all the controversy caused by the Stamina case.

  2. Determination of the static friction coefficient from circular motion

    NASA Astrophysics Data System (ADS)

    Molina-Bolívar, J. A.; Cabrerizo-Vílchez, M. A.

    2014-07-01

    This paper describes a physics laboratory exercise for determining the coefficient of static friction between two surfaces. The circular motion of a coin placed on the surface of a rotating turntable has been studied. For this purpose, the motion is recorded with a high-speed digital video camera recording at 240 frames s-1, and the videos are analyzed using Tracker video-analysis software, allowing the students to dynamically model the motion of the coin. The students have to obtain the static friction coefficient by comparing the centripetal and maximum static friction forces. The experiment only requires simple and inexpensive materials. The dynamics of circular motion and static friction forces are difficult for many students to understand. The proposed laboratory exercise addresses these topics, which are relevant to the physics curriculum.
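
    The underlying comparison is a one-line result: at the angular speed at which the coin just begins to slide at radius r, the maximum static friction force equals the required centripetal force, mu_s*m*g = m*omega_max^2*r, so mu_s = omega_max^2*r/g. A short sketch of that estimate, with made-up numbers standing in for values read off the Tracker data:

      # Static friction coefficient from the critical angular speed at which a coin
      # at radius r just begins to slip:  mu_s = omega_max^2 * r / g
      import math

      G = 9.81  # m/s^2

      def mu_static(omega_max, radius):
          return omega_max**2 * radius / G

      # Example: coin at 8 cm slips when the turntable reaches ~45 rpm (made-up values)
      omega = 45.0 * 2.0 * math.pi / 60.0
      print(mu_static(omega, 0.08))   # ~0.18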

  3. Neural coding of sound envelope in reverberant environments.

    PubMed

    Slama, Michaël C C; Delgutte, Bertrand

    2015-03-11

    Speech reception depends critically on temporal modulations in the amplitude envelope of the speech signal. Reverberation encountered in everyday environments can substantially attenuate these modulations. To assess the effect of reverberation on the neural coding of amplitude envelope, we recorded from single units in the inferior colliculus (IC) of unanesthetized rabbit using sinusoidally amplitude modulated (AM) broadband noise stimuli presented in simulated anechoic and reverberant environments. Although reverberation degraded both rate and temporal coding of AM in IC neurons, in most neurons, the degradation in temporal coding was smaller than the AM attenuation in the stimulus. This compensation could largely be accounted for by the compressive shape of the modulation input-output function (MIOF), which describes the nonlinear transformation of modulation depth from acoustic stimuli into neural responses. Additionally, in a subset of neurons, the temporal coding of AM was better for reverberant stimuli than for anechoic stimuli having the same modulation depth at the ear. Using hybrid anechoic stimuli that selectively possess certain properties of reverberant sounds, we show that this reverberant advantage is not caused by envelope distortion, static interaural decorrelation, or spectral coloration. Overall, our results suggest that the auditory system may possess dual mechanisms that make the coding of amplitude envelope relatively robust in reverberation: one general mechanism operating for all stimuli with small modulation depths, and another mechanism dependent on very specific properties of reverberant stimuli, possibly the periodic fluctuations in interaural correlation at the modulation frequency. Copyright © 2015 the authors 0270-6474/15/354452-17$15.00/0.

  4. Studying Regional Wave Source Time Functions Using the Empirical Green's Function Method: Application to Central Asia

    NASA Astrophysics Data System (ADS)

    Xie, J.; Schaff, D. P.; Chen, Y.; Schult, F.

    2013-12-01

    Reliably estimated source time functions (STFs) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection and discrimination, and minimization of parameter trade-offs in attenuation studies. We have searched for candidate pairs of larger and smaller earthquakes in and around China that share the same focal mechanism but differ significantly in magnitude, so that the empirical Green's function (EGF) method can be applied to study the STFs of the larger events. We conducted about a million deconvolutions using waveforms from 925 earthquakes, and screened the deconvolved traces to exclude those from event pairs that involved different mechanisms. Only 2,700 traces passed this screening and could be further analyzed using the EGF method. We have developed a series of codes to speed up the final EGF analysis by implementing automation and graphical user interface procedures. The codes have been fully tested with a subset of the screened data and we are currently applying them to all the screened data. We will present a large number of deconvolved STFs retrieved using various phases (Lg, Pn, Sn, Pg and coda), with information on any directivities, any possible dependence of pulse durations on the wave types, scaling relations between pulse durations and event sizes, and estimated static stress drops.
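
    The deconvolution at the core of the EGF method amounts to a spectral division of the large-event record by the co-located small-event record, usually regularized with a water level. The sketch below shows that generic operation only; it is not the authors' code, and windowing, band-limiting, and positivity constraints that a production implementation would apply are omitted.

      # Generic water-level spectral division used in empirical Green's function
      # (EGF) analysis: deconvolve the small event from the large event to estimate
      # the relative source time function (illustrative, not the authors' code).
      import numpy as np

      def egf_deconvolve(big, small, water_level=0.01):
          n = len(big)
          B = np.fft.rfft(big, n)
          S = np.fft.rfft(small, n)
          denom = np.abs(S)**2
          denom = np.maximum(denom, water_level * denom.max())   # water-level floor
          stf_spec = B * np.conj(S) / denom
          return np.fft.irfft(stf_spec, n)    # relative source time function estimate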

  5. Targeting multiple heterogeneous hardware platforms with OpenCL

    NASA Astrophysics Data System (ADS)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.
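
    One concrete way to combine the JIT compilation and preprocessor techniques mentioned above is to pass -D defines as build options, so a single kernel source is specialized per platform at run time. The sketch below uses the pyopencl bindings purely for brevity; the device selection, macro name, and constant value are assumptions, not recommendations from the paper.

      # Per-platform kernel specialization via JIT build options (illustrative;
      # pyopencl used for brevity, macro value chosen arbitrarily).
      import numpy as np
      import pyopencl as cl

      src = """
      __kernel void scale(__global const float *x, __global float *y, const float a) {
          int i = get_global_id(0);
          y[i] = a * x[i] * SCALE_HINT;   /* SCALE_HINT injected at build time */
      }
      """

      ctx = cl.create_some_context()
      queue = cl.CommandQueue(ctx)
      prg = cl.Program(ctx, src).build(options=["-DSCALE_HINT=1.0f"])  # platform-specific -D

      x = np.arange(16, dtype=np.float32)
      mf = cl.mem_flags
      x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
      y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)
      prg.scale(queue, x.shape, None, x_buf, y_buf, np.float32(2.0))
      y = np.empty_like(x)
      cl.enqueue_copy(queue, y, y_buf)
      print(y)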

  6. Evaluation of Load Analysis Methods for NASAs GIII Adaptive Compliant Trailing Edge Project

    NASA Technical Reports Server (NTRS)

    Cruz, Josue; Miller, Eric J.

    2016-01-01

    The Air Force Research Laboratory (AFRL), NASA Armstrong Flight Research Center (AFRC), and FlexSys Inc. (Ann Arbor, Michigan) have collaborated to flight test the Adaptive Compliant Trailing Edge (ACTE) flaps. These flaps were installed on a Gulfstream Aerospace Corporation (GAC) GIII aircraft and tested at AFRC at various deflection angles over a range of flight conditions. External aerodynamic and inertial load analyses were conducted with the intention to ensure that the change in wing loads due to the deployed ACTE flap did not overload the existing baseline GIII wing box structure. The objective of this paper was to substantiate the analysis tools used for predicting wing loads at AFRC. Computational fluid dynamics (CFD) models and distributed mass inertial models were developed for predicting the loads on the wing. The analysis tools included TRANAIR (full potential) and CMARC (panel) models. Aerodynamic pressure data from the analysis codes were validated against static pressure port data collected in-flight. Combined results from the CFD predictions and the inertial load analysis were used to predict the normal force, bending moment, and torque loads on the wing. Wing loads obtained from calibrated strain gages installed on the wing were used for substantiation of the load prediction tools. The load predictions exhibited good agreement compared to the flight load results obtained from calibrated strain gage measurements.

  7. Structural response of existing spatial truss roof construction based on Cosserat rod theory

    NASA Astrophysics Data System (ADS)

    Miśkiewicz, Mikołaj

    2018-04-01

    This paper presents the application of the Cosserat rod theory and a newly developed associated finite element code as tools that support expert engineering design practice. Mechanical principles of 3D spatially curved rods, dynamics (statics) laws, and the principle of virtual work are discussed. The corresponding FEM approach, with interpolation and accumulation techniques for the state variables, is shown to enable the formulation of C0 Lagrangian rod elements with 6 degrees of freedom per node. Two test examples are shown, proving the correctness and suitability of the proposed formulation. Next, the developed FEM code is applied to assess the structural response of the spatial truss roof of the "Olivia" Sports Arena in Gdansk, Poland. The numerical results are compared with load test results. It is shown that the proposed FEM approach yields correct results.

  8. Measuring and Specifying Combinatorial Coverage of Test Input Configurations

    PubMed Central

    Kuhn, D. Richard; Kacker, Raghu N.; Lei, Yu

    2015-01-01

    A key issue in testing is how many tests are needed for a required level of coverage or fault detection. Estimates are often based on error rates in initial testing, or on code coverage. For example, tests may be run until a desired level of statement or branch coverage is achieved. Combinatorial methods present an opportunity for a different approach to estimating required test set size, using characteristics of the test set. This paper describes methods for estimating the coverage of, and ability to detect, t-way interaction faults of a test set based on a covering array. We also develop a connection between (static) combinatorial coverage and (dynamic) code coverage, such that if a specific condition is satisfied, 100% branch coverage is assured. Using these results, we propose practical recommendations for using combinatorial coverage in specifying test requirements. PMID:28133442
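
    The t-way coverage measure described above can be computed directly: for every set of t parameters, count the fraction of possible value combinations that appear in at least one test. The sketch below is a minimal illustration, not the NIST tooling; the parameter names, domains, and tests are made up.

      # Minimal t-way combinatorial coverage measurement (illustrative only).
      from itertools import combinations, product

      def t_way_coverage(tests, domains, t):
          """tests: list of dicts {param: value}; domains: dict {param: values}."""
          params = sorted(domains)
          covered_total, possible_total = 0, 0
          for combo in combinations(params, t):
              possible = set(product(*(domains[p] for p in combo)))
              covered = {tuple(test[p] for p in combo) for test in tests}
              covered_total += len(covered & possible)
              possible_total += len(possible)
          return covered_total / possible_total

      tests = [{"os": "linux", "db": "pg", "tls": True},
               {"os": "win",   "db": "pg", "tls": False},
               {"os": "linux", "db": "my", "tls": False}]
      domains = {"os": ["linux", "win"], "db": ["pg", "my"], "tls": [True, False]}
      print(t_way_coverage(tests, domains, 2))   # pairwise coverage = 0.75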

  9. Static and Dynamical Structural Investigations of Metal-Oxide Nanocrystals by Powder X-ray Diffraction: Colloidal Tungsten Oxide as a Case Study

    DOE PAGES

    Caliandro, Rocco; Sibillano, Teresa; Belviso, B. Danilo; ...

    2016-02-02

    In this study, we have developed a general X-ray powder diffraction (XPD) methodology for the simultaneous structural and compositional characterization of inorganic nanomaterials. The approach is validated on colloidal tungsten oxide nanocrystals (WO3-x NCs), as a model polymorphic nanoscale material system. Rod-shaped WO3-x NCs with different crystal structure and stoichiometry are comparatively investigated under an inert atmosphere and after prolonged air exposure. An initial structural model for the as-synthesized NCs is preliminarily identified by means of Rietveld analysis against several reference crystal phases, followed by atomic pair distribution function (PDF) refinement of the best-matching candidates (static analysis). Subtle stoichiometry deviations from the corresponding bulk standards are revealed. NCs exposed to air at room temperature are monitored by XPD measurements at scheduled time intervals. The static PDF analysis is complemented with an investigation into the evolution of the WO3-x NC structure, performed by applying the modulation enhanced diffraction technique to the whole time series of XPD profiles (dynamical analysis). Prolonged contact with ambient air is found to cause an appreciable increase in the static disorder of the O atoms in the WO3-x NC lattice, rather than a variation in stoichiometry. Finally, the time behavior of such structural change is identified on the basis of multivariate analysis.

  10. An Analysis of the Effects of Wing Aspect Ratio and Tail Location on Static Longitudinal Stability Below the Mach Number of Lift Divergence

    NASA Technical Reports Server (NTRS)

    Axelson, John A.; Crown, J. Conrad

    1948-01-01

    An analysis is presented of the influence of wing aspect ratio and tail location on the effects of compressibility upon static longitudinal stability. The investigation showed that the use of reduced wing aspect ratios or short tail lengths leads to serious reductions in high-speed stability and the possibility of high-speed instability.

  11. Static Analysis Alert Audits: Lexicon and Rules

    DTIC Science & Technology

    2016-11-04

    collaborators • Includes a standard set of well-defined determinations for static analysis alerts • Includes a set of auditing rules to help auditors make...consistent decisions in commonly-encountered situations. Different auditors should make the same determination for a given alert! Improve the quality and...scenarios • Establish assumptions auditors can make • Overall: help make audit determinations more consistent. We developed 12 rules • Drew on our own

  12. Analysis for lateral deflection of railroad track under quasi-static loading

    DOT National Transportation Integrated Search

    2013-10-15

    This paper describes analyses to examine the lateral : deflection of railroad track subjected to quasi-static loading. : Rails are assumed to behave as beams in bending. Movement : of the track in the lateral plane is constrained by idealized : resis...

  13. Static Analysis of Large-Scale Multibody System Using Joint Coordinates and Spatial Algebra Operator

    PubMed Central

    Omar, Mohamed A.

    2014-01-01

    Initial transient oscillations exhibited in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load prediction, or simulation failure. These transients could result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing a static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general purpose approach for solving the static equilibrium in large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed multibody system formulation utilizes the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential algebraic equations. Then the system connectivity matrix is derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations. PMID:25045732
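
    The Baumgarte-type stabilization referred to above replaces the raw constraint acceleration equation with a damped form, C-double-dot + 2*alpha*C-dot + beta^2*C = 0, so that constraint violations decay rather than drift; the paper's energy-drainage mechanism builds on this idea to settle the system toward static equilibrium. The sketch below shows only the standard Baumgarte term for a point mass on a planar distance constraint, with made-up gains; it is not the joint-coordinate, spatial-algebra formulation of the paper.

      # Standard Baumgarte constraint stabilization for a planar pendulum modelled
      # as a point mass on a distance constraint |r| = L (illustrative only).
      import numpy as np

      L, m, g = 1.0, 1.0, 9.81
      alpha, beta = 5.0, 5.0                       # Baumgarte gains (made up)

      def accel(r, v):
          phi = 0.5 * (r @ r - L**2)               # constraint C(r) = 0
          phi_dot = r @ v                          # C-dot
          f = np.array([0.0, -m * g])              # applied force (gravity)
          rhs_c = -(v @ v + 2.0 * alpha * phi_dot + beta**2 * phi)
          A = np.zeros((3, 3))                     # KKT system [m*I, J^T; J, 0]
          A[:2, :2] = m * np.eye(2)
          A[:2, 2] = r                             # J^T, with J = dC/dr = r^T
          A[2, :2] = r
          b = np.array([f[0], f[1], rhs_c])
          return np.linalg.solve(A, b)[:2]         # stabilized acceleration

      print(accel(np.array([L, 0.0]), np.array([0.0, 0.0])))   # -> [0, -g]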

  14. Static analysis of large-scale multibody system using joint coordinates and spatial algebra operator.

    PubMed

    Omar, Mohamed A

    2014-01-01

    Initial transient oscillations exhibited in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load prediction, or simulation failure. These transients could result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing a static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general-purpose approach for solving the static equilibrium of large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed multibody system formulation utilizes the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential algebraic equations. Then the system connectivity matrix is derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations.

  15. Diversity of coding profiles of mechanoreceptors in glabrous skin of kittens.

    PubMed

    Gibson, J M; Beitel, R E; Welker, W

    1975-03-21

    We examined stimulus-response (S-R) profiles of 35 single mechanoreceptive afferent units having small receptive fields in glabrous forepaw skin of 24 anesthetized domestic kittens. Single unit activity was recorded with tungsten microelectrodes from cervical dorsal root ganglia. The study was designed to be as quantitatively descriptive as possible. We indented each unit's receptive field with a broad battery of simple, carefully controlled stimuli whose major parameters, including amplitude, velocity, acceleration, duration, and interstimulus interval, were systematically varied. Stimuli were delivered by a small probe driven by a feedback-controlled axial displacement generator. Single unit discharge data were analyzed by a variety of direct and derived measures including dot patterns, peristimulus histograms, instantaneous and mean instantaneous firing rates, tuning curves, thresholds for amplitude and velocity, adaptation rates, dynamic and static sensitivities, and others. We found that with respect to any of the S-R transactions examined, the properties of our sample of units were continuously and broadly distributed. Any one unit might exhibit either a slow or rapid rate of adaptation, or might superficially appear to preferentially code a single stimulus parameter such as amplitude or velocity. But when the entire range of responsiveness of units to the entire stimulus battery was surveyed by a variety of analytic techniques, we were unable to find any justifiable basis for designation of discrete categories of S-R profiles. Intermediate response types were always found, and in general, all units were both broadly tuned and capable of responding to integrals of several stimulus parameters. Our data argue against the usefulness of evaluating a unit's S-R coding capabilities by means of a limited set of stimulation or response analysis procedures.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keefer, Donald A.; Shaffer, Eric G.; Storsved, Brynne

    A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64 bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.

  17. RVA: A Plugin for ParaView 3.14

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-04

    RVA is a plugin developed for the 64-bit Windows version of the ParaView 3.14 visualization package. RVA is designed to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.

  18. The Perception of Dynamic and Static Facial Expressions of Happiness and Disgust Investigated by ERPs and fMRI Constrained Source Analysis

    PubMed Central

    Trautmann-Lengsfeld, Sina Alexa; Domínguez-Borràs, Judith; Escera, Carles; Herrmann, Manfred; Fehr, Thorsten

    2013-01-01

    A recent functional magnetic resonance imaging (fMRI) study by our group demonstrated that dynamic emotional faces are more accurately recognized and evoked more widespread patterns of hemodynamic brain responses than static emotional faces. Based on this experimental design, the present study aimed at investigating the spatio-temporal processing of static and dynamic emotional facial expressions in 19 healthy women by means of multi-channel electroencephalography (EEG), event-related potentials (ERP) and fMRI-constrained regional source analyses. ERP analysis showed an increased amplitude of the LPP (late posterior positivity) over centro-parietal regions for static facial expressions of disgust compared to neutral faces. In addition, the LPP was more widespread and temporally prolonged for dynamic compared to static faces of disgust and happiness. fMRI constrained source analysis on static emotional face stimuli indicated the spatio-temporal modulation of predominantly posterior regional brain activation related to the visual processing stream for both emotional valences when compared to the neutral condition in the fusiform gyrus. The spatio-temporal processing of dynamic stimuli yielded enhanced source activity for emotional compared to neutral conditions in temporal (e.g., fusiform gyrus), and frontal regions (e.g., ventromedial prefrontal cortex, medial and inferior frontal cortex) in early and again in later time windows. The present data support the view that dynamic facial displays trigger more information reflected in complex neural networks, in particular because of their changing features potentially triggering sustained activation related to a continuing evaluation of those faces. A combined fMRI and EEG approach thus provides advanced insight into the spatio-temporal characteristics of emotional face processing by also revealing additional neural generators that are not identifiable with an fMRI approach alone. PMID:23818974

  19. A mechanism to explain the variations of tropopause and tropopause inversion layer in the Arctic region during a sudden stratospheric warming in 2009

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Tomikawa, Yoshihiro; Nakamura, Takuji; Huang, Kaiming; Zhang, Shaodong; Zhang, Yehui; Yang, Huigen; Hu, Hongqiao

    2016-10-01

    The mechanism to explain the variations of the tropopause and tropopause inversion layer (TIL) in the Arctic region during a sudden stratospheric warming (SSW) in 2009 was studied with the Modern-Era Retrospective analysis for Research and Applications reanalysis data and GPS/Constellation Observing system for Meteorology, Ionosphere, and Climate (COSMIC) temperature data. During the prominent SSW in 2009, the cyclonic system changed to the anticyclonic system due to the planetary wave with wave number 2 (wave2). The GPS/COSMIC temperature data showed that during the SSW in 2009, the tropopause height in the Arctic decreased, accompanied by an increase in tropopause temperature and an enhancement of the TIL. The variations of the tropopause and TIL were larger at higher latitudes. A static stability analysis showed that the variations of the tropopause and TIL were associated with the variations of the residual circulation and the static stability due to the SSW. Larger static stability appeared in the upper stratosphere and moved downward to the narrow region just above the tropopause. The descent of the strong downward flow was faster at higher latitudes. The static stability tendency analysis showed that the strong downward residual flow induced the static stability change in the stratosphere and around the tropopause. The strong downwelling in the stratosphere was mainly induced by wave2, which led to the tropopause height and temperature changes due to the adiabatic heating. Around the tropopause, a pair of downwelling above the tropopause and upwelling below the tropopause due to wave2 contributed to the enhancement of static stability in the TIL immediately after the SSW.
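
    The static stability referred to here is the usual atmospheric buoyancy measure; as a reminder (a standard definition quoted for context, not a formula from this record), it is the squared Brunt-Väisälä frequency computed from the potential temperature profile,

      N^2 = \frac{g}{\theta}\,\frac{\partial \theta}{\partial z},

    so the sharp increase of N^2 just above the lowered tropopause is what the authors identify as the enhanced tropopause inversion layer, and the residual-circulation downwelling changes N^2 by adiabatically modifying the potential temperature profile.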

  20. QED multi-dimensional vacuum polarization finite-difference solver

    NASA Astrophysics Data System (ADS)

    Carneiro, Pedro; Grismayer, Thomas; Silva, Luís; Fonseca, Ricardo

    2015-11-01

    The Extreme Light Infrastructure (ELI) is expected to deliver peak intensities of 10^23 - 10^24 W/cm^2, allowing nonlinear Quantum Electrodynamics (QED) phenomena to be probed in an unprecedented regime. Within the framework of QED, the second-order process of photon-photon scattering leads to a set of extended Maxwell's equations [W. Heisenberg and H. Euler, Z. Physik 98, 714] effectively creating nonlinear polarization and magnetization terms that account for the nonlinear response of the vacuum. To model this in a self-consistent way, we present a multi-dimensional generalized Maxwell equation finite-difference solver with significantly enhanced dispersive properties, which was implemented in the OSIRIS particle-in-cell code [R.A. Fonseca et al. LNCS 2331, pp. 342-351, 2002]. We present a detailed numerical analysis of this electromagnetic solver. As an illustration of the properties of the solver, we explore several examples in extreme conditions. We confirm the theoretical prediction of vacuum birefringence of a pulse propagating in the presence of an intense static background field [arXiv:1301.4918 [quant-ph]].
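
    For orientation, the extended Maxwell's equations mentioned here follow from the weak-field limit of the Heisenberg-Euler effective Lagrangian, which in natural Heaviside-Lorentz units (hbar = c = 1; a standard result quoted for context, and the conventions may differ from those of the paper) reads

      \mathcal{L} = \tfrac{1}{2}\left(E^2 - B^2\right)
                  + \frac{2\alpha^2}{45\,m_e^4}\left[\left(E^2 - B^2\right)^2 + 7\,(\mathbf{E}\cdot\mathbf{B})^2\right],

    whose derivatives with respect to E and B supply the nonlinear vacuum polarization and magnetization terms that the finite-difference solver adds to the standard Maxwell update.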

  1. Comparison of theoretical and experimental thrust performance of a 1030:1 area ratio rocket nozzle at a chamber pressure of 2413 kN/sq m (350 psia)

    NASA Technical Reports Server (NTRS)

    Smith, Tamara A.; Pavli, Albert J.; Kacynski, Kenneth J.

    1987-01-01

    The Joint Army, Navy, NASA, Air Force (JANNAF) rocket-engine performance-prediction procedure is based on the use of various reference computer programs. One of the reference programs for nozzle analysis is the Two-Dimensional Kinetics (TDK) Program. The purpose of this report is to calibrate the JANNAF procedure that has been incorporated into the December 1984 version of the TDK program for the high-area-ratio rocket-engine regime. The calibration was accomplished by modeling the performance of a 1030:1 rocket nozzle tested at NASA Lewis. A detailed description of the test conditions and TDK input parameters is given. The results indicate that the computer code predicts delivered vacuum specific impulse to within 0.12 to 1.9 percent of the experimental data. Vacuum thrust coefficient predictions were within ±1.3 percent of experimental results. Predictions of wall static pressure were within approximately ±5 percent of the measured values.

  2. CFS MATLAB toolbox: An experiment builder for continuous flash suppression (CFS) task.

    PubMed

    Nuutinen, Mikko; Mustonen, Terhi; Häkkinen, Jukka

    2017-09-15

    CFS toolbox is an open-source collection of MATLAB functions that utilizes PsychToolbox-3 (PTB-3). It is designed to allow a researcher to create and run continuous flash suppression experiments using a variety of experimental parameters (i.e., stimulus types and locations, noise characteristics, and experiment window settings). In a CFS experiment, one of the eyes at a time is presented with a dynamically changing noise pattern, while the other eye is concurrently presented with a static target stimulus, such as a Gabor patch. Due to the strong interocular suppression created by the dominant noise pattern mask, the target stimulus is rendered invisible for an extended duration. Very little knowledge of MATLAB is required for using the toolbox; experiments are generated by modifying csv files with the required parameters, and result data are output to text files for further analysis. The open-source code is available on the project page under a Creative Commons License ( http://www.mikkonuutinen.arkku.net/CFS_toolbox/ and https://bitbucket.org/mikkonuutinen/cfs_toolbox ).

  3. Supersonic dynamic stability characteristics of the test technique demonstrator NASP configuration

    NASA Technical Reports Server (NTRS)

    Dress, David A.; Boyden, Richmond P.; Cruz, Christopher I.

    1992-01-01

    Wind tunnel tests of a National Aero-Space Plane (NASP) configuration were conducted in both test sections of the Langley Unitary Plan Wind Tunnel. The model used is a Langley designed blended body NASP configuration. Dynamic stability characteristics were measured on this configuration at Mach numbers of 2.0, 2.5, 3.5, and 4.5. In addition to tests of the baseline configuration, component buildup tests were conducted. The test results show that the baseline configuration generally has positive damping about all three axes with only isolated exceptions. In addition, there was generally good agreement between the in-phase dynamic parameters and the corresponding static data, which were measured during another series of tests in the Unitary Plan Wind Tunnel. Also included are comparisons of the experimental damping parameters with results from the engineering predictive code APAS (Aerodynamic Preliminary Analysis System). These comparisons show good agreement at low angles of attack; however, the comparisons are generally not as good at the higher angles of attack.

  4. Airplane wing deformation and flight flutter detection method by using three-dimensional speckle image correlation technology.

    PubMed

    Wu, Jun; Yu, Zhijing; Wang, Tao; Zhuge, Jingchang; Ji, Yue; Xue, Bin

    2017-06-01

    Airplane wing deformation is an important element of aerodynamic characteristics, structure design, and fatigue analysis for aircraft manufacturing, as well as a key test item in flutter certification for airplanes. This paper presents a novel real-time method for wing deformation measurement and flight flutter detection using three-dimensional speckle image correlation technology. Speckle patterns, whose positions are determined from the vibration characteristics of the aircraft, are coated on the wing; the speckle patterns are then imaged by CCD cameras mounted inside the aircraft cabin. In order to reduce the computation, a matching technique based on Geodetic Systems Incorporated coded points combined with the classical epipolar constraint is proposed, and a displacement vector map for the aircraft wing is obtained by comparing the coordinates of the speckle points before and after deformation. Finally, verification experiments comprising static and dynamic tests on an aircraft wing model demonstrate the accuracy and effectiveness of the proposed method.
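
    The displacement-vector map described above amounts to subtracting matched three-dimensional speckle coordinates before and after deformation. The C++ sketch below shows only that final bookkeeping step on hypothetical, already-triangulated and already-matched points; the stereo triangulation and coded-point matching that the paper actually performs are not reproduced here.

      #include <cstdio>
      #include <vector>

      struct Point3 { double x, y, z; };   // triangulated speckle position [mm]

      // Given matched point lists (same index = same physical speckle), return the
      // displacement vector of each speckle between the reference and deformed states.
      std::vector<Point3> displacementMap(const std::vector<Point3>& reference,
                                          const std::vector<Point3>& deformed) {
          std::vector<Point3> d(reference.size());
          for (std::size_t i = 0; i < reference.size(); ++i) {
              d[i] = { deformed[i].x - reference[i].x,
                       deformed[i].y - reference[i].y,
                       deformed[i].z - reference[i].z };
          }
          return d;
      }

      int main() {
          // Two hypothetical speckles on a wing: the outboard speckle deflects upward by 12 mm.
          std::vector<Point3> ref = { {0.0, 0.0, 0.0}, {5000.0, 0.0, 0.0} };
          std::vector<Point3> def = { {0.0, 0.0, 0.1}, {5000.0, 0.0, 12.0} };
          for (const Point3& v : displacementMap(ref, def))
              std::printf("displacement = (%.2f, %.2f, %.2f) mm\n", v.x, v.y, v.z);
          return 0;
      }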

  5. Preliminary 2-D shell analysis of the space shuttle solid rocket boosters

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Gillian, Ronnie E.; Nemeth, Michael P.

    1987-01-01

    A two-dimensional shell model of an entire solid rocket booster (SRB) has been developed using the STAGSC-1 computer code and executed on the Ames CRAY computer. The purpose of these analyses is to calculate the overall deflection and stress distributions for the SRB when subjected to mechanical loads corresponding to critical times during the launch sequence. The mechanical loading conditions for the full SRB arise from the external tank (ET) attachment points, the solid rocket motor (SRM) pressure load, and the SRB hold down posts. The ET strut loads vary with time after the Space Shuttle main engine (SSME) ignition. The SRM internal pressure varies axially by approximately 100 psi. Static analyses of the full SRB are performed using a snapshot picture of the loads. The field and factory joints are modeled by using equivalent stiffness joints instead of detailed models of the joint. As such, local joint behavior cannot be obtained from this global model.

  6. Integrated multidisciplinary design optimization using discrete sensitivity analysis for geometrically complex aeroelastic configurations

    NASA Astrophysics Data System (ADS)

    Newman, James Charles, III

    1997-10-01

    The first two steps in the development of an integrated multidisciplinary design optimization procedure capable of analyzing the nonlinear fluid flow about geometrically complex aeroelastic configurations have been accomplished in the present work. For the first step, a three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed. The advantage of unstructured grids, when compared with a structured-grid approach, is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the time-dependent, nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional cases and a Gauss-Seidel algorithm for the three-dimensional cases; at steady-state, similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory efficient methods to construct exact Jacobian matrix-vector products. Various surface parameterization techniques have been employed in the current study to control the shape of the design surface. Once this surface has been deformed, the interior volume of the unstructured grid is adapted by considering the mesh as a system of interconnected tension springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR, an advanced automatic-differentiation software tool. To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization have been performed for several two- and three-dimensional cases. In two dimensions, an initially symmetric NACA-0012 airfoil and a high-lift multielement airfoil were examined. For the three-dimensional configurations, an initially rectangular wing with uniform NACA-0012 cross-sections was optimized; in addition, a complete Boeing 747-200 aircraft was studied. Furthermore, the current study also examines the effect of inconsistency in the order of spatial accuracy between the nonlinear fluid and linear shape sensitivity equations. The second step was to develop a computationally efficient, high-fidelity, integrated static aeroelastic analysis procedure. To accomplish this, a structural analysis code was coupled with the aforementioned unstructured grid aerodynamic analysis solver. The use of an unstructured grid scheme for the aerodynamic analysis enhances the interaction compatibility with the wing structure. The structural analysis utilizes finite elements to model the wing so that accurate structural deflections may be obtained. In the current work, parameters have been introduced to control the interaction of the computational fluid dynamics and structural analyses; these control parameters permit extremely efficient static aeroelastic computations. To demonstrate and evaluate this procedure, static aeroelastic analysis results for a flexible wing in low subsonic, high subsonic (subcritical), transonic (supercritical), and supersonic flow conditions are presented.
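
    As background for the sensitivity step described above (standard discrete sensitivity analysis; the notation is generic rather than copied from the dissertation), differentiating the discrete steady-state residual R(Q(\beta), X(\beta), \beta) = 0 with respect to a design variable \beta gives the linear sensitivity system

      \frac{\partial R}{\partial Q}\,\frac{\mathrm{d}Q}{\mathrm{d}\beta}
        = -\left(\frac{\partial R}{\partial X}\,\frac{\mathrm{d}X}{\mathrm{d}\beta}
                 + \frac{\partial R}{\partial \beta}\right),

    where Q holds the flow variables and X the grid coordinates, whose sensitivities dX/d\beta come from differentiating the surface parameterization and the spring-analogy grid adaptation (here via ADIFOR). The coefficient matrix is the same flow Jacobian used by the analysis solver, which is why GMRES-type iterative solution and memory-efficient Jacobian matrix-vector products carry over, and the incremental iterative form simply solves this system approximately at each cycle alongside the nonlinear flow solution.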

  7. Static Holdup of Liquid Slag in Simulated Packed Coke Bed Under Oxygen Blast Furnace Ironmaking Conditions

    NASA Astrophysics Data System (ADS)

    Wang, Guang; Liu, Yingli; Zhou, Zhenfeng; Wang, Jingsong; Xue, Qingguo

    2018-01-01

    The liquid-phase flow behavior of slag in the lower zone of a blast furnace affects the furnace permeability, performance, and productivity. The effects of pulverized coal injection (PCI) on the behavior of simulated primary slag flow were investigated by quantifying the effect of key variables including Al/Si ratio [Al2O3 (wt.%) to SiO2 (wt.%)] and the amount of unburnt pulverized coal (UPC) at 1500°C. Viscosity analysis demonstrated that the slag fluidity decreased as the Al/Si ratio was increased (from 0.35 to 0.50), resulting in gradual increase of the static holdup. Increasing the amount of UPC resulted in a significant increase of the static holdup. Flooding analysis was applied to determine the maximum static holdup, which was found to be 11.5%. It was inferred that the burnout rates of pulverized coal should exceed 78.6% and 83.9% in traditional and oxygen blast furnaces, respectively.

  8. Functional Programming with C++ Template Metaprograms

    NASA Astrophysics Data System (ADS)

    Porkoláb, Zoltán

    Template metaprogramming is an emerging new direction of generative programming. With clever template definitions, we can force the C++ compiler to execute algorithms at compilation time. Among the application areas of template metaprograms are expression templates, static interface checking, code optimization with adaptation, language embedding and active libraries. However, as template metaprogramming was not an original design goal, the C++ language is not capable of elegant expression of metaprograms. The complicated syntax leads to the creation of code that is hard to write, understand and maintain. Although template metaprogramming has a strong relationship with functional programming, this is not reflected in the language syntax and existing libraries. In this paper we give a short and incomplete introduction to C++ templates and the basics of template metaprogramming. We highlight the role of template metaprograms and some important and widely used idioms. We give an overview of the possible application areas as well as debugging and profiling techniques. We suggest a pure functional style programming interface for C++ template metaprograms in the form of embedded Haskell code which is transformed to standard compliant C++ source.
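
    As a concrete illustration of forcing the compiler to execute an algorithm at compilation time (a minimal textbook-style example, not taken from the paper itself), the recursive template below computes a factorial entirely during compilation and checks the result with static_assert:

      #include <cstdio>

      // Classic template metaprogram: the recursion is unrolled by the compiler,
      // so Factorial<5>::value is a compile-time constant.
      template <unsigned N>
      struct Factorial {
          static const unsigned long long value = N * Factorial<N - 1>::value;
      };

      template <>
      struct Factorial<0> {                 // explicit specialization terminates the recursion
          static const unsigned long long value = 1;
      };

      static_assert(Factorial<5>::value == 120, "evaluated by the compiler, not at run time");

      int main() {
          std::printf("5! computed at compile time = %llu\n", Factorial<5>::value);
          return 0;
      }

    The recursion-plus-specialization pattern is essentially pattern matching in a pure functional language, which is the observation motivating the embedded-Haskell interface the paper proposes.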

  9. SEURAT: SPH scheme extended with ultraviolet line radiative transfer

    NASA Astrophysics Data System (ADS)

    Abe, Makito; Suzuki, Hiroyuki; Hasegawa, Kenji; Semelin, Benoit; Yajima, Hidenobu; Umemura, Masayuki

    2018-05-01

    We present a novel Lyman alpha (Lyα) radiative transfer code, SEURAT (SPH scheme Extended with Ultraviolet line RAdiative Transfer), where line scatterings are solved adaptively with the resolution of the smoothed particle hydrodynamics (SPH). The radiative transfer method implemented in SEURAT is based on a Monte Carlo algorithm in which the scattering and absorption by dust are also incorporated. We perform standard test calculations to verify the validity of the code: (i) emergent spectra from a static uniform sphere, (ii) emergent spectra from an expanding uniform sphere, and (iii) escape fraction from a dusty slab. Thereby, we demonstrate that our code solves the Lyα radiative transfer with sufficient accuracy. We emphasize that SEURAT can treat the transfer of Lyα photons even in highly complex systems that have significantly inhomogeneous density fields. The high adaptivity of SEURAT is desirable to solve the propagation of Lyα photons in the interstellar medium of young star-forming galaxies like Lyα emitters (LAEs). Thus, SEURAT provides a powerful tool to model the emergent spectra of Lyα emission, which can be compared to the observations of LAEs.
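
    The dusty-slab escape-fraction test mentioned in (iii) can be mimicked with a deliberately simplified Monte Carlo sketch: isotropically scattering photons random-walk through a uniform slab and we count the fraction that leaves before being absorbed. The geometry, optical depth, and albedo below are illustrative assumptions, and resonant Lyα line transfer (the hard part that SEURAT actually solves) is not modeled.

      #include <cmath>
      #include <cstdio>
      #include <random>

      // Monte Carlo escape fraction from a uniform slab of total optical depth tau_slab
      // (measured across the slab), with single-scattering albedo `albedo`.
      int main() {
          const double tau_slab = 3.0;    // illustrative optical depth across the slab
          const double albedo   = 0.5;    // probability that a photon survives each interaction
          const int    nPhotons = 200000;

          std::mt19937 rng(12345);
          std::uniform_real_distribution<double> uni(0.0, 1.0);

          int escaped = 0;
          for (int i = 0; i < nPhotons; ++i) {
              double tau = 0.0;           // optical depth measured from the illuminated face
              double mu  = 1.0;           // direction cosine with respect to the slab normal
              while (true) {
                  // distance (in optical depth) to the next interaction: exponential law
                  const double dtau = -std::log(1.0 - uni(rng));
                  tau += mu * dtau;
                  if (tau <= 0.0 || tau >= tau_slab) { ++escaped; break; }  // left the slab
                  if (uni(rng) > albedo) break;       // absorbed (by dust in SEURAT's case)
                  mu = 2.0 * uni(rng) - 1.0;          // isotropic re-emission direction
              }
          }
          std::printf("escape fraction ~ %.3f\n", static_cast<double>(escaped) / nPhotons);
          return 0;
      }

    A full line-transfer code replaces the grey interaction law above with frequency-dependent resonant scattering and tracks the photon frequency shift at every scattering, which is where the adaptive SPH resolution of SEURAT matters.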

  10. Characterization of Triaxial Braided Composite Material Properties for Impact Simulation

    NASA Technical Reports Server (NTRS)

    Roberts, Gary D.; Goldberg, Robert K.; Biniendak, Wieslaw K.; Arnold, William A.; Littell, Justin D.; Kohlman, Lee W.

    2009-01-01

    The reliability of impact simulations for aircraft components made with triaxial braided carbon fiber composites is currently limited by inadequate material property data and lack of validated material models for analysis. Improvements to standard quasi-static test methods are needed to account for the large unit cell size and localized damage within the unit cell. The deformation and damage of a triaxial braided composite material was examined using standard quasi-static in-plane tension, compression, and shear tests. Some modifications to standard test specimen geometries are suggested, and methods for measuring the local strain at the onset of failure within the braid unit cell are presented. Deformation and damage at higher strain rates is examined using ballistic impact tests on 61- by 61-cm by 3.2-mm (24- by 24- by 0.125-in.) composite panels. Digital image correlation techniques were used to examine full-field deformation and damage during both quasi-static and impact tests. An impact analysis method is presented that utilizes both local and global deformation and failure information from the quasi-static tests as input for impact simulations. Improvements that are needed in test and analysis methods for better predictive capability are examined.

  11. Flight calibration of compensated and uncompensated pitot-static airspeed probes and application of the probes to supersonic cruise vehicles

    NASA Technical Reports Server (NTRS)

    Webb, L. D.; Washington, H. P.

    1972-01-01

    Static pressure position error calibrations for a compensated and an uncompensated XB-70 nose boom pitot static probe were obtained in flight. The methods (Pacer, acceleration-deceleration, and total temperature) used to obtain the position errors over a Mach number range from 0.5 to 3.0 and an altitude range from 25,000 feet to 70,000 feet are discussed. The error calibrations are compared with the position error determined from wind tunnel tests, theoretical analysis, and a standard NACA pitot static probe. Factors which influence position errors, such as angle of attack, Reynolds number, probe tip geometry, static orifice location, and probe shape, are discussed. Also included are examples showing how the uncertainties caused by position errors can affect the inlet controls and vertical altitude separation of a supersonic transport.

  12. UCLA Final Technical Report for the "Community Petascale Project for Accelerator Science and Simulation”.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mori, Warren

    The UCLA Plasma Simulation Group is a major partner of the “Community Petascale Project for Accelerator Science and Simulation”. This is the final technical report. We include an overall summary, a list of publications, progress for the most recent year, and individual progress reports for each year. We have made tremendous progress during the three years. SciDAC funds have contributed to the development of a large number of skeleton codes that illustrate how to write PIC codes with a hierarchy of parallelism. These codes cover 2D and 3D as well as electrostatic solvers (which are used in beam dynamics codes and quasi-static codes) and electromagnetic solvers (which are used in plasma based accelerator codes). We also used these ideas to develop a GPU enabled version of OSIRIS. SciDAC funds also contributed to the development of strategies to eliminate the Numerical Cerenkov Instability (NCI), which is an issue when carrying out laser wakefield accelerator (LWFA) simulations in a boosted frame and when quantifying the emittance and energy spread of self-injected electron beams. This work included the development of a new code called UPIC-EMMA, which is an FFT-based electromagnetic PIC code, and of new hybrid algorithms in OSIRIS. A new hybrid (PIC in r-z and gridless in φ) algorithm was implemented into OSIRIS. In this algorithm the fields and current are expanded into azimuthal harmonics and the complex amplitude for each harmonic is calculated separately. The contributions from each harmonic are summed and then used to push the particles. This algorithm permits modeling plasma based acceleration with some 3D effects but with the computational load of a 2D r-z PIC code. We developed a rigorously charge conserving current deposit for this algorithm. Very recently, we made progress in combining the speed up from the quasi-3D algorithm with that from the Lorentz boosted frame. SciDAC funds also contributed to the improvement and speed up of the quasi-static PIC code QuickPIC. We have also used our suite of PIC codes to make scientific discoveries. Highlights include supporting FACET experiments which achieved the milestones of showing high beam loading and energy transfer efficiency from a drive electron beam to a witness electron beam, and the discovery of a self-loading regime for high-gradient acceleration of a positron beam. Both of these experimental milestones were published in Nature together with supporting QuickPIC simulation results. Simulation results from QuickPIC were used on the cover of Nature in one case. We are also making progress on using highly resolved QuickPIC simulations to show that ion motion may not lead to catastrophic emittance growth for tightly focused electron bunches loaded into nonlinear wakefields. This could mean that fully self-consistent beam loading scenarios are possible. This work remains in progress. OSIRIS simulations were used to discover how 200 MeV electron rings are formed in LWFA experiments, how to generate electrons that have a series of bunches on the nanometer scale, and how to transport electron beams from (into) plasma sections into (from) conventional beam optic sections.
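
    The quasi-3D hybrid algorithm summarized above rests on expanding each field and current component in azimuthal harmonics; schematically (a standard way of writing the expansion, with conventions that may differ from the OSIRIS implementation),

      F(r, z, \phi, t) = \mathrm{Re}\left[\sum_{m=0}^{M} \hat{F}^{m}(r, z, t)\, e^{-i m \phi}\right],

    so that for nearly axisymmetric problems such as laser wakefields only a few complex amplitudes need to be advanced on a 2D r-z grid (the wake enters mainly through m = 0 and a linearly polarized laser mainly through m = 1), which is what gives 3D-like physics at close to 2D computational cost.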

  13. Size Effects in Impact Damage of Composite Sandwich Panels

    NASA Technical Reports Server (NTRS)

    Dobyns, Alan; Jackson, Wade

    2003-01-01

    Panel size has a large effect on the impact response and resultant damage level of honeycomb sandwich panels. It has been observed during impact testing that panels of the same design but different panel sizes will show large differences in damage when impacted with the same impact energy. To study this effect, a test program was conducted with instrumented impact testing of three different sizes of sandwich panels to obtain data on panel response and residual damage. In concert with the test program, a closed-form analysis method was developed that incorporates the effects of damage on the impact response. This analysis method will predict both the impact response and the residual damage of a simply-supported sandwich panel impacted at any position on the panel. The damage is incorporated by the use of an experimental load-indentation curve obtained for the face-sheet/honeycomb and indentor combination under study. This curve inherently includes the damage response and can be obtained quasi-statically from a rigidly-backed specimen or a specimen with any support conditions. Good correlation has been obtained between the test data and the analysis results for the maximum force and residual indentation. The predictions can be improved by using a dynamic indentation curve. Analyses have also been done using the MSC/DYTRAN finite element code.

  14. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    PubMed

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits obtained from patients by using static images of facial expressions, and offer novel routes for patient rehabilitation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Numerical analysis of stiffened shells of revolution. Volume 1: Theory manual for STARS-2S, 2B, 2V digital computer programs

    NASA Technical Reports Server (NTRS)

    Svalbonas, V.

    1973-01-01

    The theoretical analysis background for the STARS-2 (shell theory automated for rotational structures) program is presented. The theory involved in the axisymmetric nonlinear and unsymmetric linear static analyses, and in the stability and vibration (including critical rotation speed) analyses involving axisymmetric prestress, is discussed. The theory for nonlinear static, stability, and vibration analyses involving shells with unsymmetric loadings is also included.

  16. Design, Static Analysis And Fabrication Of Composite Joints

    NASA Astrophysics Data System (ADS)

    Mathiselvan, G.; Gobinath, R.; Yuvaraja, S.; Raja, T.

    2017-05-01

    Bonded joints are one of the important issues in composite technology, particularly for the repair of aging aircraft. In these applications, and for joining various composite parts together, the composite materials are fastened either with adhesives or with mechanical fasteners. In this paper, we have carried out the design, static analysis of 3-D models, and fabrication of composite joints (bonded, riveted, and hybrid). The 3-D models of the composite joints were fabricated using epoxy resin, glass fibre, and aluminium rivets. The static analysis was carried out for the different joints using ANSYS software. After fabrication, a parametric study was also conducted to compare the performance of the hybrid joint with varying adherend width, adhesive thickness, and overlap length. Tensile test results for the different joints and their materials were also compared.

  17. Observation of the limit cycle in asymmetric plasma divided by a magnetic filter

    NASA Astrophysics Data System (ADS)

    Ohi, Kazuo; Naitou, Hiroshi; Tauchi, Yasushi; Fukumasa, Osamu

    2001-01-01

    An asymmetric plasma divided by a magnetic filter is numerically simulated by the one-dimensional particle-in-cell code VSIM1D [Koga et al., J. Phys. Soc. Jpn. 68, 1578 (1999)]. Depending on the asymmetry, the system behavior is static or dynamic. In the static state, the potentials of the main plasma and the subplasma are given by the sheath potentials, φM ≈ 3TMe/e and φS ≈ 3TSe/e, respectively, with e being an electron charge and TMe and TSe being electron temperatures (TMe > TSe). In the dynamic state, while φM ≈ 3TMe/e, φS oscillates periodically between φS,min ≈ 3TSe/e and φS,max ≈ 3TMe/e. The ions accelerated by the time varying potential gap get into the subplasma and excite the laminar shock waves. The period of the limit cycle is determined by the transit time of the shock wave structure.

  18. Horizontal lifelines - review of regulations and simple design method considering anchorage rigidity.

    PubMed

    Galy, Bertrand; Lan, André

    2018-03-01

    Among the many occupational risks construction workers encounter every day, falling from a height is the most dangerous. The objective of this article is to propose a simple analytical design method for horizontal lifelines (HLLs) that considers anchorage flexibility. The article presents a short review of the standards and regulations/acts/codes concerning HLLs in Canada, the USA, and Europe. A static analytical approach is proposed considering anchorage flexibility. The analytical results are compared with a series of 42 dynamic fall tests and a SAP2000 numerical model. The experimental results show that the analytical method is a little conservative and overestimates the line tension in most cases, by a maximum of 17%. The static SAP2000 results show a maximum 2.1% difference with the analytical method. The analytical method is accurate enough to safely design HLLs, and quick design abaci are provided to allow the engineer to make quick on-site verifications if needed.
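
    For context on the kind of static relation such a design method manipulates (elementary cable statics only, not the article's full procedure, which also accounts for anchorage and energy-absorber flexibility): for a lifeline of span L carrying a single arrest load P at midspan with sag s, vertical equilibrium of the loaded point gives the line tension

      T = \frac{P\,\sqrt{(L/2)^2 + s^2}}{2\,s} \;\approx\; \frac{P\,L}{4\,s} \quad (s \ll L),

    which is why small sags produce large line and anchorage loads, and why the anchorage stiffness, which governs how much extra sag develops during fall arrest, must enter the calculation.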

  19. Static and dynamic pressure measurements on a NACA 0012 airfoil in the Ames High Reynolds Number Facility

    NASA Technical Reports Server (NTRS)

    Mcdevitt, J. B.; Okuno, A. F.

    1985-01-01

    The supercritical flows at high subsonic speeds over a NACA 0012 airfoil were studied to acquire aerodynamic data suitable for evaluating numerical-flow codes. The measurements consisted primarily of static and dynamic pressures on the airfoil and test-channel walls. Shadowgraphs were also taken of the flow field near the airfoil. The tests were performed at free-stream Mach numbers from approximately 0.7 to 0.8, at angles of attack sufficient to include the onset of buffet, and at Reynolds numbers from 1 million to 14 million. A test section was designed specifically to obtain two-dimensional airfoil data with a minimum of wall interference effects. Boundary-layer suction panels were used to minimize sidewall interference effects. Flexible upper and lower walls allow test-channel area-ruling to nullify Mach number changes induced by the mass removal, to correct for longitudinal boundary-layer growth, and to provide contouring compatible with the streamlines of the model in free air.

  20. The 1.06 micrometer wideband laser modulator: Fabrication and life testing

    NASA Technical Reports Server (NTRS)

    Teague, J. R.

    1975-01-01

    The design, fabrication, testing and delivery of an optical modulator which will operate with a mode-locked Nd:YAG laser at 1.06 micrometers were performed. The system transfers data at a nominal rate of 400 Mbps. This wideband laser modulator can transmit either Pulse Gated Binary Modulation (PGBM) or Pulse Polarization Binary Modulation (PPBM) formats. The laser beam enters the modulator and passes through both crystals; approximately 1% of the transmitted beam is split from the main beam and analyzed for the AEC signal; the remaining part of the beam exits the modulator. The delivered modulator, when initially aligned and integrated with the laser and electronics, performed very well. The optical transmission was 69.5%. The static extinction ratio was 69:1. A 1000-hour life test was conducted with the delivered modulator. A 63-bit pseudorandom code signal was used as a driver input. At the conclusion of the life test, the modulator optical transmission was 71.5% and the static extinction ratio 65:1.
