Sample records for advanced analytical simulation

  1. Big data analytics: predicting traffic flow regimes from simulated connected vehicle messages using data analytics and machine learning.

    DOT National Transportation Integrated Search

    2016-12-25

    The key objectives of this study were to: 1. Develop advanced analytical techniques that make use of a dynamically configurable connected vehicle message protocol to predict traffic flow regimes in near-real time in a virtual environment and examine ...

  2. Web-based Visual Analytics for Extreme Scale Climate Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Evans, Katherine J; Harney, John F

    In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.
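    The RESTful decoupling described above can be sketched in a few lines: the client requests a reduced diagnostic rather than the raw field, so only statistics cross the wire and the data stays with the back end. Everything below (the route path, the summary statistics, the toy field) is illustrative, not the framework's actual API.

```python
# Minimal sketch of the RESTful-diagnostics idea: the client asks a back-end
# service for a *summary* of a large simulation variable instead of moving the
# raw data. All names here are illustrative, not the paper's API.

def summarize(values):
    """Server-side diagnostic: reduce a large field to a few statistics."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return {"n": n, "mean": mean, "variance": var}

# A tiny in-process "REST" router standing in for an HTTP service.
ROUTES = {}

def route(path):
    def register(fn):
        ROUTES[path] = fn
        return fn
    return register

@route("/api/v1/diagnostics/summary")
def summary_endpoint(params, datasets):
    field = datasets[params["variable"]]   # raw field stays server-side
    return summarize(field)                # only the statistics move

def get(path, params, datasets):
    """Client call: returns the small summary payload, never the raw field."""
    return ROUTES[path](params, datasets)

if __name__ == "__main__":
    datasets = {"surface_temp": [280.0, 281.5, 279.2, 282.3]}  # toy field
    print(get("/api/v1/diagnostics/summary", {"variable": "surface_temp"}, datasets))
```

    The same routing pattern transfers directly to a real HTTP framework; the point is that the reduction runs where the data lives.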

  3. FANS Simulation of Propeller Wash at Navy Harbors (ESTEP Project ER-201031)

    DTIC Science & Technology

    2016-08-01

    In this study, the Finite-Analytic Navier–Stokes (FANS) code was employed to solve the Reynolds-Averaged Navier–Stokes equations in conjunction with advanced near-wall turbulence models. For site-specific harbor configurations, it is desirable to perform propeller wash studies by solving the Navier–Stokes equations directly.

  4. Manufacturing data analytics using a virtual factory representation.

    PubMed

    Jain, Sanjay; Shao, Guodong; Shin, Seung-Jun

    2017-01-01

    Large manufacturers have been using simulation to support decision-making for design and production. However, with the advancement of technologies and the emergence of big data, simulation can be utilised to perform and support data analytics for associated performance gains. This requires not only significant model development expertise, but also huge data collection and analysis efforts. This paper presents an approach within the frameworks of Design Science Research Methodology and prototyping to address the challenge of increasing the use of modelling, simulation and data analytics in manufacturing via reduction of the development effort. Manufacturing simulation models are presented both as data analytics applications themselves and as support for other data analytics applications, serving as data generators and as a tool for validation. The virtual factory concept is presented as the vehicle for manufacturing modelling and simulation. The virtual factory goes beyond traditional simulation models of factories to include multi-resolution modelling capabilities, thus allowing analysis at varying levels of detail. A path is proposed for implementation of the virtual factory concept that builds on developments in technologies and standards. A virtual machine prototype is provided as a demonstration of the use of a virtual representation for manufacturing data analytics.
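    The "simulation as data generator" role can be illustrated with a toy virtual machine: a simulated station emits part-level records, and an analytics step consumes them exactly as it would consume real shop-floor data. All names and parameters below are invented for illustration, not taken from the paper's prototype.

```python
import random

# Toy "virtual machine" in the virtual-factory sense: a simulated machining
# station that emits the kind of log records a real data collector would.

def simulate_machine(n_parts, cycle_mean=60.0, cycle_sd=5.0, fail_prob=0.02, seed=0):
    rng = random.Random(seed)
    records = []
    clock = 0.0
    for part in range(n_parts):
        cycle = max(1.0, rng.gauss(cycle_mean, cycle_sd))  # seconds per part
        clock += cycle
        status = "SCRAP" if rng.random() < fail_prob else "GOOD"
        records.append({"part_id": part, "cycle_s": cycle,
                        "end_time": clock, "status": status})
    return records

def yield_and_throughput(records):
    """Analytics step fed by the simulated data (or, identically, real data)."""
    good = sum(1 for r in records if r["status"] == "GOOD")
    span = records[-1]["end_time"]
    return {"yield": good / len(records),
            "parts_per_hour": len(records) * 3600.0 / span}

if __name__ == "__main__":
    data = simulate_machine(500)
    print(yield_and_throughput(data))
```

    The same analytics function can later be validated against, or pointed at, real machine logs, which is the dual role the abstract describes.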

  5. Simulations of binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey

    2017-01-01

    Advanced LIGO's observations of merging binary black holes have inaugurated the era of gravitational wave astronomy. Accurate models of binary black holes and the gravitational waves they emit are helping Advanced LIGO to find as many gravitational waves as possible and to learn as much as possible about the waves' sources. These models require numerical-relativity simulations of binary black holes, because near the time when the black holes merge, all analytic approximations break down. Following breakthroughs in 2005, many research groups have built numerical-relativity codes capable of simulating binary black holes. In this talk, I will discuss current challenges in simulating binary black holes for gravitational-wave astronomy, and I will discuss the tremendous progress that has already enabled such simulations to become an essential tool for Advanced LIGO.

  6. G-189A analytical simulation of the integrated waste management-water system using radioisotopes for thermal energy

    NASA Technical Reports Server (NTRS)

    Coggi, J. V.; Loscutoff, A. V.; Barker, R. S.

    1973-01-01

    An analytical simulation of the RITE-Integrated Waste Management and Water Recovery System using radioisotopes for thermal energy was prepared for the NASA-Manned Space Flight Center (MSFC). The RITE system is the most advanced concept water-waste management system currently under development and has undergone extended duration testing. It has the capability of disposing of nearly all spacecraft wastes including feces and trash and of recovering water from usual waste water sources: urine, condensate, wash water, etc. All of the process heat normally used in the system is produced from low penalty radioisotope heat sources. The analytical simulation was developed with the G189A computer program. The objective of the simulation was to obtain an analytical simulation which can be used to (1) evaluate the current RITE system steady state and transient performance during normal operating conditions, and also during off normal operating conditions including failure modes; and (2) evaluate the effects of variations in component design parameters and vehicle interface parameters on system performance.

  7. ATTIRE (analytical tools for thermal infrared engineering): A sensor simulation and modeling package

    NASA Astrophysics Data System (ADS)

    Jaggi, S.

    1993-02-01

    The Advanced Sensor Development Laboratory (ASDL) at the Stennis Space Center develops, maintains and calibrates remote sensing instruments for the National Aeronautics & Space Administration (NASA). To perform system design trade-offs, analysis, and establish system parameters, ASDL has developed a software package for analytical simulation of sensor systems. This package, called 'Analytical Tools for Thermal InfraRed Engineering' (ATTIRE), simulates the various components of a sensor system. The software allows each subsystem of the sensor to be analyzed independently for its performance. These performance parameters are then integrated to obtain system-level information such as Signal-to-Noise Ratio (SNR), Noise Equivalent Radiance (NER), Noise Equivalent Temperature Difference (NETD), etc. This paper describes the uses of the package and the physics that were used to derive the performance parameters.
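    The figures of merit listed (SNR, NER, NETD) are connected by standard radiometric definitions. The sketch below uses the textbook relation NETD = NER / (dL/dT), with the Planck-radiance derivative taken numerically; it is a rough illustration of the physics, not ATTIRE's actual routines, and the NER value is an assumption.

```python
import math

# Back-of-the-envelope versions of the system-level figures of merit:
# SNR = L / NER and NETD = NER / (dL/dT), with L the blackbody spectral radiance.

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * K * temp_k)
    return a / (math.exp(b) - 1.0)

def netd(ner, wavelength_m, temp_k, dT=0.01):
    """NETD = NER / (dL/dT), derivative taken by central difference."""
    dLdT = (planck_radiance(wavelength_m, temp_k + dT)
            - planck_radiance(wavelength_m, temp_k - dT)) / (2.0 * dT)
    return ner / dLdT

def snr(signal_radiance, ner):
    return signal_radiance / ner

if __name__ == "__main__":
    lam, T = 10e-6, 300.0          # 10 um band, 300 K scene (illustrative)
    L = planck_radiance(lam, T)
    print("SNR =", snr(L, ner=1e4), " NETD [K] =", netd(1e4, lam, T))
```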

  8. Let's Go Off the Grid: Subsurface Flow Modeling With Analytic Elements

    NASA Astrophysics Data System (ADS)

    Bakker, M.

    2017-12-01

    Subsurface flow modeling with analytic elements has the major advantage that no grid or time stepping is needed. Analytic element formulations exist for steady state and transient flow in layered aquifers and unsaturated flow in the vadose zone. Analytic element models are vector-based and consist of points, lines and curves that represent specific features in the subsurface. Recent advances allow for the simulation of partially penetrating wells and multi-aquifer wells, including skin effect and wellbore storage, horizontal wells of poly-line shape including skin effect, sharp changes in subsurface properties, and surface water features with leaky beds. Input files for analytic element models are simple, short and readable, and can easily be generated from, for example, GIS databases. Future plans include the incorporation of analytic elements in parts of grid-based models where additional detail is needed. This presentation will give an overview of advanced flow features that can be modeled, many of which are implemented in free and open-source software.
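    The gridless superposition idea can be shown with the simplest analytic element, a steady Thiem well in a confined aquifer: the head is a closed-form expression that can be evaluated exactly at any point, with no mesh. This is a minimal sketch with illustrative parameter values; production analytic element codes add the line and curve elements the abstract mentions.

```python
import math

# Gridless flavor of the analytic element method: total head is a superposition
# of closed-form element solutions. This toy uses only steady Thiem wells.

def well_head(x, y, xw, yw, Q, T, R):
    """Head contribution of one well (Thiem solution).

    Q: pumping rate [m^3/d] (negative = injection), T: transmissivity [m^2/d],
    R: radius of influence [m].
    """
    r = math.hypot(x - xw, y - yw)
    r = max(r, 1e-6)                 # avoid the singularity at the well bore
    return -Q / (2.0 * math.pi * T) * math.log(R / r)

def head(x, y, wells, h0=10.0, T=100.0, R=1000.0):
    """Total head = background head + sum of element contributions."""
    return h0 + sum(well_head(x, y, xw, yw, Q, T, R) for (xw, yw, Q) in wells)

if __name__ == "__main__":
    wells = [(0.0, 0.0, 500.0), (200.0, 0.0, -250.0)]  # pumping + injection well
    print(head(100.0, 50.0, wells))   # exact head at an arbitrary point, no grid
```

    Because the solution is a function rather than a grid, detail near a well costs nothing extra, which is the motivation for embedding analytic elements inside grid-based models.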

  9. Optical eye simulator for laser dazzle events.

    PubMed

    Coelho, João M P; Freitas, José; Williamson, Craig A

    2016-03-20

    An optical simulator of the human eye and its application to laser dazzle events are presented. The simulator combines optical design software (ZEMAX) with a scientific programming language (MATLAB) and allows the user to implement and analyze a dazzle scenario using practical, real-world parameters. Contrary to conventional analytical glare analysis, this work uses ray tracing and the scattering model and parameters for each optical element of the eye. The theoretical background of each such element is presented in relation to the model. The overall simulator's calibration, validation, and performance analysis are achieved by comparison with a simpler model based upon CIE disability glare data. Results demonstrate that this kind of advanced optical eye simulation can be used to represent laser dazzle and has the potential to extend the range of applicability of analytical models.

  10. Development of Advanced Life Prediction Tools for Elastic-Plastic Fatigue Crack Growth

    NASA Technical Reports Server (NTRS)

    Gregg, Wayne; McGill, Preston; Swanson, Greg; Wells, Doug; Throckmorton, D. A. (Technical Monitor)

    2001-01-01

    The objective of this viewgraph presentation is to develop a systematic approach to improving the fracture control process, including analytical tools, standards, guidelines, and awareness. Elastic-plastic fracture analysis is a regime that is currently handled empirically for the Space Shuttle External Tank (ET), by simulated service testing of pre-cracked panels.

  11. Collaboration and Synergy among Government, Industry and Academia in M&S Domain: Turkey’s Approach

    DTIC Science & Technology

    2009-10-01

    Topics include analysis, decision support system design and implementation, simulation output analysis, statistical data analysis, virtual reality, and artificial intelligence; virtual and constructive visual simulation systems as well as integrated advanced analytical models; simulation systems that are ready to use, credible, and integrated with C4ISR systems; and the creation of synthetic environments and/or virtual prototypes of concepts.

  12. Rotor systems research aircraft simulation mathematical model

    NASA Technical Reports Server (NTRS)

    Houck, J. A.; Moore, F. L.; Howlett, J. J.; Pollock, K. S.; Browne, M. M.

    1977-01-01

    An analytical model developed for evaluating and verifying advanced rotor concepts is discussed. The model was used in both open-loop and real-time man-in-the-loop simulation during the design of the rotor systems research aircraft. Future applications include pilot training, preflight of test programs, and the evaluation of promising concepts before their implementation on the flight vehicle.

  13. Technical Basis for Physical Fidelity of NRC Control Room Training Simulators for Advanced Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minsk, Brian S.; Branch, Kristi M.; Bates, Edward K.

    2009-10-09

    The objective of this study is to determine how simulator physical fidelity influences the effectiveness of training the regulatory personnel responsible for examination and oversight of operating personnel and inspection of technical systems at nuclear power reactors. It seeks to contribute to the U.S. Nuclear Regulatory Commission’s (NRC’s) understanding of the physical fidelity requirements of training simulators. The goal of the study is to provide an analytic framework, data, and analyses that inform NRC decisions about the physical fidelity requirements of the simulators it will need to train its staff for assignment at advanced reactors. These staff are expected to come from increasingly diverse educational and experiential backgrounds.

  14. Real-time simulation of an automotive gas turbine using the hybrid computer

    NASA Technical Reports Server (NTRS)

    Costakis, W.; Merrill, W. C.

    1984-01-01

    A hybrid computer simulation of an Advanced Automotive Gas Turbine Powertrain System is reported. The system consists of a gas turbine engine, an automotive drivetrain with four speed automatic transmission, and a control system. Generally, dynamic performance is simulated on the analog portion of the hybrid computer, while most of the steady-state performance characteristics are calculated on the digital portion. The simulation runs faster than real time, which makes it a useful tool for a variety of analytical studies.

  15. Experimental and analytical studies of advanced air cushion landing systems

    NASA Technical Reports Server (NTRS)

    Lee, E. G. S.; Boghani, A. B.; Captain, K. M.; Rutishauser, H. J.; Farley, H. L.; Fish, R. B.; Jeffcoat, R. L.

    1981-01-01

    Several concepts are developed for air cushion landing systems (ACLS) which have the potential for improving performance characteristics (roll stiffness, heave damping, and trunk flutter), and reducing fabrication cost and complexity. After an initial screening, the following five concepts were evaluated in detail: damped trunk, filled trunk, compartmented trunk, segmented trunk, and roll feedback control. The evaluation was based on tests performed on scale models. An ACLS dynamic simulation developed earlier is updated so that it can be used to predict the performance of full-scale ACLS incorporating these refinements. The simulation was validated through scale-model tests. A full-scale ACLS based on the segmented trunk concept was fabricated and installed on the NASA ACLS test vehicle, where it is used to support advanced system development. A geometrically-scaled model (one third full scale) of the NASA test vehicle was fabricated and tested. This model, evaluated by means of a series of static and dynamic tests, is used to investigate scaling relationships between reduced and full-scale models. The analytical model developed earlier is applied to simulate both the one third scale and the full scale response.

  16. Simulation Study of Nano Aqueous Flow Sensor Based on Amperometric Measurement

    PubMed Central

    Wu, Jian; Zhou, Qingli; Liu, Jun; Lou, Zhengguo

    2006-01-01

    In this paper, a novel nano aqueous flow sensor which consists of two closely spaced amperometric sensors is investigated by digital simulation. The simulation results indicate that the ratio of the responses of the two closely spaced amperometric sensors is related only to the flow rate in the channel and is insensitive to the analyte concentration in the solution. By comparing the output of the two amperometric sensors, the flow rate in the channel can be deduced. It is not necessary to determine the analyte concentration in advance. The simulation results show that the sensor is able to detect flow rates in the range of several nanoliters per minute when the distance between the working electrodes of the two amperometric sensors is 200 nm and the cross-section of the channel is 1 μm × 1 μm.
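    The measurement principle, that the response ratio depends on flow rate but not on concentration, amounts to inverting a monotone calibration curve. A sketch of that inversion, with an assumed stand-in transfer model rather than the paper's diffusion simulation:

```python
# Toy version of the two-electrode idea: the downstream electrode sees a stream
# partially depleted by the upstream one, so the current ratio depends on flow
# rate but cancels the (unknown) concentration. The transfer model below is an
# assumed monotone stand-in, not the paper's mass-transport model.

def response_ratio(flow_nl_min, k=5.0):
    """Downstream/upstream current ratio; the concentration cancels out."""
    return flow_nl_min / (flow_nl_min + k)

def infer_flow(ratio, lo=1e-6, hi=1e3, tol=1e-9):
    """Invert the calibration curve by bisection (it is monotone in flow)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if response_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    true_flow = 12.5                       # nL/min
    measured = response_ratio(true_flow)   # what the sensor pair reports
    print(infer_flow(measured))            # recovers the flow, concentration-free
```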

  17. Use of the MATRIXx Integrated Toolkit on the Microwave Anisotropy Probe Attitude Control System

    NASA Technical Reports Server (NTRS)

    Ward, David K.; Andrews, Stephen F.; McComas, David C.; ODonnell, James R., Jr.

    1999-01-01

    Recent advances in analytical software tools allow the analysis, simulation, flight code, and documentation of an algorithm to be generated from a single source, all within one integrated analytical design package. NASA's Microwave Anisotropy Probe project has used one such package, Integrated Systems' MATRIXx suite, in the design of the spacecraft's Attitude Control System. The project's experience with the linear analysis, simulation, code generation, and documentation tools will be presented and compared with more traditional development tools. In particular, the quality of the flight software generated will be examined in detail. Finally, lessons learned on each of the tools will be shared.

  18. Analyses of the dynamic docking test system for advanced mission docking system test programs. [Apollo Soyuz Test Project

    NASA Technical Reports Server (NTRS)

    Gates, R. M.; Williams, J. E.

    1974-01-01

    Results are given of analytical studies performed in support of the design, implementation, checkout and use of NASA's dynamic docking test system (DDTS). Included are analyses of simulator components, a list of detailed operational test procedures, a summary of simulator performance, and an analysis and comparison of docking dynamics and loads obtained by test and analysis.

  19. Advanced simulation study on bunch gap transient effect

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tetsuya; Akai, Kazunori

    2016-06-01

    Bunch phase shift along the train due to a bunch gap transient is a concern in high-current colliders. In KEKB operation, the measured phase shift along the train agreed well with a simulation and a simple analytical form in most part of the train. However, a rapid phase change was observed at the leading part of the train, which was not predicted by the simulation or by the analytical form. In order to understand the cause of this observation, we have developed an advanced simulation, which treats the transient loading in each of the cavities of the three-cavity system of the accelerator resonantly coupled with energy storage (ARES), operating in the accelerating mode, instead of the equivalent single cavities used in the previous simulation. In this paper, we show that the new simulation reproduces the observation, and clarify that the rapid phase change at the leading part of the train is caused by a transient loading in the three-cavity system of ARES. KEKB is being upgraded to SuperKEKB, which is aiming at 40 times higher luminosity than KEKB. The gap transient in SuperKEKB is investigated using the new simulation, and the result shows that the rapid phase change at the leading part of the train is much larger due to higher beam currents. We will also present measures to mitigate possible luminosity reduction or beam performance deterioration due to the rapid phase change caused by the gap transient.
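    The basic gap-transient mechanism can be reproduced at the single-equivalent-cavity level (the approximation the new simulation improves upon) with a simple phasor recursion: between bucket passages the cavity voltage relaxes toward the generator set point, and each filled bucket takes a beam-loading kick, so the RF phase drifts along the train and recovers during the gap. All numbers and the kick phase are illustrative assumptions.

```python
import cmath

# Stripped-down phasor model of a bunch-gap transient for one equivalent cavity.
# Not the three-cavity ARES model; purely an illustration of the mechanism.

def phase_along_train(n_buckets=400, fill=320, kick=1e-4, decay=0.999,
                      v_set=1.0 + 0.0j, turns=200):
    """Return the RF phase [rad] seen by each filled bucket on a settled turn."""
    v = v_set
    for _ in range(turns):                    # let the transient settle
        for b in range(n_buckets):
            v = v_set + (v - v_set) * decay   # relax toward the set point
            if b < fill:                      # filled bucket: beam-loading kick
                v -= kick * cmath.exp(1j * 0.4)
    phases = []
    for b in range(n_buckets):                # record one settled turn
        v = v_set + (v - v_set) * decay
        if b < fill:
            v -= kick * cmath.exp(1j * 0.4)
            phases.append(cmath.phase(v))
    return phases

if __name__ == "__main__":
    p = phase_along_train()
    print("phase spread along train [rad]:", max(p) - min(p))
```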

  20. Analytical and simulator study of advanced transport

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Rickard, W. W.

    1982-01-01

    An analytic methodology, based on the optimal-control pilot model, was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft in final approach. Calibration of the methodology is largely in terms of closed-loop performance requirements, rather than specific vehicle response characteristics, and is based on a combination of published criteria, pilot preferences, physical limitations, and engineering judgment. Six longitudinal-axis approach configurations were studied covering a range of handling qualities problems, including the presence of flexible aircraft modes. The analytical procedure was used to obtain predictions of Cooper-Harper ratings, a scalar quadratic performance index, and rms excursions of important system variables.
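    The scalar quadratic performance index at the heart of the optimal-control pilot model scores a closed-loop trajectory by weighting state excursions against control effort. A minimal discrete-time sketch with an invented scalar plant (the real pilot model also includes time delay and observation noise):

```python
# J = mean of q*x^2 + r*u^2 along a closed-loop trajectory: too little control
# effort leaves large excursions, too much is penalized directly.

def quadratic_index(xs, us, q=1.0, r=0.1):
    n = len(us)
    return sum(q * x * x + r * u * u for x, u in zip(xs, us)) / n

def simulate(gain, x0=1.0, a=0.95, b=0.1, steps=200):
    """Scalar plant x+ = a x + b u under proportional 'pilot' control u = -k x."""
    xs, us = [], []
    x = x0
    for _ in range(steps):
        u = -gain * x
        xs.append(x)
        us.append(u)
        x = a * x + b * u
    return xs, us

if __name__ == "__main__":
    for k in (0.5, 2.0, 8.0):
        xs, us = simulate(k)
        print(f"gain={k:4.1f}  J={quadratic_index(xs, us):.4f}")
```

    The index is minimized at an intermediate gain, which is the trade-off the pilot model formalizes.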

  1. Concept for a Satellite-Based Advanced Air Traffic Management System : Volume 9. System and Subsystem Performance Models.

    DOT National Transportation Integrated Search

    1973-02-01

    The volume presents the models used to analyze basic features of the system, establish feasibility of techniques, and evaluate system performance. The models use analytical expressions and computer simulations to represent the relationship between sy...

  2. Formulation of the linear model from the nonlinear simulation for the F18 HARV

    NASA Technical Reports Server (NTRS)

    Hall, Charles E., Jr.

    1991-01-01

    The F-18 HARV is a modified F-18 aircraft capable of flying in the post-stall regime in order to achieve superagility. The onset of aerodynamic stall, and continuation into the post-stall region, is characterized by nonlinearities in the aerodynamic coefficients. These aerodynamic coefficients are not expressed as analytic functions, but rather in the form of tabular data. The nonlinearities in the aerodynamic coefficients yield a nonlinear model of the aircraft's dynamics. Nonlinear system theory has made many advances, but this area is not sufficiently developed to allow its application to this problem, since many of the theorems are existence theorems or assume that the systems are composed of analytic functions. Thus, the feedback matrices and the state estimators are obtained from linear system theory techniques. It is important, in order to obtain the correct feedback matrices and state estimators, that the linear description of the nonlinear flight dynamics be as accurate as possible. A nonlinear simulation is run under the Advanced Continuous Simulation Language (ACSL). The ACSL simulation uses FORTRAN subroutines to interface to the look-up tables for the aerodynamic data. ACSL has commands to form the linear representation for the system. Other aspects of this investigation are discussed.
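    The linearization step that ACSL performs can be mimicked by central-differencing the nonlinear state-derivative function about an operating point to extract A = ∂f/∂x and B = ∂f/∂u. The toy dynamics below are a stand-in, not the F-18 HARV aerodynamic tables:

```python
# Numerical linearization of xdot = f(x, u) about a trim point (x0, u0),
# the same idea behind forming a linear model from a nonlinear simulation.

def f(x, u):
    """Toy nonlinear longitudinal dynamics (illustrative, not F-18 data)."""
    alpha, q = x
    return [q - 0.5 * alpha**3 + 0.1 * u,   # nonlinear 'aerodynamic' term
            -2.0 * alpha - 0.3 * q + 1.5 * u]

def linearize(f, x0, u0, eps=1e-6):
    """Return A (df/dx) and B (df/du) by central differences."""
    n = len(x0)
    A = [[0.0] * n for _ in range(n)]
    B = [0.0] * n
    for j in range(n):
        xp, xm = list(x0), list(x0)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(xp, u0), f(xm, u0)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2 * eps)
    fp, fm = f(x0, u0 + eps), f(x0, u0 - eps)
    for i in range(n):
        B[i] = (fp[i] - fm[i]) / (2 * eps)
    return A, B

if __name__ == "__main__":
    A, B = linearize(f, x0=[0.2, 0.0], u0=0.0)
    print("A =", A)
    print("B =", B)
```

    With tabular aerodynamic data the same differencing works unchanged, since only evaluations of f are needed, not its analytic form.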

  3. Autonomous Energy Grids | Grid Modernization | NREL

    Science.gov Websites

    Autonomous energy grids control themselves using advanced machine learning and simulation to create resilient, reliable, and affordable optimized energy systems. The work applies optimization theory, control theory, big data analytics, and complex system theory and modeling to current frameworks for monitoring, controlling, and optimizing large-scale energy systems.

  4. Aberration measurement technique based on an analytical linear model of a through-focus aerial image.

    PubMed

    Yan, Guanyong; Wang, Xiangzhao; Li, Sikun; Yang, Jishuo; Xu, Dongbo; Erdmann, Andreas

    2014-03-10

    We propose an in situ aberration measurement technique based on an analytical linear model of through-focus aerial images. The aberrations are retrieved from aerial images of six isolated space patterns, which have the same width but different orientations. The imaging formulas of the space patterns are investigated and simplified, and then an analytical linear relationship between the aerial image intensity distributions and the Zernike coefficients is established. The linear relationship is composed of linear fitting matrices and rotation matrices, which can be calculated numerically in advance and utilized to retrieve Zernike coefficients. Numerical simulations using the lithography simulators PROLITH and Dr.LiTHO demonstrate that the proposed method can measure wavefront aberrations up to Z(37). Experiments on a real lithography tool confirm that our method can monitor lens aberration offset with an accuracy of 0.7 nm.
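    Once the linear model is established, the retrieval is an ordinary linear inverse problem: stack the intensity samples as I = I0 + S z and solve for the Zernike coefficient vector z by least squares. The sensitivity matrix below is made up for illustration, not one computed from the paper's imaging formulas:

```python
# Linear aberration retrieval: given intensity differences d = I - I0 and a
# sensitivity matrix S, solve S z ~= d for the Zernike coefficients z.

def solve_least_squares(S, d):
    """Solve (S^T S) z = S^T d for small systems via Gaussian elimination."""
    n = len(S[0])
    # form the normal equations
    M = [[sum(S[k][i] * S[k][j] for k in range(len(S))) for j in range(n)]
         for i in range(n)]
    b = [sum(S[k][i] * d[k] for k in range(len(S))) for i in range(n)]
    # forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= fac * M[col][c]
            b[r] -= fac * b[col]
    # back substitution
    z = [0.0] * n
    for i in reversed(range(n)):
        z[i] = (b[i] - sum(M[i][j] * z[j] for j in range(i + 1, n))) / M[i][i]
    return z

if __name__ == "__main__":
    S = [[1.0, 0.2], [0.3, 1.1], [0.5, 0.7], [0.9, 0.4]]  # invented sensitivities
    z_true = [0.05, -0.02]                                 # Zernike coefficients
    d = [sum(S[k][j] * z_true[j] for j in range(2)) for k in range(4)]
    print(solve_least_squares(S, d))                       # recovers z_true
```

    Because S and its rotation factors can be precomputed, the per-measurement cost is just this small solve, which is what makes the technique usable in situ.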

  5. Hybrid test on building structures using electrodynamic fatigue test machine

    NASA Astrophysics Data System (ADS)

    Xu, Zhao-Dong; Wang, Kai-Yang; Guo, Ying-Qing; Wu, Min-Dong; Xu, Meng

    2017-01-01

    Hybrid simulation is an advanced structural dynamic experimental method that combines experimental physical models with analytical numerical models. It has increasingly been recognised as a powerful methodology to evaluate structural nonlinear components and systems under realistic operating conditions. One of the barriers for this advanced testing is the lack of flexible software for hybrid simulation using heterogeneous experimental equipment. In this study, an electrodynamic fatigue test machine is made and a MATLAB program is developed for hybrid simulation. Compared with the servo-hydraulic system, the electrodynamic fatigue test machine has the advantages of small volume, easy operation and fast response. A hybrid simulation is conducted to verify the flexibility and capability of the whole system, in which the experimental substructure is a spring brace and the numerical substructure is a two-storey steel frame structure. Experimental and numerical results show the feasibility and applicability of the whole system.
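    The hybrid loop itself is compact: at each time step the numerical substructure sends a displacement command to the physical substructure and receives a measured restoring force back into the integration. In the sketch below the test machine is replaced by an ideal spring function and the frame is reduced to one degree of freedom; all parameters are illustrative.

```python
# Skeleton of a hybrid-simulation loop: central-difference time stepping of the
# numerical substructure, with the physical substructure's restoring force
# injected each step. 'measured_force' stands in for the test machine.

def measured_force(displacement, k_brace=2.0e5):
    """Stand-in for the physical spring brace (a real test would measure this)."""
    return k_brace * displacement

def run_hybrid(steps=2000, dt=0.001, m=1.0e3, c=500.0, k_frame=1.0e5):
    """Integrate m x'' + c x' + k_frame x + f_brace(x) = p(t)."""
    x_prev, x = 0.0, 0.0
    history = []
    for n in range(steps):
        p = 1.0e3 if n * dt < 0.05 else 0.0        # short force pulse
        f_r = k_frame * x + measured_force(x)       # numerical + physical force
        # central-difference update
        x_next = (p - f_r + (2 * m / dt**2) * x
                  - (m / dt**2 - c / (2 * dt)) * x_prev) / (m / dt**2 + c / (2 * dt))
        x_prev, x = x, x_next
        history.append(x)
    return history

if __name__ == "__main__":
    h = run_hybrid()
    print("peak displacement [m]:", max(abs(v) for v in h))
```

    Swapping `measured_force` for a driver that commands the actuator and reads the load cell turns this numerical loop into the actual hybrid test.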

  6. Decision making in trauma settings: simulation to improve diagnostic skills.

    PubMed

    Murray, David J; Freeman, Brad D; Boulet, John R; Woodhouse, Julie; Fehr, James J; Klingensmith, Mary E

    2015-06-01

    In the setting of acute injury, a wrong, missed, or delayed diagnosis can impact survival. Clinicians rely on pattern recognition and heuristics to rapidly assess injuries, but an overreliance on these approaches can result in a diagnostic error. Simulation has been advocated as a method for practitioners to learn how to recognize the limitations of heuristics and develop better diagnostic skills. The objective of this study was to determine whether simulation could be used to provide teams the experiences in managing scenarios that require the use of heuristic as well as analytic diagnostic skills to effectively recognize and treat potentially life-threatening injuries. Ten scenarios were developed to assess the ability of trauma teams to provide initial care to a severely injured patient. Seven standard scenarios simulated severe injuries that once diagnosed could be effectively treated using standard Advanced Trauma Life Support algorithms. Because diagnostic error occurs more commonly in complex clinical settings, 3 complex scenarios required teams to use more advanced diagnostic skills to uncover a coexisting condition and treat the patient. Teams composed of 3 to 5 practitioners were evaluated in the performance of 7 (of 10) randomly selected scenarios (5 standard, 2 complex). Expert raters scored teams using standardized checklists and global scores. Eighty-three surgery, emergency medicine, and anesthesia residents constituted 21 teams. Expert raters were able to reliably score the scenarios. Teams accomplished fewer checklist actions and received lower global scores on the 3 analytic scenarios (73.8% [12.3%] and 5.9 [1.6], respectively) compared with the 7 heuristic scenarios (83.2% [11.7%] and 6.6 [1.3], respectively; P < 0.05 for both). Teams led by more junior residents received higher global scores on the analytic scenarios (6.4 [1.3]) than the more senior team leaders (5.3 [1.7]).
This preliminary study indicates that teams led by more senior residents received higher scores when managing heuristic scenarios but were less effective when managing the scenarios that require a more analytic approach. Simulation can be used to provide teams with decision-making experiences in trauma settings and could be used to improve diagnostic skills as well as study the decision-making process.

  7. Integrated control and display research for transition and vertical flight on the NASA V/STOL Research Aircraft (VSRA)

    NASA Technical Reports Server (NTRS)

    Foster, John D.; Moralez, Ernesto, III; Franklin, James A.; Schroeder, Jeffery A.

    1987-01-01

    Results of a substantial body of ground-based simulation experiments indicate that a high degree of precision of operation for recovery aboard small ships in heavy seas and low visibility with acceptable levels of effort by the pilot can be achieved by integrating the aircraft flight and propulsion controls. The availability of digital fly-by-wire controls makes it feasible to implement an integrated control design to achieve and demonstrate in flight the operational benefits promised by the simulation experience. It remains to validate these systems concepts in flight to establish their value for advanced short takeoff vertical landing (STOVL) aircraft designs. This paper summarizes analytical studies and simulation experiments which provide a basis for the flight research program that will develop and validate critical technologies for advanced STOVL aircraft through the development and evaluation of advanced, integrated control and display concepts, and lays out the plan for the flight program that will be conducted on NASA's V/STOL Research Aircraft (VSRA).

  8. Social Networks and Smoking: Exploring the Effects of Peer Influence and Smoker Popularity through Simulations

    ERIC Educational Resources Information Center

    Schaefer, David R.; adams, jimi; Haas, Steven A.

    2013-01-01

    Adolescent smoking and friendship networks are related in many ways that can amplify smoking prevalence. Understanding and developing interventions within such a complex system requires new analytic approaches. We draw on recent advances in dynamic network modeling to develop a technique that explores the implications of various intervention…

  9. Advanced Engineering Environments: Implications for Aerospace Manufacturing

    NASA Technical Reports Server (NTRS)

    Thomas, D.

    2001-01-01

    There are significant challenges facing today's aerospace industry. Global competition, more complex products, geographically-distributed design teams, demands for lower cost, higher reliability and safer vehicles, and the need to incorporate the latest technologies quicker all face the developer of aerospace systems. New information technologies offer promising opportunities to develop advanced engineering environments (AEEs) to meet these challenges. Significant advances in the state-of-the-art of aerospace engineering practice are envisioned in the areas of engineering design and analytical tools, cost and risk tools, collaborative engineering, and high-fidelity simulations early in the development cycle. These advances will enable modeling and simulation of manufacturing methods, which will in turn allow manufacturing considerations to be included much earlier in the system development cycle. Significant cost savings, increased quality, and decreased manufacturing cycle time are expected to result. This paper will give an overview of NASA's Intelligent Synthesis Environment, the agency initiative to develop an AEE, with a focus on the anticipated benefits in aerospace manufacturing.

  10. Converging technologies: a critical analysis of cognitive enhancement for public policy application.

    PubMed

    Makridis, Christos

    2013-09-01

    This paper investigates cognitive enhancement, specifically biological cognitive enhancement (BCE), as a converging technology, and its implications for public policy. With an increasing rate of technological advancements, the legal, social, and economic frameworks lag behind the scientific advancements that they support. This lag poses significant challenges for policymakers if it is not dealt with sufficiently within the right analytical context. Therefore, the driving question behind this paper is, "What contingencies inform the advancement of biological cognitive enhancement, and what would society look like under this set of assumptions?" The paper is divided into five components: (1) defining the current policy context for BCEs, (2) analyzing the current social and economic outcomes to BCEs, (3) investigating the context of cost-benefit arguments in relation to BCEs, (4) proposing an analytical model for evaluating contingencies for BCE development, and (5) evaluating a simulated policy, social, technological, and economic context given the contingencies. In order to manage the risk and uncertainty inherent in technological change, BCEs' drivers must be scrutinized and evaluated.

  11. HYTESS 2: A Hypothetical Turbofan Engine Simplified Simulation with multivariable control and sensor analytical redundancy

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1986-01-01

    A hypothetical turbofan engine simplified simulation with a multivariable control and sensor failure detection, isolation, and accommodation logic (HYTESS II) is presented. The digital program, written in FORTRAN, is self-contained, efficient, realistic, and easily used. Simulated engine dynamics were developed from linearized operating-point models; however, essential nonlinear effects are retained. The simulation is representative of a hypothetical, low-bypass-ratio turbofan engine with advanced control and failure detection logic. Included is a description of the engine dynamics, the control algorithm, and the sensor failure detection logic. Details of the simulation, including block diagrams, variable descriptions, common block definitions, subroutine descriptions, and input requirements, are given. Example simulation results are also presented.
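The linearized operating-point approach described above can be sketched in a few lines: perturbations about the operating point evolve as x' = A x + B u. The matrices, states, and input below are purely hypothetical placeholders, not the HYTESS II model.

```python
# Toy simulation of engine dynamics from a linearized operating-point
# model, x' = A x + B u. All numbers are illustrative assumptions.

def simulate(A, B, x0, u, dt, steps):
    """Forward-Euler integration of x' = A x + B u about an operating point."""
    x = list(x0)
    n = len(x0)
    history = [list(x)]
    for _ in range(steps):
        dx = [sum(A[i][j] * x[j] for j in range(n)) +
              sum(B[i][k] * u[k] for k in range(len(u))) for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
        history.append(list(x))
    return history

# Hypothetical 2-state model: deviations in fan and core spool speeds,
# driven by one input (fuel-flow deviation). A is stable, so the
# response settles to the steady state x_ss = -A^{-1} B u.
A = [[-2.0, 0.5],
     [0.3, -1.5]]
B = [[1.0],
     [0.8]]
traj = simulate(A, B, x0=[0.0, 0.0], u=[1.0], dt=0.01, steps=500)
```

A full nonlinear simulation like HYTESS II layers nonlinear corrections on top of such linear point models; this sketch shows only the linear core.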

  12. Scientific Discovery through Advanced Computing in Plasma Science

    NASA Astrophysics Data System (ADS)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). 
A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to the computational science area.
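The particle simulations described above push charged particles through electromagnetic fields; the standard kernel for this is the Boris rotation, which (with no electric field) preserves particle speed exactly. The parameters below are illustrative, not from any production gyrokinetic or PIC code.

```python
# Minimal Boris pusher for one charged particle in a uniform magnetic
# field with E = 0. Units and values are illustrative assumptions.

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def boris_step(v, B, qm, dt):
    """Advance velocity by one Boris rotation; with E = 0 this is a pure
    rotation and conserves |v| (i.e., kinetic energy) exactly."""
    t = [0.5 * qm * dt * b for b in B]          # rotation vector
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    cvt = cross(v, t)
    v_prime = [v[i] + cvt[i] for i in range(3)]
    cvs = cross(v_prime, s)
    return [v[i] + cvs[i] for i in range(3)]

v = [1.0, 0.0, 0.2]              # initial velocity (assumed)
B = [0.0, 0.0, 1.0]              # uniform field along z (assumed)
speed0 = sum(c * c for c in v) ** 0.5
for _ in range(10000):
    v = boris_step(v, B, qm=1.0, dt=0.05)
speed = sum(c * c for c in v) ** 0.5   # unchanged after 10,000 gyration steps
```

This energy-conserving property is one reason such schemes remain stable over the thousands of time-steps mentioned above.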

  13. Dynamic Impact Testing and Model Development in Support of NASA's Advanced Composites Program

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Pereira, J. Michael; Goldberg, Robert; Rassaian, Mostafa

    2018-01-01

    The purpose of this paper is to provide an executive overview of the HEDI effort for NASA's Advanced Composites Program and establish the foundation for the remaining papers to follow in the 2018 SciTech special session NASA ACC High Energy Dynamic Impact. The paper summarizes the work done for the Advanced Composites Program to advance our understanding of the behavior of composite materials during high energy impact events and to advance the ability of analytical tools to provide predictive simulations. The experimental program carried out at GRC is summarized and a status on the current development state for MAT213 will be provided. Future work will be discussed as the HEDI effort transitions from fundamental analysis and testing to investigating sub-component structural concept response to impact events.

  14. Advanced ETC/LSS computerized analytical models, CO2 concentration. Volume 1: Summary document

    NASA Technical Reports Server (NTRS)

    Taylor, B. N.; Loscutoff, A. V.

    1972-01-01

    Computer simulations have been prepared for the concepts of CO2 concentration which have the potential for maintaining a CO2 partial pressure of 3.0 mmHg, or less, in a spacecraft environment. The simulations were performed using the G-189A Generalized Environmental Control computer program. In preparing the simulations, new subroutines to model the principal functional components for each concept were prepared and integrated into the existing program. Sample problems were run to demonstrate the methods of simulation and performance characteristics of the individual concepts. Comparison runs for each concept can be made for parametric values of cabin pressure, crew size, cabin air dry and wet bulb temperatures, and mission duration.
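The core of such a simulation is a cabin mass balance driving the CO2 partial pressure toward a steady state below the 3.0 mmHg requirement. A minimal single-compartment sketch, with all sizing numbers assumed (not taken from the G-189A program):

```python
# Single-compartment cabin CO2 balance: V dP/dt = G - k P, where P is
# the CO2 partial pressure, G the crew generation rate, and k the
# removal coefficient of the concentrator. All values are assumptions.

def cabin_co2(P0, G, k, V, dt, steps):
    """Euler integration of the cabin CO2 partial-pressure balance."""
    P = P0
    for _ in range(steps):
        P += dt * (G - k * P) / V
    return P

# Hypothetical sizing: the removal rate is chosen so the steady state
# G/k = 2.5 mmHg sits below the 3.0 mmHg requirement.
G = 10.0      # generation rate, mmHg * m^3 / hr (assumed)
k = 4.0       # removal coefficient, m^3 / hr (assumed)
V = 50.0      # cabin free volume, m^3 (assumed)
P_final = cabin_co2(P0=0.0, G=G, k=k, V=V, dt=0.01, steps=100000)
```

Parametric comparison runs of the kind described above amount to sweeping G (crew size), V (cabin volume), and k (concept performance) through this balance.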

  15. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  16. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE PAGES

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...

    2017-01-01

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  17. Implementation and simulation of a cone dielectric elastomer actuator

    NASA Astrophysics Data System (ADS)

    Wang, Huaming; Zhu, Jianying

    2008-11-01

    The purpose is to investigate the performance of a cone dielectric elastomer actuator (DEA) by experiment and FEM simulation. Two working equilibrium positions of the cone DEA, corresponding to its initial displacement and its displacement output with voltage off and on respectively, are determined through analysis of its working principle. Experiments show that analytical results agree with experimental ones, and the work output over a work cycle is calculated accordingly. The actuator responds quickly when voltage is applied and returns to its original position rapidly when voltage is released. FEM simulation is also used to predict the movement of the cone DEA in advance. Simulation results agree well with experimental ones and demonstrate the feasibility of the simulation; causes of the small differences in displacement output are also analyzed.

  18. Comparing numerical and analytic approximate gravitational waveforms

    NASA Astrophysics Data System (ADS)

    Afshari, Nousha; Lovelace, Geoffrey; SXS Collaboration

    2016-03-01

    A direct observation of gravitational waves will test Einstein's theory of general relativity under the most extreme conditions. The Laser Interferometer Gravitational-Wave Observatory, or LIGO, began searching for gravitational waves in September 2015 with three times the sensitivity of initial LIGO. To help Advanced LIGO detect as many gravitational waves as possible, a major research effort is underway to accurately predict the expected waves. In this poster, I will explore how the gravitational waveform produced by a long binary-black-hole inspiral, merger, and ringdown is affected by how fast the larger black hole spins. In particular, I will present results from simulations of merging black holes, completed using the Spectral Einstein Code (black-holes.org/SpEC.html), including some new, long simulations designed to mimic black hole-neutron star mergers. I will present comparisons of the numerical waveforms with analytic approximations.
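The comparison described above is typically quantified with a normalized overlap (match) between two waveforms. A minimal sketch with a flat noise weighting and toy sinusoidal "waveforms" standing in for the numerical and analytic signals (not real NR data):

```python
# Normalized overlap <h1,h2> / sqrt(<h1,h1><h2,h2>): 1 means identical
# shape; an accumulated phase error between the waveforms lowers it.
import math

def overlap(h1, h2):
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = sum(a * a for a in h1)
    n2 = sum(b * b for b in h2)
    return dot / math.sqrt(n1 * n2)

t = [i * 0.01 for i in range(1000)]
numerical = [math.sin(10.0 * x) for x in t]            # stand-in "NR" waveform
dephased  = [math.sin(10.0 * x + 0.3) for x in t]      # analytic model with
                                                       # a 0.3 rad phase error
m = overlap(numerical, dephased)   # below 1, roughly cos(0.3)
```

Real comparisons weight the inner product by the detector noise spectrum and maximize over time and phase shifts; this sketch shows only the bare inner-product structure.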

  19. Genetic Simulation Tools for Post-Genome Wide Association Studies of Complex Diseases

    PubMed Central

    Amos, Christopher I.; Bafna, Vineet; Hauser, Elizabeth R.; Hernandez, Ryan D.; Li, Chun; Liberles, David A.; McAllister, Kimberly; Moore, Jason H.; Paltoo, Dina N.; Papanicolaou, George J.; Peng, Bo; Ritchie, Marylyn D.; Rosenfeld, Gabriel; Witte, John S.

    2014-01-01

    Genetic simulation programs are used to model data under specified assumptions to facilitate the understanding and study of complex genetic systems. Standardized data sets generated using genetic simulation are essential for the development and application of novel analytical tools in genetic epidemiology studies. With continuing advances in high-throughput genomic technologies and generation and analysis of larger, more complex data sets, there is a need for updating current approaches in genetic simulation modeling. To provide a forum to address current and emerging challenges in this area, the National Cancer Institute (NCI) sponsored a workshop, entitled “Genetic Simulation Tools for Post-Genome Wide Association Studies of Complex Diseases” at the National Institutes of Health (NIH) in Bethesda, Maryland on March 11-12, 2014. The goals of the workshop were to: (i) identify opportunities, challenges and resource needs for the development and application of genetic simulation models; (ii) improve the integration of tools for modeling and analysis of simulated data; and (iii) foster collaborations to facilitate development and applications of genetic simulation. During the course of the meeting the group identified challenges and opportunities for the science of simulation, software and methods development, and collaboration. This paper summarizes key discussions at the meeting, and highlights important challenges and opportunities to advance the field of genetic simulation. PMID:25371374

  20. X-ray optics simulation and beamline design for the APS upgrade

    NASA Astrophysics Data System (ADS)

    Shi, Xianbo; Reininger, Ruben; Harder, Ross; Haeffner, Dean

    2017-08-01

    The upgrade of the Advanced Photon Source (APS) to a Multi-Bend Achromat (MBA) will increase the brightness of the APS by between two and three orders of magnitude. The APS upgrade (APS-U) project includes a list of feature beamlines that will take full advantage of the new machine. Many of the existing beamlines will be also upgraded to profit from this significant machine enhancement. Optics simulations are essential in the design and optimization of these new and existing beamlines. In this contribution, the simulation tools used and developed at APS, ranging from analytical to numerical methods, are summarized. Three general optical layouts are compared in terms of their coherence control and focusing capabilities. The concept of zoom optics, where two sets of focusing elements (e.g., CRLs and KB mirrors) are used to provide variable beam sizes at a fixed focal plane, is optimized analytically. The effects of figure errors on the vertical spot size and on the local coherence along the vertical direction of the optimized design are investigated.
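The zoom-optics concept above can be illustrated with thin-lens algebra: with two lenses at fixed positions, the focal lengths are chosen so the final image always lands on a fixed plane while the overall magnification (and hence spot size) varies. The geometry and numbers below are illustrative, not the APS-U design.

```python
# Two thin lenses at fixed positions d1 and d2; solve for the second
# focal length so the image of a source at z = 0 lands on a fixed plane
# at z = d3, for any choice of the first focal length. All distances
# are arbitrary illustrative units.

def zoom(f1, d1=30.0, d2=50.0, d3=70.0):
    """Return (f2, |magnification|) that keeps the image plane at d3."""
    i1 = 1.0 / (1.0 / f1 - 1.0 / d1)   # image distance after lens 1
    o2 = (d2 - d1) - i1                # object distance for lens 2
                                       # (negative: virtual object)
    i2 = d3 - d2                       # required image distance: fixed plane
    f2 = 1.0 / (1.0 / o2 + 1.0 / i2)   # thin-lens equation for lens 2
    M = (i1 / d1) * (i2 / o2)          # product of the two magnifications
    return f2, abs(M)

f2_a, M_a = zoom(f1=10.0)   # one focusing configuration
f2_b, M_b = zoom(f1=14.0)   # different f1: different spot size, same
                            # image plane (f2_b < 0: diverging element)
```

In practice the "lenses" are CRL stacks or KB mirrors with discrete or limited tuning, and coherence and figure-error effects modify this ideal picture, but the fixed-plane constraint works the same way.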

  1. NIMROD: A computational laboratory for studying nonlinear fusion magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Sovinec, C. R.; Gianakon, T. A.; Held, E. D.; Kruger, S. E.; Schnack, D. D.

    2003-05-01

    Nonlinear numerical studies of macroscopic modes in a variety of magnetic fusion experiments are made possible by the flexible high-order accurate spatial representation and semi-implicit time advance in the NIMROD simulation code [A. H. Glasser et al., Plasma Phys. Controlled Fusion 41, A747 (1999)]. Simulation of a resistive magnetohydrodynamics mode in a shaped toroidal tokamak equilibrium demonstrates computation with disparate time scales, simulations of discharge 87009 in the DIII-D tokamak [J. L. Luxon et al., Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. I, p. 159] confirm an analytic scaling for the temporal evolution of an ideal mode subject to plasma-β increasing beyond marginality, and a spherical torus simulation demonstrates nonlinear free-boundary capabilities. A comparison of numerical results on magnetic relaxation finds the n=1 mode and flux amplification in spheromaks to be very closely related to the m=1 dynamo modes and magnetic reversal in reversed-field pinch configurations. Advances in local and nonlocal closure relations developed for modeling kinetic effects in fluid simulation are also described.

  2. Gauge Conditions for Moving Black Holes Without Excision

    NASA Technical Reports Server (NTRS)

    van Meter, James; Baker, John G.; Koppitz, Michael; Dae-IL, Choi

    2006-01-01

    Recent demonstrations of unexcised, puncture black holes traversing freely across computational grids represent a significant advance in numerical relativity. Stable and accurate simulations of multiple orbits, and their radiated waves, result. This capability is critically undergirded by a careful choice of gauge. Here we present analytic considerations which suggest certain gauge choices, and numerically demonstrate their efficacy in evolving a single moving puncture.

  3. Optimal attitude maneuver execution for the Advanced Composition Explorer (ACE) mission

    NASA Technical Reports Server (NTRS)

    Woodard, Mark A.; Baker, David

    1995-01-01

    The Advanced Composition Explorer (ACE) spacecraft will require frequent attitude reorientations in order to maintain the spacecraft high gain antenna (HGA) within 3 deg of earth-pointing. These attitude maneuvers will be accomplished by employing a series of ground-commanded thruster pulses, computed by ground operations personnel, to achieve the desired change in the spacecraft angular momentum vector. With each maneuver, attitude nutation will be excited. Large nutation angles are undesirable from a science standpoint. It is important that the thruster firings be phased properly in order to minimize the nutation angle at the end of the maneuver so that science collection time is maximized. The analysis presented derives a simple approximation for the nutation contribution resulting from a series of short thruster burns. Analytic equations are derived which give the induced nutation angle as a function of the number of small thruster burns used to execute the attitude maneuver and the phasing of the burns. The results show that by properly subdividing the attitude burns, the induced nutation can be kept low. The analytic equations are also verified through attitude dynamics simulation and simulation results are presented. Finally, techniques for quantifying the post-maneuver nutation are discussed.
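The phasing idea above has a simple phasor picture: each short transverse pulse adds a kick whose direction, viewed in the nutation frame, rotates at the nutation frequency, so the residual nutation scales with the magnitude of the phasor sum of the pulses. The toy model below assumes equal short pulses and a made-up nutation period; it is not the ACE flight equations.

```python
# Residual nutation as the phasor sum of thruster pulses in the
# rotating nutation frame. All parameters are illustrative assumptions.
import cmath, math

def residual_nutation(pulse_times, omega_nut, kick_per_pulse=1.0):
    """|vector sum| of per-pulse kicks, each rotated by the nutation phase."""
    total = sum(kick_per_pulse * cmath.exp(1j * omega_nut * t)
                for t in pulse_times)
    return abs(total)

omega = 2.0 * math.pi / 100.0    # assumed nutation period: 100 s
single = residual_nutation([0.0], omega)               # one full burn
paired = residual_nutation([0.0, 50.0], omega, 0.5)    # split into halves
                                                       # half a period apart
# paired is ~0: the second half-burn cancels the nutation from the first
```

This is the mechanism behind the paper's result that properly subdividing the attitude burns keeps the induced nutation low.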

  4. Modified symplectic schemes with nearly-analytic discrete operators for acoustic wave simulations

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Yang, Dinghui; Lang, Chao; Wang, Wenshuai; Pan, Zhide

    2017-04-01

    Using a structure-preserving algorithm significantly increases the computational efficiency of solving wave equations. However, only a few explicit symplectic schemes are available in the literature, and the capabilities of these symplectic schemes have not been sufficiently exploited. Here, we propose a modified strategy to construct explicit symplectic schemes for time advance. The acoustic wave equation is transformed into a Hamiltonian system. The classical symplectic partitioned Runge-Kutta (PRK) method is used for the temporal discretization. Additional spatial differential terms are added to the PRK schemes to form the modified symplectic methods, and two modified time-advancing symplectic methods, both with all-positive symplectic coefficients, are constructed. The spatial differential operators are approximated by nearly analytic discrete (NAD) operators, and we call the fully discretized scheme the modified symplectic nearly analytic discrete (MSNAD) method. Theoretical analyses show that the MSNAD methods exhibit less numerical dispersion and higher stability limits than conventional methods. Three numerical experiments are conducted to verify the advantages of the MSNAD methods, such as their numerical accuracy, computational cost, stability, and long-term calculation capability.
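The structure-preservation claim can be illustrated on the simplest Hamiltonian system, a unit harmonic oscillator (the 1-D analogue of the wave equation's Hamiltonian form): a symplectic step keeps the energy bounded over long runs where a non-symplectic one drifts. This is a generic Störmer-Verlet toy, not the MSNAD scheme itself.

```python
# Symplectic (Stormer-Verlet) vs. forward Euler on H = (p^2 + q^2)/2.
# Verlet's energy stays near the true value over 100,000 steps;
# Euler's grows by a factor of (1 + dt^2) per step.

def verlet(q, p, dt, steps):
    for _ in range(steps):
        p -= 0.5 * dt * q      # half kick (force = -q)
        q += dt * p            # drift
        p -= 0.5 * dt * q      # half kick
    return q, p

def euler(q, p, dt, steps):
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

qv, pv = verlet(1.0, 0.0, 0.01, 100000)
qe, pe = euler(1.0, 0.0, 0.01, 100000)
E_verlet = 0.5 * (qv * qv + pv * pv)   # bounded near the true energy 0.5
E_euler  = 0.5 * (qe * qe + pe * pe)   # grows without bound
```

The long-term calculation capability claimed for the MSNAD methods rests on exactly this bounded-energy behavior of symplectic time advance.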

  5. Development of an advanced system identification technique for comparing ADAMS analytical results with modal test data for a MICON 65/13 wind turbine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bialasiewicz, J.T.

    1995-07-01

    This work uses the theory developed in NREL/TP--442-7110 to analyze simulated data from an ADAMS (Automated Dynamic Analysis of Mechanical Systems) model of the MICON 65/13 wind turbine. The Observer/Kalman Filter identification approach is expanded to use input-output time histories from ADAMS simulations or structural test data. A step-by-step outline shows how the tools developed in this research can be used to validate the ADAMS model.

  6. Simulation and Modeling of Positrons and Electrons in advanced Time-of-Flight Positron Annihilation Induced Auger Electron Spectroscopy Systems

    NASA Astrophysics Data System (ADS)

    Joglekar, Prasad; Shastry, Karthik; Satyal, Suman; Weiss, Alexander

    2011-10-01

    Time-of-Flight Positron Annihilation Induced Auger Electron Spectroscopy (T-O-F PAES) is a highly surface-selective analytical technique in which elemental identification is accomplished through measurement of the flight-time distributions of Auger electrons resulting from the annihilation of core electrons by positrons. The SIMION charged-particle optics simulation software was used to model the trajectories of both the incident positrons and the outgoing electrons in our existing T-O-F PAES system, as well as in a new system currently under construction in our laboratory. The implications of these simulations for instrument design and performance are discussed.
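The time-of-flight principle behind the elemental identification is simple kinematics: a nonrelativistic electron of kinetic energy E covers a field-free drift length L in t = L / sqrt(2E/m), so Auger lines at different energies arrive at different times. The energies and drift length below are illustrative assumptions, not parameters of the instrument described above.

```python
# Flight time of a nonrelativistic electron over a field-free drift
# length; lower-energy Auger electrons arrive later, which is how the
# time-of-flight spectrum separates elements.
import math

M_E = 9.109e-31    # electron rest mass, kg
EV  = 1.602e-19    # joules per electron-volt

def flight_time(energy_ev, length_m):
    v = math.sqrt(2.0 * energy_ev * EV / M_E)   # speed from kinetic energy
    return length_m / v

# Two hypothetical Auger line energies over an assumed 0.5 m drift tube:
t_low  = flight_time(90.0,  0.5)    # slower, arrives later
t_high = flight_time(500.0, 0.5)    # faster, arrives earlier
```

Since t scales as 1/sqrt(E), the two arrival times differ by the factor sqrt(500/90), which is the separation a T-O-F analyzer exploits.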

  7. An Advanced Buffet Load Alleviation System

    NASA Technical Reports Server (NTRS)

    Burnham, Jay K.; Pitt, Dale M.; White, Edward V.; Henderson, Douglas A.; Moses, Robert W.

    2001-01-01

    This paper describes the development of an advanced buffet load alleviation (BLA) system that utilizes distributed piezoelectric actuators in conjunction with an active rudder to reduce the structural dynamic response of the F/A-18 aircraft vertical tails to buffet loads. The BLA system was defined analytically with a detailed finite-element-model of the tail structure and piezoelectric actuators. Oscillatory aerodynamics were included along with a buffet forcing function to complete the aeroservoelastic model of the tail with rudder control surface. Two single-input-single-output (SISO) controllers were designed, one for the active rudder and one for the active piezoelectric actuators. The results from the analytical open and closed loop simulations were used to predict the system performance. The objective of this BLA system is to extend the life of vertical tail structures and decrease their life-cycle costs. This system can be applied to other aircraft designs to address suppression of structural vibrations on military and commercial aircraft.

  8. Modeling the liquid filling in capillary well microplates for analyte preconcentration.

    PubMed

    Yu, Yang; Wang, Xuewei; Ng, Tuck Wah

    2012-06-15

    An attractive advantage of the capillary well microplate approach is the ability to conduct evaporative analyte preconcentration. We advance the use of hydrophobic materials for the wells, which apart from reducing material loss through wetting also affords self-entry into the well when the droplet size falls below a critical value. Using Surface Evolver simulations without gravity, we find critical diameters D(c) that fit theoretical results very well. When simulating the critical diameters D(c)(G) with gravity included, the gravitational effect could be ignored only when the liquid volumes were small (a difference of 5.7% at 5 μL), not when they were large (differences of more than 22% at 50 μL). From this, we developed a modifying equation from a series of simulation results to describe the gravitational effect. This modifying equation fitted the simulation results well over our simulation range (100°≤θ≤135° and 1 μL≤V≤200 μL). In simulating the condition of multiple wells underneath each droplet, we found that having more holes did not alter the critical diameters significantly; consequently, the modifying relation should also generally express the critical diameter for multiple wells under a droplet.

  9. Advanced computations in plasma physics

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2002-05-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  10. Advanced Computation in Plasma Physics

    NASA Astrophysics Data System (ADS)

    Tang, William

    2001-10-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. This talk will review recent progress and future directions for advanced simulations in magnetically-confined plasmas with illustrative examples chosen from areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop MPP's to produce 3-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for tens of thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to plasma science.

  11. Generalized dynamic engine simulation techniques for the digital computer

    NASA Technical Reports Server (NTRS)

    Sellers, J.; Teren, F.

    1974-01-01

    Recently, advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design-point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar all-digital programs on future engine simulation philosophy is also discussed.

  12. Generalized dynamic engine simulation techniques for the digital computer

    NASA Technical Reports Server (NTRS)

    Sellers, J.; Teren, F.

    1974-01-01

    Recently, advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design-point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar all-digital programs on future engine simulation philosophy is also discussed.

  13. Generalized dynamic engine simulation techniques for the digital computers

    NASA Technical Reports Server (NTRS)

    Sellers, J.; Teren, F.

    1975-01-01

    Recently, advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar digital programs on future engine simulation philosophy is also discussed.

  14. Combining Advanced Turbulent Mixing and Combustion Models with Advanced Multi-Phase CFD Code to Simulate Detonation and Post-Detonation Bio-Agent Mixing and Destruction

    DTIC Science & Technology

    2017-10-01

    perturbations in the energetic material to study their effects on the blast wave formation. The last case also makes use of the same PBX, however, the...configuration, Case A: Spore cloud located on the top of the charge at an angle 45 degree, Case B: Spore cloud located at an angle 45 degree from the charge...theoretical validation. The first is the Sedov case where the pressure decay and blast wave front are validated based on analytical solutions. In this test

  15. Advanced detection, isolation, and accommodation of sensor failures in turbofan engines: Real-time microcomputer implementation

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1990-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, an algorithm was developed which detects, isolates, and accommodates sensor failures by using analytical redundancy. The performance of this algorithm was evaluated on a real time engine simulation and was demonstrated on a full scale F100 turbofan engine. The real time implementation of the algorithm is described. The implementation used state-of-the-art microprocessor hardware and software, including parallel processing and high order language programming.
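Analytical redundancy, in miniature, means comparing each sensor against a model-based estimate and flagging sensors whose residual exceeds a threshold. The sketch below is toy residual-checking logic with made-up sensor values, not the ADIA algorithm.

```python
# Residual-based sensor failure detection and isolation: a sensor whose
# |measurement - model estimate| exceeds the threshold is flagged.
# Sensor names, values, and the threshold are illustrative assumptions.

def detect_failures(measured, estimated, threshold):
    """Return indices of sensors whose residual exceeds the threshold."""
    return [i for i, (m, e) in enumerate(zip(measured, estimated))
            if abs(m - e) > threshold]

# Hypothetical engine sensors: spool speed, burner pressure, turbine temp.
estimates = [9000.0, 310.0, 1350.0]   # from the engine model
readings  = [9010.0, 245.0, 1352.0]   # sensor 1 has drifted low
failed = detect_failures(readings, estimates, threshold=20.0)
# accommodation would then substitute the model estimate for the
# flagged burner-pressure reading
```

A real implementation filters the residuals over time and accounts for model uncertainty before declaring a failure; this sketch shows only the detection-and-isolation core.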

  16. CERT: Center of Excellence in Rotorcraft Technology

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The research objectives of this effort are to understand the physical processes that influence the formation of the tip vortex of a rotor in advancing flight, and to develop active and passive means of weakening the tip vortex during conditions when strong blade-vortex-interaction effects are expected. A combined experimental, analytical, and computational effort is being employed. Specifically, the following efforts are being pursued: 1. Analytical evaluation and design of combined elastic tailoring and active material actuators applicable to rotor blade tips. 2. Numerical simulations of active and passive tip devices. 3. LDV Measurement of the near and far wake behind rotors in forward flight.

  17. Results from Binary Black Hole Simulations in Astrophysics Applications

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2007-01-01

    Present and planned gravitational wave observatories are opening a new astronomical window on the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions and illuminating such phenomena as spin precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits allow tests of post-Newtonian (PN) approximation results for radiation from the last orbits of the binary's inspiral. Already, analytic waveform models based on PN techniques with incorporated information from numerical simulations may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of these systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments. Future gravitational wave observatories are thus expected to make precision measurements of these sources.
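    The PN side of such comparisons can be illustrated with the leading-order (quadrupole) chirp rate; this sketch integrates that analytic rate for assumed masses and is far cruder than the calibrated waveform models discussed above:

```python
# Leading-order (quadrupole) post-Newtonian chirp: the GW frequency of an
# inspiralling binary grows as
#   df/dt = (96/5) * pi**(8/3) * (G*Mc/c**3)**(5/3) * f**(11/3),
# where Mc is the chirp mass. Numerical relativity replaces this
# approximation near merger; here we just integrate the analytic rate.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
MSUN = 1.989e30      # kg

def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def fdot(f, mc):
    k = (96.0 / 5.0) * math.pi ** (8.0 / 3.0) * (G * mc / c**3) ** (5.0 / 3.0)
    return k * f ** (11.0 / 3.0)

def evolve(f0, m1, m2, dt=0.001, f_max=500.0):
    """Euler-integrate the chirp from f0 until f_max; return elapsed time."""
    mc = chirp_mass(m1 * MSUN, m2 * MSUN)
    f, t = f0, 0.0
    while f < f_max:
        f += fdot(f, mc) * dt
        t += dt
    return t

print(f"time from 30 Hz to 500 Hz: {evolve(30.0, 30.0, 30.0):.2f} s")
```

    The assumed 30+30 solar-mass system sweeps through the band in well under a second, which is why the late, strongly nonlinear cycles matter so much.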

  18. Experimental and Analytical Seismic Studies of a Four-Span Bridge System with Innovative Materials

    NASA Astrophysics Data System (ADS)

    Cruz Noguez, Carlos Alonso

    As part of a multi-university project utilizing the NSF Network for Earthquake Engineering Simulation (NEES), a quarter-scale model of a four-span bridge incorporating plastic hinges with different advanced materials was tested to failure on the three-shake-table system at the University of Nevada, Reno (UNR). The bridge was the second test model in a series of three 4-span bridges, with the first model being a conventional reinforced-concrete (RC) structure. The purpose of incorporating advanced materials was to improve the seismic performance of the bridge with respect to two damage indicators: (1) column damage and (2) permanent deformations. The goals of the study presented in this document were to (1) evaluate the seismic performance of a 4-span bridge system incorporating SMA/ECC and built-in rubber pad plastic hinges as well as post-tensioned piers, (2) quantify the relative merit of these advanced materials and details compared to each other and to conventional reinforced concrete plastic hinges, (3) determine the influence of abutment-superstructure interaction on the response, (4) examine the ability of available elaborate analytical modeling techniques to model the performance of advanced materials and details, and (5) conduct an extensive parametric study of different variations of the bridge model to study several important issues in bridge earthquake engineering. The bridge model included six columns, each pair of which utilized a different advanced detail at the bottom plastic hinges: shape memory alloys (SMA), special engineered cementitious composites (ECC), elastomeric pads embedded into columns, and post-tensioning tendons. The design of the columns, location of the bents, and selection of the loading protocol were based on pre-test analyses conducted using the computer program OpenSees. The bridge model was subjected to two horizontal components of simulated earthquake records of the 1994 Northridge earthquake.
Over 340 channels of data were collected. The test results showed the effectiveness of the advanced materials in reducing damage and permanent displacements. The damage was minimal in plastic hinges with SMA/ECC and those with built-in elastomeric pads. Conventional RC plastic hinges were severely damaged due to spalling of concrete and rupture of the longitudinal and transverse reinforcement. Extensive post-test analytical studies were conducted and it was determined that a computational model of the bridge that included bridge-abutment interaction using OpenSees was able to provide satisfactory estimations of key structural parameters such as superstructure displacements and base shears. The analytical model was also used to conduct parametric studies on single-column and bridge-system response under near-fault ground motions. The effects of vertical excitations and transverse shear-keys at the bridge abutments on the superstructure displacement and column drifts were also explored.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romanov, Gennady

    Typically, RFQs are designed using Parmteq, DesRFQ, and other similar specialized codes, which produce files containing the field and geometrical parameters for every cell. Beam dynamics simulations with these analytical fields are, of course, ideal realizations of the designed RFQs. New advanced computing capabilities have made it possible to simulate beam and even dark current in realistic 3D electromagnetic fields in the RFQs, which may reflect cavity tuning, the presence of tuners and couplers, RFQ segmentation, etc. The paper describes the utilization of full 3D field distributions obtained with CST Studio Suite for beam dynamics simulations using both the PIC solver of CST Particle Studio and the beam dynamics code TRACK.

  20. Users guide: The LaRC human-operator-simulator-based pilot model

    NASA Technical Reports Server (NTRS)

    Bogart, E. H.; Waller, M. C.

    1985-01-01

    A Human Operator Simulator (HOS) based pilot model has been developed for use at NASA LaRC for analysis of flight management problems. The model is currently configured to simulate piloted flight of an advanced transport airplane. The generic HOS operator and machine model was originally developed under U.S. Navy sponsorship by Analytics, Inc. and through a contract with LaRC was configured to represent a pilot flying a transport airplane. A version of the HOS program runs in batch mode on LaRC's (60-bit-word) central computer system. This document provides a guide for using the program and describes in some detail the assortment of files used during its operation.

  1. TOPICAL REVIEW: Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.; Chan, V. S.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.

  2. Advances and challenges in computational plasma science

    NASA Astrophysics Data System (ADS)

    Tang, W. M.

    2005-02-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. 
This should produce the scientific excitement which will help to (a) stimulate enhanced cross-cutting collaborations with other fields and (b) attract the bright young talent needed for the future health of the field of plasma science.

  3. Exploring simulated early star formation in the context of the ultrafaint dwarf galaxies

    NASA Astrophysics Data System (ADS)

    Corlies, Lauren; Johnston, Kathryn V.; Wise, John H.

    2018-04-01

    Ultrafaint dwarf galaxies (UFDs) are typically assumed to have simple, stellar populations with star formation ending at reionization. Yet as the observations of these galaxies continue to improve, their star formation histories (SFHs) are revealed to be more complicated than previously thought. In this paper, we study how star formation, chemical enrichment, and mixing proceed in small, dark matter haloes at early times using a high-resolution, cosmological, hydrodynamical simulation. The goals are to inform the future use of analytic models and to explore observable properties of the simulated haloes in the context of UFD data. Specifically, we look at analytic approaches that might inform metal enrichment within and beyond small galaxies in the early Universe. We find that simple assumptions for modelling the extent of supernova-driven winds agree with the simulation on average, whereas inhomogeneous mixing and gas flows have a large effect on the spread in simulated stellar metallicities. In the context of the UFDs, this work demonstrates that simulations can form haloes with a complex SFH and a large spread in the metallicity distribution function within a few hundred Myr in the early Universe. In particular, bursty and continuous star formation are seen in the simulation and both scenarios have been argued from the data. Spreads in the simulated metallicities, however, remain too narrow and too metal-rich when compared to the UFDs. Future work is needed to help reduce these discrepancies and advance our interpretation of the data.

  4. Liquid Behavior at Critical and Supercritical Conditions

    NASA Technical Reports Server (NTRS)

    Chiu, Huei-Huang; Gross, Klaus W.

    1989-01-01

    At a JANNAF workshop, the issue of fluids at and above the critical point was discussed to obtain a better understanding of similar conditions in the combustion chambers of rocket engines. Invited experts from academic, industrial, and government institutions presented the most recent physical, numerical, and experimental advances. During the final discussion period, it was agreed that: (1) no analytical capability exists to simulate the subject conditions; (2) mechanisms reflected by opalescence, the solubility of gases, the other interfacial phenomena listed, and fluorescence diagnostics are new and important; (3) multicomponent mixtures, radiation, critical fluctuations, and other recorded phenomena pose unknown effects; and (4) various identified analytical and experimental actions must be initiated in a mutually supporting sequence.

  5. Big data to smart data in Alzheimer's disease: The brain health modeling initiative to foster actionable knowledge.

    PubMed

    Geerts, Hugo; Dacks, Penny A; Devanarayan, Viswanath; Haas, Magali; Khachaturian, Zaven S; Gordon, Mark Forrest; Maudsley, Stuart; Romero, Klaus; Stephenson, Diane

    2016-09-01

    Massive investment and technological advances in the collection of extensive and longitudinal information on thousands of Alzheimer patients result in large amounts of data. These "big-data" databases can potentially advance CNS research and drug development. However, although necessary, they are not sufficient, and we posit that they must be matched with analytical methods that go beyond retrospective data-driven associations with various clinical phenotypes. Although these empirically derived associations can generate novel and useful hypotheses, they need to be organically integrated in a quantitative understanding of the pathology that can be actionable for drug discovery and development. We argue that mechanism-based modeling and simulation approaches, where existing domain knowledge is formally integrated using complexity science and quantitative systems pharmacology, can be combined with data-driven analytics to generate predictive actionable knowledge for drug discovery programs, target validation, and optimization of clinical development. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Using spatial principles to optimize distributed computing for enabling the physical science discoveries

    PubMed Central

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-01-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779

  7. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    PubMed

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-05

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  8. The underdamped Brownian duet and stochastic linear irreversible thermodynamics

    NASA Astrophysics Data System (ADS)

    Proesmans, Karel; Van den Broeck, Christian

    2017-10-01

    Building on our earlier work [Proesmans et al., Phys. Rev. X 6, 041010 (2016)], we introduce the underdamped Brownian duet as a prototype model of a dissipative system or of a work-to-work engine. Several recent advances from the theory of stochastic thermodynamics are illustrated with explicit analytic calculations and corresponding Langevin simulations. In particular, we discuss the Onsager-Casimir symmetry, the trade-off relations between power, efficiency and dissipation, and stochastic efficiency.
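    A Langevin simulation of the kind used to check such analytic results can be as small as an Euler-Maruyama integrator for an underdamped particle in a harmonic trap. The sketch below verifies equipartition for generic parameters; it does not reproduce the paper's driven work-to-work protocol.

```python
# Minimal underdamped Langevin integrator (Euler-Maruyama):
#   m dv = -(gamma*v + k*x) dt + sqrt(2*gamma*kB*T) dW,   dx = v dt
# In the stationary state equipartition gives <v^2> = kT/m.
import math, random

def simulate(m=1.0, gamma=1.0, k=1.0, kT=1.0, dt=1e-3, steps=1_000_000, seed=1):
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    v2_sum, n = 0.0, 0
    noise = math.sqrt(2.0 * gamma * kT * dt) / m   # per-step noise amplitude
    for i in range(steps):
        v += (-(gamma * v + k * x) / m) * dt + noise * rng.gauss(0.0, 1.0)
        x += v * dt
        if i > steps // 2:          # discard the transient, then sample
            v2_sum += v * v
            n += 1
    return v2_sum / n

print(f"<v^2> ~ {simulate():.3f}  (equipartition predicts kT/m = 1)")
```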

  9. Combining Modeling and Gaming for Predictive Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riensche, Roderick M.; Whitney, Paul D.

    2012-08-22

    Many of our most significant challenges involve people. While human behavior has long been studied, there are recent advances in computational modeling of human behavior. With advances in computational capabilities come increases in the volume and complexity of data that humans must understand in order to make sense of and capitalize on these modeling advances. Ultimately, models represent an encapsulation of human knowledge. One inherent challenge in modeling is efficient and accurate transfer of knowledge from humans to models, and subsequent retrieval. The simulated real-world environment of games presents one avenue for these knowledge transfers. In this paper we describe our approach of combining modeling and gaming disciplines to develop predictive capabilities, using formal models to inform game development, and using games to provide data for modeling.

  10. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods, and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase-advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total).
    Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase-advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
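    The transition-probability structure of such a dichotomous Markov-chain model can be sketched with hypothetical numbers (the fitted parameters, dose model, and FIM machinery of the study are not reproduced here):

```python
# Sketch of the dichotomous Markov-chain idea, with INVENTED parameters:
# the probability of switching between "awake" (0) and "asleep" (1) is a
# logistic function whose linear predictor varies piecewise over the night
# and shifts with dose.
import math, random

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def p_transition(state, t_hours, dose, slope=0.3):
    """P(switch state in this one-minute step); illustrative predictors."""
    if state == 0:                       # awake -> asleep
        z = -2.0 + 0.5 * min(t_hours, 2.0) + slope * dose
    else:                                # asleep -> awake
        z = -3.0 + 0.2 * max(t_hours - 6.0, 0.0) - slope * dose
    return logistic(z)

def simulate_night(dose, dt_min=1.0, hours=8.0, seed=0):
    """Simulate one night minute-by-minute; return total minutes asleep."""
    rng = random.Random(seed)
    state, asleep_min, t = 0, 0.0, 0.0
    while t < hours:
        if rng.random() < p_transition(state, t, dose):
            state = 1 - state
        asleep_min += dt_min if state == 1 else 0.0
        t += dt_min / 60.0
    return asleep_min

for dose in (0.0, 10.0):
    print(f"dose {dose:4.1f} mg: {simulate_night(dose):5.0f} min asleep")
```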

  11. Application of system identification to analytic rotor modeling from simulated and wind tunnel dynamic test data, part 2

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Banerjee, D.

    1977-01-01

    An introduction to aircraft state and parameter identification methods is presented. A simplified form of the maximum likelihood method is selected to extract analytical aeroelastic rotor models from simulated and dynamic wind tunnel test results for accelerated cyclic pitch stirring excitation. The dynamic inflow characteristics for forward flight conditions were examined from the blade flapping responses, without direct inflow measurements. The rotor blades are essentially rigid in inplane bending and in torsion within the frequency range of study, but flexible in out-of-plane bending. Reverse flow effects are considered for high rotor advance ratios. Two inflow models are studied: the first is based on an equivalent blade Lock number; the second is based on a time-delayed momentum inflow. In addition to the inflow parameters, basic rotor parameters such as the blade natural frequency and the actual blade Lock number are identified, together with measurement bias values. The effect of the theoretical dynamic inflow on the rotor eigenvalues is evaluated.
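    The flavor of such parameter identification can be shown on a toy first-order system; nothing below is rotor-specific, and the dynamics, noise level, and excitation are invented for illustration:

```python
# Toy identification sketch: simulate a known first-order response, add
# measurement noise, and recover the parameters by linear least squares
# on the discretized dynamics
#   x[k+1] = (1 - a*dt)*x[k] + b*dt*u[k].
import random

def identify(a_true=2.0, b_true=1.0, dt=0.01, steps=2000, noise=0.001, seed=3):
    rng = random.Random(seed)
    # Simulate with a persistent square-wave excitation.
    x, xs, us = 0.0, [], []
    for k in range(steps):
        u = 1.0 if (k // 100) % 2 == 0 else -1.0
        xs.append(x + noise * rng.gauss(0.0, 1.0))   # noisy measurement
        us.append(u)
        x += (-a_true * x + b_true * u) * dt
    # Solve the 2x2 normal equations for theta = (1 - a*dt, b*dt).
    s_xx = sum(xi * xi for xi in xs[:-1])
    s_xu = sum(xi * ui for xi, ui in zip(xs[:-1], us[:-1]))
    s_uu = sum(ui * ui for ui in us[:-1])
    s_yx = sum(y * xi for y, xi in zip(xs[1:], xs[:-1]))
    s_yu = sum(y * ui for y, ui in zip(xs[1:], us[:-1]))
    det = s_xx * s_uu - s_xu * s_xu
    th1 = (s_yx * s_uu - s_yu * s_xu) / det
    th2 = (s_xx * s_yu - s_xu * s_yx) / det
    return (1.0 - th1) / dt, th2 / dt   # recovered (a, b)

a_hat, b_hat = identify()
print(f"a ~ {a_hat:.2f} (true 2.0), b ~ {b_hat:.2f} (true 1.0)")
```

    Maximum likelihood with Gaussian output noise reduces to exactly this kind of least-squares fit; the rotor application adds aeroelastic structure and bias states but keeps the same estimation principle.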

  12. Simulating a High-Spin Black Hole-Neutron Star Binary

    NASA Astrophysics Data System (ADS)

    Derby, John; Lovelace, Geoffrey; Duez, Matt; Foucart, Francois; Simulating Extreme Spacetimes (SXS) Collaboration

    2017-01-01

    During its first observing run (fall 2015), Advanced LIGO detected gravitational waves from merging black holes. In future observations, LIGO could detect black hole-neutron star (BHNS) binaries. Numerical simulations are important for predicting these waves, both to help find as many of them as possible and to estimate the sources' properties, because analytic approximations fail near merger. Numerical models of the disk formed when the black hole tears apart the neutron star can also help us learn about these systems' potential electromagnetic counterparts. One area of the parameter space for BHNS systems that is particularly challenging is simulations with high black hole spin. I will present results from a new BHNS simulation, carried out within the SXS Collaboration, with a black hole spin of 90% of the theoretical maximum.

  13. Man-Vehicle Systems Research Facility - Design and operating characteristics

    NASA Technical Reports Server (NTRS)

    Shiner, Robert J.; Sullivan, Barry T.

    1992-01-01

    This paper describes the full-mission flight simulation facility at the NASA Ames Research Center. The Man-Vehicle Systems Research Facility (MVSRF) supports aeronautical human factors research and consists of two full-mission flight simulators and an air-traffic-control simulator. The facility is used for a broad range of human factors research in both conventional and advanced aviation systems. The objectives of the research are to improve the understanding of the causes and effects of human errors in aviation operations, and to limit their occurrence. The facility is used to: (1) develop fundamental analytical expressions of the functional performance characteristics of aircraft flight crews; (2) formulate principles and design criteria for aviation environments; (3) evaluate the integration of subsystems in contemporary flight and air traffic control scenarios; and (4) develop training and simulation technologies.

  14. Composite Cure Process Modeling and Simulations using COMPRO(Registered Trademark) and Validation of Residual Strains using Fiber Optics Sensors

    NASA Technical Reports Server (NTRS)

    Sreekantamurthy, Thammaiah; Hudson, Tyler B.; Hou, Tan-Hung; Grimsley, Brian W.

    2016-01-01

    Composite cure process induced residual strains and warping deformations in composite components present significant challenges in the manufacturing of advanced composite structure. As a part of the Manufacturing Process and Simulation initiative of the NASA Advanced Composite Project (ACP), research is being conducted on the composite cure process by developing an understanding of the fundamental mechanisms by which the process induced factors influence the residual responses. In this regard, analytical studies have been conducted on the cure process modeling of composite structural parts with varied physical, thermal, and resin flow process characteristics. The cure process simulation results were analyzed to interpret the cure response predictions based on the underlying physics incorporated into the modeling tool. In the cure-kinetic analysis, the model predictions on the degree of cure, resin viscosity and modulus were interpreted with reference to the temperature distribution in the composite panel part and tool setup during autoclave or hot-press curing cycles. In the fiber-bed compaction simulation, the pore pressure and resin flow velocity in the porous media models, and the compaction strain responses under applied pressure were studied to interpret the fiber volume fraction distribution predictions. In the structural simulation, the effect of temperature on the resin and ply modulus, and thermal coefficient changes during curing on predicted mechanical strains and chemical cure shrinkage strains were studied to understand the residual strains and stress response predictions. In addition to computational analysis, experimental studies were conducted to measure strains during the curing of laminated panels by means of optical fiber Bragg grating sensors (FBGs) embedded in the resin impregnated panels. The residual strain measurements from laboratory tests were then compared with the analytical model predictions. 
The paper describes the cure process procedures and residual strain predictions, and discusses pertinent experimental results from the validation studies.
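    The cure-kinetics step in such simulations is commonly an nth-order Arrhenius rate law for the degree of cure. A generic sketch (the constants are epoxy-like placeholders, not COMPRO material data):

```python
# Hedged sketch of a cure-kinetics model: an nth-order Arrhenius rate law
#   d(alpha)/dt = A * exp(-Ea/(R*T)) * (1 - alpha)**n
# integrated over an isothermal hold. Constants are generic placeholders.
import math

R_GAS = 8.314          # J/(mol K)

def cure_rate(alpha, T, A=1.0e5, Ea=6.0e4, n=1.5):
    """Rate of cure advancement at degree of cure alpha and temperature T (K)."""
    return A * math.exp(-Ea / (R_GAS * T)) * (1.0 - alpha) ** n

def integrate_cure(T_hold=450.0, dt=1.0, t_end=7200.0):
    """Euler integration of the degree of cure during an isothermal hold."""
    alpha, t = 0.0, 0.0
    while t < t_end and alpha < 0.99:
        alpha += cure_rate(alpha, T_hold) * dt
        t += dt
    return alpha, t

alpha, t = integrate_cure()
print(f"degree of cure {alpha:.2f} after {t/60:.0f} min at 450 K")
```

    A full process model couples this rate equation to the part's transient temperature field, and feeds the evolving modulus into the residual-strain calculation.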

  15. Analytical solutions to dissolved contaminant plume evolution with source depletion during carbon dioxide storage.

    PubMed

    Yang, Yong; Liu, Yongzhong; Yu, Bo; Ding, Tian

    2016-06-01

    Volatile contaminants may migrate with carbon dioxide (CO2) injection or leakage in subsurface formations, which poses a risk to CO2 storage and the ecological environment. This study aims to develop an analytical model that can predict the contaminant migration process induced by CO2 storage. The analytical model with two moving boundaries is obtained through simplification of the fully coupled model for the CO2-aqueous phase-stagnant phase displacement system. The analytical solutions are confirmed and assessed through comparison with numerical simulations of the fully coupled model. Then, some key variables in the analytical solutions, including the critical time, the locations of the dual moving boundaries, and the advance velocity, are discussed to present the characteristics of contaminant migration in the multi-phase displacement system. The results show that these key variables are determined by four dimensionless numbers, Pe, RD, Sh and RF, which represent the effects of convection, dispersion, interphase mass transfer, and the retention factor of the contaminant, respectively. The proposed analytical solutions can be used for tracking the migration of the injected CO2 and the contaminants in subsurface formations, and also provide an analytical tool for other solute transport problems in multi-phase displacement systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Advancing Air Force Scheduling through Modeling Problem Topologies

    DTIC Science & Technology

    2006-08-03

    Merrill on August 23, 2005 and corresponded with Major David Van Veldhuizen in Fall 2005 about obtaining data. 3.4.3 Transitions Analytical Graphics and...observation satellite orbit. Technical Report CRT-2003-27, Centre de recherche sur les transports, July 2003. [5] Van-Dat Cung. ROADEF 2003: Results of the...collaborateurs/etd/default.htm. January, 2004. [15] P.J.M. van Laarhoven, E.H.L. Aarts, and J.K. Lenstra. Job shop scheduling by simulated annealing

  17. HOST turbine heat transfer program summary

    NASA Technical Reports Server (NTRS)

    Gladden, Herbert J.; Simoneau, Robert J.

    1988-01-01

    The objectives of the HOST Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena and to assess and improve the analytical methods used to predict the flow and heat transfer in high temperature gas turbines. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. A building-block approach was utilized and the research ranged from the study of fundamental phenomena and modeling to experiments in simulated real engine environments. Experimental research accounted for approximately 75 percent of the funding with the remainder going to analytical efforts. A healthy government/industry/university partnership, with industry providing almost half of the research, was created to advance the turbine heat transfer design technology base.

  18. Diesel fuel to dc power: Navy & Marine Corps Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bloomfield, D.P.

    1996-12-31

    During the past year Analytic Power has tested fuel cell stacks and diesel fuel processors for US Navy and Marine Corps applications. The units are 10 kW demonstration power plants. The USN power plant was built to demonstrate the feasibility of diesel-fueled PEM fuel cell power plants for 250 kW and 2.5 MW shipboard power systems. We designed and tested a ten-cell, 1 kW USMC substack and fuel processor. The complete 10 kW prototype power plant, which has application to both power and hydrogen generation, is now under construction. The USN and USMC fuel cell stacks have been tested on both actual and simulated reformate. Analytic Power has accumulated operating experience with autothermal reforming based fuel processors operating on sulfur-bearing diesel fuel, jet fuel, propane and natural gas. We have also completed the design and fabrication of an advanced regenerative ATR for the USMC. One of the significant problems with small fuel processors is heat loss, which limits their ability to operate with the high steam-to-carbon ratios required for coke-free, high-efficiency operation. The new USMC unit specifically addresses these heat transfer issues. The advances in the mill programs have been incorporated into Analytic Power's commercial units, which are now under test.

  19. Advanced Earth-to-orbit propulsion technology program overview: Impact of civil space technology initiative

    NASA Technical Reports Server (NTRS)

    Stephenson, Frank W., Jr.

    1988-01-01

    The NASA Earth-to-Orbit (ETO) Propulsion Technology Program is dedicated to advancing rocket engine technologies for the development of fully reusable engine systems that will enable space transportation systems to achieve low cost, routine access to space. The program addresses technology advancements in the areas of engine life extension/prediction, performance enhancements, reduced ground operations costs, and in-flight fault tolerant engine operations. The primary objective is to acquire increased knowledge and understanding of rocket engine chemical and physical processes in order to evolve more realistic analytical simulations of engine internal environments, to derive more accurate predictions of steady and unsteady loads, and using improved structural analyses, to more accurately predict component life and performance, and finally to identify and verify more durable advanced design concepts. In addition, efforts were focused on engine diagnostic needs and advances that would allow integrated health monitoring systems to be developed for enhanced maintainability, automated servicing, inspection, and checkout, and ultimately, in-flight fault tolerant engine operations.

  20. Modeling the drugs' passive transfer in the body based on their chromatographic behavior.

    PubMed

    Kouskoura, Maria G; Kachrimanis, Kyriakos G; Markopoulou, Catherine K

    2014-11-01

    One of the most challenging aims in modern analytical chemistry and pharmaceutical analysis is to create models of drugs' behavior based on simulation experiments. Since drugs' effects are closely related to their molecular properties, numerous characteristics of drugs are used in order to acquire a model of passive absorption and transfer in the human body. Such innovative bioanalytical methodologies are also urgently needed in the area of personalized medicine to implement nanotechnological and genomic advancements. Simulation experiments were carried out by examining and interpreting the chromatographic behavior of 113 analytes/drugs (400 observations) in RP-HPLC. The dataset employed for this purpose included 73 descriptors referring to the physicochemical properties of the mobile phase mixture in different proportions, the physicochemical properties of the analytes and the structural characteristics of their molecules. A series of different software packages was used to calculate all the descriptors apart from those referring to the structure of the analytes. The correlation of the descriptors with the retention times of the analytes, eluted from a C4 column with an aqueous mobile phase, served as the dataset for deriving the in-body behavior models. Their evaluation with Partial Least Squares (PLS) software showed that the chromatographic behavior of a drug on a lipophilic stationary phase with a polar mobile phase is directly related to its drug-ability. At the same time, the behavior of an unknown drug in the human body can be predicted reliably via Artificial Neural Networks (ANNs) software. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Simulation of inclined air showers

    NASA Astrophysics Data System (ADS)

    Dorofeev, Alexei V.

    The purpose of this research is the simulation of Horizontal Air Showers (HAS): Extensive Air Showers (EAS) in which the cascade of particles is initiated by an ultra-high-energy primary particle entering the Earth's atmosphere at zenith angles greater than 70°. Particles from these HAS are detected at ground level by the Surface Detector of the Auger Observatory. Existing simulation models (most of them Monte Carlo) have limitations that stem from the fact that one cannot follow each and every particle and interaction in the EAS. The proposed model is a semi-analytic solution to the cascade equations, which incorporates probability functions for the most advanced hadronic interaction models available today: UrQMD for the low-energy region and NEXUS for the high-energy region.
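The paper's semi-analytic cascade solution is not spelled out in the abstract. As a hedged aside, the classic Heitler toy model illustrates why analytic cascade descriptions are attractive: with standard textbook air-shower parameters (radiation length X0 ≈ 37 g/cm², critical energy Ec ≈ 85 MeV, both assumed here), the shower maximum follows in two lines of algebra.

```python
# Heitler toy model of an electromagnetic cascade (illustration only; the
# paper's semi-analytic model is far more detailed). X0 and Ec are standard
# textbook numbers for air, assumed here.
import math

X0_G_CM2 = 37.0   # radiation length of air (assumed)
EC_EV = 85e6      # critical energy in air (assumed)

def n_max(e_primary_ev):
    """Particle count at shower maximum: N_max = E0 / Ec."""
    return e_primary_ev / EC_EV

def x_max(e_primary_ev):
    """Depth of shower maximum: X_max = X0 * ln(E0 / Ec), in g/cm^2."""
    return X0_G_CM2 * math.log(e_primary_ev / EC_EV)

# A 10 EeV primary: ~1e11 particles at maximum, X_max near 940 g/cm^2 --
# inclined showers therefore traverse far more atmosphere before ground.
n = n_max(1e19)
x = x_max(1e19)
```

The logarithmic growth of X_max with primary energy is the kind of closed-form behavior a semi-analytic treatment of the full cascade equations aims to retain while adding realistic hadronic interaction physics.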

  2. Recent advances in engineering science; Proceedings of the A. Cemal Eringen Symposium, University of California, Berkeley, June 20-22, 1988

    NASA Technical Reports Server (NTRS)

    Koh, Severino L. (Editor); Speziale, Charles G. (Editor)

    1989-01-01

    Various papers on recent advances in engineering science are presented. Some individual topics addressed include: advances in adaptive methods in computational fluid mechanics, mixtures of two micromorphic materials, computer tests of rubber elasticity, shear bands in isotropic micropolar elastic materials, nonlinear surface wave and resonator effects in magnetostrictive crystals, simulation of electrically enhanced fibrous filtration, plasticity theory of granular materials, dynamics of viscoelastic media with internal oscillators, postcritical behavior of a cantilever bar, boundary value problems in nonlocal elasticity, stability of flexible structures with random parameters, electromagnetic tornadoes in earth's ionosphere and magnetosphere, helicity fluctuations and the energy cascade in turbulence, mechanics of interfacial zones in bonded materials, propagation of a normal shock in a varying area duct, and analytical mechanics of fracture and fatigue.

  3. Towards an Airframe Noise Prediction Methodology: Survey of Current Approaches

    NASA Technical Reports Server (NTRS)

    Farassat, Fereidoun; Casper, Jay H.

    2006-01-01

    In this paper, we present a critical survey of current airframe noise (AFN) prediction methodologies. Four methodologies are recognized: the fully analytic method, CFD combined with the acoustic analogy, the semi-empirical method, and the fully numerical method. It is argued that for the immediate needs of the aircraft industry, the semi-empirical method based on recent high-quality acoustic databases is the best available method. The method based on CFD and the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface (FW-Hpds) has advanced considerably and much experience has been gained in its use. However, more research is needed in the near future, particularly in the area of turbulence simulation. The fully numerical method will take longer to reach maturity. Based on current trends, it is predicted that this method will eventually develop into the method of choice. Both the turbulence simulation and propagation methods need further development for this method to become useful. Nonetheless, the authors propose that methods based on a combination of numerical and analytical techniques, e.g., CFD combined with the FW-H equation, should also be worked on. In this effort, current symbolic algebra software will allow more analytical approaches to be incorporated into AFN prediction methods.

  4. Design and optimization of a nanoprobe comprising amphiphilic chitosan colloids and Au-nanorods: Sensitive detection of human serum albumin in simulated urine

    NASA Astrophysics Data System (ADS)

    Jean, Ren-Der; Larsson, Mikael; Cheng, Wei-Da; Hsu, Yu-Yuan; Bow, Jong-Shing; Liu, Dean-Mo

    2016-12-01

    Metallic nanoparticles have been utilized as analytical tools to detect a wide range of organic analytes. In most reports, gold (Au)-based nanosensors have been modified with ligands to introduce selectivity towards a specific target molecule. However, in a recent study a new concept was presented where bare Au-nanorods on self-assembled carboxymethyl-hexanoyl chitosan (CHC) nanocarriers achieved sensitive and selective detection of human serum albumin (HSA) after manipulation of the solution pH. Here this concept was further advanced through optimization of the ratio between Au-nanorods and CHC nanocarriers to create a nanotechnology-based sensor (termed CHC-AuNR nanoprobe) with an outstanding lower detection limit (LDL) for HSA. The CHC-AuNR nanoprobe was evaluated in simulated urine solution and a LDL as low as 1.5 pM was achieved at an estimated AuNR/CHC ratio of 2. Elemental mapping and protein adsorption kinetics over three orders of magnitude in HSA concentration confirmed accumulation of HSA on the nanorods and revealed the adsorption to be completed within 15 min for all investigated concentrations. The results suggest that the CHC-AuNR nanoprobe has potential to be utilized for cost-effective detection of analytes in complex liquids.

  5. Magnetic biosensors: Modelling and simulation.

    PubMed

    Nabaei, Vahid; Chandrawati, Rona; Heidari, Hadi

    2018-04-30

    In the past few years, magnetoelectronics has emerged as a promising new platform technology in various biosensors for detection, identification, localisation and manipulation of a wide spectrum of biological, physical and chemical agents. The methods are based on the detection of the magnetic field of a magnetically labelled biomolecule interacting with a complementary biomolecule bound to a magnetic field sensor. This Review presents various schemes of magnetic biosensor techniques from both modelling and simulation as well as analytical and numerical analysis points of view, and their performance variations under magnetic fields at steady and nonstationary states. This is followed by magnetic sensor modelling and simulations using advanced multiphysics modelling software (e.g. the Finite Element Method (FEM)) and in-house developed tools. Furthermore, the outlook and future directions of modelling and simulations of magnetic biosensors in different technologies and materials are critically discussed. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  6. Extracting and identifying concrete structural defects in GPR images

    NASA Astrophysics Data System (ADS)

    Ye, Qiling; Jiao, Liangbao; Liu, Chuanxin; Cao, Xuehong; Huston, Dryver; Xia, Tian

    2018-03-01

    Traditionally, most GPR data interpretation is performed manually. With the advancement of computing technologies, how to automate GPR data interpretation to achieve high efficiency and accuracy has become an active research subject. In this paper, analytical characterizations of major defects in concrete structures, including delamination, air voids and moisture, in GPR images are performed. In the study, the image features of the different defects are compared. Algorithms are developed for defect feature extraction and identification. For validation, both simulation results and field test data are utilized.

  7. Dynamic Rod Worth Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Y.A.; Chapman, D.M.; Hill, D.J.

    2000-12-15

    The dynamic rod worth measurement (DRWM) technique is a method of quickly validating the predicted bank worth of control rods and shutdown rods. The DRWM analytic method is based on three-dimensional, space-time kinetic simulations of the rapid rod movements. Its measurement data is processed with an advanced digital reactivity computer. DRWM has been used as the method of bank worth validation at numerous plant startups with excellent results. The process and methodology of DRWM are described, and the measurement results of using DRWM are presented.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yang; Liu, Zhiqiang, E-mail: lzq@semi.ac.cn, E-mail: spring@semi.ac.cn; Yi, Xiaoyan, E-mail: lzq@semi.ac.cn, E-mail: spring@semi.ac.cn

    To evaluate electron leakage in InGaN/GaN multiple quantum well (MQW) light emitting diodes (LEDs), analytic models of ballistic and quasi-ballistic transport are developed. With this model, the impact of critical variables affecting electron leakage, including the electron blocking layer (EBL), the structure of the multiple quantum wells (MQWs), the polarization field, and temperature, is explored. The simulated results based on this model shed light on previously reported experimental observations and provide basic criteria for suppressing electron leakage, advancing the design of InGaN/GaN LEDs.

  9. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. Analytical representations of a PDF are difficult to specify in advance in many real-life applications. Knowing the shapes of the PDFs and using experimental data, different optimization schemes can be applied in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.
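For readers unfamiliar with the MC core in question, a minimal Gillespie direct-method SSA for the single decay reaction A → ∅ (a textbook example, not the paper's constrained CRP system) can be sketched as:

```python
# Minimal Gillespie direct-method SSA for A -> 0 with rate constant k.
# Textbook illustration only; the paper's constrained CRP model is richer.
import math
import random

def ssa_decay(n0, k, t_end, rng):
    """Return the copy number of A remaining at time t_end."""
    t, n = 0.0, n0
    while n > 0:
        a0 = k * n                               # total propensity
        t += -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        if t > t_end:
            break
        n -= 1                                   # fire the decay reaction
    return n

# The ensemble mean approaches the analytical result n0*exp(-k*t); getting
# a tight estimate requires many runs, which is the cost the paper's
# analytical replacement avoids.
runs = [ssa_decay(100, 0.5, 2.0, random.Random(seed)) for seed in range(500)]
mean_n = sum(runs) / len(runs)                   # close to 100*e^-1, about 36.8
```

The need for hundreds of repetitions just to estimate one mean is exactly the overhead that an analytical expression for event frequencies eliminates.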

  10. The changing paradigm for integrated simulation in support of Command and Control (C2)

    NASA Astrophysics Data System (ADS)

    Riecken, Mark; Hieb, Michael

    2016-05-01

    Modern software and network technologies are on the verge of enabling what has eluded the simulation and operational communities for more than two decades, truly integrating simulation functionality into operational Command and Control (C2) capabilities. This deep integration will benefit multiple stakeholder communities from experimentation and test to training by providing predictive and advanced analytics. There is a new opportunity to support operations with simulation once a deep integration is achieved. While it is true that doctrinal and acquisition issues remain to be addressed, nonetheless it is increasingly obvious that few technical barriers persist. How will this change the way in which common simulation and operational data is stored and accessed? As the Services move towards single networks, will there be technical and policy issues associated with sharing those operational networks with simulation data, even if the simulation data is operational in nature (e.g., associated with planning)? How will data models that have traditionally been simulation only be merged in with operational data models? How will the issues of trust be addressed?

  11. Analytical methodologies for aluminium speciation in environmental and biological samples--a review.

    PubMed

    Bi, S P; Yang, X D; Zhang, F P; Wang, X L; Zou, G W

    2001-08-01

    It is recognized that aluminium (Al) is a potential environmental hazard. Acidic deposition has been linked to increased Al concentrations in natural waters. Elevated levels of Al might have serious consequences for biological communities. Of particular interest is the speciation of Al in aquatic environments, because Al toxicity depends on its forms and concentrations. In this paper, advances in analytical methodologies for Al speciation in environmental and biological samples during the past five years are reviewed. Concerns about the specific problems of Al speciation and highlights of some important methods are elucidated in sections devoted to hybrid techniques (HPLC or FPLC coupled with ET-AAS, ICP-AES, or ICP-MS), flow-injection analysis (FIA), nuclear magnetic resonance (27Al NMR), electrochemical analysis, and computer simulation. More than 130 references are cited.

  12. Review and assessment of the database and numerical modeling for turbine heat transfer

    NASA Technical Reports Server (NTRS)

    Gladden, H. J.; Simoneau, R. J.

    1988-01-01

    The objectives of the HOST Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena and to assess and improve the analytical methods used to predict the flow and heat transfer in high-temperature gas turbines. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. A building-block approach was utilized and the research ranged from the study of fundamental phenomena and modeling to experiments in simulated real engine environments. Experimental research accounted for approximately 75 percent of the funding while the analytical efforts were approximately 25 percent. A healthy government/industry/university partnership, with industry providing almost half of the research, was created to advance the turbine heat transfer design technology base.

  13. Finite element analysis simulations for ultrasonic array NDE inspections

    NASA Astrophysics Data System (ADS)

    Dobson, Jeff; Tweedie, Andrew; Harvey, Gerald; O'Leary, Richard; Mulholland, Anthony; Tant, Katherine; Gachagan, Anthony

    2016-02-01

    Advances in manufacturing techniques and materials have led to an increased demand for reliable and robust inspection techniques to maintain safety critical features. The application of modelling methods to develop and evaluate inspections is becoming an essential tool for the NDE community. Current analytical methods are inadequate for the simulation of arbitrary components and heterogeneous materials, such as anisotropic welds or composite structures. Finite element analysis (FEA) software, such as PZFlex, can simulate the inspection of these arrangements, providing the ability to economically prototype and evaluate improved NDE methods. FEA is often seen as computationally expensive for ultrasound problems; however, advances in computing power have made it a more viable tool. This paper aims to illustrate the capability of appropriate FEA to produce accurate simulations of ultrasonic array inspections, minimizing the requirement for expensive test-piece fabrication. Validation is afforded via corroboration of the FE-derived and experimentally generated data sets for a test block comprising 1D and 2D defects. The modelling approach is extended to consider the more troublesome aspects of heterogeneous materials, where defect dimensions can be of the same length scale as the grain structure. The model is used to facilitate the implementation of new ultrasonic array inspection methods for such materials. This is exemplified by considering the simulation of ultrasonic NDE in a weld structure in order to assess new approaches to imaging such structures.

  14. Optimal design application on the advanced aeroelastic rotor blade

    NASA Technical Reports Server (NTRS)

    Wei, F. S.; Jones, R.

    1985-01-01

    The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The databases obtained from the rotorcraft flight simulation program C81 and the Myklestad mode shape program are analytically represented as functions of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. The method can also be utilized to ascertain the effect of a particular cost function, composed of several objective functions with different weighting factors, for various mission requirements without any additional effort.
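The C81/Myklestad databases and the study's actual regression model are not available here. As a hedged illustration of the general regression-optimization idea, the sketch below fits a quadratic response surface to hypothetical vibration samples versus a single design variable (a normalized ballast-weight station) and reads off the surrogate's minimizer; all data and the one-variable restriction are assumptions.

```python
# Hedged sketch of regression-based design optimization: fit a quadratic
# surrogate y ~ c0 + c1*x + c2*x^2 by least squares (normal equations),
# then take the vertex as the predicted optimum. All data are hypothetical.

def fit_quadratic(xs, ys):
    """Solve the 3x3 normal equations for a quadratic least-squares fit."""
    s = [sum(x ** p for x in xs) for p in range(5)]
    b = [sum(y * x ** p for x, y in zip(xs, ys)) for p in range(3)]
    a = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    for i in range(3):                  # forward elimination
        for j in range(i + 1, 3):
            f = a[j][i] / a[i][i]
            for k in range(3):
                a[j][k] -= f * a[i][k]
            b[j] -= f * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                 # back substitution
        c[i] = (b[i] - sum(a[i][k] * c[k] for k in range(i + 1, 3))) / a[i][i]
    return c

# Hypothetical vibration index sampled at five ballast stations (0..1 span):
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [1.00, 0.66, 0.55, 0.71, 1.02]
c0, c1, c2 = fit_quadratic(xs, ys)
x_opt = -c1 / (2.0 * c2)                # vertex of the fitted parabola
```

Because the surrogate is cheap to re-evaluate, swapping in a different objective or weighting only requires refitting the polynomial, not rerunning the expensive simulation, which is the advantage the abstract highlights.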

  15. Hydrodynamic Simulations of Protoplanetary Disks with GIZMO

    NASA Astrophysics Data System (ADS)

    Rice, Malena; Laughlin, Greg

    2018-01-01

    Over the past several decades, the field of computational fluid dynamics has rapidly advanced as the range of available numerical algorithms and computationally feasible physical problems has expanded. The development of modern numerical solvers provides a compelling opportunity to reconsider previously obtained results in search of yet-undiscovered effects that may be revealed through longer integration times and more precise numerical approaches. In this study, we compare the results of past hydrodynamic disk simulations with those obtained from modern analytical resources. We focus our study on the GIZMO code (Hopkins 2015), which uses meshless methods to solve the homogeneous Euler equations of hydrodynamics while eliminating problems arising from advection between grid cells. By comparing modern simulations with prior results, we hope to provide an improved understanding of the impact of fluid mechanics upon the evolution of protoplanetary disks.

  16. Analytical calculation of electrolyte water content of a Proton Exchange Membrane Fuel Cell for on-board modelling applications

    NASA Astrophysics Data System (ADS)

    Ferrara, Alessandro; Polverino, Pierpaolo; Pianese, Cesare

    2018-06-01

    This paper proposes an analytical model of the water content of the electrolyte of a Proton Exchange Membrane Fuel Cell. The model is designed by accounting for several simplifying assumptions, which make it suitable for on-board/online water management applications while ensuring good accuracy with respect to advanced numerical solutions. The analytical solution for electrolyte water content is compared with that obtained by means of a complex numerical approach used to solve the same mathematical problem. The results show that the mean error is below 5% for electrode water content values ranging from 2 to 15 (given as boundary conditions), and does not exceed 0.26% for electrode water content above 5. These results prove the capability of the solution to correctly model electrolyte water content at any operating condition, aiming at embodiment into more complex frameworks (e.g., cell or stack models) related to fuel cell simulation, monitoring, control, diagnosis and prognosis.

  17. A combined analytical formulation and genetic algorithm to analyze the nonlinear damage responses of continuous fiber toughened composites

    NASA Astrophysics Data System (ADS)

    Jeon, Haemin; Yu, Jaesang; Lee, Hunsu; Kim, G. M.; Kim, Jae Woo; Jung, Yong Chae; Yang, Cheol-Min; Yang, B. J.

    2017-09-01

    Continuous fiber-reinforced composites are important materials with the highest commercialization potential among existing advanced materials in the near future. Despite their wide use and value, their theoretical mechanisms have not been fully established owing to the complexity of their compositions and their unrevealed failure mechanisms. This study proposes an effective three-dimensional damage model of a fibrous composite that combines analytical micromechanics and evolutionary computation. The interface characteristics, debonding damage, and micro-cracks are considered to be the most influential factors on the toughness and failure behaviors of composites, and a constitutive equation considering these factors was explicitly derived in accordance with the micromechanics-based ensemble volume averaged method. The optimal set of model parameters in the analytical model was found using modified evolutionary computation that accounts for human-induced error. The effectiveness of the proposed formulation was validated by comparing a series of numerical simulations with experimental data from available studies.

  18. The effects of display and autopilot functions on pilot workload for Single Pilot Instrument Flight Rule (SPIFR) operations

    NASA Technical Reports Server (NTRS)

    Hoh, Roger H.; Smith, James C.; Hinton, David A.

    1987-01-01

    An analytical and experimental research program was conducted to develop criteria for pilot interaction with advanced controls and displays in single-pilot instrument flight rules (SPIFR) operations. The analytic phase reviewed fundamental considerations for pilot workload, taking into account existing data, and used those data to develop a divided-attention SPIFR pilot workload model. The pilot model was utilized to interpret the two experimental phases. The first experimental phase was a flight test program that evaluated pilot workload in the presence of current and near-term displays and autopilot functions. The second experiment was conducted on a King Air simulator, investigating the effects of co-pilot functions in the presence of very high SPIFR workload. The results indicate that the simplest displays tested were marginal for SPIFR operations. A moving map display aided the most in mental orientation, but had inherent deficiencies as a stand-alone replacement for an HSI. Autopilot functions were highly effective for reducing pilot workload. The simulator tests showed that extremely high workload situations can be adequately handled when co-pilot functions are provided.

  19. System engineering techniques for establishing balanced design and performance guidelines for the advanced telerobotic testbed

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Matijevic, J. R.

    1987-01-01

    Novel system engineering techniques have been developed and applied to establish structured design and performance objectives for the Telerobotics Testbed that reduce technical risk while still allowing the testbed to demonstrate an advancement in state-of-the-art robotic technologies. To establish the appropriate tradeoff structure and balance of technology performance against technical risk, an analytical database was developed which drew on: (1) automation/robot technology availability projections, (2) typical or potential application mission task sets, (3) performance simulations, (4) project schedule constraints, and (5) project funding constraints. Design tradeoffs and configuration/performance iterations were conducted by comparing feasible technology/task set configurations against schedule/budget constraints as well as the original program target technology objectives. The final system configuration, task set, and technology set reflected a balanced advancement in state-of-the-art robotic technologies while meeting programmatic objectives and schedule/cost constraints.

  20. Integrated optomechanical analysis and testing software development at MIT Lincoln Laboratory

    NASA Astrophysics Data System (ADS)

    Stoeckel, Gerhard P.; Doyle, Keith B.

    2013-09-01

    Advanced analytical software capabilities are being developed to advance the design of prototypical hardware in the Engineering Division at MIT Lincoln Laboratory. The current effort is focused on the integration of analysis tools tailored to the work flow, organizational structure, and current technology demands. These tools are being designed to provide superior insight into the interdisciplinary behavior of optical systems and enable rapid assessment and execution of design trades to optimize the design of optomechanical systems. The custom software architecture is designed to exploit and enhance the functionality of existing industry standard commercial software, provide a framework for centralizing internally developed tools, and deliver greater efficiency, productivity, and accuracy through standardization, automation, and integration. Specific efforts have included the development of a feature-rich software package for Structural-Thermal-Optical Performance (STOP) modeling, advanced Line Of Sight (LOS) jitter simulations, and improved integration of dynamic testing and structural modeling.

  1. Analytical Calculation of the Lower Bound on Timing Resolution for PET Scintillation Detectors Comprising High-Aspect-Ratio Crystal Elements

    PubMed Central

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-01-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so does SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm³ LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162±1 ps FWHM, approaching the analytically calculated lower bound within 6.5%. PMID:26083559

  2. Analytical calculation of the lower bound on timing resolution for PET scintillation detectors comprising high-aspect-ratio crystal elements

    NASA Astrophysics Data System (ADS)

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-07-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so does SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm3 LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162±1 ps FWHM, approaching the analytically calculated lower bound within 6.5%.
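    The bound referenced in both records above is the Cramér-Rao inequality applied to photon arrival-time estimation. In generic form (a sketch of the textbook bound, not the authors' full HAR formalism):

```latex
% CRLB on the variance of an unbiased arrival-time estimator \hat{t},
% for N detected photons with single-photon detection-time PDF p(\tau \mid t):
\operatorname{Var}(\hat{t}) \;\ge\; \frac{1}{I(t)},
\qquad
I(t) \;=\; N \int \frac{1}{p(\tau \mid t)}
      \left( \frac{\partial p(\tau \mid t)}{\partial t} \right)^{2} \, d\tau
```

    For high-aspect-ratio crystals, p(τ|t) must fold in the optical transit-time PDF, which is what the records above derive analytically; for two identical detectors in coincidence, the CTR in FWHM follows as roughly 2.355·√2 times the single-detector standard deviation implied by the bound.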

  3. Incorporation of RAM techniques into simulation modeling

    NASA Astrophysics Data System (ADS)

    Nelson, S. C., Jr.; Haire, M. J.; Schryver, J. C.

    1995-01-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model to represent the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures are collected which have impact on design requirements. Design changes are evaluated through 'what if' questions, sensitivity studies, and battle scenario changes.
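    The task-based stochastic network described above can be sketched in miniature as follows. The task names echo the abstract, but all failure probabilities and durations are invented for illustration; the real AFAS/FARV model is far more detailed.

```python
import random

# Illustrative mission task network: (name, failure probability, repair hours).
# Values are made up for this sketch, not actual AFAS/FARV data.
TASKS = [
    ("upload", 0.02, 0.5),
    ("travel_to_AFAS", 0.05, 1.0),
    ("refuel", 0.01, 0.3),
    ("tactical_move", 0.04, 0.8),
    ("return_to_resupply", 0.03, 0.6),
]

def run_mission(rng):
    """One pass through the task network; failures add repair downtime."""
    uptime = downtime = 0.0
    for _name, p_fail, repair_h in TASKS:
        uptime += rng.uniform(0.2, 1.0)   # nominal task duration (h)
        if rng.random() < p_fail:          # stochastic failure event
            downtime += repair_h           # repair subnetwork, collapsed here
    return uptime, downtime

def operational_availability(n=20_000, seed=1):
    """Monte Carlo estimate of uptime / (uptime + downtime) over n missions."""
    rng = random.Random(seed)
    up = dn = 0.0
    for _ in range(n):
        u, d = run_mission(rng)
        up += u
        dn += d
    return up / (up + dn)

print(round(operational_availability(), 3))
```

    Repeating many missions yields availability and other RAM measures, and "what if" questions amount to re-running with altered failure or repair parameters.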

  4. Assessing the thermo-mechanical TaMeTirE model in offline vehicle simulation and driving simulator tests

    NASA Astrophysics Data System (ADS)

    Durand-Gasselin, Benoit; Dailliez, Thibault; Mössner-Beigel, Monika; Knorr, Stephanie; Rauh, Jochen

    2010-12-01

    This paper presents experiences of using Michelin's thermo-mechanical TaMeTirE tyre model for real-time handling applications in the field of advanced passenger car simulation. Passenger car handling simulations were performed using the tyre model in a full-vehicle real-time environment in order to assess TaMeTirE's level of consistency with real on-track handling behaviour. To achieve this goal, a first offline comparison with a state-of-the-art handling tyre model was carried out on three handling manoeuvres. Then, online real-time simulations of steering wheel steps and slaloms in a straight line were run on Daimler's driving simulator by skilled and unskilled drivers. Two analytical tyre temperature effects and two inflation pressure effects were studied in order to assess their impact on the handling behaviour of the vehicle. This paper underlines the realism of the handling simulation results performed with TaMeTirE, and shows the significant impact of a pressure or a temperature effect on the handling behaviour of a car.

  5. Functionalization and Characterization of Nanomaterial Gated Field-Effect Transistor-Based Biosensors and the Design of a Multi-Analyte Implantable Biosensing Platform

    NASA Astrophysics Data System (ADS)

    Croce, Robert A., Jr.

    Advances in semiconductor research and complementary metal-oxide-semiconductor fabrication allow for the design and implementation of miniaturized metabolic monitoring systems, as well as advanced biosensor design. The first part of this dissertation will focus on the design and fabrication of nanomaterial (single-walled carbon nanotube and quantum dot) gated field-effect transistors configured as protein sensors. These novel device structures have been functionalized with single-stranded DNA aptamers, and have shown sensor operation towards the protein thrombin. Such advanced transistor-based sensing schemes present considerable advantages over traditional sensing methodologies in view of their miniaturization, low cost, and facile fabrication, paving the way for the ultimate realization of a multi-analyte lab-on-chip. The second part of this dissertation focuses on the design and fabrication of a needle-implantable glucose sensing platform based solely on photovoltaic powering and optical communication. By employing these powering and communication schemes, this design negates the need for bulky on-chip RF-based transmitters and batteries, in an effort to attain the extreme miniaturization required for needle-implantable/extractable applications. A complete single-sensor system coupled with a miniaturized amperometric glucose sensor has been demonstrated to establish the viability of this technology. Furthermore, an optical selection scheme for multiple potentiostats addressing four different analytes (glucose, lactate, O2 and CO2), as well as the optical transmission of sensor data, has been designed for multi-analyte applications. The last part of this dissertation will focus on the development of a computational model for the amperometric glucose sensors employed in the aforementioned implantable platform. This model has been applied to single-layer single-enzyme systems, as well as multi-layer (single-enzyme) systems utilizing glucose-flux-limiting layer-by-layer assembled outer membranes. The concentration of glucose and hydrogen peroxide within the sensor geometry, the transient response, and the device response time have been simulated for both systems.

  6. Automated Deployment of Advanced Controls and Analytics in Buildings

    NASA Astrophysics Data System (ADS)

    Pritoni, Marco

    Buildings use 40% of primary energy in the US. Recent studies show that developing energy analytics and enhancing control strategies can significantly improve their energy performance. However, the deployment of advanced control software applications has been mostly limited to academic studies. Larger-scale implementations are prevented by the significant engineering time and customization required, due to significant differences among buildings. This study demonstrates how physics-inspired data-driven models can be used to develop portable analytics and control applications for buildings. Specifically, I demonstrate application of these models in all phases of the deployment of advanced controls and analytics in buildings: in the first phase, "Site Preparation and Interface with Legacy Systems" I used models to discover or map relationships among building components, automatically gathering metadata (information about data points) necessary to run the applications. During the second phase: "Application Deployment and Commissioning", models automatically learn system parameters, used for advanced controls and analytics. In the third phase: "Continuous Monitoring and Verification" I utilized models to automatically measure the energy performance of a building that has implemented advanced control strategies. In the conclusions, I discuss future challenges and suggest potential strategies for these innovative control systems to be widely deployed in the market. This dissertation provides useful new tools in terms of procedures, algorithms, and models to facilitate the automation of deployment of advanced controls and analytics and accelerate their wide adoption in buildings.

  7. Mission and Objectives for the X-1 Advanced Radiation Source*

    NASA Astrophysics Data System (ADS)

    Rochau, Gary E.; Ramirez, Juan J.; Raglin, Paul S.

    1998-11-01

    Sandia National Laboratories, PO Box 5800, MS-1178, Albuquerque, NM 87185

    The X-1 Advanced Radiation Source represents a next step in providing the U.S. Department of Energy's Stockpile Stewardship Program with the high-energy, large-volume laboratory x-ray source for the Radiation Effects Science and Simulation, Inertial Confinement Fusion, and Weapon Physics Programs. Advances in fast pulsed power technology and in z-pinch hohlraums on Sandia National Laboratories' Z Accelerator provide a sufficient basis for pursuing the development of X-1. The X-1 plan follows a strategy based on scaling the 2 MJ x-ray output on Z via a 3-fold increase in z-pinch load current. The large-volume (>5 cm3), high-temperature (>150 eV), temporally long (>10 ns) hohlraums are unique outside of underground nuclear weapon testing. Analytical scaling arguments and hydrodynamic simulations indicate that these hohlraums at temperatures of 230-300 eV will ignite thermonuclear fuel and drive the reaction to a yield of 200 to 1,200 MJ in the laboratory. Non-ignition sources will provide cold x-ray environments (<15 keV) and high-yield fusion burn sources will provide high-fidelity warm x-ray environments (15 keV-80 keV). This paper will introduce the X-1 Advanced Radiation Source Facility Project and describe the project mission, objectives, and preliminary schedule.

  8. A Study of Malware Propagation via Online Social Networking

    NASA Astrophysics Data System (ADS)

    Faghani, Mohammad Reza; Nguyen, Uyen Trang

    The popularity of online social networks (OSNs) has attracted malware creators who use OSNs as a platform to propagate automated worms from one user's computer to another's. However, the topic of malware propagation in OSNs has only been investigated recently. In this chapter, we discuss recent advances on the topic of malware propagation by way of online social networking. In particular, we present three malware propagation techniques in OSNs, namely cross-site scripting (XSS), Trojan, and clickjacking types, and their characteristics via analytical models and simulations.
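    The propagation dynamics summarized above can be illustrated with a minimal susceptible-infected simulation on a random social graph. The graph model, the per-step "click" probability, and all parameters are illustrative assumptions, not the chapter's analytical models.

```python
import random

def make_social_graph(n, avg_degree, rng):
    """Random undirected graph as adjacency sets (a crude stand-in for an OSN)."""
    adj = {i: set() for i in range(n)}
    edges, target_edges = 0, n * avg_degree // 2
    while edges < target_edges:
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b and b not in adj[a]:
            adj[a].add(b)
            adj[b].add(a)
            edges += 1
    return adj

def propagate(adj, p_click, steps, rng):
    """XSS-worm-style spread: each step, every neighbour of an infected user
    'clicks' the malicious link with probability p_click. Returns the
    cumulative infected count after each step."""
    infected = {0}                      # patient zero
    history = [len(infected)]
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < p_click:
                    new.add(v)
        infected |= new
        history.append(len(infected))
    return history

rng = random.Random(42)
graph = make_social_graph(500, 6, rng)
curve = propagate(graph, p_click=0.1, steps=15, rng=rng)
print(curve[0], "->", curve[-1])
```

    The resulting infection curve is monotone and typically S-shaped, the qualitative behaviour that analytical propagation models aim to capture in closed form.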

  9. The physics of proton therapy.

    PubMed

    Newhauser, Wayne D; Zhang, Rui

    2015-04-21

    The physics of proton therapy has advanced considerably since it was proposed in 1946. Today analytical equations and numerical simulation methods are available to predict and characterize many aspects of proton therapy. This article reviews the basic aspects of the physics of proton therapy, including proton interaction mechanisms, proton transport calculations, the determination of dose from therapeutic and stray radiations, and shielding design. The article discusses underlying processes as well as selected practical experimental and theoretical methods. We conclude by briefly speculating on possible future areas of research of relevance to the physics of proton therapy.
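    Among the analytical equations the review refers to, the proton range-energy relationship in water is commonly approximated by the Bragg-Kleeman rule. The sketch below uses widely cited fit constants for water; it is an approximation for illustration, not the article's full transport method.

```python
# Bragg-Kleeman range-energy rule for protons in water: R = alpha * E**p.
# alpha and p are empirical fit constants (commonly cited values for water:
# alpha ~ 0.0022 cm/MeV^p, p ~ 1.77); assumed here for illustration.
ALPHA = 0.0022   # cm * MeV^(-p)
P = 1.77

def proton_range_cm(energy_mev: float) -> float:
    """Approximate CSDA range in water of a therapeutic proton beam."""
    return ALPHA * energy_mev ** P

for e in (70, 150, 230):
    print(f"{e:3d} MeV -> {proton_range_cm(e):5.1f} cm")
```

    The roughly 4 cm to 33 cm spread over the clinical energy range (70-230 MeV) is why beam energy selection directly controls treatment depth.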

  10. The physics of proton therapy

    PubMed Central

    Newhauser, Wayne D; Zhang, Rui

    2015-01-01

    The physics of proton therapy has advanced considerably since it was proposed in 1946. Today analytical equations and numerical simulation methods are available to predict and characterize many aspects of proton therapy. This article reviews the basic aspects of the physics of proton therapy, including proton interaction mechanisms, proton transport calculations, the determination of dose from therapeutic and stray radiations, and shielding design. The article discusses underlying processes as well as selected practical experimental and theoretical methods. We conclude by briefly speculating on possible future areas of research of relevance to the physics of proton therapy. PMID:25803097

  11. Cooperating attackers in neural cryptography.

    PubMed

    Shacham, Lanir N; Klein, Einat; Mislovaty, Rachel; Kanter, Ido; Kinzel, Wolfgang

    2004-06-01

    A successful attack strategy in neural cryptography is presented. The neural cryptosystem, based on synchronization of neural networks by mutual learning, has been recently shown to be secure under different attack strategies. The success of the advanced attacker presented here, called the "majority-flipping attacker," does not decay with the parameters of the model. This attacker's outstanding success is due to its using a group of attackers which cooperate throughout the synchronization process, unlike any other attack strategy known. An analytical description of this attack is also presented, and fits the results of simulations.

  12. Advances in simultaneous DSC-FTIR microspectroscopy for rapid solid-state chemical stability studies: some dipeptide drugs as examples.

    PubMed

    Lin, Shan-Yang; Wang, Shun-Li

    2012-04-01

    The solid-state chemistry of drugs has seen growing importance in the pharmaceutical industry for the development of useful active pharmaceutical ingredients (APIs) and stable dosage forms. The stability of drugs in various solid dosage forms is an important issue because solid dosage forms are the most common pharmaceutical formulation in clinical use. In solid-state stability studies of drugs, an ideal accelerated method must not only be selected from among many complicated candidate methods, but must also detect the formation of degradation products. In this review article, an analytical technique combining differential scanning calorimetry and Fourier-transform infrared (DSC-FTIR) microspectroscopy simulates the accelerated stability test and simultaneously detects the decomposed products in real time. The pharmaceutical dipeptides aspartame hemihydrate, lisinopril dihydrate, and enalapril maleate, either with or without Eudragit E, were used as test examples. This one-step simultaneous DSC-FTIR technique for real-time detection of diketopiperazine (DKP) directly evidenced the dehydration process and DKP formation as an impurity common in pharmaceutical dipeptides. DKP formation in various dipeptides determined by different analytical methods has been collected and compiled. Although many analytical methods have been applied, the combined DSC-FTIR technique is an easy and fast analytical method which can not only simulate accelerated drug stability testing but also simultaneously explore phase transformation as well as degradation due to thermal reactions. This technique offers quick and proper interpretations. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. What's the Bottom Line

    EPA Science Inventory

    Advances in analytical instrumentation have not only increased the number and types of chemicals measured, but reduced the quantitation limits, allowing these chemicals to be detected at progressively lower concentrations in various environmental matrices. Such analytical advanc...

  14. Review and assessment of the database and numerical modeling for turbine heat transfer

    NASA Technical Reports Server (NTRS)

    Gladden, H. J.; Simoneau, R. J.

    1989-01-01

    The objectives of the NASA Hot Section Technology (HOST) Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena and to assess and improve the analytical methods used to predict the flow and heat transfer in high-temperature gas turbines. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. A building-block approach was utilized and the research ranged from the study of fundamental phenomena and modeling to experiments in simulated real engine environments. Experimental research accounted for approximately 75 percent of the funding while the analytical efforts were approximately 25 percent. A healthy government/industry/university partnership, with industry providing almost half of the research, was created to advance the turbine heat transfer design technology base.

  15. Analysis of uncertainties in turbine metal temperature predictions

    NASA Technical Reports Server (NTRS)

    Stepka, F. S.

    1980-01-01

    An analysis was conducted to examine the extent to which various factors influence the accuracy of analytically predicting turbine blade metal temperatures and to determine the uncertainties in these predictions for several accuracies of the influence factors. The advanced turbofan engine gas conditions of 1700 K and 40 atmospheres were considered along with those of a highly instrumented high temperature turbine test rig and a low temperature turbine rig that simulated the engine conditions. The analysis showed that the uncertainty in analytically predicting local blade temperature was as much as 98 K, or 7.6 percent of the metal absolute temperature, with current knowledge of the influence factors. The expected reductions in uncertainties in the influence factors with additional knowledge and tests should reduce the uncertainty in predicting blade metal temperature to 28 K, or 2.1 percent of the metal absolute temperature.
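    Combining independent influence-factor uncertainties into a total prediction uncertainty, as in the analysis above, is conventionally done by root-sum-square. A minimal sketch follows; the factor names and 1-sigma values are illustrative, not those of the NASA report.

```python
import math

# Illustrative 1-sigma uncertainty contributions (K) from influence factors
# on predicted blade metal temperature; values invented for the sketch.
contributions = {
    "gas_temperature": 60.0,
    "heat_transfer_coeff": 55.0,
    "coolant_flow": 40.0,
    "material_conductivity": 25.0,
}

def rss(errors):
    """Root-sum-square combination of independent uncertainties."""
    return math.sqrt(sum(e * e for e in errors))

total = rss(contributions.values())
print(f"combined uncertainty: {total:.1f} K")
```

    Tightening any single contribution (more test data, better correlations) reduces the combined value only in quadrature, which is why the report's projected improvement from 98 K to 28 K requires better knowledge of several factors at once.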

  16. Advanced analytical modeling of double-gate Tunnel-FETs - A performance evaluation

    NASA Astrophysics Data System (ADS)

    Graef, Michael; Hosenfeld, Fabian; Horst, Fabian; Farokhnejad, Atieh; Hain, Franziska; Iñíguez, Benjamín; Kloes, Alexander

    2018-03-01

    The Tunnel-FET is one of the most promising devices to be the successor of the standard MOSFET due to its alternative current transport mechanism, which allows a smaller subthreshold slope than the physically limited 60 mV/dec of the MOSFET. Recently fabricated devices show smaller slopes already but mostly not over multiple decades of the current transfer characteristics. In this paper the performance limiting effects, occurring during the fabrication process of the device, such as doping profiles and midgap traps are analyzed by physics-based analytical models and their performance limiting abilities are determined. Additionally, performance enhancing possibilities, such as hetero-structures and ambipolarity improvements are introduced and discussed. An extensive double-gate n-Tunnel-FET model is presented, which meets the versatile device requirements and shows a good fit with TCAD simulations and measurement data.

  17. Simulation of sampling effects in FPAs

    NASA Astrophysics Data System (ADS)

    Cook, Thomas H.; Hall, Charles S.; Smith, Frederick G.; Rogne, Timothy J.

    1991-09-01

    The use of multiplexers and large focal plane arrays in advanced thermal imaging systems has drawn renewed attention to sampling and aliasing issues in imaging applications. As evidenced by discussions in a recent workshop, there is no clear consensus among experts whether aliasing in sensor designs can be readily tolerated, or must be avoided at all cost. Further, there is no straightforward, analytical method that can answer the question, particularly when considering image interpreters as different as humans and autonomous target recognizers (ATRs). However, the means exist for investigating sampling and aliasing issues through computer simulation. The U.S. Army Tank-Automotive Command (TACOM) Thermal Image Model (TTIM) provides realistic sensor imagery that can be evaluated by both human observers and ATRs. This paper briefly describes the history and current status of TTIM, explains the simulation of FPA sampling effects, presents validation results of the FPA sensor model, and demonstrates the utility of TTIM for investigating sampling effects in imagery.
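    The aliasing concern discussed above comes down to the standard sampling relation: scene frequencies above the Nyquist limit fold back into the baseband. A small sketch of that folding (generic signal-processing math, not TTIM internals):

```python
def aliased_frequency(f_signal: float, f_sample: float) -> float:
    """Frequency at which f_signal appears after sampling at f_sample,
    folded into the baseband [0, f_sample/2] (classic aliasing formula)."""
    f = f_signal % f_sample
    return f_sample - f if f > f_sample / 2 else f

# e.g. a scene frequency of 70 cycles/mrad sampled by an FPA at
# 60 samples/mrad masquerades as a low-frequency artifact.
print(aliased_frequency(70, 60))
```

    Frequencies below half the sampling rate pass through unchanged; everything above it is indistinguishable, after sampling, from some lower frequency, which is why the effect cannot be removed by post-processing alone.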

  18. Delamination Defect Detection Using Ultrasonic Guided Waves in Advanced Hybrid Structural Elements

    NASA Astrophysics Data System (ADS)

    Yan, Fei; Qi, Kevin "Xue"; Rose, Joseph L.; Weiland, Hasso

    2010-02-01

    Nondestructive testing of multilayered structures is challenging because of increased numbers of layers and plate thicknesses. In this paper, ultrasonic guided waves are applied to detect delamination defects inside a 23-layer Alcoa Advanced Hybrid Structural plate. A semi-analytical finite element (SAFE) method generates dispersion curves and wave structures in order to select appropriate wave structures to detect certain defects. One guided wave mode and frequency are chosen to achieve large in-plane displacements at regions of interest. The interactions of the selected mode with defects are simulated using finite element models. Experiments are conducted and compared with bulk wave measurements. It is shown that guided waves can detect deeply embedded damage inside thick multilayer fiber-metal laminates with suitable mode and frequency selection.

  19. NFFA-Europe: enhancing European competitiveness in nanoscience research and innovation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Carsughi, Flavio; Fonseca, Luis

    2017-06-01

    NFFA-EUROPE is a European open-access resource for experimental and theoretical nanoscience that sets out a platform to carry out comprehensive projects for multidisciplinary research at the nanoscale, extending from synthesis to nano-characterization to theory and numerical simulation. Advanced infrastructures specializing in growth, nano-lithography, nano-characterization, theory and simulation, and fine analysis with synchrotron, FEL, and neutron radiation sources are integrated in a multi-site combination to develop frontier research on methods for reproducible nanoscience and to enable European and international researchers from diverse disciplines to carry out advanced proposals impacting science and innovation. NFFA-EUROPE will enable coordinated access to infrastructures covering different aspects of nanoscience research that is not currently available at any single specialized facility, without duplicating their specific scopes. Approved user projects will have access to the best-suited instruments and support competences for performing the research, including access to analytical large-scale facilities, theory and simulation, and high-performance computing facilities. Access is offered free of charge to European users, who will receive a financial contribution towards their travel, accommodation, and subsistence costs. User access will include several "installations" and will be coordinated through a single entry point portal that will activate an advanced user-infrastructure dialogue to build up a personalized access programme with an increasing return on science and innovation production. The research activity of NFFA-EUROPE itself will address key bottlenecks of nanoscience research: nanostructure traceability, protocol reproducibility, in-operando nano-manipulation and analysis, and open data.

  20. Analytical modeling of helium turbomachinery using FORTRAN 77

    NASA Astrophysics Data System (ADS)

    Balaji, Purushotham

    Advanced Generation IV modular reactors, including Very High Temperature Reactors (VHTRs), utilize helium as the working fluid, with a potential for high-efficiency power production utilizing helium turbomachinery. Helium is chemically inert and nonradioactive, which makes the gas ideal for a nuclear power-plant environment where radioactive leaks are a high concern. These properties help to increase safety as well as to slow the aging of plant components. The lack of sufficient helium turbomachinery data has made it difficult to study the vital role played by the gas turbine components of these VHTR-powered cycles. Therefore, this research work focuses on predicting the performance of helium compressors. A FORTRAN 77 program is developed to simulate helium compressor operation, including surge line prediction. The resulting design-point and off-design performance data can be used to develop compressor map files readable by the Numerical Propulsion System Simulation (NPSS) software. This multi-physics simulation software, originally developed for propulsion system analysis, has found applications in simulating power-plant cycles.

  1. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment*†

    PubMed Central

    Khan, Md. Ashfaquzzaman; Herbordt, Martin C.

    2011-01-01

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations. PMID:21822327

  2. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment.

    PubMed

    Khan, Md Ashfaquzzaman; Herbordt, Martin C

    2011-07-20

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations.
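    The event-driven advance that distinguishes DMD from timestep-driven MD can be sketched, in serial form, as a priority-queue dispatch loop. This is a generic illustration of advancing "by event rather than by timestep", not the authors' speculative parallel implementation.

```python
import heapq

def run_events(initial_events, handler, t_end):
    """Serial discrete-event loop: always commit the earliest pending event.
    handler(time, payload) may return new (time, payload) events to schedule."""
    queue = list(initial_events)
    heapq.heapify(queue)               # min-heap ordered by event time
    processed = 0
    while queue:
        t, payload = heapq.heappop(queue)
        if t > t_end:                  # past the simulation horizon
            break
        processed += 1
        for ev in handler(t, payload): # schedule any follow-up events
            heapq.heappush(queue, ev)
    return processed

# Toy example: each event at time t spawns one follow-up at t + 1.5,
# standing in for DMD collision events that schedule future collisions.
count = run_events([(0.0, "seed")], lambda t, p: [(t + 1.5, p)], t_end=10.0)
print(count)
```

    The scaling difficulty the paper addresses is visible even here: each commit may invalidate later queued events, so naive parallel processing of the queue is unsafe, motivating speculation with in-order commitment.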

  3. Liquid Oxygen/Liquid Methane Integrated Propulsion System Test Bed

    NASA Technical Reports Server (NTRS)

    Flynn, Howard; Lusby, Brian; Villemarette, Mark

    2011-01-01

    In support of NASA's Propulsion and Cryogenic Advanced Development (PCAD) project, a liquid oxygen (LO2)/liquid methane (LCH4) Integrated Propulsion System Test Bed (IPSTB) was designed and advanced to the Critical Design Review (CDR) stage at the Johnson Space Center. The IPSTB's primary objectives are to study LO2/LCH4 propulsion system steady-state and transient performance and operational characteristics, and to validate fluid and thermal models of a LO2/LCH4 propulsion system for use in future flight design work. Two-phase thermal and dynamic fluid flow models of the IPSTB were built to predict the system performance characteristics under a variety of operating modes and to aid in the overall system design work. While at ambient temperature and simulated altitude conditions at the White Sands Test Facility, the IPSTB and its approximately 600 channels of system instrumentation would be operated to perform a variety of integrated main engine and reaction control engine hot fire tests. The pressure, temperature, and flow rate data collected during this testing would then be used to validate the analytical models of the IPSTB's thermal and dynamic fluid flow performance. An overview of the IPSTB design and analytical model development will be presented.

  4. Extending the Constant Power Speed Range of the Brushless DC Motor through Dual Mode Inverter Control -- Part I: Theory and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawler, J.S.

    2001-10-29

    An inverter topology and control scheme has been developed that can drive low-inductance, surface-mounted permanent magnet motors over the wide constant power speed range required in electric vehicle applications. This new controller is called the dual-mode inverter control (DMIC) [1]. The DMIC can drive either the Permanent Magnet Synchronous Machine (PMSM) with sinusoidal back emf, or the brushless dc machine (BDCM) with trapezoidal emf, in the motoring and regenerative braking modes. In this paper we concentrate on the BDCM under high-speed motoring conditions. Simulation results show that if all motor and inverter loss mechanisms are neglected, the constant power speed range of the DMIC is infinite. The simulation results are supported by closed-form expressions for peak and rms motor current and average power, derived from an analytical solution to the differential equations governing the DMIC/BDCM drive for the lossless case. The analytical solution shows that the range of motor inductance that can be accommodated by the DMIC spans more than an order of magnitude, such that the DMIC is compatible with both low- and high-inductance BDCMs. Finally, a method is given for integrating the classical hysteresis-band current control, used for motor control below base speed, with the phase advance of the DMIC that is applied above base speed. The power versus speed performance of the DMIC is then simulated across the entire speed range.

  5. Advanced Methods for Aircraft Engine Thrust and Noise Benefits: Nozzle-Inlet Flow Analysis

    NASA Technical Reports Server (NTRS)

    Morgan, Morris H.; Gilinsky, Mikhail; Patel, Kaushal; Coston, Calvin; Blankson, Isaiah M.

    2003-01-01

    The research is focused on a wide range of problems in the propulsion field, as well as on experimental testing and theoretical and numerical simulation analyses for advanced aircraft and rocket engines. Results obtained are based on analytical methods, numerical simulations, and experimental tests at the NASA LaRC and Hampton University computer complexes and experimental facilities. The main objective of this research is injection, mixing, and combustion enhancement in propulsion systems. The sub-projects in the reporting period are: (A) Aero-performance and acoustics of Telescope-shaped designs; this work included a pylon set application for SCRAMJET. (B) An analysis of sharp-edged nozzle exit designs for effective fuel injection into the flow stream in air-breathing engines: triangular-round and diamond-round nozzles. (C) Measurement technique improvements for the HU Low Speed Wind Tunnel (HU LSWT), including an automatic data acquisition system and a two-component (drag-lift) balance system. In addition, a course in the field of aerodynamics was developed for the teaching and training of HU students.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaurov, Alexander A., E-mail: kaurov@uchicago.edu

    The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. It then allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, while reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.
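    The excursion-set side of this comparison is easy to prototype. The sketch below runs Markovian excursion-set random walks (a sharp-k filter is assumed, so increments are independent) against a constant barrier and compares the first-crossing fraction with the reflection-principle result erfc(delta_c / sqrt(2 S)); the barrier value and step size are illustrative assumptions.

```python
import numpy as np
from math import erfc, sqrt

def first_crossing_fraction(n_walks=8000, s_max=4.0, ds=0.01,
                            delta_c=1.686, seed=1):
    # The density contrast takes independent Gaussian steps of variance
    # ds as the mass variance S grows; record which walks have crossed
    # the barrier delta_c by the time S reaches s_max.
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, sqrt(ds), size=(n_walks, int(s_max / ds)))
    walks = np.cumsum(steps, axis=1)
    return float((walks.max(axis=1) >= delta_c).mean())

emp = first_crossing_fraction()
ana = erfc(1.686 / sqrt(2.0 * 4.0))   # reflection-principle prediction
```

The small residual difference comes from the finite step size, which misses barrier excursions between grid points.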

  7. Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control. Part 2; Validation Results

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Ottenstein, Laura; Douglas, Donya; Hoang, Triem

    2010-01-01

    Under NASA's New Millennium Program Space Technology 8 (ST 8) Project, Goddard Space Flight Center has conducted a Thermal Loop experiment to advance the maturity of the Thermal Loop technology from proof of concept to prototype demonstration in a relevant environment, i.e., from a technology readiness level (TRL) of 3 to a level of 6. The Thermal Loop is an advanced thermal control system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers designed for future small-system applications requiring low mass, low power, and compactness. The MLHP retains all features of state-of-the-art loop heat pipes (LHPs) and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. An MLHP breadboard was built and tested in laboratory and thermal vacuum environments for the TRL 4 and TRL 5 validations, respectively, and an MLHP proto-flight unit was built and tested in a thermal vacuum chamber for the TRL 6 validation. In addition, an analytical model was developed to simulate the steady-state and transient behaviors of the MLHP during various validation tests. The MLHP demonstrated excellent performance during experimental tests, and the analytical model predictions agreed very well with experimental data. All success criteria at the various TRLs were met. Hence, the Thermal Loop technology has reached a TRL of 6. This paper presents the validation results, both experimental and analytical, of this technology development effort.

  8. Advances in computer simulation of genome evolution: toward more realistic evolutionary genomics analysis by approximate bayesian computation.

    PubMed

    Arenas, Miguel

    2015-04-01

    NGS technologies enable fast and cheap generation of genomic data. Nevertheless, ancestral genome inference is not so straightforward, owing to complex evolutionary processes acting on this material, such as inversions, translocations, and other genome rearrangements that, in addition to their implicit complexity, can co-occur and confound ancestral inferences. Recently, models of genome evolution that accommodate such complex genomic events have been emerging. This letter explores these novel evolutionary models and proposes their incorporation into robust statistical approaches based on computer simulations, such as approximate Bayesian computation, which may produce a more realistic evolutionary analysis of genomic data. Advantages and pitfalls of these analytical methods are discussed. Potential applications of these ancestral genomic inferences are also pointed out.

  9. Effect of Advanced Trauma Life Support program on medical interns' performance in simulated trauma patient management.

    PubMed

    Ahmadi, Koorosh; Sedaghat, Mohammad; Safdarian, Mahdi; Hashemian, Amir-Masoud; Nezamdoust, Zahra; Vaseie, Mohammad; Rahimi-Movaghar, Vafa

    2013-01-01

    Since appropriate and timely methods in trauma care have an important impact on patients' outcomes, we evaluated the effect of the Advanced Trauma Life Support (ATLS) program on medical interns' performance in simulated trauma patient management. A descriptive and analytical before-and-after study was conducted on 24 randomly selected undergraduate medical interns from Imam Reza Hospital in Mashhad, Iran. On the first day, we assessed interns' clinical knowledge and practical skill performance in confronting simulated trauma patients. After 2 days of ATLS training, we repeated the assessment and evaluated their scores again on the fourth day. The pre- and post-ATLS findings were compared using SPSS version 15.0; P values less than 0.05 were considered statistically significant. Our findings showed that interns' ability in all three tasks improved after the training course. On the fourth day after training, there was a statistically significant increase in interns' clinical knowledge of ATLS procedures, the sequence of procedures, and skill performance in trauma situations (P < 0.001, P = 0.016, and P = 0.01, respectively). The ATLS course has an important role in increasing clinical knowledge and practical skill performance of trauma care in medical interns.

  10. Gravitational waveforms for neutron star binaries from binary black hole simulations

    NASA Astrophysics Data System (ADS)

    Barkett, Kevin; Scheel, Mark; Haas, Roland; Ott, Christian; Bernuzzi, Sebastiano; Brown, Duncan; Szilagyi, Bela; Kaplan, Jeffrey; Lippuner, Jonas; Muhlberger, Curran; Foucart, Francois; Duez, Matthew

    2016-03-01

    Gravitational waves from binary neutron star (BNS) and black-hole/neutron star (BHNS) inspirals are primary sources for detection by the Advanced Laser Interferometer Gravitational-Wave Observatory. The tidal forces acting on the neutron stars induce changes in the phase evolution of the gravitational waveform, and these changes can be used to constrain the nuclear equation of state. Current methods of generating BNS and BHNS waveforms rely on either computationally challenging full 3D hydrodynamical simulations or approximate analytic solutions. We introduce a new method for computing inspiral waveforms for BNS/BHNS systems by adding the post-Newtonian (PN) tidal effects to full numerical simulations of binary black holes (BBHs), effectively replacing the non-tidal terms in the PN expansion with BBH results. Comparing a waveform generated with this method against a full hydrodynamical simulation of a BNS inspiral yields a phase difference of < 1 radian over ~ 15 orbits. The numerical phase accuracy required of BNS simulations to measure the accuracy of the method we present here is estimated as a function of the tidal deformability parameter λ.
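    The hybridization idea, adding a PN tidal phase on top of a point-particle (BBH) phase, can be sketched in a few lines. The function below implements one common leading-order frequency-domain tidal term from the TaylorF2 literature; treating this as the leading-order form, and the chosen masses and tidal deformability, are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

MSUN_S = 4.925491e-6      # solar mass in seconds (geometric units, G = c = 1)

def tidal_phase(f_hz, m1, m2, lam_tilde):
    # Leading-order tidal contribution to a TaylorF2-style frequency-domain
    # phase (assumed form): masses in solar masses, lam_tilde the combined
    # dimensionless tidal deformability of the binary.
    M = (m1 + m2) * MSUN_S
    eta = m1 * m2 / (m1 + m2)**2
    v = (np.pi * M * f_hz)**(1.0 / 3.0)          # PN expansion parameter
    return (3.0 / (128.0 * eta * v**5)) * (-(39.0 / 2.0) * lam_tilde * v**10)

# Hybrid-waveform idea: psi_total(f) = psi_BBH(f) + tidal_phase(f, ...),
# where psi_BBH comes from a numerical binary-black-hole simulation.
```

The correction is negative and grows steeply with frequency, which is why the late inspiral carries most of the tidal information.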

  12. Analytical reverse time migration: An innovation in imaging of infrastructures using ultrasonic shear waves.

    PubMed

    Asadollahi, Aziz; Khazanovich, Lev

    2018-04-11

    The emergence of ultrasonic dry point contact (DPC) transducers that emit horizontal shear waves has enabled efficient collection of high-quality data in the context of a nondestructive evaluation of concrete structures. This offers an opportunity to improve the quality of evaluation by adapting advanced imaging techniques. Reverse time migration (RTM) is a simulation-based reconstruction technique that offers advantages over conventional methods, such as the synthetic aperture focusing technique. RTM is capable of imaging boundaries and interfaces with steep slopes and the bottom boundaries of inclusions and defects. However, this imaging technique requires a massive amount of memory and its computation cost is high. In this study, both bottlenecks of the RTM are resolved when shear transducers are used for data acquisition. An analytical approach was developed to obtain the source and receiver wavefields needed for imaging using reverse time migration. It is shown that the proposed analytical approach not only eliminates the high memory demand, but also drastically reduces the computation time from days to minutes.

  13. Study of advanced fuel system concepts for commercial aircraft

    NASA Technical Reports Server (NTRS)

    Coffinberry, G. A.

    1985-01-01

    An analytical study was performed in order to assess relative performance and economic factors involved with alternative advanced fuel systems for future commercial aircraft operating with broadened property fuels. The DC-10-30 wide-body tri-jet aircraft and the CF6-8OX engine were used as a baseline design for the study. Three advanced systems were considered and were specifically aimed at addressing freezing point, thermal stability and lubricity fuel properties. Actual DC-10-30 routes and flight profiles were simulated by computer modeling and resulted in prediction of aircraft and engine fuel system temperatures during a nominal flight and during statistical one-day-per-year cold and hot flights. Emergency conditions were also evaluated. Fuel consumption and weight and power extraction results were obtained. An economic analysis was performed for new aircraft and systems. Advanced system means for fuel tank heating included fuel recirculation loops using engine lube heat and generator heat. Environmental control system bleed air heat was used for tank heating in a water recirculation loop. The results showed that fundamentally all of the three advanced systems are feasible but vary in their degree of compatibility with broadened-property fuel.

  14. Application of Characterization, Modeling, and Analytics Towards Understanding Process Structure Linkages in Metallic 3D Printing (Postprint)

    DTIC Science & Technology

    2017-08-01

    of metallic additive manufacturing processes and show that combining experimental data with modelling and advanced data processing and analytics methods will accelerate that...geometries, we develop a methodology that couples experimental data and modelling to convert the scan paths into spatially resolved local thermal histories

  15. NREL’s Advanced Analytics Research for Buildings – Social Media Version

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Forty percent of the total energy consumption in the United States comes from buildings. Working together, we can dramatically shrink that number. NREL’s advanced analytics research has already proven to reduce energy use, save money, and stabilize the grid.

  16. Implicit solution of Navier-Stokes equations on staggered curvilinear grids using a Newton-Krylov method with a novel analytical Jacobian.

    NASA Astrophysics Data System (ADS)

    Borazjani, Iman; Asgharzadeh, Hafez

    2015-11-01

    Flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions. However, implementing an implicit solver for nonlinear equations such as the Navier-Stokes equations is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving nonlinear equations such as the implicit discretization of the Navier-Stokes equations. The efficiency of NKMs depends strongly on the Jacobian formation method; e.g., automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is an inexpensive alternative, but deriving the analytical Jacobian for the Navier-Stokes equations on a staggered grid is challenging. An NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90 degree bend. The developed method successfully handled complex geometries, such as an intracranial aneurysm with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid (size) and the flow problem. The developed methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested.

  17. The investigation of advanced remote sensing, radiative transfer and inversion techniques for the measurement of atmospheric constituents

    NASA Technical Reports Server (NTRS)

    Deepak, Adarsh; Wang, Pi-Huan

    1985-01-01

    The research program is documented for developing space and ground-based remote sensing techniques performed during the period from December 15, 1977 to March 15, 1985. The program involved the application of sophisticated radiative transfer codes and inversion methods to various advanced remote sensing concepts for determining atmospheric constituents, particularly aerosols. It covers detailed discussions of the solar aureole technique for monitoring columnar aerosol size distribution, and the multispectral limb scattered radiance and limb attenuated radiance (solar occultation) techniques, as well as the upwelling scattered solar radiance method for determining the aerosol and gaseous characteristics. In addition, analytical models of aerosol size distribution and simulation studies of the limb solar aureole radiance technique and the variability of ozone at high altitudes during satellite sunrise/sunset events are also described in detail.

  18. Introduction to the Special Issue: Advancing the State-of-the-Science in Reading Research through Modeling.

    PubMed

    Zevin, Jason D; Miller, Brett

    Reading research is increasingly a multi-disciplinary endeavor involving more complex, team-based science approaches. These approaches offer the potential of capturing the complexity of reading development, the emergence of individual differences in reading performance over time, how these differences relate to the development of reading difficulties and disability, and a fuller understanding of the nature of skilled reading in adults. This special issue focuses on the potential opportunities and insights that early and richly integrated advanced statistical and computational modeling approaches can provide to our foundational (and translational) understanding of reading. The issue explores how computational and statistical modeling, using both observed and simulated data, can serve as a contact point among research domains and topics, complement other data sources, and provide critical analytic advantages over current approaches.

  19. Three dimensional calculation of thermonuclear ignition conditions for magnetized targets

    NASA Astrophysics Data System (ADS)

    Cortez, Ross; Cassibry, Jason; Lapointe, Michael; Adams, Robert

    2017-10-01

    Fusion power balance calculations, often performed using analytic methods, are used to estimate the design space for ignition conditions. In this paper, fusion power balance is calculated utilizing a 3-D smoothed particle hydrodynamics code (SPFMax) incorporating recent stopping power routines. Effects of thermal conduction, multigroup radiation emission and nonlocal absorption, ion/electron thermal equilibration, and compressional work are studied as a function of target and liner parameters and geometry for D-T, D-D, and 6Li-D fuels to identify the potential ignition design space. Here, ignition is defined as the condition when fusion particle deposition equals or exceeds the losses from heat conduction and radiation. The simulations are in support of ongoing research with NASA to develop advanced propulsion systems for rapid interplanetary space travel. Supported by NASA Innovative Advanced Concepts and NASA Marshall Space Flight Center.

  20. Nonequilibrium quantum thermodynamics in Coulomb crystals

    NASA Astrophysics Data System (ADS)

    Cosco, F.; Borrelli, M.; Silvi, P.; Maniscalco, S.; De Chiara, G.

    2017-06-01

    We present an in-depth study of the nonequilibrium statistics of the irreversible work produced during sudden quenches in proximity to the structural linear-zigzag transition of ion Coulomb crystals in 1+1 dimensions. By employing both an analytical approach based on a harmonic expansion and numerical simulations, we show the divergence of the average irreversible work in proximity to the transition. We show that the nonanalytic behavior of the work fluctuations can be characterized in terms of the critical exponents of the quantum Ising chain. Due to the technological advancements in trapped-ion experiments, our results can be readily verified.

  1. Turbine blade tip durability analysis

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.; Laflen, J. H.; Spamer, G. T.

    1981-01-01

    An air-cooled turbine blade from an aircraft gas turbine engine, chosen for its history of cracking, was subjected to advanced analytical and life-prediction techniques. The utility of advanced structural analysis and life-prediction techniques in the life assessment of hot-section components is verified. Three-dimensional heat transfer and stress analyses were applied to the turbine blade mission cycle, and the results were input into advanced life-prediction theories. Shortcut analytical techniques were developed. The proposed life-prediction theories are evaluated.

  2. Structural analysis, electronic properties, and band gaps of a graphene nanoribbon: A new 2D material

    NASA Astrophysics Data System (ADS)

    Dass, Devi

    2018-03-01

    Graphene nanoribbon (GNR), a new 2D carbon nanomaterial, has unique features and special properties that offer great potential for interconnects, nanoelectronic devices, optoelectronics, and nanophotonics. This paper reports the structural analysis, electronic properties, and band gaps of GNRs for different chirality combinations, obtained using the pz-orbital tight-binding model. In the structural analysis, analytical expressions for GNRs have been developed and verified against simulation for the first time. The total number of unit cells and carbon atoms within the overall unit cell and molecular structure of a GNR changes with the chirality values, and the simulated numbers match those calculated from the developed analytical expressions, validating both the simulation and the analytical results. Further, the electronic band structures at different chirality values are shown to identify the metallic and semiconducting properties of a GNR. It is concluded that all zigzag-edge GNRs are metallic with very small band gaps, whereas armchair GNRs show both metallic and semiconducting behavior, with band gaps ranging from very small to large. The total number of subbands in each electronic band structure equals the total number of carbon atoms in the overall unit cell of the corresponding GNR. Semiconducting GNRs can be used as a channel material in field-effect transistors for advanced CMOS technology, whereas metallic GNRs could be used for interconnects.
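    The armchair metallic/semiconducting alternation described above can be reproduced from a hard-wall-quantized tight-binding dispersion. The sketch below assumes the standard closed form E_n(k=0) = t|1 + 2 cos(n*pi/(N+1))| for an N-dimer-line armchair GNR (an assumption of this illustration, not an expression from the paper) and recovers the familiar rule that armchair ribbons with N = 3m + 2 are gapless in nearest-neighbor tight binding.

```python
import numpy as np

def agnr_gap(N, t=2.7):
    # Band gap (eV) of an N-dimer-line armchair GNR, assuming the
    # hard-wall quantized bands E_n(k=0) = t * |1 + 2*cos(theta_n)|,
    # theta_n = n*pi/(N+1); t is the hopping energy in eV.
    n = np.arange(1, N + 1)
    return float(2.0 * t * np.min(np.abs(1.0 + 2.0 * np.cos(n * np.pi / (N + 1)))))

gaps = {N: round(agnr_gap(N), 3) for N in range(3, 13)}
# Ribbons with N = 3m + 2 (N = 5, 8, 11, ...) come out gapless (metallic);
# the others have gaps on the order of an eV at these widths.
```

Edge-bond relaxation and electron-electron effects, ignored here, open small gaps even in the "metallic" families.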

  3. Thermal transport in dimerized harmonic lattices: Exact solution, crossover behavior, and extended reservoirs

    NASA Astrophysics Data System (ADS)

    Chien, Chih-Chun; Kouachi, Said; Velizhanin, Kirill A.; Dubi, Yonatan; Zwolak, Michael

    2017-01-01

    We present a method for calculating analytically the thermal conductance of a classical harmonic lattice with both alternating masses and nearest-neighbor couplings when placed between individual Langevin reservoirs at different temperatures. The method utilizes recent advances in analytic diagonalization techniques for certain classes of tridiagonal matrices. It recovers the results from a previous method that was applicable for alternating on-site parameters only, and extends the applicability to realistic systems in which masses and couplings alternate simultaneously. With this analytic result in hand, we show that the thermal conductance is highly sensitive to the modulation of the couplings. This is due to the existence of topologically induced edge modes at the lattice-reservoir interface and is also a reflection of the symmetries of the lattice. We make a connection to a recent work that demonstrates thermal transport is analogous to chemical reaction rates in solution given by Kramers' theory [Velizhanin et al., Sci. Rep. 5, 17506 (2015)], 10.1038/srep17506. In particular, we show that the turnover behavior in the presence of edge modes prevents calculations based on single-site reservoirs from coming close to the natural—or intrinsic—conductance of the lattice. Obtaining the correct value of the intrinsic conductance through simulation of even a small lattice where ballistic effects are important requires quite large extended reservoir regions. Our results thus offer a route for both the design and proper simulation of thermal conductance of nanoscale devices.
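    The closed-form eigenvalues that such analytic diagonalization techniques provide are easy to verify numerically in the simplest (uniform, non-dimerized) case. The sketch below compares the textbook eigenvalues of a symmetric tridiagonal Toeplitz matrix against direct numerical diagonalization; the matrix entries are illustrative, not the paper's mass/coupling values.

```python
import numpy as np

def tridiag_toeplitz_eigs(a, b, n):
    # Closed-form eigenvalues of the n x n symmetric tridiagonal Toeplitz
    # matrix with diagonal a and off-diagonals b:
    #   lambda_k = a + 2*b*cos(k*pi/(n+1)),  k = 1..n
    k = np.arange(1, n + 1)
    return a + 2.0 * b * np.cos(k * np.pi / (n + 1))

n = 40
a, b = 2.0, -1.0          # illustrative values (e.g. a scaled uniform spring chain)
M = a * np.eye(n) + b * (np.eye(n, k=1) + np.eye(n, k=-1))
analytic = np.sort(tridiag_toeplitz_eigs(a, b, n))
numeric = np.linalg.eigvalsh(M)           # ascending numerical spectrum
err = float(np.max(np.abs(analytic - numeric)))
```

The dimerized (alternating) case treated in the paper requires the more recent tridiagonal results the abstract cites; the uniform check above only confirms the baseline formula.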

  4. On the first crossing distributions in fractional Brownian motion and the mass function of dark matter haloes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiotelis, Nicos; Popolo, Antonino Del, E-mail: adelpopolo@oact.inaf.it, E-mail: hiotelis@ipta.demokritos.gr

    We construct an integral equation for the first crossing distributions for fractional Brownian motion in the case of a constant barrier, and we present an exact analytical solution. Additionally, we present first crossing distributions derived by simulating paths from fractional Brownian motion. We compare the results of the analytical solutions with both those of simulations and those of some approximated solutions which have been used in the literature. Finally, we present multiplicity functions for dark matter structures resulting from our analytical approach and compare with those resulting from N-body simulations. We show that the results of the analytical solutions are in good agreement with those of path simulations but differ significantly from those derived from approximated solutions. Additionally, multiplicity functions derived from fractional Brownian motion are poor fits to those which result from N-body simulations. We also present comparisons with other models which exist in the literature, and we discuss different ways of improving the agreement between analytical results and N-body simulations.
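    First-crossing distributions from simulated fBm paths, as used above, can be prototyped with exact Cholesky sampling of the fBm covariance. The sketch below estimates the probability of crossing a constant barrier by time T and, for H = 1/2 (ordinary Brownian motion), checks it against the reflection-principle result; the grid size, barrier, and path count are illustrative assumptions.

```python
import numpy as np
from math import erfc, sqrt

def fbm_first_crossing(H=0.5, barrier=1.0, T=1.0, n=256,
                       n_paths=4000, seed=2):
    # Exact fBm sampling on a grid via Cholesky factorization of the
    # covariance C(s, t) = (s^(2H) + t^(2H) - |s - t|^(2H)) / 2.
    t = T * np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability
    z = np.random.default_rng(seed).normal(size=(n_paths, n))
    paths = z @ L.T                                   # rows are fBm paths
    return float((paths.max(axis=1) >= barrier).mean())

emp = fbm_first_crossing(H=0.5)
ana = erfc(1.0 / sqrt(2.0))   # H = 1/2 reduces to Brownian motion
```

For H other than 1/2 no such closed form exists, which is exactly where the integral-equation approach of the paper comes in; the discrete grid slightly underestimates crossings for any H.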

  5. Advances in edge-diffraction modeling for virtual-acoustic simulations

    NASA Astrophysics Data System (ADS)

    Calamia, Paul Thomas

    In recent years there has been growing interest in modeling sound propagation in complex, three-dimensional (3D) virtual environments. With diverse applications for the military, the gaming industry, psychoacoustics researchers, architectural acousticians, and others, advances in computing power and 3D audio-rendering techniques have driven research and development aimed at closing the gap between the auralization and visualization of virtual spaces. To this end, this thesis focuses on improving the physical and perceptual realism of sound-field simulations in virtual environments through advances in edge-diffraction modeling. To model sound propagation in virtual environments, acoustical simulation tools commonly rely on geometrical-acoustics (GA) techniques that assume asymptotically high frequencies, large flat surfaces, and infinitely thin ray-like propagation paths. Such techniques can be augmented with diffraction modeling to compensate for the effect of surface size on the strength and directivity of a reflection, to allow for propagation around obstacles and into shadow zones, and to maintain soundfield continuity across reflection and shadow boundaries. Using a time-domain, line-integral formulation of the Biot-Tolstoy-Medwin (BTM) diffraction expression, this thesis explores various aspects of diffraction calculations for virtual-acoustic simulations. Specifically, we first analyze the periodic singularity of the BTM integrand and describe the relationship between the singularities and higher-order reflections within wedges with open angle less than 180°. Coupled with analytical approximations for the BTM expression, this analysis allows for accurate numerical computations and a continuous sound field in the vicinity of an arbitrary wedge geometry insonified by a point source. Second, we describe an edge-subdivision strategy that allows for fast diffraction calculations with low error relative to a numerically more accurate solution. 
Third, to address the considerable increase in propagation paths due to diffraction, we describe a simple procedure for identifying and culling insignificant diffraction components during a virtual-acoustic simulation. Finally, we present a novel method to find GA components using diffraction parameters that ensures continuity at reflection and shadow boundaries.

  6. Development of MPS Method for Analyzing Melt Spreading Behavior and MCCI in Severe Accidents

    NASA Astrophysics Data System (ADS)

    Yamaji, Akifumi; Li, Xin

    2016-08-01

    Spreading of molten core (corium) on the reactor containment vessel floor and molten corium-concrete interaction (MCCI) are important phenomena in the late phase of a severe accident for assessment of containment integrity and severe accident management. Severe accident research at Waseda University has been advancing to show that simulations with the moving particle semi-implicit (MPS) method (one of the particle methods) can greatly improve the analytical capability and mechanistic understanding of melt behavior in severe accidents. MPS models have been developed and verified for calculations of the radiation and thermal fields, solid-liquid phase transition, buoyancy, and temperature-dependent viscosity to simulate phenomena such as spreading of corium, ablation of concrete by the corium, crust formation, and cooling of the corium by top flooding. Validations have been conducted against experiments such as FARO L26S, ECOKATS-V1, Theofanous, and SPREAD for spreading, and SURC-2, SURC-4, SWISS-1, and SWISS-2 for MCCI. These validations cover melt spreading behaviors and MCCI for mixtures of molten oxides (including prototypic UO2-ZrO2), metals, and water. Generally, the analytical results show good agreement with the experiments with respect to the leading edge of the spreading melt and the ablation front history of the concrete. The MPS results indicate that crust formation may play an important role in melt spreading and MCCI. As future work, there is a need to develop a code for two-dimensional MCCI experiment simulation with the MPS method, which will be able to simulate anisotropic ablation of concrete.

  7. Sci—Fri PM: Topics — 05: Experience with linac simulation software in a teaching environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlone, Marco; Harnett, Nicole; Jaffray, David

    Medical linear accelerator education is usually restricted to the use of academic textbooks and supervised access to accelerators. To facilitate the learning process, simulation software was developed to reproduce the effect of medical linear accelerator beam adjustments on the resulting clinical photon beams. The purpose of this report is to briefly describe the method of operation of the software as well as the initial experience with it in a teaching environment. To first and higher orders, all components of medical linear accelerators can be described by analytical solutions. When appropriate calibrations are applied, these analytical solutions can accurately simulate the performance of all linear accelerator sub-components. Grouped together, an overall medical linear accelerator model can be constructed. Fifteen expressions in total were coded using MATLAB v7.14; the program was called SIMAC. The SIMAC program was used in an accelerator technology course offered at our institution; 14 delegates attended the course. The professional breakdown of the participants was: 5 physics residents, 3 accelerator technologists, 4 regulators, and 1 physics associate. The course consisted of didactic lectures supported by labs using SIMAC. At the conclusion of the course, eight of thirteen delegates were able to successfully perform advanced beam adjustments after two days of theory and use of the linac simulator program. We suggest that this demonstrates good proficiency in understanding of the accelerator physics, which we hope will translate to a better ability to understand real-world beam adjustments on a functioning medical linear accelerator.

  8. Optronic System Imaging Simulator (OSIS): imager simulation tool of the ECOMOS project

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2018-04-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defense and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses two approaches to calculate TA ranges: the analytical TRM4 model and the image-based Triangle Orientation Discrimination (TOD) model. In this paper the IR imager simulation tool, Optronic System Imaging Simulator (OSIS), is presented. It produces the virtual camera imagery required by the TOD approach. Pristine imagery is degraded by various effects caused by atmospheric attenuation, optics, detector footprint, sampling, fixed pattern noise, temporal noise, and digital signal processing. Resulting images might be presented to observers or could be further processed for automatic image quality calculations. For convenience, OSIS incorporates camera descriptions and intermediate results provided by TRM4. For input, OSIS uses pristine imagery paired with meta-information about scene content, its physical dimensions, and gray level interpretation. These images represent planar targets placed at specified distances to the imager. Furthermore, OSIS is extended by a plugin functionality that enables integration of advanced digital signal processing techniques in ECOMOS, such as compression, local contrast enhancement, and digital turbulence mitigation, to name but a few. By means of this image-based approach, image degradations and image enhancements can be investigated, which goes beyond the scope of the analytical TRM4 model.
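    The degradation chain OSIS applies can be imitated generically. The sketch below is not the ECOMOS/OSIS code and every parameter is an illustrative assumption: it blurs a pristine bar pattern with a Gaussian optics kernel, block-averages over a detector footprint to model sampling, and adds per-column fixed-pattern noise plus temporal noise.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma)**2)
    return k / k.sum()

def degrade(img, sigma=1.5, pitch=4, temporal_sd=0.01, fpn_sd=0.005, seed=0):
    # Optics blur -> detector-footprint averaging -> fixed-pattern noise
    # -> temporal noise.  Separable Gaussian blur via 1D convolutions.
    rng = np.random.default_rng(seed)
    k = gaussian_kernel(sigma, int(3 * sigma) + 1)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, blurred)
    h = (blurred.shape[0] // pitch) * pitch
    w = (blurred.shape[1] // pitch) * pitch
    sampled = blurred[:h, :w].reshape(h // pitch, pitch,
                                      w // pitch, pitch).mean(axis=(1, 3))
    fpn = rng.normal(0.0, fpn_sd, sampled.shape[1])     # per-column offsets
    return sampled + fpn + rng.normal(0.0, temporal_sd, sampled.shape)

scene = np.zeros((64, 64))
scene[:, ::8] = 1.0        # pristine vertical bar target
out = degrade(scene)
```

Observer trials or automated metrics would then be run on `out` at a range of simulated target distances.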

  9. Aggregated N-of-1 randomized controlled trials: modern data analytics applied to a clinically valid method of intervention effectiveness.

    PubMed

    Cushing, Christopher C; Walters, Ryan W; Hoffman, Lesa

    2014-03-01

    Aggregated N-of-1 randomized controlled trials (RCTs) combined with multilevel modeling represent a methodological advancement that may help bridge science and practice in pediatric psychology. The purpose of this article is to offer a primer for pediatric psychologists interested in conducting aggregated N-of-1 RCTs. An overview of N-of-1 RCT methodology is provided and 2 simulated data sets are analyzed to demonstrate the clinical and research potential of the methodology. The simulated data example demonstrates the utility of aggregated N-of-1 RCTs for understanding the clinical impact of an intervention for a given individual and the modeling of covariates to explain why an intervention worked for one patient and not another. Aggregated N-of-1 RCTs hold potential for improving the science and practice of pediatric psychology.
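    A minimal sketch of the N-of-1 idea on simulated data (the effect sizes, phase lengths and noise level are illustrative assumptions, and a full analysis would use multilevel modeling rather than the simple phase-mean contrast shown here):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_patient(effect, n_per_phase=10, noise=1.0):
        """One ABAB N-of-1 trial: alternating baseline (A) and treatment (B) phases."""
        phases = np.repeat([0, 1, 0, 1], n_per_phase)   # 0 = A, 1 = B
        y = 5.0 + effect * phases + rng.normal(0.0, noise, phases.size)
        return phases, y

    # Patient 1 responds to the intervention; patient 2 does not.
    effects = {}
    for pid, true_effect in [(1, 2.0), (2, 0.0)]:
        phases, y = simulate_patient(true_effect)
        effects[pid] = y[phases == 1].mean() - y[phases == 0].mean()
    ```

    Aggregating such per-patient estimates, with covariates, is where the multilevel model in the article comes in.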

  10. Advancement of CMOS Doping Technology in an External Development Framework

    NASA Astrophysics Data System (ADS)

    Jain, Amitabh; Chambers, James J.; Shaw, Judy B.

    2011-01-01

    The consumer appetite for a rich multimedia experience drives technology development for mobile hand-held devices and the infrastructure to support them. Enhancements in functionality, speed, and user experience are derived from advancements in CMOS technology. The technical challenges in developing each successive CMOS technology node to support these enhancements have become increasingly difficult. These trends have motivated the CMOS business towards a collaborative approach based on strategic partnerships. This paper describes our model and experience of CMOS development, based on multi-dimensional industrial and academic partnerships. We provide to our process equipment, materials, and simulation partners, as well as to our silicon foundry partners, the detailed requirements for future integrated circuit products. This is done very early in the development cycle to ensure that these requirements can be met. In order to determine these fundamental requirements, we rely on a strategy that requires strong interaction between process and device simulation, physical and chemical analytical methods, and research at academic institutions. This learning is shared with each project partner to address integration and manufacturing issues encountered during CMOS technology development from its inception through product ramp. We utilize TI's core strengths in physical analysis, unit processes and integration, yield ramp, reliability, and product engineering to support this technological development. Finally, this paper presents examples of the advancement of CMOS doping technology for the 28 nm node and beyond through this development model.

  11. Theory, modeling, and simulation of structural and functional materials: Micromechanics, microstructures, and properties

    NASA Astrophysics Data System (ADS)

    Jin, Yongmei

    In recent years, theoretical modeling and computational simulation of microstructure evolution and materials properties have been attracting much attention. While significant advances have been made, two major challenges remain. One is the integration of multiple physical phenomena for simulation of complex materials behavior; the other is the bridging of multiple length and time scales in materials modeling and simulation. The research presented in this Thesis is focused mainly on tackling the first major challenge. In this Thesis, a unified Phase Field Microelasticity (PFM) approach is developed. This approach is an advanced version of the phase field method that takes into account the exact elasticity of arbitrarily anisotropic, elastically and structurally inhomogeneous systems. The proposed theory and models are applicable to infinite solids, elastic half-space, and finite bodies with arbitrary-shaped free surfaces, which may undergo various concomitant physical processes. The Phase Field Microelasticity approach is employed to formulate the theories and models of martensitic transformation, dislocation dynamics, and crack evolution in single crystal and polycrystalline solids. It is also used to study strain relaxation in heteroepitaxial thin films through misfit dislocation and surface roughening. Magnetic domain evolution in nanocrystalline thin films is also investigated. Numerous simulation studies are performed. Comparisons with analytical predictions and experimental observations are presented. Agreement verifies the theory and models as realistic simulation tools for computational materials science and engineering. The same Phase Field Microelasticity formalism of individual models of different physical phenomena makes it easy to integrate multiple physical processes into one unified simulation model, where multiple phenomena are treated as various relaxation modes that together act as one common cooperative phenomenon. The model does not impose a priori constraints on possible microstructure evolution paths. This gives the model predictive power: the material system itself "chooses" the optimal path for multiple processes. The advances made in this Thesis represent a significant step toward overcoming the first challenge, mesoscale multi-physics modeling and simulation of materials. At the end of this Thesis, an approach to the second challenge, bridging multiple length and time scales in materials modeling and simulation, is discussed based on connections between mesoscale Phase Field Microelasticity modeling and microscopic atomistic calculations as well as macroscopic continuum theory.

  12. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE PAGES

    Xia, Yidong; Wang, Chuanjin; Luo, Hong; ...

    2015-12-15

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent for this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, we have attempted some form of solution verification to identify sensitivities in the solution methods, and to suggest best practices when using the Hydra-TH code.
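    The verification step of comparing a numerical solution against an available analytical solution can be sketched as follows (an illustrative finite-difference laminar channel-flow solve, not Hydra-TH; grid size and unit parameters are assumptions):

    ```python
    import numpy as np

    def poiseuille(y, dpdx=-1.0, mu=1.0, h=1.0):
        """Analytical laminar channel-flow profile between plates at y=0 and y=h."""
        return -dpdx / (2.0 * mu) * y * (h - y)

    # A "numerical" solution: second-order central-difference solve of mu*u'' = dpdx
    n = 41
    y = np.linspace(0.0, 1.0, n)
    dy = y[1] - y[0]
    A = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    A[0, :] = 0.0; A[0, 0] = 1.0        # no-slip wall at y = 0
    A[-1, :] = 0.0; A[-1, -1] = 1.0     # no-slip wall at y = 1
    b = np.full(n, -dy**2)              # dpdx = -1, mu = 1
    b[0] = b[-1] = 0.0
    u_num = np.linalg.solve(A, b)

    # Discrete L2 error norm against the exact profile
    l2_error = np.sqrt(np.mean((u_num - poiseuille(y))**2))
    ```

    Since the exact profile is quadratic, the second-order scheme reproduces it to round-off, which is exactly the kind of baseline comparison a V&V suite records.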

  14. Analysis of exposure to electromagnetic fields in a healthcare environment: simulation and experimental study.

    PubMed

    de Miguel-Bilbao, Silvia; Martín, Miguel Angel; Del Pozo, Alejandro; Febles, Victor; Hernández, José A; de Aldecoa, José C Fernández; Ramos, Victoria

    2013-11-01

    Recent advances in wireless technologies have led to an increase in wireless instrumentation present in healthcare centers. This paper presents an analytical method for characterizing electric field (E-field) exposure within these environments. The E-field levels of the different wireless communications systems have been measured in two floors of the Canary University Hospital Consortium (CUHC). The electromagnetic (EM) conditions detected with the experimental measurements have been estimated using the software EFC-400-Telecommunications (Narda Safety Test Solutions, Sandwiesenstrasse 7, 72793 Pfullingen, Germany). The experimental and simulated results are represented through 2D contour maps, and have been compared with the recommended safety and exposure thresholds. The maximum value obtained is much lower than the 3 V m⁻¹ limit established in the International Electrotechnical Commission Standard of Electromedical Devices. Results show a high correlation in terms of E-field cumulative distribution function (CDF) between the experimental and simulation results. In general, the CDFs of each pair of experimental and simulated samples follow a lognormal distribution with the same mean.
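    The comparison of experimental and simulated E-field samples via their CDFs can be sketched as below (synthetic lognormal samples stand in for the measured data; all distribution parameters are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Illustrative "measured" and "simulated" E-field samples (V/m), same lognormal law
    measured  = rng.lognormal(mean=-2.0, sigma=0.5, size=500)
    simulated = rng.lognormal(mean=-2.0, sigma=0.5, size=500)

    def ks_statistic(a, b):
        """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
        grid = np.sort(np.concatenate([a, b]))
        cdf_a = np.searchsorted(np.sort(a), grid, side='right') / a.size
        cdf_b = np.searchsorted(np.sort(b), grid, side='right') / b.size
        return float(np.abs(cdf_a - cdf_b).max())

    d = ks_statistic(measured, simulated)
    ```

    A small KS statistic indicates the two empirical CDFs agree, mirroring the high experimental-vs-simulation correlation reported in the study; note the synthetic samples here sit well below a 3 V/m threshold.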

  15. Direct Simulation of Friction Forces for Heavy Ions Interacting with a Warm Magnetized Electron Distribution

    NASA Astrophysics Data System (ADS)

    Bruhwiler, D. L.; Busby, R.; Fedotov, A. V.; Ben-Zvi, I.; Cary, J. R.; Stoltz, P.; Burov, A.; Litvinenko, V. N.; Messmer, P.; Abell, D.; Nieter, C.

    2005-06-01

    A proposed luminosity upgrade to RHIC includes a novel electron cooling section, which would use ~55 MeV electrons to cool fully-ionized 100 GeV/nucleon gold ions. High-current bunched electron beams are required for the RHIC cooler, resulting in very high transverse temperatures and relatively low values for the magnetized cooling logarithm. The accuracy of analytical formulae in this regime requires careful examination. Simulations of the friction coefficient, using the VORPAL code, for single gold ions passing once through the interaction region, are compared with theoretical calculations. Charged particles are advanced using a fourth-order Hermite predictor-corrector algorithm. The fields in the beam frame are obtained from direct calculation of Coulomb's law, which is more efficient than multipole-type algorithms for fewer than ~10⁶ particles. Because the interaction time is so short, it is necessary to suppress the diffusive aspect of the ion dynamics through the careful use of positrons in the simulations.
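    The direct Coulomb's-law field evaluation mentioned above can be sketched as an O(N²) pairwise sum (unit Coulomb constant and a hypothetical softening parameter; this is not the VORPAL implementation):

    ```python
    import numpy as np

    def coulomb_forces(pos, charges, soft=1e-6):
        """Direct O(N^2) pairwise Coulomb forces (unit Coulomb constant).
        For small N this brute-force sum beats tree/multipole methods."""
        d = pos[:, None, :] - pos[None, :, :]       # r_i - r_j for all pairs
        r2 = (d**2).sum(-1) + soft**2               # softened squared distance
        np.fill_diagonal(r2, np.inf)                # exclude self-interaction
        inv_r3 = r2**-1.5
        qq = charges[:, None] * charges[None, :]
        return (qq * inv_r3)[..., None] * d         # pairwise force contributions

    # Two opposite unit charges separated by unit distance along x
    pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    q = np.array([1.0, -1.0])
    F = coulomb_forces(pos, q).sum(axis=1)          # net force on each particle
    ```

    The opposite charges attract: the net force on particle 0 points toward particle 1, and the pair forces cancel by Newton's third law.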

  16. Active Control of Inlet Noise on the JT15D Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Smith, Jerome P.; Hutcheson, Florence V.; Burdisso, Ricardo A.; Fuller, Chris R.

    1999-01-01

    This report presents the key results obtained by the Vibration and Acoustics Laboratories at Virginia Tech over the year from November 1997 to December 1998 on the Active Noise Control of Turbofan Engines research project funded by NASA Langley Research Center. The concept of implementing active noise control techniques with fuselage-mounted error sensors is investigated both analytically and experimentally. The analytical part of the project involves the continued development of an advanced modeling technique to provide prediction and design guidelines for application of active noise control techniques to large, realistic high bypass engines of the type on which active control methods are expected to be applied. Results from the advanced analytical model are presented that show the effectiveness of the control strategies, and the analytical results presented for fuselage error sensors show good agreement with the experimentally observed results and provide additional insight into the control phenomena. Additional analytical results are presented for active noise control used in conjunction with a wavenumber sensing technique. The experimental work is carried out on a running JT15D turbofan jet engine in a test stand at Virginia Tech. The control strategy used in these tests was the feedforward Filtered-X LMS algorithm. The control inputs were supplied by single and multiple circumferential arrays of acoustic sources equipped with neodymium iron cobalt magnets mounted upstream of the fan. The reference signal was obtained from an inlet mounted eddy current probe. The error signals were obtained from a number of pressure transducers flush-mounted in a simulated fuselage section mounted in the engine test cell. The active control methods are investigated when implemented with the control sources embedded within the acoustically absorptive material on a passively-lined inlet. 
The experimental results show that the combination of active control techniques with fuselage-mounted error sensors and passive control techniques is an effective means of reducing radiated noise from turbofan engines. Strategic selection of the location of the error transducers is shown to be effective for reducing the radiation towards particular directions in the farfield. An analytical model is used to predict the behavior of the control system and to guide the experimental design configurations, and the analytical results presented show good agreement with the experimentally observed results.

  17. Advancing Clinical Proteomics via Analysis Based on Biological Complexes: A Tale of Five Paradigms.

    PubMed

    Goh, Wilson Wen Bin; Wong, Limsoon

    2016-09-02

    Despite advances in proteomic technologies, idiosyncratic data issues, for example, incomplete coverage and inconsistency, resulting in large data holes, persist. Moreover, because of naïve reliance on statistical testing and its accompanying p values, differential protein signatures identified from such proteomics data have little diagnostic power. Thus, deploying conventional analytics on proteomics data is insufficient for identifying novel drug targets or precise yet sensitive biomarkers. Complex-based analysis is a new analytical approach that has potential to resolve these issues but requires formalization. We categorize complex-based analysis into five method classes or paradigms and propose an even-handed yet comprehensive evaluation rubric based on both simulated and real data. The first four paradigms are well represented in the literature. The fifth and newest paradigm, the network-paired (NP) paradigm, represented by a method called Extremely Small SubNET (ESSNET), dominates in precision-recall and reproducibility, maintains strong performance in small sample sizes, and sensitively detects low-abundance complexes. In contrast, the commonly used over-representation analysis (ORA) and direct-group (DG) test paradigms maintain good overall precision but have severe reproducibility issues. The other two paradigms considered here are the hit-rate and rank-based network analysis paradigms; both of these have good precision-recall and reproducibility, but they do not consider low-abundance complexes. Therefore, given its strong performance, NP/ESSNET may prove to be a useful approach for improving the analytical resolution of proteomics data. Additionally, given its stability, it may also be a powerful new approach toward functional enrichment tests, much like its ORA and DG counterparts.

  18. Modeling of optical mirror and electromechanical behavior

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Lu, Chao; Liu, Zishun; Liu, Ai Q.; Zhang, Xu M.

    2001-10-01

    This paper presents finite element (FE) simulation and theoretical analysis of novel MEMS fiber-optical switches actuated by electrostatic attraction. FE simulations of the switches under static and dynamic loading are first carried out to reveal the mechanical characteristics of the minimum or critical switching voltages, the natural frequencies, mode shapes and response under different levels of electrostatic attraction load. To validate the FE simulation results, a theoretical (or analytical) model is then developed for one specific switch, i.e., Plate_40_104. Good agreement is found between the FE simulation and the analytical results. From both FE simulation and theoretical analysis, the critical switching voltage for Plate_40_104 is derived to be 238 V for a switching angle of 12°. The critical switching-on and switching-off times are 431 μs and 67 μs, respectively. The present study not only develops good FE and analytical models, but also demonstrates step by step a method to simplify a real optical switch structure with reference to the FE simulation results for analytical purposes. With the FE and analytical models, it is easy to obtain any information about the mechanical behavior of the optical switches, which is helpful in yielding an optimized design.
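    For a simpler parallel-plate actuator the critical (pull-in) switching voltage has a classic closed form; a minimal sketch with illustrative stiffness, gap and area values (not the Plate_40_104 parameters, which involve a torsional geometry) is:

    ```python
    import math

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def pull_in_voltage(k, gap, area):
        """Parallel-plate electrostatic pull-in voltage,
        V_pi = sqrt(8*k*g^3 / (27*eps0*A)); instability occurs
        once the movable plate has travelled one third of the gap."""
        return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

    # Illustrative spring stiffness (N/m), gap (m), electrode area (m^2)
    V_pi = pull_in_voltage(k=1.0, gap=2e-6, area=100e-6 * 100e-6)
    ```

    The cubic dependence on the gap is why the critical voltage rises so steeply with larger switching ranges.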

  19. 10 CFR 431.445 - Determination of small electric motor efficiency.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... statistical analysis, computer simulation or modeling, or other analytic evaluation of performance data. (3... statistical analysis, computer simulation or modeling, and other analytic evaluation of performance data on.... (ii) If requested by the Department, the manufacturer shall conduct simulations to predict the...

  20. Numerical solution of the full potential equation using a chimera grid approach

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1995-01-01

    A numerical scheme utilizing a chimera zonal grid approach for solving the full potential equation in two spatial dimensions is described. Within each grid zone a fully-implicit approximate factorization scheme is used to advance the solution one iteration. This is followed by the explicit advance of all common zonal grid boundaries using a bilinear interpolation of the velocity potential. The presentation is highlighted with numerical results simulating the flow about a two-dimensional, nonlifting, circular cylinder. For this problem, the flow domain is divided into two parts: an inner portion covered by a polar grid and an outer portion covered by a Cartesian grid. Both incompressible and compressible (transonic) flow solutions are included. Comparisons made with an analytic solution as well as single grid results indicate that the chimera zonal grid approach is a viable technique for solving the full potential equation.
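    The zonal-boundary update via bilinear interpolation of the potential can be sketched as follows (a generic unit-spaced donor grid; the test field is illustrative):

    ```python
    import numpy as np

    def bilinear(phi, x, y):
        """Bilinear interpolation of a field phi on a unit-spaced grid, as used
        to transfer the velocity potential across chimera zonal boundaries."""
        i, j = int(np.floor(x)), int(np.floor(y))
        fx, fy = x - i, y - j
        return ((1 - fx) * (1 - fy) * phi[i, j]     + fx * (1 - fy) * phi[i + 1, j]
              + (1 - fx) * fy       * phi[i, j + 1] + fx * fy       * phi[i + 1, j + 1])

    # Bilinear interpolation reproduces any bilinear field exactly:
    xx, yy = np.meshgrid(np.arange(5.0), np.arange(5.0), indexing='ij')
    phi = 2.0 + 3.0 * xx + 4.0 * yy + 0.5 * xx * yy
    val = bilinear(phi, 1.25, 2.5)      # exact value: 17.3125
    ```

    In a chimera scheme, each boundary point of one zone is located in a donor cell of the other zone and updated this way after every implicit sweep.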

  1. Validation of material point method for soil fluidisation analysis

    NASA Astrophysics Data System (ADS)

    Bolognin, Marco; Martinelli, Mario; Bakker, Klaas J.; Jonkman, Sebastiaan N.

    2017-06-01

    The main aim of this paper is to describe and analyse the modelling of vertical column tests that undergo fluidisation by the application of a hydraulic gradient. A recent advancement of the material point method (MPM) allows studying both stationary and non-stationary fluid flow while interacting with the solid phase. The fluidisation initiation and post-fluidisation processes of the soil are investigated with an advanced MPM formulation (Double Point) in which the behaviour of the solid and the liquid phase is evaluated separately, assigning to each of them a set of material points (MPs). The results of these simulations are compared to analytic solutions and measurements from laboratory experiments. This work serves as a benchmark test for the MPM double point formulation in the Anura3D software and verifies the feasibility of the software for possible future engineering applications.

  2. Recent advances in the modeling of plasmas with the Particle-In-Cell methods

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Lehe, Remi; Vincenti, Henri; Godfrey, Brendan; Lee, Patrick; Haber, Irv

    2015-11-01

    The Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations of plasmas from first principles. The fundamentals of the PIC method were established decades ago but improvements or variations are continuously being proposed. We report on several recent advances in PIC related algorithms, including: (a) detailed analysis of the numerical Cherenkov instability and its remediation, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, (c) arbitrary-order finite-difference and generalized pseudo-spectral Maxwell solvers, (d) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of Perfectly Matched Layers in high-order and pseudo-spectral solvers. Work supported by US-DOE Contracts DE-AC02-05CH11231 and the US-DOE SciDAC program ComPASS. Used resources of NERSC, supported by US-DOE Contract DE-AC02-05CH11231.

  3. Current Status of Mycotoxin Analysis: A Critical Review.

    PubMed

    Shephard, Gordon S

    2016-07-01

    It is over 50 years since the discovery of aflatoxins focused the attention of food safety specialists on fungal toxins in the feed and food supply. Since then, analysis of this important group of natural contaminants has advanced in parallel with general developments in analytical science, and current MS methods are capable of simultaneously analyzing hundreds of compounds, including mycotoxins, pesticides, and drugs. This profusion of data may advance our understanding of human exposure, yet constitutes an interpretive challenge to toxicologists and food safety regulators. Despite these advances in analytical science, the basic problem of the extreme heterogeneity of mycotoxin contamination, although now well understood, cannot be circumvented. The real health challenges posed by mycotoxin exposure occur in the developing world, especially among small-scale and subsistence farmers. Addressing these problems requires innovative approaches in which analytical science must also play a role in providing suitable out-of-laboratory analytical techniques.

  4. Promising Ideas for Collective Advancement of Communal Knowledge Using Temporal Analytics and Cluster Analysis

    ERIC Educational Resources Information Center

    Lee, Alwyn Vwen Yen; Tan, Seng Chee

    2017-01-01

    Understanding ideas in a discourse is challenging, especially in textual discourse analysis. We propose using temporal analytics with unsupervised machine learning techniques to investigate promising ideas for the collective advancement of communal knowledge in an online knowledge building discourse. A discourse unit network was constructed and…

  5. A TENTATIVE GUIDE, DIFFERENTIAL AND INTEGRAL CALCULUS.

    ERIC Educational Resources Information Center

    BRANT, VINCENT; GERARDI, WILLIAM

    THE COURSE IS INTENDED TO GO BEYOND THE REQUIREMENTS OF THE ADVANCED PLACEMENT PROGRAM IN MATHEMATICS AS DESIGNED BY THE COLLEGE ENTRANCE EXAMINATION BOARD. THE ADVANCED PLACEMENT PROGRAM CONSISTS OF A 1-YEAR COURSE COMBINING ANALYTIC GEOMETRY AND CALCULUS. PRESUPPOSED HERE ARE--A SEMESTER COURSE IN ANALYTIC GEOMETRY AND A THOROUGH KNOWLEDGE OF…

  6. Advanced, Analytic, Automated (AAA) Measurement of Engagement during Learning

    ERIC Educational Resources Information Center

    D'Mello, Sidney; Dieterle, Ed; Duckworth, Angela

    2017-01-01

    It is generally acknowledged that engagement plays a critical role in learning. Unfortunately, the study of engagement has been stymied by a lack of valid and efficient measures. We introduce the advanced, analytic, and automated (AAA) approach to measure engagement at fine-grained temporal resolutions. The AAA measurement approach is grounded in…

  7. A novel method for energy harvesting simulation based on scenario generation

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Li, Taoshen; Xiao, Nan; Ye, Jin; Wu, Min

    2018-06-01

    Energy harvesting network (EHN) is a new form of computer network. It converts ambient energy into usable electric energy and supplies that energy, as a primary or secondary power source, to the communication devices. However, most EHN studies use an assumed analytical probability distribution function to describe the energy harvesting process, which cannot accurately capture the actual conditions. We propose an EHN simulation method based on scenario generation in this paper. Firstly, instead of setting a probability distribution in advance, it uses optimal scenario reduction technology to generate representative single-period scenarios based on historical data of the harvested energy. Secondly, it uses a homogeneous simulated annealing algorithm to generate optimal daily energy-harvesting scenario sequences, yielding a more accurate simulation of the random characteristics of the energy harvesting network. Then, taking actual wind power data as an example, the accuracy and stability of the method are verified by comparison with the real data. Finally, we present an example of optimizing network throughput, whose optimal solution and data analysis indicate the feasibility and effectiveness of the proposed method for energy harvesting simulation.
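    The scenario-reduction step can be sketched with a simple greedy forward selection (a simplified stand-in for optimal scenario reduction; the two-cluster synthetic data below are illustrative, not the paper's wind data):

    ```python
    import numpy as np

    def reduce_scenarios(scenarios, k):
        """Greedy forward selection: repeatedly add the scenario that most
        reduces the total distance from every scenario to its nearest keeper."""
        n = scenarios.shape[0]
        if scenarios.ndim > 1:
            dist = np.abs(scenarios[:, None, :] - scenarios[None, :, :]).sum(-1)
        else:
            dist = np.abs(scenarios[:, None] - scenarios[None, :])
        chosen = []
        nearest = np.full(n, np.inf)        # distance of each scenario to kept set
        for _ in range(k):
            # total cost if candidate j were added to the kept set
            costs = np.minimum(nearest[:, None], dist).sum(axis=0)
            j = int(np.argmin(costs))
            chosen.append(j)
            nearest = np.minimum(nearest, dist[:, j])
        return sorted(chosen)

    # Illustrative harvested-energy samples forming two regimes (low/high wind)
    rng = np.random.default_rng(3)
    scen = np.concatenate([rng.normal(1.0, 0.05, 50), rng.normal(5.0, 0.05, 50)])
    keep = reduce_scenarios(scen, 2)        # one representative per regime
    ```

    With equal-probability scenarios this minimizes the same transport-style distance that optimal scenario reduction targets; the paper then orders such representatives into daily sequences via simulated annealing.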

  8. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
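    The bootstrap confidence interval for a scalar parameter mentioned above can be sketched as a percentile bootstrap of the sample mean (sample size, replicate count and distribution are illustrative assumptions):

    ```python
    import numpy as np

    def bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap CI: resample with replacement, recompute the
        statistic, and take the empirical alpha/2 and 1-alpha/2 quantiles."""
        rng = np.random.default_rng(seed)
        reps = np.array([stat(rng.choice(data, size=data.size, replace=True))
                         for _ in range(n_boot)])
        return np.quantile(reps, [alpha / 2.0, 1.0 - alpha / 2.0])

    rng = np.random.default_rng(1)
    sample = rng.normal(10.0, 2.0, size=100)    # illustrative replicated measurements
    lo, hi = bootstrap_ci(sample)
    ```

    The same resampling loop applies to summary statistics of replicated point patterns; only the `stat` function changes.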

  9. Neoclassical toroidal viscosity calculations in tokamaks using a δf Monte Carlo simulation and their verifications.

    PubMed

    Satake, S; Park, J-K; Sugama, H; Kanno, R

    2011-07-29

    Neoclassical toroidal viscosities (NTVs) in tokamaks are investigated using a δf Monte Carlo simulation, and are successfully verified with a combined analytic theory over a wide range of collisionality. A Monte Carlo simulation has been required in the study of NTV since the complexities in guiding-center orbits of particles and their collisions cannot be fully investigated by any means of analytic theories alone. Results yielded the details of the complex NTV dependency on particle precessions and collisions, which were predicted roughly in a combined analytic theory. Both numerical and analytic methods can be utilized and extended based on these successful verifications.

  10. Intelligent model-based diagnostics for vehicle health management

    NASA Astrophysics Data System (ADS)

    Luo, Jianhui; Tu, Fang; Azam, Mohammad S.; Pattipati, Krishna R.; Willett, Peter K.; Qiao, Liu; Kawamoto, Masayuki

    2003-08-01

    The recent advances in sensor technology, remote communication and computational capabilities, and standardized hardware/software interfaces are creating a dramatic shift in the way the health of vehicles is monitored and managed. These advances facilitate remote monitoring, diagnosis and condition-based maintenance of automotive systems. With the increased sophistication of electronic control systems in vehicles, there is a concomitant increased difficulty in the identification of the malfunction phenomena. Consequently, the current rule-based diagnostic systems are difficult to develop, validate and maintain. New intelligent model-based diagnostic methodologies that exploit the advances in sensor, telecommunications, computing and software technologies are needed. In this paper, we will investigate hybrid model-based techniques that seamlessly employ quantitative (analytical) models and graph-based dependency models for intelligent diagnosis. Automotive engineers have found quantitative simulation (e.g. MATLAB/SIMULINK) to be a vital tool in the development of advanced control systems. The hybrid method exploits this capability to improve the diagnostic system's accuracy and consistency, utilizes existing validated knowledge on rule-based methods, enables remote diagnosis, and responds to the challenges of increased system complexity. The solution is generic and has the potential for application in a wide range of systems.

  11. Nonlinear analysis for dual-frequency concurrent energy harvesting

    NASA Astrophysics Data System (ADS)

    Yan, Zhimiao; Lei, Hong; Tan, Ting; Sun, Weipeng; Huang, Wenhu

    2018-05-01

    The dual-frequency responses of the hybrid energy harvester undergoing base excitation and galloping were analyzed numerically. In this work, an approximate dual-frequency analytical method is proposed for the nonlinear analysis of such a system. To obtain the approximate analytical solutions of the full coupled distributed-parameter model, the forcing interactions are first neglected. Then, the electromechanical decoupled governing equation is developed using the equivalent structure method. The hybrid mechanical response is finally separated into self-excited and forced responses for deriving the analytical solutions, which are confirmed by the numerical simulations of the full coupled model. The forced response has a strong impact on the self-excited response. The boundary of Hopf bifurcation is analytically determined by the onset wind speed for galloping, which increases linearly with the electrical damping. Quenching phenomena appear when the increasing base excitation suppresses the galloping. The theoretical quenching boundary depends on the forced mode velocity. The quenching region increases with the base acceleration and electrical damping, but decreases with the wind speed. Superior to the base-excitation-alone case, the existence of the aerodynamic force protects the hybrid energy harvester at resonance from damage caused by excessively large displacement. From the viewpoint of harvested power, the hybrid system surpasses both the base-excitation-alone system and the galloping-alone system. This study advances our knowledge of the intrinsic nonlinear dynamics of dual-frequency energy harvesting systems by taking advantage of the analytical solutions.

  12. Advancing from Rules of Thumb: Quantifying the Effects of Small Density Changes in Mass Transport to Electrodes. Understanding Natural Convection.

    PubMed

    Ngamchuea, Kamonwad; Eloul, Shaltiel; Tschulik, Kristina; Compton, Richard G

    2015-07-21

    Understanding mass transport is a prerequisite to all quantitative analysis of electrochemical experiments. While the contribution of diffusion is well understood, the influence of density-gradient-driven natural convection on mass transport in electrochemical systems is not. To date, it has been assumed to be relevant only at high concentrations of redox-active species and at long experimental time scales. If unjustified, this assumption risks misinterpretation of analytical data obtained from scanning electrochemical microscopy (SECM) and generator-collector experiments, as well as from analytical sensors utilizing macroelectrodes/microelectrode arrays. It also affects the results expected from electrodeposition. On the basis of numerical simulation, it is demonstrated herein that even at concentrations below 10 mM and at short experimental times of tens of seconds, density-gradient-driven natural convection significantly affects mass transport. This is evident from in-depth numerical simulation of the oxidation of hexacyanoferrate(II) at various electrode sizes and electrode orientations. In each case, the induced convection and its influence on the diffusion layer established near the electrode are illustrated by maps of the velocity fields and concentration distributions evolving with time. The effects of natural convection on mass transport and chronoamperometric currents are thus quantified and discussed for the different cases studied.
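    The scale analysis behind such assessments can be illustrated with a back-of-the-envelope solutal Rayleigh number, which compares buoyancy-driven to diffusive transport. A minimal sketch follows; all parameter values are illustrative assumptions, not figures taken from the record above.

    ```python
    # Order-of-magnitude estimate of density-driven natural convection near an
    # electrode. Every value below is an assumed, typical number for aqueous
    # electrochemistry, not data from the cited study.

    g = 9.81            # gravitational acceleration, m/s^2
    nu = 1.0e-6         # kinematic viscosity of water, m^2/s
    D = 7.0e-10         # typical diffusion coefficient of a small redox ion, m^2/s
    L = 1.0e-3          # electrode length scale, m (a 1 mm macroelectrode)
    drho_rho = 5.0e-4   # assumed relative density change across the diffusion
                        # layer for a ~10 mM concentration difference

    # Solutal Rayleigh number: buoyancy-driven vs. diffusive transport.
    Ra = g * drho_rho * L**3 / (nu * D)
    print(f"Ra ~ {Ra:.1e}")
    ```

    For these assumed values Ra comes out in the thousands, i.e. well above the order-10³ range where convection is conventionally expected to matter, consistent with the record's conclusion that natural convection cannot be neglected even at modest concentrations.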

  13. Anomalous contact angle hysteresis of a captive bubble: advancing contact line pinning.

    PubMed

    Hong, Siang-Jie; Chang, Feng-Ming; Chou, Tung-He; Chan, Seong Heng; Sheng, Yu-Jane; Tsao, Heng-Kwong

    2011-06-07

    Contact angle hysteresis of a sessile drop on a substrate consists of continuous invasion of liquid phase with the advancing angle (θ(a)) and contact line pinning of liquid phase retreat until the receding angle (θ(r)) is reached. Receding pinning is generally attributed to localized defects that are more wettable than the rest of the surface. However, the defect model cannot explain advancing pinning of liquid phase invasion driven by a deflating bubble and continuous retreat of liquid phase driven by the inflating bubble. A simple thermodynamic model based on adhesion hysteresis is proposed to explain anomalous contact angle hysteresis of a captive bubble quantitatively. The adhesion model involves two solid–liquid interfacial tensions (γ(sl) > γ(sl)′). Young’s equation with γ(sl) gives the advancing angle θ(a) while that with γ(sl)′ due to surface rearrangement yields the receding angle θ(r). Our analytical analysis indicates that contact line pinning represents frustration in surface free energy, and the equilibrium shape corresponds to a nondifferential minimum instead of a local minimum. On the basis of our thermodynamic model, Surface Evolver simulations are performed to reproduce both advancing and receding behavior associated with a captive bubble on the acrylic glass.

  14. Analytical modeling and sensor monitoring for optimal processing of advanced textile structural composites by resin transfer molding

    NASA Technical Reports Server (NTRS)

    Loos, Alfred C.; Macrae, John D.; Hammond, Vincent H.; Kranbuehl, David E.; Hart, Sean M.; Hasko, Gregory H.; Markus, Alan M.

    1993-01-01

    A two-dimensional model of the resin transfer molding (RTM) process was developed which can be used to simulate the infiltration of resin into an anisotropic fibrous preform. Frequency dependent electromagnetic sensing (FDEMS) has been developed for in situ monitoring of the RTM process. Flow visualization tests were performed to obtain data which can be used to verify the sensor measurements and the model predictions. Results of the tests showed that FDEMS can accurately detect the position of the resin flow-front during mold filling, and that the model predicted flow-front patterns agreed well with the measured flow-front patterns.

  15. An anisotropic thermal-stress model for through-silicon via

    NASA Astrophysics Data System (ADS)

    Liu, Song; Shan, Guangbao

    2018-02-01

    A two-dimensional thermal-stress model of a through-silicon via (TSV) is proposed that accounts for the anisotropic elastic properties of the silicon substrate. By using the complex variable approach, the distribution of thermal stress in the substrate can be characterized more accurately. TCAD 3-D simulations are used to verify the model's accuracy and agree well with the analytical results (within ±5%). The proposed thermal-stress model can be integrated into a stress-driven design flow for 3-D ICs, leading to more accurate timing analysis that accounts for the thermal-stress effect. Project supported by the Aerospace Advanced Manufacturing Technology Research Joint Fund (No. U1537208).

  16. Near wakes of advanced turbopropellers

    NASA Technical Reports Server (NTRS)

    Hanson, D. B.; Patrick, W. P.

    1989-01-01

    The flow in the wake of a model single-rotation Prop-Fan rotor operating in a wind tunnel was traversed with a hot-wire anemometer system designed to determine the three periodic velocity components. Special data acquisition and data reduction methods were required to deal with the high data frequency, narrow wakes, and large fluctuating air angles in the tip vortex region. The model tip helical Mach number was 1.17, simulating the cruise condition. Although the flow field is complex, flow features such as viscous velocity defects, vortex sheets, tip vortices, and propagating acoustic pulses are clearly identified with the aid of a simple analytical wake theory.

  17. Digital system upset. The effects of simulated lightning-induced transients on a general-purpose microprocessor

    NASA Technical Reports Server (NTRS)

    Belcastro, C. M.

    1983-01-01

    Flight-critical computer-based control systems designed for advanced aircraft must exhibit ultrareliable performance in lightning-charged environments. Digital system upset can occur as a result of lightning-induced electrical transients, and a methodology was developed to test specific digital systems for upset susceptibility. Initial upset data indicate that there are several distinct upset modes and that the occurrence of upset is related to the relative synchronization of the transient input with the processing state of the digital system. A large upset test database will aid in the formulation and verification of analytical upset reliability modeling techniques which are being developed.

  18. Kinetic modeling of plant metabolism and its predictive power: peppermint essential oil biosynthesis as an example.

    PubMed

    Lange, Bernd Markus; Rios-Estepa, Rigoberto

    2014-01-01

    The integration of mathematical modeling with analytical experimentation in an iterative fashion is a powerful approach to advance our understanding of the architecture and regulation of metabolic networks. Ultimately, such knowledge is highly valuable to support efforts aimed at modulating flux through target pathways by molecular breeding and/or metabolic engineering. In this article we describe a kinetic mathematical model of peppermint essential oil biosynthesis, a pathway that has been studied extensively for more than two decades. Modeling assumptions and approximations are described in detail. We provide step-by-step instructions on how to run simulations of dynamic changes in pathway metabolite concentrations.

  19. Optofluidic sensing from inkjet-printed droplets: the enormous enhancement by evaporation-induced spontaneous flow on photonic crystal biosilica†

    PubMed Central

    Kong, Xianming; Xi, Yuting; LeDuff, Paul; Li, Erwen; Liu, Ye; Cheng, Li-Jing; Rorrer, Gregory L.; Tan, Hua; Wang, Alan X.

    2016-01-01

    Novel transducers for detecting an ultra-small volume of an analyte solution play pivotal roles in many applications such as chemical analysis, environmental protection and biomedical diagnosis. Recent advances in optofluidics offer tremendous opportunities for analyzing miniature amounts of samples with high detection sensitivity. In this work, we demonstrate enormous enhancement factors (10⁶–10⁷) of the detection limit for optofluidic analysis from inkjet-printed droplets by evaporation-induced spontaneous flow on photonic crystal biosilica when compared with conventional surface-enhanced Raman scattering (SERS) sensing using pipette dispensing. Our computational fluid dynamics simulation has shown a strong recirculation flow inside the 100 picoliter droplet during the evaporation process due to the thermal Marangoni effect. The combination of the evaporation-induced spontaneous flow in micron-sized droplets and the highly hydrophilic photonic crystal biosilica is capable of providing a strong convection flow to combat the reverse diffusion force, resulting in a higher concentration of the analyte molecules at the diatom surface. Meanwhile, high-density hot-spots provided by the plasmonic nanoparticles strongly coupled with photonic crystal biosilica under a 1.5 μm laser spot are verified by finite-difference time-domain simulation, which is crucial for SERS sensing. Using a drop-on-demand inkjet device to dispense multiple 100 picoliter analyte droplets with pinpoint accuracy, we achieved single-molecule detection of Rhodamine 6G and label-free sensing of 4.5 × 10⁻¹⁷ g of trinitrotoluene from only 200 nanoliters of solution. PMID:27714122

  20. Just-in-Time Data Analytics and Visualization of Climate Simulations using the Bellerophon Framework

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.

    2015-12-01

    Climate model simulations are used to understand the evolution and variability of earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected. However, most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near-real-time data visualization and analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible. The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software aims to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.
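    The co-scheduled monitoring pattern described above can be sketched as a simple polling loop: watch the simulation's output directory and hand each newly produced file to an analytics/visualization job while the model keeps running. The file suffix, paths, and `submit` callback below are hypothetical placeholders, not Bellerophon's actual API.

    ```python
    # Minimal sketch of co-scheduled output monitoring. The submit callable is
    # a stand-in for rendering plots and pushing them to a data server.
    import pathlib
    import time

    def watch_and_submit(output_dir, submit, poll_seconds=60, max_polls=None):
        """Poll output_dir; call submit(path) once for each new file seen."""
        seen = set()
        polls = 0
        while max_polls is None or polls < max_polls:
            for path in sorted(pathlib.Path(output_dir).glob("*.nc")):
                if path not in seen:
                    seen.add(path)
                    submit(path)  # hand the new output to an analytics job
            polls += 1
            if max_polls is None or polls < max_polls:
                time.sleep(poll_seconds)
        return seen
    ```

    In practice a production framework would use batch-scheduler hooks or filesystem notifications rather than polling, but the decoupling is the same: the simulation only writes files, and the analytics side reacts to them.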

  1. Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control. Part 1; New Technologies and Validation Approach

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Ottenstein, Laura; Douglas, Donya; Hoang, Triem

    2010-01-01

    Under NASA's New Millennium Program Space Technology 8 (ST 8) Project, four experiments - Thermal Loop, Dependable Microprocessor, SAILMAST, and UltraFlex - were conducted to advance the maturity of individual technologies from proof of concept to prototype demonstration in a relevant environment, i.e., from a technology readiness level (TRL) of 3 to a level of 6. This paper presents the new technologies and validation approach of the Thermal Loop experiment. The Thermal Loop is an advanced thermal control system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers designed for future small-system applications requiring low mass, low power, and compactness. The MLHP retains all features of state-of-the-art loop heat pipes (LHPs) and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. Details of the thermal loop concept, technical advances, benefits, objectives, level 1 requirements, and performance characteristics are described. Also included in the paper are descriptions of the test articles and mathematical modeling used for the technology validation. An MLHP breadboard was built and tested in laboratory and thermal vacuum environments for TRL 4 and TRL 5 validations, and an MLHP proto-flight unit was built and tested in a thermal vacuum chamber for the TRL 6 validation. In addition, an analytical model was developed to simulate the steady-state and transient behaviors of the MLHP during various validation tests. Capabilities and limitations of the analytical model are also addressed.

  2. A Comprehensive Microfluidics Device Construction and Characterization Module for the Advanced Undergraduate Analytical Chemistry Laboratory

    ERIC Educational Resources Information Center

    Piunno, Paul A. E.; Zetina, Adrian; Chu, Norman; Tavares, Anthony J.; Noor, M. Omair; Petryayeva, Eleonora; Uddayasankar, Uvaraj; Veglio, Andrew

    2014-01-01

    An advanced analytical chemistry undergraduate laboratory module on microfluidics that spans 4 weeks (4 h per week) is presented. The laboratory module focuses on comprehensive experiential learning of microfluidic device fabrication and the core characteristics of microfluidic devices as they pertain to fluid flow and the manipulation of samples.…

  3. AN ADVANCED PLACEMENT COURSE IN ANALYTIC GEOMETRY AND CALCULUS (MATHEMATICS XV X AP).

    ERIC Educational Resources Information Center

    DEROLF, JOHN J.; MIENTKA, WALTER E.

    THIS TEXT ON ANALYTIC GEOMETRY AND CALCULUS IS A CORRESPONDENCE COURSE DESIGNED FOR ADVANCED PLACEMENT OF HIGH SCHOOL STUDENTS IN COLLEGE. EACH OF THE 21 LESSONS INCLUDES READING ASSIGNMENTS AND LISTS OF PROBLEMS TO BE WORKED. IN ADDITION, SUPPLEMENTARY EXPLANATIONS AND COMMENTS ARE INCLUDED THAT (1) PROVIDE ILLUSTRATIVE EXAMPLES OF CONCEPTS AND…

  4. Understanding Fluorescence Measurements through a Guided-Inquiry and Discovery Experiment in Advanced Analytical Laboratory

    ERIC Educational Resources Information Center

    Wilczek-Vera, Grazyna; Salin, Eric Dunbar

    2011-01-01

    An experiment on fluorescence spectroscopy suitable for an advanced analytical laboratory is presented. Its conceptual development used a combination of the expository and discovery styles. The "learn-as-you-go" and direct "hands-on" methodology applied ensures an active role for a student in the process of visualization and discovery of concepts.…

  5. Independent Research and Independent Exploratory Development Annual Report Fiscal Year 1975

    DTIC Science & Technology

    1975-09-01

    [OCR-garbled table of contents fragment. Recoverable content: a coding study; optical covert communications using laser transceivers; and Wagner, N. K., "Analysis of Microelectronic Materials Using Auger Spectroscopy and Additional Advanced Analytical Techniques," NELC Technical Note 2904.]

  6. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    PubMed

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate the radioactivity of spiked water samples in close-geometry gamma spectrometry. It employed the MCNP-CP code to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, and the two codes showed good agreement. Finally, the validity of the developed procedure was confirmed by a proficiency test calculating the activities of various radionuclides. The radioactivity measurements with both detectors using the advanced analytical procedure received "Accepted" status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Advances in Instrumental Analysis of Brominated Flame Retardants: Current Status and Future Perspectives

    PubMed Central

    2014-01-01

    This review aims to highlight the recent advances and methodological improvements in instrumental techniques applied for the analysis of different brominated flame retardants (BFRs). The literature search strategy was based on the recent analytical reviews published on BFRs. The main selection criteria involved the successful development and application of analytical methods for determination of the target compounds in various environmental matrices. Different factors affecting chromatographic separation and mass spectrometric detection of brominated analytes were evaluated and discussed. Techniques using advanced instrumentation to achieve outstanding results in quantification of different BFRs and their metabolites/degradation products were highlighted. Finally, research gaps in the field of BFR analysis were identified and recommendations for future research were proposed. PMID:27433482

  8. Fast 2D Fluid-Analytical Simulation of IEDs and Plasma Uniformity in Multi-frequency CCPs

    NASA Astrophysics Data System (ADS)

    Kawamura, E.; Lieberman, M. A.; Graves, D. B.

    2014-10-01

    A fast 2D axisymmetric fluid-analytical model using the finite elements tool COMSOL is interfaced with a 1D particle-in-cell (PIC) code to study ion energy distributions (IEDs) in multi-frequency argon capacitively coupled plasmas (CCPs). A bulk fluid plasma model which solves the time-dependent plasma fluid equations is coupled with an analytical sheath model which solves for the sheath parameters. The fluid-analytical results are used as input to a PIC simulation of the sheath region of the discharge to obtain the IEDs at the wafer electrode. Each fluid-analytical-PIC simulation on a moderate 2.2 GHz CPU workstation with 8 GB of memory took about 15-20 minutes. The 2D multi-frequency fluid-analytical model was compared to 1D PIC simulations of a symmetric parallel plate discharge, showing good agreement. Fluid-analytical simulations of a 2/60/162 MHz argon CCP with a typical asymmetric reactor geometry were also conducted. The low 2 MHz frequency controlled the sheath width and voltage while the higher frequencies controlled the plasma production. A standing wave was observable at the highest frequency of 162 MHz. Adding 2 MHz power to a 60 MHz discharge or 162 MHz to a dual frequency 2 MHz/60 MHz discharge enhanced the plasma uniformity. This work was supported by the Department of Energy Office of Fusion Energy Science Contract DE-SC000193, and in part by gifts from Lam Research Corporation and Micron Corporation.

  9. Analytical Applications of Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    Guell, Oscar A.; Holcombe, James A.

    1990-01-01

    Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
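    The core Monte Carlo idea the record above surveys, estimating a deterministic quantity by averaging over random samples, can be shown in a few lines. The integrand and interval below are arbitrary illustrative choices.

    ```python
    # Monte Carlo integration: estimate the integral of f on [a, b] as
    # (b - a) times the mean of f over uniform random samples.
    import random

    def mc_integrate(f, a, b, n=100_000, seed=0):
        rng = random.Random(seed)  # seeded for reproducibility
        total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
        return (b - a) * total / n

    # Example: integral of x^2 on [0, 1], whose exact value is 1/3.
    estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
    ```

    The statistical error shrinks as 1/sqrt(n) regardless of dimension, which is why the same sampling idea extends to the high-dimensional ensemble and annealing applications the record mentions.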

  10. Analytical Finite Element Simulation Model for Structural Crashworthiness Prediction

    DOT National Transportation Integrated Search

    1974-02-01

    The analytical development and appropriate derivations are presented for a simulation model of vehicle crashworthiness prediction. Incremental equations governing the nonlinear elasto-plastic dynamic response of three-dimensional frame structures are...

  11. An immersed boundary method for modeling a dirty geometry data

    NASA Astrophysics Data System (ADS)

    Onishi, Keiji; Tsubokura, Makoto

    2017-11-01

    We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Reynolds-number flow around highly complex geometries. The method is achieved by dispersing the momentum via an axial linear projection and by an approximate-domain assumption that satisfies mass conservation in the cells containing the wall. This methodology has been verified against analytical theory and wind tunnel experiment data. Next, we simulate flow around a rotating object and demonstrate the ability of this methodology to handle moving-geometry problems. This methodology shows promise for obtaining quick solutions on next-generation large-scale supercomputers. This research was supported by MEXT as ``Priority Issue on Post-K computer'' (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.

  12. NMR relaxation rate in quasi one-dimensional antiferromagnets

    NASA Astrophysics Data System (ADS)

    Capponi, Sylvain; Dupont, Maxime; Laflorencie, Nicolas; Sengupta, Pinaki; Shao, Hui; Sandvik, Anders W.

    We compare results of different numerical approaches to computing the NMR relaxation rate 1/T1 in quasi-one-dimensional (1d) antiferromagnets. In the purely 1d regime, recent numerical simulations using DMRG have provided the full crossover behavior from the classical regime at high temperature to the universal Tomonaga-Luttinger liquid at low energy (in the gapless case) or activated behavior (in the gapped case). For quasi-1d models, we can use mean-field approaches to reduce the problem to a 1d one that can be studied using DMRG. In some cases, we can also simulate the full microscopic model using quantum Monte Carlo techniques. This allows us to compute dynamical correlations in imaginary time, and we will discuss recent advances in stochastic analytic continuation to obtain real-frequency spectra. Finally, we connect our results to experiments on various quasi-1d materials.

  13. Predicting Operator Execution Times Using CogTool

    NASA Technical Reports Server (NTRS)

    Santiago-Espada, Yamira; Latorella, Kara A.

    2013-01-01

    Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled-user performance times analytically, before system testing with users. This paper describes CogTool models of a two-pilot crew executing two different types of datalink clearance acceptance task on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in video tapes and recorded in simulation files. Results indicate no statistically significant difference between the empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.

  14. Critical length scale controls adhesive wear mechanisms

    PubMed Central

    Aghababaei, Ramin; Warner, Derek H.; Molinari, Jean-Francois

    2016-01-01

    The adhesive wear process remains one of the least understood areas of mechanics. While it has long been established that adhesive wear is a direct result of contacting surface asperities, an agreed-upon understanding of how contacting asperities lead to wear debris particles has remained elusive. This has restricted adhesive wear prediction to empirical models with limited transferability. Here we show that discrepant observations and predictions of two distinct adhesive wear mechanisms can be reconciled into a unified framework. Using atomistic simulations with model interatomic potentials, we reveal a transition in the asperity wear mechanism when contact junctions fall below a critical length scale. A simple analytic model is formulated to predict the transition in both the simulation results and experiments. This new understanding may help expand the use of computer modelling to explore adhesive wear processes and to advance physics-based wear laws without empirical coefficients. PMID:27264270

  15. Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Malsbury, T.; Atencio, A., Jr.

    1992-01-01

    A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to a previously developed model-based assessment criteria allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.

  16. Complex dynamics induced by strong confinement - From tracer diffusion in strongly heterogeneous media to glassy relaxation of dense fluids in narrow slits

    NASA Astrophysics Data System (ADS)

    Mandal, Suvendu; Spanner-Denzer, Markus; Leitmann, Sebastian; Franosch, Thomas

    2017-08-01

    We provide an overview of recent advances in the complex dynamics of particles under strong confinement. The first paradigm is the Lorentz model, where tracers explore a quenched disordered host structure. Such systems naturally occur as limiting cases of binary glass-forming systems if the dynamics of one component is much faster than that of the other. At a certain critical density of the host structure the tracers undergo a localization transition, which constitutes a critical phenomenon. A series of predictions in the vicinity of the transition have been elaborated and tested against computer simulations. Analytical progress is achieved for small obstacle densities. The second paradigm is a dense, strongly interacting liquid confined to a narrow slab. Here the glass transition depends nonmonotonically on the separation of the plates due to an interplay of local packing and layering. Very small slab widths make it possible to address certain features of the statics and dynamics analytically.

  17. Coupled rotor/fuselage dynamic analysis of the AH-1G helicopter and correlation with flight vibrations data

    NASA Technical Reports Server (NTRS)

    Corrigan, J. C.; Cronkhite, J. D.; Dompka, R. V.; Perry, K. S.; Rogers, J. P.; Sadler, S. G.

    1989-01-01

    Under a research program designated Design Analysis Methods for VIBrationS (DAMVIBS), existing analytical methods are used for calculating coupled rotor-fuselage vibrations of the AH-1G helicopter for correlation with flight test data from an AH-1G Operational Load Survey (OLS) test program. The analytical representation of the fuselage structure is based on a NASTRAN finite element model (FEM), which has been developed, extensively documented, and correlated with ground vibration tests. One procedure that was used for predicting coupled rotor-fuselage vibrations using the advanced Rotorcraft Flight Simulation Program C81 and NASTRAN is summarized. Detailed descriptions of the analytical formulation of the rotor dynamics equations, fuselage dynamic equations, coupling between the rotor and fuselage, and solutions to the total system of equations in C81 are included. Analytical predictions of hub shears for main rotor harmonics 2p, 4p, and 6p generated by C81 are used in conjunction with 2p OLS measured control loads and a 2p lateral tail rotor gearbox force, representing downwash impingement on the vertical fin, to excite the NASTRAN model. NASTRAN is then used for correlation with measured OLS flight test vibrations. Blade load comparisons predicted by C81 showed good agreement. In general, the fuselage vibration correlations show good agreement between analysis and test in the vibration response through 15 to 20 Hz.

  18. Fast 2D fluid-analytical simulation of ion energy distributions and electromagnetic effects in multi-frequency capacitive discharges

    NASA Astrophysics Data System (ADS)

    Kawamura, E.; Lieberman, M. A.; Graves, D. B.

    2014-12-01

    A fast 2D axisymmetric fluid-analytical plasma reactor model using the finite elements simulation tool COMSOL is interfaced with a 1D particle-in-cell (PIC) code to study ion energy distributions (IEDs) in multi-frequency capacitive argon discharges. A bulk fluid plasma model, which solves the time-dependent plasma fluid equations for the ion continuity and electron energy balance, is coupled with an analytical sheath model, which solves for the sheath parameters. The time-independent Helmholtz equation is used to solve for the fields and a gas flow model solves for the steady-state pressure, temperature and velocity of the neutrals. The results of the fluid-analytical model are used as inputs to a PIC simulation of the sheath region of the discharge to obtain the IEDs at the target electrode. Each 2D fluid-analytical-PIC simulation on a moderate 2.2 GHz CPU workstation with 8 GB of memory took about 15-20 min. The multi-frequency 2D fluid-analytical model was compared to 1D PIC simulations of a symmetric parallel-plate discharge, showing good agreement. We also conducted fluid-analytical simulations of a multi-frequency argon capacitively coupled plasma (CCP) with a typical asymmetric reactor geometry at 2/60/162 MHz. The low frequency 2 MHz power controlled the sheath width and sheath voltage while the high frequencies controlled the plasma production. A standing wave was observable at the highest frequency of 162 MHz. We noticed that adding 2 MHz power to a 60 MHz discharge or 162 MHz to a dual frequency 2 MHz/60 MHz discharge can enhance the plasma uniformity. We found that multiple frequencies were not only useful for controlling IEDs but also plasma uniformity in CCP reactors.

  19. New hybrid voxelized/analytical primitive in Monte Carlo simulations for medical applications

    NASA Astrophysics Data System (ADS)

    Bert, Julien; Lemaréchal, Yannick; Visvikis, Dimitris

    2016-05-01

    Monte Carlo simulations (MCS) applied in particle physics play a key role in medical imaging and particle therapy. In such simulations, particles are transported through voxelized phantoms derived predominantly from patient CT images. However, such a voxelized object representation limits the incorporation of fine elements, such as artificial implants from CAD modeling or anatomical and functional details extracted from other imaging modalities. In this work we propose a new hYbrid Voxelized/ANalytical primitive (YVAN) that combines both voxelized and analytical object descriptions within the same MCS, without the need to simultaneously run two parallel simulations, which is the current gold-standard methodology. Given that YVAN is simply a new primitive object, it does not require any modifications of the underlying MC navigation code. The new proposed primitive was assessed through a first simple MCS. Results from the YVAN primitive were compared against an MCS using a pure analytical geometry and the layered mass geometry concept. A perfect agreement was found between these simulations, leading to the conclusion that the new hybrid primitive is able to accurately and efficiently handle phantoms defined by a mixture of voxelized and analytical objects. In addition, two application-based evaluation studies in coronary angiography and intra-operative radiotherapy showed that the use of YVAN was 6.5% and 12.2% faster than the layered mass geometry method, respectively, without any associated loss of accuracy. However, the simplification advantages and differences in computational time improvements obtained with YVAN depend on the relative proportion of the analytical and voxelized structures used in the simulation as well as the size and number of triangles used in the description of the analytical object meshes.

  20. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method based on Bayesian analysis to classify time series data from an agent-based simulation of the international emissions trading market, and compares it with a Discrete Fourier transform analytical method. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods express distances between mapped time series, which are easier to understand and draw inferences from than the raw time series; (2) the methods can analyze uncertain time series data, including both stationary and non-stationary processes, using distances obtained via agent-based simulation; and (3) the Bayesian analytical method can distinguish a 1% difference in the emission reduction targets of agents.
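    As an illustration of the mapping idea (a generic sketch, not the paper's own method or code; the feature choice, function names, and toy price series are all hypothetical), one can map each time series to the magnitudes of its leading DFT coefficients and compare distances in that feature space:

```python
import numpy as np

def dft_features(series, n_coeffs=4):
    """Map a time series to the magnitudes of its leading DFT coefficients."""
    spectrum = np.fft.rfft(np.asarray(series, dtype=float))
    return np.abs(spectrum[:n_coeffs])

def feature_distance(a, b, n_coeffs=4):
    """Euclidean distance between two series in DFT feature space."""
    return float(np.linalg.norm(dft_features(a, n_coeffs) - dft_features(b, n_coeffs)))

# Two similar price-like series and one with an opposite trend (made-up data)
t = np.arange(64)
price_a = 100 + 0.5 * t + np.sin(0.3 * t)
price_b = 100 + 0.5 * t + np.sin(0.3 * t + 0.1)
price_c = 100 - 0.5 * t + np.sin(0.3 * t)

# Series from the same regime land close together in feature space
assert feature_distance(price_a, price_b) < feature_distance(price_a, price_c)
```

    Distances of this kind, rather than the raw series, are what such methods compare and cluster.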

  1. A contrastive study on the influences of radial and three-dimensional satellite gravity gradiometry on the accuracy of the Earth's gravitational field recovery

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Hou-Tse; Zhong, Min; Yun, Mei-Juan

    2012-10-01

    The accuracy of the Earth's gravitational field measured by the gravity field and steady-state ocean circulation explorer (GOCE), up to degree 250, as influenced by the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij from satellite gravity gradiometry (SGG), is contrastively demonstrated based on an analytical error model and on numerical simulation, respectively. Firstly, new analytical error models of the cumulative geoid height, as influenced by the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij, are established. Up to degree 250, the GOCE cumulative geoid height error measured by the radial gravity gradient Vzz is about 2½ times higher than that measured by the three-dimensional gravity gradient Vij. Secondly, the Earth's gravitational field from GOCE, complete up to degree 250, is recovered by numerical simulation using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij, respectively. The results show that when the measurement error of the gravity gradient is 3 × 10^-12 /s^2, the cumulative geoid height errors using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij are 12.319 cm and 9.295 cm at degree 250, respectively. The accuracy of the cumulative geoid height using the three-dimensional gravity gradient Vij is improved by 30%-40% on average, up to degree 250, compared with that using the radial gravity gradient Vzz. Finally, by mutual verification of the analytical error model and the numerical simulation, the accuracies of the Earth's gravitational field recovery show no substantial difference in order of magnitude between the radial and three-dimensional gravity gradients. Therefore, it is feasible to develop in advance a radial cold-atom interferometric gradiometer with a measurement accuracy of 10^-13 /s^2 to 10^-15 /s^2 for precisely producing the next-generation GOCE Follow-On Earth gravity field model with high spatial resolution.

  2. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Advanced Simulation H Appendix H to Part... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or D...

  3. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Advanced Simulation H Appendix H to Part... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or D...

  4. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Advanced Simulation H Appendix H to Part... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or D...

  5. Knowledge Style Profiling: An Exploration of Cognitive, Temperament, Demographic and Organizational Characteristics among Decision Makers Using Advanced Analytical Technologies

    ERIC Educational Resources Information Center

    Polito, Vincent A., Jr.

    2010-01-01

    The objective of this research was to explore the possibilities of identifying knowledge style factors that could be used as central elements of a professional business analyst's (PBA) performance attributes at work for those decision makers that use advanced analytical technologies on decision making tasks. Indicators of knowledge style were…

  6. Simulation and statistics: Like rhythm and song

    NASA Astrophysics Data System (ADS)

    Othman, Abdul Rahman

    2013-04-01

    Simulation has been introduced to solve problems in the form of systems. This technique overcomes two kinds of problems. First, a problem may have an analytical solution, but the cost of running an experiment to solve it is high in terms of money or even lives. Second, a problem may exist that has no analytical solution. In the field of statistical inference the second kind is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques such as the bootstrap and permutation tests to form a pseudo sampling distribution that leads to the solution of a problem that cannot be solved analytically. This paper discusses how Monte Carlo simulation was, and still is, used to verify analytical solutions in inference. It also discusses resampling techniques as simulation techniques. Misunderstandings about these two techniques are examined, and successful uses of both are explained.
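    The resampling idea described above can be sketched as a percentile bootstrap. This is a generic illustration, not code from the paper, and the sample data are made up:

```python
import random

def bootstrap_ci(data, stat, n_resamples=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = random.Random(seed)
    n = len(data)
    # Resample with replacement, compute the statistic, and sort the replicates
    replicates = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_resamples)
    )
    lo = replicates[int((alpha / 2) * n_resamples)]
    hi = replicates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)
sample = [2.1, 2.5, 1.9, 2.8, 2.2, 2.6, 2.0, 2.4]  # hypothetical observations
lo, hi = bootstrap_ci(sample, mean)
assert lo < mean(sample) < hi  # interval straddles the sample mean
```

    The sorted replicates play the role of the pseudo sampling distribution: its percentiles stand in for the analytically unavailable sampling distribution of the statistic.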

  7. Comprehensive two-dimensional gas chromatography and food sensory properties: potential and challenges.

    PubMed

    Cordero, Chiara; Kiefl, Johannes; Schieberle, Peter; Reichenbach, Stephen E; Bicchi, Carlo

    2015-01-01

    Modern omics disciplines dealing with food flavor focus the analytical efforts on the elucidation of sensory-active compounds, including all possible stimuli of multimodal perception (aroma, taste, texture, etc.) by means of a comprehensive, integrated treatment of sample constituents, such as physicochemical properties, concentration in the matrix, and sensory properties (odor/taste quality, perception threshold). Such analyses require detailed profiling of known bioactive components as well as advanced fingerprinting techniques to catalog sample constituents comprehensively, quantitatively, and comparably across samples. Multidimensional analytical platforms support comprehensive investigations required for flavor analysis by combining information on analytes' identities, physicochemical behaviors (volatility, polarity, partition coefficient, and solubility), concentration, and odor quality. Unlike other omics, flavor metabolomics and sensomics include the final output of the biological phenomenon (i.e., sensory perceptions) as an additional analytical dimension, which is specifically and exclusively triggered by the chemicals analyzed. However, advanced omics platforms, which are multidimensional by definition, pose challenging issues not only in terms of coupling with detection systems and sample preparation, but also in terms of data elaboration and processing. The large number of variables collected during each analytical run provides a high level of information, but requires appropriate strategies to exploit fully this potential. This review focuses on advances in comprehensive two-dimensional gas chromatography and analytical platforms combining two-dimensional gas chromatography with olfactometry, chemometrics, and quantitative assays for food sensory analysis to assess the quality of a given product. We review instrumental advances and couplings, automation in sample preparation, data elaboration, and a selection of applications.

  8. Development of a Reduced-Order Three-Dimensional Flow Model for Thermal Mixing and Stratification Simulation during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui

    2017-09-03

    Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles in the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability, embedded in a modern system analysis code, to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel of parallel plates. Based on the comparisons with analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model performs very well for a wide range of flow problems.

  9. Fast analytical scatter estimation using graphics processing units.

    PubMed

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    The purpose of this work was to develop a fast, patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.
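    A scaled root-mean-square difference metric of the kind used for such comparisons can be sketched as below. The exact scaling the authors used is not stated here, so normalizing by the RMS of the reference profile is an assumption, and the profiles are invented stand-ins:

```python
import math

def scaled_rms_difference(estimate, reference):
    """RMS of (estimate - reference), normalized by the RMS of the reference."""
    assert len(estimate) == len(reference)
    sq_diff = sum((e - r) ** 2 for e, r in zip(estimate, reference))
    sq_ref = sum(r ** 2 for r in reference)
    return math.sqrt(sq_diff / sq_ref)

mc = [10.0, 12.0, 11.0, 9.0]        # stand-in Monte Carlo scatter profile
analytic = [10.2, 11.8, 11.1, 9.1]  # stand-in analytical scatter profile
assert scaled_rms_difference(mc, mc) == 0.0
assert scaled_rms_difference(analytic, mc) < 0.05  # within 5% in this toy case
```

    Normalizing makes the metric comparable across phantoms and detector settings whose absolute scatter magnitudes differ.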

  10. Science Update: Analytical Chemistry.

    ERIC Educational Resources Information Center

    Worthy, Ward

    1980-01-01

    Briefly discusses new instrumentation in the field of analytical chemistry. Advances in liquid chromatography, photoacoustic spectroscopy, the use of lasers, and mass spectrometry are also discussed. (CS)

  11. Investigation of Acoustical Shielding by a Wedge-Shaped Airframe

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Clark, Lorenzo R.; Dunn, Mark H.; Tweed, John

    2006-01-01

    Experiments on a scale model of an advanced unconventional subsonic transport concept, the Blended Wing Body (BWB), have demonstrated significant shielding of inlet-radiated noise. A computational model of the shielding mechanism has been developed using a combination of boundary integral equation method (BIEM) and equivalent source method (ESM). The computation models the incident sound from a point source in a nacelle and determines the scattered sound field. In this way the sound fields with and without the airfoil can be estimated for comparison to experiment. An experimental test bed using a simplified wedge-shaped airfoil and a broadband point noise source in a simulated nacelle has been developed for the purposes of verifying the analytical model and also to study the effect of engine nacelle placement on shielding. The experimental study is conducted in the Anechoic Noise Research Facility at NASA Langley Research Center. The analytic and experimental results are compared at 6300 and 8000 Hz. These frequencies correspond to approximately 150 Hz on the full-scale aircraft. Comparison between the experimental and analytic results is quite good, not only for the noise scattering by the airframe, but also for the total sound pressure in the far field. Many of the details of the sound field that the analytic model predicts are seen or indicated in the experiment, within the spatial resolution limitations of the experiment. Changing nacelle location produces comparable changes in noise shielding contours evaluated analytically and experimentally. Future work in the project will be enhancement of the analytic model to extend the analysis to higher frequencies corresponding to the blade passage frequency of the high bypass ratio ducted fan engines that are expected to power the BWB.

  12. Investigation of Acoustical Shielding by a Wedge-Shaped Airframe

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Clark, Lorenzo R.; Dunn, Mark H.; Tweed, John

    2004-01-01

    Experiments on a scale model of an advanced unconventional subsonic transport concept, the Blended Wing Body (BWB), have demonstrated significant shielding of inlet-radiated noise. A computational model of the shielding mechanism has been developed using a combination of boundary integral equation method (BIEM) and equivalent source method (ESM). The computation models the incident sound from a point source in a nacelle and determines the scattered sound field. In this way the sound fields with and without the airfoil can be estimated for comparison to experiment. An experimental test bed using a simplified wedge-shaped airfoil and a broadband point noise source in a simulated nacelle has been developed for the purposes of verifying the analytical model and also to study the effect of engine nacelle placement on shielding. The experimental study is conducted in the Anechoic Noise Research Facility at NASA Langley Research Center. The analytic and experimental results are compared at 6300 and 8000 Hz. These frequencies correspond to approximately 150 Hz on the full-scale aircraft. Comparison between the experimental and analytic results is quite good, not only for the noise scattering by the airframe, but also for the total sound pressure in the far field. Many of the details of the sound field that the analytic model predicts are seen or indicated in the experiment, within the spatial resolution limitations of the experiment. Changing nacelle location produces comparable changes in noise shielding contours evaluated analytically and experimentally. Future work in the project will be enhancement of the analytic model to extend the analysis to higher frequencies corresponding to the blade passage frequency of the high bypass ratio ducted fan engines that are expected to power the BWB.

  13. Advances in analytical chemistry

    NASA Technical Reports Server (NTRS)

    Arendale, W. F.; Congo, Richard T.; Nielsen, Bruce J.

    1991-01-01

    Implementation of computer programs based on multivariate statistical algorithms makes it possible to obtain reliable information from long data vectors that contain large amounts of extraneous information, for example, noise and/or analytes that we do not wish to control. Three examples are described. Each of these applications requires the use of techniques characteristic of modern analytical chemistry. The first example, using a quantitative or analytical model, describes the determination of the acid dissociation constant for 2,2'-pyridyl thiophene using archived data. The second example describes an investigation to determine the active biocidal species of iodine in aqueous solutions. The third example is taken from a research program directed toward advanced fiber-optic chemical sensors. The second and third examples require heuristic or empirical models.

  14. Integrating GIS and ABM to Explore Spatiotemporal Dynamics

    NASA Astrophysics Data System (ADS)

    Sun, M.; Jiang, Y.; Yang, C.

    2013-12-01

    Agent-based modeling (ABM), as a methodology for bottom-up exploration that accounts for the adaptive behavior and heterogeneity of system components, can help discover the development and patterns of complex social and environmental systems. However, ABM is a computationally intensive process, especially when the number of system components becomes large and the agent-agent/agent-environment interactions are modeled in detail. Most traditional ABM frameworks are CPU-based and lack satisfactory computing capacity. To address this problem, and given the emergence of advanced techniques, GPU computing with CUDA can provide a powerful parallel structure to enable complex simulations of spatiotemporal dynamics. In this study, we first develop a GPU-based ABM system. Secondly, in order to visualize the dynamics generated from agent movement and the change of agent/environmental attributes during the simulation, we integrate GIS into the ABM system. Advanced geovisualization technologies can be utilized to represent spatiotemporal change events, such as 2D/3D maps with state-of-the-art symbols, space-time cubes, and multiple layers, each of which presents the pattern at one time stamp. Thirdly, visual analytics, including interactive tools (e.g., grouping, filtering, linking), is included in our ABM-GIS system to help users conduct real-time data exploration as the simulation progresses. Analyses such as flow analysis and spatial cluster analysis can be integrated according to the geographical problem to be explored.
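    The data-parallel, per-agent update that motivates GPU acceleration can be illustrated on the CPU with a vectorized NumPy step. This is a hedged analogue of such a kernel, not the study's CUDA code; the grid size and movement rule are invented:

```python
import numpy as np

def step(positions, rng, bounds=100):
    """One data-parallel update: every agent moves at most one cell per axis
    in a random direction, mimicking the per-agent threads of a GPU kernel."""
    moves = rng.integers(-1, 2, size=positions.shape)  # -1, 0, or +1 per axis
    return np.clip(positions + moves, 0, bounds - 1)   # stay on the grid

rng = np.random.default_rng(0)
agents = rng.integers(0, 100, size=(10_000, 2))  # 10k agents on a 100x100 grid
for _ in range(50):
    agents = step(agents, rng)

assert agents.shape == (10_000, 2)
assert agents.min() >= 0 and agents.max() <= 99
```

    Because every agent's update is independent within a step, the same logic maps naturally onto one GPU thread per agent, which is where the speedup over CPU-based ABM frameworks comes from.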

  15. An Advanced Analytical Chemistry Experiment Using Gas Chromatography-Mass Spectrometry, MATLAB, and Chemometrics to Predict Biodiesel Blend Percent Composition

    ERIC Educational Resources Information Center

    Pierce, Karisa M.; Schale, Stephen P.; Le, Trang M.; Larson, Joel C.

    2011-01-01

    We present a laboratory experiment for an advanced analytical chemistry course where we first focus on the chemometric technique partial least-squares (PLS) analysis applied to one-dimensional (1D) total-ion-current gas chromatography-mass spectrometry (GC-TIC) separations of biodiesel blends. Then, we focus on n-way PLS (n-PLS) applied to…

  16. Computational toxicity in 21st century safety sciences (China ...

    EPA Pesticide Factsheets

    presentation at the Joint Meeting of Analytical Toxicology and Computational Toxicology Committee (Chinese Society of Toxicology) International Workshop on Advanced Chemical Safety Assessment Technologies on 11 May 2016, Fuzhou University, Fuzhou China

  17. Rapid and semi-analytical design and simulation of a toroidal magnet made with YBCO and MgB2 superconductors

    DOE PAGES

    Dimitrov, I. K.; Zhang, X.; Solovyov, V. F.; ...

    2015-07-07

    Recent advances in second-generation (YBCO) high-temperature superconducting wire could potentially enable the design of super-high-performance energy storage devices that combine the high energy density of chemical storage with the high power of superconducting magnetic storage. However, the high aspect ratio and the considerable filament size of these wires require the concomitant development of dedicated optimization methods that account for the critical current density in type-II superconductors. In this study, we report on the novel application and results of a CPU-efficient semi-analytical computer code based on the Radia 3-D magnetostatics software package. Our algorithm is used to simulate and optimize the energy density of a superconducting magnetic energy storage device model, based on design constraints such as overall size and number of coils. The rapid performance of the code hinges on analytical calculations of the magnetic field, based on an efficient implementation of the Biot-Savart law for a large variety of 3-D "base" geometries in the Radia package. The significantly reduced CPU time and simple data input, in conjunction with the consideration of realistic input variables such as material-specific, temperature- and magnetic-field-dependent critical current densities, have enabled the Radia-based algorithm to outperform finite-element approaches in CPU time at the same accuracy levels. Comparative simulations of MgB2- and YBCO-based devices are performed at 4.2 K in order to ascertain the realistic efficiency of the design configurations.
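    The Biot-Savart evaluation at the heart of such semi-analytical field codes can be sketched by summing segment contributions around a current loop and checking the result against the closed-form on-axis field. This is a generic illustration, not Radia's implementation:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def cross_z(a, b):
    """z-component of the cross product a x b."""
    return a[0] * b[1] - a[1] * b[0]

def loop_field_on_axis(current, radius, z, n=2000):
    """Axial B-field of a circular loop (in the x-y plane, centered at the
    origin) at point (0, 0, z), by Biot-Savart summation over n straight
    segments: dB = mu0/(4*pi) * I * (dl x r) / |r|^3."""
    bz = 0.0
    for k in range(n):
        t0, t1 = 2 * math.pi * k / n, 2 * math.pi * (k + 1) / n
        tm = 0.5 * (t0 + t1)
        mid = (radius * math.cos(tm), radius * math.sin(tm), 0.0)
        dl = (radius * (math.cos(t1) - math.cos(t0)),
              radius * (math.sin(t1) - math.sin(t0)), 0.0)
        r = (-mid[0], -mid[1], z)  # vector from segment midpoint to field point
        rmag = math.sqrt(r[0] ** 2 + r[1] ** 2 + r[2] ** 2)
        bz += MU0 / (4 * math.pi) * current * cross_z(dl, r) / rmag ** 3
    return bz

# Verify against the closed-form on-axis result mu0*I*R^2 / (2*(R^2+z^2)^1.5)
I, R, z = 100.0, 0.5, 0.3
exact = MU0 * I * R ** 2 / (2 * (R ** 2 + z ** 2) ** 1.5)
assert abs(loop_field_on_axis(I, R, z) - exact) / exact < 1e-4
```

    Summing closed-form contributions of simple "base" geometries in this way is what lets semi-analytical codes avoid meshing all of space, which is where their CPU-time advantage over finite-element approaches comes from.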

  18. The analytical and numerical approaches to the theory of the Moon's librations: Modern analysis and results

    NASA Astrophysics Data System (ADS)

    Petrova, N.; Zagidullin, A.; Nefedyev, Y.; Kosulin, V.; Andreev, A.

    2017-11-01

    Observing the physical librations of celestial bodies such as the Moon represents one of the astronomical methods of remotely assessing the internal structure of a celestial body without conducting expensive space experiments. The paper contains a review of recent advances in studying the Moon's structure using various methods of obtaining and applying lunar physical libration (LPhL) data. In this article, LPhL simulation methods for assessing the viscoelastic and dissipative properties of the lunar body and the lunar core parameters, whose existence has recently been confirmed during reprocessing of the "Apollo" mission seismic data, are described. Much attention is paid to the physical interpretation of the free librations phenomenon and the methods for its determination. The practical application of the most accurate analytical LPhL tables (Rambaux and Williams, 2011) is discussed. The tables were built on the basis of complex analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421. In the paper an efficiency analysis of the two approaches to LPhL theory, numerical and analytical, is conducted. It has been shown that in lunar investigation the two approaches complement each other in various aspects: the numerical approach provides the high accuracy required for proper processing of modern observations, while the analytical approach allows one to comprehend the essence of the phenomena in lunar rotation and to predict and interpret new effects in observations of the lunar body and lunar core parameters.

  19. Free-jet acoustic investigation of high-radius-ratio coannular plug nozzles

    NASA Technical Reports Server (NTRS)

    Knott, P. R.; Janardan, B. A.; Majjigi, R. K.; Bhutiani, P. K.; Vogt, P. G.

    1984-01-01

    The experimental and analytical results of a scale-model simulated-flight acoustic exploratory investigation of high-radius-ratio coannular plug nozzles with inverted velocity and temperature profiles are summarized. Six coannular plug nozzle configurations and a baseline convergent conical nozzle were tested for simulated-flight acoustic evaluation. The nozzles were tested over a range of test conditions typical of a Variable Cycle Engine for application to advanced high-speed aircraft. It was found that in simulated flight, the high-radius-ratio coannular plug nozzles maintain the jet noise and shock noise reduction features previously observed in static testing. The presence of nozzle bypass struts will not significantly affect the acoustic noise reduction features of a General Electric type nozzle design. A unique coannular plug nozzle flight acoustic spectral prediction method was identified and found to predict the measured results quite well. Special laser velocimeter and acoustic measurements were performed which have given new insights into the jet and shock noise reduction mechanisms of coannular plug nozzles, with regard to identifying further beneficial research efforts.

  20. Gyrokinetic Particle Simulations of Neoclassical Transport

    NASA Astrophysics Data System (ADS)

    Lin, Zhihong

    A time varying weighting (delta f) scheme based on the small gyro-radius ordering is developed and applied to a steady state, multi-species gyrokinetic particle simulation of neoclassical transport. Accurate collision operators conserving momentum and energy are developed and implemented. Benchmark simulation results using these operators are found to agree very well with neoclassical theory. For example, it is dynamically demonstrated that like-particle collisions produce no particle flux and that the neoclassical fluxes are ambipolar for an ion-electron plasma. An important physics feature of the present scheme is the introduction of toroidal flow to the simulations. In agreement with the existing analytical neoclassical theory, ion energy flux is enhanced by the toroidal mass flow and the neoclassical viscosity is a Pfirsch-Schluter factor times the classical viscosity in the banana regime. In addition, the poloidal electric field associated with toroidal mass flow is found to enhance density gradient driven electron particle flux and the bootstrap current while reducing temperature gradient driven flux and current. Modifications of the neoclassical transport by the orbit squeezing effects due to the radial electric field associated with sheared toroidal flow are studied. Simulation results indicate a reduction of both ion thermal flux and neoclassical toroidal rotation. Neoclassical theory in the steep gradient profile regime, where conventional neoclassical theory fails, is examined by taking into account finite banana width effects. The relevance of these studies to interesting experimental conditions in tokamaks is discussed. Finally, the present numerical scheme is extended to general geometry equilibrium. This new formulation will be valuable for the development of new capabilities to address complex equilibria such as advanced stellarator configurations and possibly other alternate concepts for the magnetic confinement of plasmas. In general, the present work demonstrates a valuable new capability for studying important aspects of neoclassical transport inaccessible by conventional analytical calculation processes.

  1. Collisional Ion and Electron Scale Gyrokinetic Simulations in the Tokamak Pedestal

    NASA Astrophysics Data System (ADS)

    Belli, E. A.; Candy, J.; Snyder, P. B.

    2016-10-01

    A new gyrokinetic solver, CGYRO, has been developed for precise studies of high collisionality regimes, such as the H-mode pedestal and L-mode edge. Building on GYRO and NEO, CGYRO uses the same velocity-space coordinates as NEO to optimize the accuracy of the collision dynamics and allow for advanced operators beyond the standard Lorentz pitch-angle scattering model. These advanced operators include energy diffusion and finite-FLR collisional effects. The code is optimized for multiscale (coupled electron and ion turbulence scales) simulations, employing a new spatial discretization and array distribution scheme that targets scalability on next-generation (exascale) HPC systems. In this work, CGYRO is used to study the complex spectrum of modes in the pedestal region. The onset of the linear KBM with full collisional effects is assessed to develop an improved KBM/RBM model for EPED. The analysis is extended to high k to explore the role of electron-scale (ETG-range) physics. Comparisons with new analytic collisional theories are made. Inclusion of sonic toroidal rotation (including full centrifugal effects) for studies including heavy wall impurities is also reported. Work supported in part by the US DOE under DE-FC02-06ER54873 and DE-FC02-08ER54963.

  2. Proposed Facility Modifications to Support Propulsion Systems Testing Under Simulated Space Conditions at Plum Brook Station's Spacecraft Propulsion Research Facility (B-2)

    NASA Technical Reports Server (NTRS)

    Edwards, Daryl A.

    2008-01-01

    Preparing NASA's Plum Brook Station's Spacecraft Propulsion Research Facility (B-2) to support NASA's new generation of launch vehicles has raised many challenges for B-2's support staff. The facility provides a unique capability to test chemical propulsion systems/vehicles while simulating space thermal and vacuum environments. Designed and constructed in the early 1960s to support upper stage cryogenic engine/vehicle system development, the Plum Brook Station B-2 facility will require modifications to support the larger, more powerful, and more advanced engine systems for the next generation of vehicles leaving earth's orbit. Engine design improvements over the years have included large area expansion ratio nozzles, greater combustion chamber pressures, and advanced materials. Consequently, it has become necessary to determine what facility changes are required and how the facility can be adapted to support varying customers and their specific test needs. Exhaust system performance, including understanding the present facility capabilities, is the primary focus of this work. A variety of approaches and analytical tools are being employed to gain this understanding. This presentation discusses some of the challenges in applying these tools to this project and expected facility configuration to support the varying customer needs.

  3. Proposed Facility Modifications to Support Propulsion Systems Testing Under Simulated Space Conditions at Plum Brook Station's Spacecraft Propulsion Research Facility (B-2)

    NASA Technical Reports Server (NTRS)

    Edwards, Daryl A.

    2007-01-01

    Preparing NASA's Plum Brook Station's Spacecraft Propulsion Research Facility (B-2) to support NASA's new generation of launch vehicles has raised many challenges for B-2's support staff. The facility provides a unique capability to test chemical propulsion systems/vehicles while simulating space thermal and vacuum environments. Designed and constructed 4 decades ago to support upper stage cryogenic engine/vehicle system development, the Plum Brook Station B-2 facility will require modifications to support the larger, more powerful, and more advanced engine systems for the next generation of vehicles leaving earth's orbit. Engine design improvements over the years have included large area expansion ratio nozzles, greater combustion chamber pressures, and advanced materials. Consequently, it has become necessary to determine what facility changes are required and how the facility can be adapted to support varying customers and their specific test needs. Instrumental in this task is understanding the present facility capabilities and identifying what reasonable changes can be implemented. A variety of approaches and analytical tools are being employed to gain this understanding. This paper discusses some of the challenges in applying these tools to this project and expected facility configuration to support the varying customer needs.

  4. Synergia: an accelerator modeling tool with 3-D space charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amundson, James F.; Spentzouris, P. (Fermilab)

    2004-07-01

    High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three-dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.

  5. Development of techniques for advanced optical contamination measurement with internal reflection spectroscopy, phase 1, volume 1

    NASA Technical Reports Server (NTRS)

    Hayes, J. D.

    1972-01-01

    The feasibility of monitoring volatile contaminants in a large space simulation chamber using techniques of internal reflection spectroscopy was demonstrated analytically and experimentally. The infrared spectral region was selected as the operational spectral range in order to provide unique identification of the contaminants along with sufficient sensitivity to detect trace contaminant concentrations. It was determined theoretically that a monolayer of the contaminants could be detected and identified using optimized experimental procedures. This ability was verified experimentally. Procedures were developed to correct the attenuated total reflectance spectra for thick sample distortion. However, by using two different element designs the need for such correction can be avoided.

  6. Simulating future residential property losses from wildfire in Flathead County, Montana: Chapter 1

    USGS Publications Warehouse

    Prato, Tony; Paveglio, Travis B; Barnett, Yan; Silverstein, Robin; Hardy, Michael; Keane, Robert; Loehman, Rachel A.; Clark, Anthony; Fagre, Daniel B.; Venn, Tyron; Stockmann, Keith

    2014-01-01

    Wildfire damages to private residences in the United States and elsewhere have increased as a result of expansion of the wildland-urban interface (WUI) and other factors. Understanding this unwelcome trend requires analytical frameworks that simulate how various interacting social, economic, and biophysical factors influence those damages. A methodological framework is developed for simulating expected residential property losses from wildfire [E(RLW)], which is a probabilistic monetary measure of wildfire risk to residential properties in the WUI. E(RLW) is simulated for Flathead County, Montana, for five 10-year subperiods covering the period 2010-2059, under various assumptions about future climate change, economic growth, land use policy, and forest management. Results show statistically significant increases in the spatial extent of WUI properties, the number of residential structures at risk from wildfire, and E(RLW) over the 50-year evaluation period for both the county and smaller subareas (i.e., neighborhoods and parcels). The E(RLW) simulation framework presented here advances the field of wildfire risk assessment by providing a finer-scale tool that incorporates a set of dynamic, interacting processes. The framework can be applied using other scenarios for climate change, economic growth, land use policy, and forest management, and in other areas.

  7. Design of coherent receiver optical front end for unamplified applications.

    PubMed

    Zhang, Bo; Malouin, Christian; Schmidt, Theodore J

    2012-01-30

    Advanced modulation schemes together with coherent detection and digital signal processing have enabled the next generation of high-bandwidth optical communication systems. One of the key advantages of coherent detection is its superior receiver sensitivity compared to direct detection receivers due to the gain provided by the local oscillator (LO). In unamplified applications, such as metro and edge networks, the ultimate receiver sensitivity is dictated by the amount of shot noise, thermal noise, and the residual beating of the local oscillator with relative intensity noise (LO-RIN). We show that the best sensitivity is achieved when the thermal noise is balanced with the residual LO-RIN beat noise, which results in an optimum LO power. The impact of thermal noise from the transimpedance amplifier (TIA), the RIN from the LO, and the common mode rejection ratio (CMRR) from a balanced photodiode are individually analyzed via analytical models and compared to numerical simulations. The analytical model results match well with those of the numerical simulations, providing a simplified method to quantify the impact of receiver design tradeoffs. For a practical 100 Gb/s integrated coherent receiver with 7% FEC overhead, we show that an optimum receiver sensitivity of -33 dBm can be achieved at the GFEC cliff of 8.55E-5 if the LO power is optimized at 11 dBm. We also discuss a potential method to monitor the imperfections of a balanced and integrated coherent receiver.
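    The optimum-LO-power result above can be illustrated with a toy SNR model (all coefficients below are hypothetical, not taken from the paper): the signal term grows linearly with LO power while the noise is a fixed thermal term plus a shot term proportional to P_LO and a residual LO-RIN beat term proportional to P_LO squared. Maximizing the SNR analytically lands exactly where the thermal noise balances the LO-RIN beat noise, as the abstract states.

```python
import math

# hypothetical noise coefficients: thermal (fixed), shot (~P_LO), RIN beat (~P_LO**2)
TH, SHOT, RIN = 1e-3, 1e-4, 1e-5

def snr(p_lo):
    # toy coherent-receiver SNR: signal grows with LO power, noise terms add
    return p_lo / (TH + SHOT * p_lo + RIN * p_lo ** 2)

# numerical scan for the optimum LO power
grid = [i * 1e-3 for i in range(1, 20001)]
p_best = max(grid, key=snr)

# setting d(SNR)/dP = 0 gives TH = RIN * P**2, i.e. the optimum sits exactly
# where the thermal noise equals the residual LO-RIN beat noise
p_star = math.sqrt(TH / RIN)
```

The numerical argmax of the scan agrees with the closed-form balance point, which is the mechanism the abstract describes in dB terms.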

  8. Interlaboratory comparability, bias, and precision for four laboratories measuring constituents in precipitation, November 1982-August 1983

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Malo, B.A.

    1985-01-01

    Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory has been indicated when the laboratory mean for that analyte is shown to be significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate have been compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Analyte estimated precisions have been compared using F-tests and differences in analyte precisions for laboratory pairs have been reported. (USGS)
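    The pooled-variance precision estimate and F-test comparison described above are straightforward to reproduce. The sketch below uses made-up replicate data (not the study's results): per-sample variances are pooled for each laboratory, and the ratio of the two pooled variances is the statistic one would compare against an F critical value.

```python
from statistics import variance

def pooled_variance(groups):
    # pooled precision estimate: total sum of squares over total degrees of freedom
    ss = sum((len(g) - 1) * variance(g) for g in groups)
    df = sum(len(g) - 1 for g in groups)
    return ss / df

# hypothetical replicate sulfate determinations (mg/L) on two simulated
# precipitation samples; one list of replicates per sample, per laboratory
lab_a = [[2.1, 2.0, 2.2], [4.0, 4.1, 3.9]]
lab_b = [[2.1, 2.4, 1.8], [4.3, 3.7, 4.0]]

s2_a = pooled_variance(lab_a)
s2_b = pooled_variance(lab_b)
# variance ratio to compare with an F critical value (4 and 4 df here)
f_ratio = max(s2_a, s2_b) / min(s2_a, s2_b)
```

With these numbers laboratory B's pooled variance is nine times laboratory A's, the kind of precision difference the F-tests in the study were designed to detect.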

  9. Initialization and Simulation of Three-Dimensional Aircraft Wake Vortices

    NASA Technical Reports Server (NTRS)

    Ash, Robert L.; Zheng, Z. C.

    1997-01-01

    This paper studies the effects of axial velocity profiles on vortex decay, in order to properly initialize and simulate three-dimensional wake vortex flow. Analytical relationships are obtained based on a single vortex model and computational simulations are performed for a rather practical vortex wake, which show that the single vortex analytical relations can still be applicable at certain streamwise sections of three-dimensional wake vortices.

  10. Analytical methodology for determination of helicopter IFR precision approach requirements. [pilot workload and acceptance level

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.

    1980-01-01

    A systematic analytical approach to the determination of helicopter IFR precision approach requirements is formulated. The approach is based upon the hypothesis that pilot acceptance level or opinion rating of a given system is inversely related to the degree of pilot involvement in the control task. A nonlinear simulation of the helicopter approach to landing task incorporating appropriate models for the UH-1H aircraft, the environmental disturbances, and the human pilot was developed as a tool for evaluating the pilot acceptance hypothesis. The simulated pilot model is generic in nature and includes analytical representation of the human information acquisition, processing, and control strategies. Simulation analyses in the flight director mode indicate that the pilot model used is reasonable. Results of the simulation are used to identify candidate pilot workload metrics and to test the well-known performance-workload relationship. A pilot acceptance analytical methodology is formulated as a basis for further investigation, development and validation.

  11. Accuracy of Binary Black Hole Waveform Models for Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team

    2016-03-01

    Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on two questions: (i) how well do models capture the binary's late inspiral, where they lack a priori accurate information from PN or NR, and (ii) how accurately do they model binaries with parameters outside their range of calibration? These results guide the choice of templates for future GW searches, and motivate future modeling efforts.
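    Model-versus-NR comparisons of this kind reduce to an overlap ("match") between discretized waveforms. The sketch below computes a zero-lag, flat-noise-spectrum match between two sinusoids standing in for a template and a slightly detuned reference waveform; a production GW match would additionally weight by the detector noise PSD and maximize over time and phase shifts.

```python
import math

def match(h1, h2):
    # normalized zero-lag overlap <h1,h2> / (|h1| |h2|)
    inner = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return inner / (n1 * n2)

N = 4096
t = [i / N for i in range(N)]
template = [math.sin(2 * math.pi * 60.0 * x) for x in t]  # stand-in "model"
signal = [math.sin(2 * math.pi * 60.1 * x) for x in t]    # slightly detuned stand-in

m = match(template, signal)  # < 1: the detuning costs overlap
```

Even a 0.1 Hz detuning over one second visibly degrades the match, which is why template accuracy requirements for searches are stated in terms of mismatch thresholds.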

  12. Proposition of stair climb of a drop using chemical wettability gradient

    NASA Astrophysics Data System (ADS)

    Seerha, Prabh P. S.; Kumar, Parmod; Das, Arup K.; Mitra, Sushanta K.

    2017-07-01

    We propose a passive technique for a drop to climb along a staircase-textured surface using chemical wettability gradients. The stair structure, droplet configuration, and contact angle gradient are modeled using Lagrangian smoothed particle hydrodynamics. The stair climb efficiency of the droplet is found to be a function of wettability gradient strength. Using an analytical balance of actuation and resistive forces across droplets, physical reasons behind stair climbing are established and influencing parameters are identified. Evolution of the droplet shape along with the advancing and the receding contact angles is presented, from which instantaneous actuation and hysteresis forces are calculated. Using the history of Lagrangian particles, circulation at the foot of stairs and progressive development of the advancing drop front are monitored. Higher efficiency in stair climbing for a bigger drop than for a smaller one is obtained from simulation results and confirmed by the force balance. Difficulty in climbing steeper stairs is also demonstrated to delineate the effect of gravitational pull against the actuation force due to the wettability gradient.
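    The actuation force from a wettability gradient can be illustrated with a minimal 2-D contact-line balance (the formula's sign convention and all numbers below are illustrative assumptions, not taken from the paper): a smaller (more wettable) contact angle ahead of the drop than behind it yields a net forward force proportional to the surface tension and the difference of angle cosines.

```python
import math

def net_driving_force(gamma, width, theta_back_deg, theta_front_deg):
    # toy 2-D balance for a drop straddling a wettability gradient:
    # F = gamma * width * (cos(theta_front) - cos(theta_back));
    # a more wettable (smaller-angle) side ahead pulls the drop forward
    return gamma * width * (math.cos(math.radians(theta_front_deg))
                            - math.cos(math.radians(theta_back_deg)))

# hypothetical numbers: water-like surface tension (N/m), 1 mm contact-line
# width, 90 degrees behind the drop, 60 degrees ahead of it
f = net_driving_force(0.072, 1.0e-3, 90.0, 60.0)  # positive: drop is pulled forward
```

In the paper's setting this actuation term competes with hysteresis and, on steeper stairs, the gravitational pull, which is the balance the abstract describes.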

  13. Computational sciences in the upstream oil and gas industry

    PubMed Central

    Halsey, Thomas C.

    2016-01-01

    The predominant technical challenge of the upstream oil and gas industry has always been the fundamental uncertainty of the subsurface from which it produces hydrocarbon fluids. The subsurface can be detected remotely by, for example, seismic waves, or it can be penetrated and studied in the extremely limited vicinity of wells. Inevitably, a great deal of uncertainty remains. Computational sciences have been a key avenue to reduce and manage this uncertainty. In this review, we discuss at a relatively non-technical level the current state of three applications of computational sciences in the industry. The first of these is seismic imaging, which is currently being revolutionized by the emergence of full wavefield inversion, enabled by algorithmic advances and petascale computing. The second is reservoir simulation, also being advanced through the use of modern highly parallel computing architectures. Finally, we comment on the role of data analytics in the upstream industry. This article is part of the themed issue ‘Energy and the subsurface’. PMID:27597785

  14. Realistic Analytical Polyhedral MRI Phantoms

    PubMed Central

    Ngo, Tri M.; Fung, George S. K.; Han, Shuo; Chen, Min; Prince, Jerry L.; Tsui, Benjamin M. W.; McVeigh, Elliot R.; Herzka, Daniel A.

    2015-01-01

    Purpose: Analytical phantoms have closed form Fourier transform expressions and are used to simulate MRI acquisitions. Existing 3D analytical phantoms are unable to accurately model shapes of biomedical interest. It is demonstrated that polyhedral analytical phantoms have closed form Fourier transform expressions and can accurately represent 3D biomedical shapes. Theory: The derivations of the Fourier transform of a polygon and polyhedron are presented. Methods: The Fourier transform of a polyhedron was implemented and its accuracy in representing faceted and smooth surfaces was characterized. Realistic anthropomorphic polyhedral brain and torso phantoms were constructed and their use in simulated 3D/2D MRI acquisitions was described. Results: Using polyhedra, the Fourier transform of faceted shapes can be computed to within machine precision. Smooth surfaces can be approximated with increasing accuracy by increasing the number of facets in the polyhedron; the additional accumulated numerical imprecision of the Fourier transform of polyhedra with many faces remained small. Simulations of 3D/2D brain and 2D torso cine acquisitions produced realistic reconstructions free of high frequency edge aliasing as compared to equivalent voxelized/rasterized phantoms. Conclusion: Analytical polyhedral phantoms are easy to construct and can accurately simulate shapes of biomedical interest. PMID:26479724
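    The idea behind analytical phantoms can be shown in one dimension, where a centered box of unit height has the closed-form transform width·sinc(πk·width). The sketch below is a 1-D analogue of the paper's approach (not its polyhedral derivation): the closed form is checked against brute-force quadrature of the same Fourier integral.

```python
import cmath
import math

def box_ft(width, k):
    # closed-form Fourier transform of a centered 1-D box of unit height:
    # width * sinc(pi * k * width)
    if k == 0:
        return float(width)
    return width * math.sin(math.pi * k * width) / (math.pi * k * width)

def box_ft_numeric(width, k, n=4096):
    # brute-force midpoint quadrature of the same integral, for comparison
    dx = width / n
    xs = [-width / 2 + (i + 0.5) * dx for i in range(n)]
    return sum(cmath.exp(-2j * math.pi * k * x) * dx for x in xs).real

# worst-case disagreement over the first few spatial frequencies
err = max(abs(box_ft(0.5, k) - box_ft_numeric(0.5, k)) for k in range(1, 9))
```

Because the closed form is exact at every k-space location, an analytical phantom sidesteps the rasterization and edge-aliasing errors of a voxelized phantom, which is the advantage the abstract reports for the polyhedral case.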

  15. Review of recent advances in analytical techniques for the determination of neurotransmitters

    PubMed Central

    Perry, Maura; Li, Qiang; Kennedy, Robert T.

    2009-01-01

    Methods and advances for monitoring neurotransmitters in vivo or for tissue analysis of neurotransmitters over the last five years are reviewed. The review is organized primarily by neurotransmitter type. Transmitter and related compounds may be monitored by either in vivo sampling coupled to analytical methods or implanted sensors. Sampling is primarily performed using microdialysis, but low-flow push-pull perfusion may offer advantages of spatial resolution while minimizing the tissue disruption associated with higher flow rates. Analytical techniques coupled to these sampling methods include liquid chromatography, capillary electrophoresis, enzyme assays, sensors, and mass spectrometry. Methods for the detection of amino acid, monoamine, neuropeptide, acetylcholine, nucleoside, and soluble gas neurotransmitters have been developed and improved upon. Advances in the speed and sensitivity of these methods have enabled improvements in temporal resolution and increased the number of compounds detectable. Similar advances have enabled improved detection in tissue samples, with a substantial emphasis on single cell and other small samples. Sensors provide excellent temporal and spatial resolution for in vivo monitoring. Advances in application to catecholamines, indoleamines, and amino acids have been prominent. Improvements in stability, sensitivity, and selectivity of the sensors have been of paramount interest. PMID:19800472

  16. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid for simulations of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
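    The 1/√N scaling behind such error estimates is easy to demonstrate. The sketch below is a generic Poisson event counter (not the paper's error expression): a toy hop-counting simulation is repeated many times, and quadrupling the expected number of events roughly halves the relative error of the count.

```python
import math
import random

random.seed(7)

def relative_error(rate, t_end, trials=400):
    # repeat a toy event-count "simulation" many times and report the
    # relative standard deviation of the number of events observed
    counts = []
    for _ in range(trials):
        t, n = 0.0, 0
        while True:
            t += random.expovariate(rate)  # waiting time to the next hop
            if t > t_end:
                break
            n += 1
        counts.append(n)
    m = sum(counts) / trials
    var = sum((c - m) ** 2 for c in counts) / (trials - 1)
    return math.sqrt(var) / m

e_100 = relative_error(1.0, 100.0)  # expect ~ 1/sqrt(100)
e_400 = relative_error(1.0, 400.0)  # expect ~ 1/sqrt(400), i.e. about half e_100
```

This is the behavior the analytical expression formalizes: the statistical error is governed by the number of observed events, independent of which simulation method produced them.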

  17. MERRA Analytic Services

    NASA Astrophysics Data System (ADS)

    Schnase, J. L.; Duffy, D. Q.; McInerney, M. A.; Tamkin, G. S.; Thompson, J. H.; Gill, R.; Grieg, C. M.

    2012-12-01

    MERRA Analytic Services (MERRA/AS) is a cyberinfrastructure resource for developing and evaluating a new generation of climate data analysis capabilities. MERRA/AS supports OBS4MIP activities by reducing the time spent in the preparation of Modern Era Retrospective-Analysis for Research and Applications (MERRA) data used in data-model intercomparison. It also provides a testbed for experimental development of high-performance analytics. MERRA/AS is a cloud-based service built around the Virtual Climate Data Server (vCDS) technology that is currently used by the NASA Center for Climate Simulation (NCCS) to deliver Intergovernmental Panel on Climate Change (IPCC) data to the Earth System Grid Federation (ESGF). Crucial to its effectiveness, MERRA/AS's servers will use a workflow-generated realizable object capability to perform analyses over the MERRA data using the MapReduce approach to parallel storage-based computation. The results produced by these operations will be stored by the vCDS, which will also be able to host code sets for those who wish to explore the use of MapReduce for more advanced analytics. While the work described here will focus on the MERRA collection, these technologies can be used to publish other reanalysis, observational, and ancillary OBS4MIP data to ESGF and, importantly, offer an architectural approach to climate data services that can be generalized to applications and customers beyond the traditional climate research community. In this presentation, we describe our approach, experiences, lessons learned, and plans for the future. (Figure: (A) MERRA/AS software stack. (B) Example MERRA/AS interfaces.)
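    The MapReduce pattern referenced above can be sketched in miniature: each mapped chunk independently emits an associative partial result, and a reduce step combines the partials into a global answer. The chunked data and global-mean reduction below are illustrative stand-ins, not MERRA/AS code.

```python
from functools import reduce

# hypothetical subsetted data chunks: lists of gridded temperatures (K),
# standing in for partitions of a large reanalysis variable
chunks = [
    [280.0, 281.5, 279.0],
    [282.0, 283.5],
    [278.5, 280.5, 281.0, 279.5],
]

# map: each chunk emits a partial (sum, count) pair, computable in isolation
partials = [(sum(c), len(c)) for c in chunks]

# reduce: combine the partials associatively, then finish the statistic
total, count = reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]), partials)
global_mean = total / count
```

Because the combine step is associative, the partials can be produced near the storage that holds each chunk and merged in any order, which is what makes the approach attractive for storage-based parallel computation.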

  18. Response of the Benguela upwelling systems to spatial variations in the wind stress

    NASA Astrophysics Data System (ADS)

    Fennel, Wolfgang; Junker, Tim; Schmidt, Martin; Mohrholz, Volker

    2012-08-01

    In this paper we combine field observations, numerical modeling, and an idealized analytical theory to study some features of the Benguela upwelling system. The current system can be established through a combination of observations and realistic simulations with an advanced numerical model. The poleward undercurrent below the equatorward coastal jet is often found as a countercurrent that reaches the sea surface seaward of the coastal jet. The coastal band of cold upwelled water appears to broaden from south to north, and at the northern edge of the wind band an offshore flow is often detected, which deflects the coastal Angola Current to the west. These features can be explained and understood with an idealized analytical model forced by a spatially variable wind. A crucial role is played by the wind stress curl, which shapes the oceanic response through Ekman pumping. The interplay of the curl-driven effects and the coastal Ekman upwelling, together with the coastal jet, Kelvin waves, and the undercurrent, is the key to understanding the formation of the three-dimensional circulation patterns in the Benguela system. While the numerical model is based on the full set of primitive equations, realistic topography, and forcing, the analytic model uses a linear, flat-bottomed f-plane ocean, where the coast is a straight wall and the forcing is represented by an alongshore band of dome-shaped wind stress. Although the analytical model is highly idealized, it is very useful to grasp the basic mechanisms leading to the response patterns.

  19. Characterization of Compton-scatter imaging with an analytical simulation method

    PubMed Central

    Jones, Kevin C; Redler, Gage; Templeton, Alistair; Bernard, Damian; Turian, Julius V; Chu, James C H

    2018-01-01

    By collimating the photons scattered when a megavoltage therapy beam interacts with the patient, a Compton-scatter image may be formed without the delivery of an extra dose. To characterize and assess the potential of the technique, an analytical model for simulating scatter images was developed and validated against Monte Carlo (MC). For three phantoms, the scatter images collected during irradiation with a 6 MV flattening-filter-free therapy beam were simulated. Images, profiles, and spectra were compared for different phantoms and different irradiation angles. The proposed analytical method simulates accurate scatter images up to 1000 times faster than MC. Minor differences between MC and analytical simulated images are attributed to limitations in the isotropic superposition/convolution algorithm used to analytically model multiple-order scattering. For a detector placed at 90° relative to the treatment beam, the simulated scattered photon energy spectrum peaks at 140–220 keV, and 40–50% of the photons are the result of multiple scattering. The high energy photons originate at the beam entrance. Increasing the angle between source and detector increases the average energy of the collected photons and decreases the relative contribution of multiple scattered photons. Multiple scattered photons cause blurring in the image. For an ideal 5 mm diameter pinhole collimator placed 18.5 cm from the isocenter, 10 cGy of deposited dose (2 Hz imaging rate for 1200 MU min−1 treatment delivery) is expected to generate an average 1000 photons per mm2 at the detector. For the considered lung tumor CT phantom, the contrast is high enough to clearly identify the lung tumor in the scatter image. Increasing the treatment beam size perpendicular to the detector plane decreases the contrast, although the scatter subject contrast is expected to be greater than the megavoltage transmission image contrast. 
With the analytical method, real-time tumor tracking may be possible through comparison of simulated and acquired patient images. PMID:29243663

  20. Characterization of Compton-scatter imaging with an analytical simulation method

    NASA Astrophysics Data System (ADS)

    Jones, Kevin C.; Redler, Gage; Templeton, Alistair; Bernard, Damian; Turian, Julius V.; Chu, James C. H.

    2018-01-01

    By collimating the photons scattered when a megavoltage therapy beam interacts with the patient, a Compton-scatter image may be formed without the delivery of an extra dose. To characterize and assess the potential of the technique, an analytical model for simulating scatter images was developed and validated against Monte Carlo (MC). For three phantoms, the scatter images collected during irradiation with a 6 MV flattening-filter-free therapy beam were simulated. Images, profiles, and spectra were compared for different phantoms and different irradiation angles. The proposed analytical method simulates accurate scatter images up to 1000 times faster than MC. Minor differences between MC and analytical simulated images are attributed to limitations in the isotropic superposition/convolution algorithm used to analytically model multiple-order scattering. For a detector placed at 90° relative to the treatment beam, the simulated scattered photon energy spectrum peaks at 140-220 keV, and 40-50% of the photons are the result of multiple scattering. The high energy photons originate at the beam entrance. Increasing the angle between source and detector increases the average energy of the collected photons and decreases the relative contribution of multiple scattered photons. Multiple scattered photons cause blurring in the image. For an ideal 5 mm diameter pinhole collimator placed 18.5 cm from the isocenter, 10 cGy of deposited dose (2 Hz imaging rate for 1200 MU min-1 treatment delivery) is expected to generate an average 1000 photons per mm2 at the detector. For the considered lung tumor CT phantom, the contrast is high enough to clearly identify the lung tumor in the scatter image. Increasing the treatment beam size perpendicular to the detector plane decreases the contrast, although the scatter subject contrast is expected to be greater than the megavoltage transmission image contrast. 
With the analytical method, real-time tumor tracking may be possible through comparison of simulated and acquired patient images.

  1. An Analytical Comparison of the Fidelity of "Large Motion" Versus "Small Motion" Flight Simulators in a Rotorcraft Side-Step Task

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1999-01-01

    This paper presents an analytical and experimental methodology for studying flight simulator fidelity. The task was a rotorcraft bob-up/down maneuver in which vertical acceleration constituted the motion cue. The task considered here is a side-step maneuver that differs from the bob-up in one important way: both roll and lateral acceleration cues are available to the pilot. It has been communicated to the author that in some Vertical Motion Simulator (VMS) studies, the lateral acceleration cue has been found to be the most important. It is of some interest to hypothesize how this motion cue associated with "outer-loop" lateral translation fits into the modeling procedure, where only "inner-loop" motion cues were considered. This Note is an attempt at formulating such a hypothesis and analytically comparing a large-motion simulator, e.g., the VMS, with a small-motion simulator, e.g., a hexapod.

  2. University Macro Analytic Simulation Model.

    ERIC Educational Resources Information Center

    Baron, Robert; Gulko, Warren

    The University Macro Analytic Simulation System (UMASS) has been designed as a forecasting tool to help university administrators make budgeting decisions. Alternative budgeting strategies can be tested on a computer model, and an operational alternative can then be selected on the basis of the most desirable projected outcome. UMASS uses readily…

  3. Three-dimensional benchmark for variable-density flow and transport simulation: matching semi-analytic stability modes for steady unstable convection in an inclined porous box

    USGS Publications Warehouse

    Voss, Clifford I.; Simmons, Craig T.; Robinson, Neville I.

    2010-01-01

    This benchmark for three-dimensional (3D) numerical simulators of variable-density groundwater flow and solute or energy transport consists of matching simulation results with the semi-analytical solution for the transition from one steady-state convective mode to another in a porous box. Previous experimental and analytical studies of natural convective flow in an inclined porous layer have shown that there are a variety of convective modes possible depending on system parameters, geometry and inclination. In particular, there is a well-defined transition from the helicoidal mode consisting of downslope longitudinal rolls superimposed upon an upslope unicellular roll to a mode consisting of purely an upslope unicellular roll. Three-dimensional benchmarks for variable-density simulators are currently (2009) lacking and comparison of simulation results with this transition locus provides an unambiguous means to test the ability of such simulators to represent steady-state unstable 3D variable-density physics.

  4. Simulating parameters of lunar physical libration on the basis of its analytical theory

    NASA Astrophysics Data System (ADS)

    Petrova, N.; Zagidullin, A.; Nefediev, Yu.

    2014-04-01

    Results of simulating the behavior of lunar physical libration parameters are presented. Some features in the speed change of impulse variables are revealed: fast periodic changes in p2 and long-periodic changes in p3. The problem of finding a dynamical explanation for this phenomenon is posed. The simulation was performed on the basis of the analytical libration theory [1] in the programming environment VBA.

  5. Dark-ages Reionization and Galaxy Formation Simulation - XIV. Gas accretion, cooling, and star formation in dwarf galaxies at high redshift

    NASA Astrophysics Data System (ADS)

    Qin, Yuxiang; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Geil, Paul M.; Mesinger, Andrei; Wyithe, J. Stuart B.

    2018-06-01

    We study dwarf galaxy formation at high redshift (z ≥ 5) using a suite of high-resolution, cosmological hydrodynamic simulations and a semi-analytic model (SAM). We focus on gas accretion, cooling, and star formation in this work by isolating the relevant process from reionization and supernova feedback, which will be further discussed in a companion paper. We apply the SAM to halo merger trees constructed from a collisionless N-body simulation sharing identical initial conditions to the hydrodynamic suite, and calibrate the free parameters against the stellar mass function predicted by the hydrodynamic simulations at z = 5. By making comparisons of the star formation history and gas components calculated by the two modelling techniques, we find that semi-analytic prescriptions that are commonly adopted in the literature of low-redshift galaxy formation do not accurately represent dwarf galaxy properties in the hydrodynamic simulation at earlier times. We propose three modifications to SAMs that will provide more accurate high-redshift simulations. These include (1) the halo mass and baryon fraction which are overestimated by collisionless N-body simulations; (2) the star formation efficiency which follows a different cosmic evolutionary path from the hydrodynamic simulation; and (3) the cooling rate which is not well defined for dwarf galaxies at high redshift. Accurate semi-analytic modelling of dwarf galaxy formation informed by detailed hydrodynamical modelling will facilitate reliable semi-analytic predictions over the large volumes needed for the study of reionization.

  6. Analytical Modeling of Acoustic Phonon-Limited Mobility in Strained Graphene Nanoribbons

    NASA Astrophysics Data System (ADS)

    Yousefvand, Ali; Ahmadi, Mohammad T.; Meshginqalam, Bahar

    2017-11-01

    Recent advances in graphene nanoribbon-based electronic devices encourage researchers to develop modeling and simulation methods to explore device physics. On the other hand, increasing the operating speed of nanoelectronic devices has recently attracted significant attention, and the modification of acoustic phonon interactions, because of their important effect on carrier mobility, can be considered as a method for carrier mobility optimization which subsequently enhances the device speed. Moreover, strain has an important influence on the electronic properties of nanoelectronic devices. In this paper, the acoustic phonon-limited mobility of armchair graphene nanoribbons (n-AGNRs) under uniaxial strain is modeled analytically. In addition, strain, width, and temperature effects on the acoustic phonon mobility of strained n-AGNRs are investigated. An increase in the strained AGNR acoustic phonon mobility with increasing ribbon width is reported. Additionally, two different behaviors for the acoustic phonon mobility are verified by increasing the applied strain in 3m, 3m + 2, and 3m + 1 AGNRs. Finally, the temperature effect on the modeled AGNR phonon mobility is explored, and mobility reduction with rising temperature is reported.

  7. COBRA ATD multispectral camera response model

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response from five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves which were generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data fitting techniques were applied to these measured response curves to obtain nonlinear expressions which estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. This COBRA Camera Response Model was proven to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. This practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and computational tools for research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.
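
A camera response of the kind described, digitized output as a nonlinear function of irradiance, gain, and exposure, can be fitted from laboratory calibration curves. A minimal sketch under an assumed single power-law form DN = k·x^γ (the COBRA model's actual functional form is not given in this record), fitted by least squares in log-log space:

```python
import math

def fit_power_law(x_values, dn_values):
    """Least-squares fit of DN = k * x**gamma in log-log space, where
    x stands for the product irradiance * intensifier gain * exposure."""
    xs = [math.log(x) for x in x_values]
    ys = [math.log(y) for y in dn_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    gamma = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    k = math.exp(my - gamma * mx)
    return k, gamma

# synthetic calibration data for a hypothetical camera (k = 120, gamma = 0.8)
xs = [0.1 * i for i in range(1, 20)]
dns = [120.0 * x ** 0.8 for x in xs]
k, gamma = fit_power_law(xs, dns)
```

Because the fitted form is a simple power law, it is analytically invertible (x = (DN/k)^(1/γ)), mirroring the invertibility property reported for the COBRA model.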

  8. Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong, E-mail: yidong.xia@inl.gov; Wang, Chuanjin; Luo, Hong

    Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent for this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, some form of solution verification has been attempted to identify sensitivities in the solution methods, and suggest best practices when using the Hydra-TH code. -- Highlights: •We performed a comprehensive study to verify and validate the turbulence models in Hydra-TH. •Hydra-TH delivers 2nd-order grid convergence for the incompressible Navier–Stokes equations. •Hydra-TH can accurately simulate the laminar boundary layers. •Hydra-TH can accurately simulate the turbulent boundary layers with RANS turbulence models. •Hydra-TH delivers high-fidelity LES capability for simulating turbulent flows in confined space.
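
The second-order grid convergence claimed in the highlights is typically demonstrated by computing an observed order of accuracy from errors on successively refined grids. A minimal sketch of that standard calculation (not Hydra-TH's own verification tooling):

```python
import math

def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
    """Observed order of accuracy from discretization errors on two
    grids related by a refinement ratio r:
        p = ln(e_coarse / e_fine) / ln(r)."""
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

# errors that drop by 4x when the mesh spacing halves indicate second order
p = observed_order(4.0e-3, 1.0e-3)
```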

  9. Advanced Methods for Aircraft Engine Thrust and Noise Benefits: Nozzle-Inlet Flow Analysis

    NASA Technical Reports Server (NTRS)

    Morgan, Morris H., III; Gilinsky, Mikhail M.

    2004-01-01

    In this project, during the first stage (2000-01), we continued to develop the previous joint research between the Fluid Mechanics and Acoustics Laboratory (FM&AL) at Hampton University (HU) and the Jet Noise Team (JNT) at the NASA Langley Research Center (NASA LaRC). At the second stage (2001-03), the FM&AL team concentrated its efforts on solving problems of interest to the Glenn Research Center (NASA GRC), especially in the field of propulsion system enhancement. The NASA GRC R&D Directorate and LaRC Hyper-X Program specialists in hypersonic technology, jointly with the FM&AL staff, conducted research on a wide range of problems in the propulsion field as well as in experimental testing and theoretical and numerical simulation analyses for advanced aircraft and rocket engines. Last year the Hampton University School of Engineering & Technology was awarded a NASA grant for creation of the Aeropropulsion Center, and the FM&AL is a key team of the project, responsible for research in Aeropropulsion and Acoustics (Pillar I). This work is supported by joint research between the NASA GRC/FM&AL and the Institute of Mechanics at Moscow State University (IM/MSU) in Russia under a CRDF grant. The main areas of current scientific interest of the FM&AL include an investigation of the proposed and patented advanced methods for aircraft engine thrust and noise benefits. This is the main subject of our other projects, of which one is presented. 
Last year we concentrated our efforts on analyzing three main problems: (a) new effective methods of fuel injection into the flow stream in air-breathing engines; (b) a new re-circulation method for mixing, heat transfer and combustion enhancement in propulsion systems and domestic industry applications; (c) convexity flow. The research is focused on a wide range of problems in the propulsion field as well as in experimental testing and theoretical and numerical simulation analyses for advanced aircraft and rocket engines (see, for example, Figure 4). The FM&AL team uses analytical methods, numerical simulations and experimental tests at the Hampton University campus, NASA and IM/MSU.

  10. Nonequilibrium Chemical Effects in Single-Molecule SERS Revealed by Ab Initio Molecular Dynamics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, Sean A.; Aprà, Edoardo; Govind, Niranjan

    2017-02-03

    Recent developments in nanophotonics have paved the way for achieving significant advances in the realm of single molecule chemical detection, imaging, and dynamics. In particular, surface-enhanced Raman scattering (SERS) is a powerful analytical technique that is now routinely used to identify the chemical identity of single molecules. Understanding how nanoscale physical and chemical processes affect single molecule SERS spectra and selection rules is a challenging task, and is still actively debated. Herein, we explore underappreciated chemical phenomena in ultrasensitive SERS. We observe a fluctuating excited electronic state manifold, governed by the conformational dynamics of a molecule (4,4’-dimercaptostilbene, DMS) interacting with a metallic cluster (Ag20). This affects our simulated single molecule SERS spectra; the time trajectories of a molecule interacting with its unique local environment dictate the relative intensities of the observable Raman-active vibrational states. Ab initio molecular dynamics of a model Ag20-DMS system are used to illustrate both concepts in light of recent experimental results.

  11. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
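
The value of such analytical benchmarks is that a transport code's answer can be checked against an exact result. As a toy illustration of the idea (not one of the 75 VERIFICATION_KEFF problems), the one-group infinite-medium multiplication factor k∞ = νΣf/Σa has a closed form that an analog Monte Carlo estimate must reproduce:

```python
import random

def k_inf_analytic(nu, sigma_f, sigma_a):
    """Exact one-group infinite-medium multiplication factor."""
    return nu * sigma_f / sigma_a

def k_inf_monte_carlo(nu, sigma_f, sigma_a, sigma_s, n_histories=200_000, seed=1):
    """Analog Monte Carlo estimate: follow each neutron through
    scattering collisions until it is absorbed; score nu neutrons
    if the absorption is a fission, nothing if it is a capture."""
    rng = random.Random(seed)
    sigma_t = sigma_a + sigma_s
    produced = 0.0
    for _ in range(n_histories):
        while True:
            if rng.random() < sigma_a / sigma_t:       # absorbed
                if rng.random() < sigma_f / sigma_a:   # ...in a fission
                    produced += nu
                break                                  # else: capture
    return produced / n_histories

k_exact = k_inf_analytic(2.5, 0.05, 0.1)       # 1.25
k_mc = k_inf_monte_carlo(2.5, 0.05, 0.1, 0.3)  # should agree statistically
```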

  12. Considerations in detecting CDC select agents under field conditions

    NASA Astrophysics Data System (ADS)

    Spinelli, Charles; Soelberg, Scott; Swanson, Nathaneal; Furlong, Clement; Baker, Paul

    2008-04-01

    Surface Plasmon Resonance (SPR) has become a widely accepted technique for real-time detection of interactions between receptor molecules and ligands. Antibody may serve as receptor and can be attached to the gold surface of the SPR device, while candidate analyte fluids contact the detecting antibody. Minute, but detectable, changes in refractive indices (RI) indicate that analyte has bound to the antibody. A decade ago, an inexpensive, robust, miniature and fully integrated SPR chip, called SPREETA, was developed. University of Washington (UW) researchers subsequently developed a portable, temperature-regulated instrument, called SPIRIT, to simultaneously use eight of these three-channel SPREETA chips. A SPIRIT prototype instrument was tested in the field, coupled to a remote reporting system on a surrogate unmanned aerial vehicle (UAV). Two target protein analytes were released sequentially as aerosols with low analyte concentration during each of three flights and were successfully detected and verified. Laboratory experimentation with a more advanced SPIRIT instrument demonstrated detection of very low levels of several select biological agents that might be employed by bioterrorists. Agent detection under field-like conditions is more challenging, especially as analyte concentrations are reduced and complex matrices are introduced. Two different sample preconditioning protocols have been developed for select agents in complex matrices. Use of these preconditioning techniques has allowed laboratory detection in spiked heavy mud of Francisella tularensis at 10³ CFU/ml, Bacillus anthracis spores at 10³ CFU/ml, Staphylococcal enterotoxin B (SEB) at 1 ng/ml, and Vaccinia virus (a smallpox simulant) at 10⁵ PFU/ml. Ongoing experiments are aimed at simultaneous detection of multiple agents in spiked heavy mud, using a multiplex preconditioning protocol.

  13. Error-analysis and comparison to analytical models of numerical waveforms produced by the NRAR Collaboration

    NASA Astrophysics Data System (ADS)

    Hinder, Ian; Buonanno, Alessandra; Boyle, Michael; Etienne, Zachariah B.; Healy, James; Johnson-McDaniel, Nathan K.; Nagar, Alessandro; Nakano, Hiroyuki; Pan, Yi; Pfeiffer, Harald P.; Pürrer, Michael; Reisswig, Christian; Scheel, Mark A.; Schnetter, Erik; Sperhake, Ulrich; Szilágyi, Bela; Tichy, Wolfgang; Wardell, Barry; Zenginoğlu, Anıl; Alic, Daniela; Bernuzzi, Sebastiano; Bode, Tanja; Brügmann, Bernd; Buchman, Luisa T.; Campanelli, Manuela; Chu, Tony; Damour, Thibault; Grigsby, Jason D.; Hannam, Mark; Haas, Roland; Hemberger, Daniel A.; Husa, Sascha; Kidder, Lawrence E.; Laguna, Pablo; London, Lionel; Lovelace, Geoffrey; Lousto, Carlos O.; Marronetti, Pedro; Matzner, Richard A.; Mösta, Philipp; Mroué, Abdul; Müller, Doreen; Mundim, Bruno C.; Nerozzi, Andrea; Paschalidis, Vasileios; Pollney, Denis; Reifenberger, George; Rezzolla, Luciano; Shapiro, Stuart L.; Shoemaker, Deirdre; Taracchini, Andrea; Taylor, Nicholas W.; Teukolsky, Saul A.; Thierfelder, Marcus; Witek, Helvi; Zlochower, Yosef

    2013-01-01

    The Numerical-Relativity-Analytical-Relativity (NRAR) collaboration is a joint effort between members of the numerical relativity, analytical relativity and gravitational-wave data analysis communities. The goal of the NRAR collaboration is to produce numerical-relativity simulations of compact binaries and use them to develop accurate analytical templates for the LIGO/Virgo Collaboration to use in detecting gravitational-wave signals and extracting astrophysical information from them. We describe the results of the first stage of the NRAR project, which focused on producing an initial set of numerical waveforms from binary black holes with moderate mass ratios and spins, as well as one non-spinning binary configuration which has a mass ratio of 10. All of the numerical waveforms are analysed in a uniform and consistent manner, with numerical errors evaluated using an analysis code created by members of the NRAR collaboration. We compare previously-calibrated, non-precessing analytical waveforms, notably the effective-one-body (EOB) and phenomenological template families, to the newly-produced numerical waveforms. We find that when the binary's total mass is ~100-200 M⊙, current EOB and phenomenological models of spinning, non-precessing binary waveforms have overlaps above 99% (for advanced LIGO) with all of the non-precessing-binary numerical waveforms with mass ratios ⩽4, when maximizing over binary parameters. This implies that the loss of event rate due to modelling error is below 3%. Moreover, the non-spinning EOB waveforms previously calibrated to five non-spinning waveforms with mass ratio smaller than 6 have overlaps above 99.7% with the numerical waveform with a mass ratio of 10, without even maximizing over the binary parameters.
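
The overlap (match) quoted above is a noise-weighted inner product between waveforms, maximized over extrinsic parameters. A minimal sketch that maximizes only over a constant phase offset and uses flat-noise (unweighted) complex inner products rather than an advanced-LIGO noise curve:

```python
import cmath, math

def match(h1, h2):
    """Phase-maximized overlap of two complex waveform time series:
    |<h1, h2>| / sqrt(<h1, h1> <h2, h2>).  Taking the modulus of the
    complex inner product maximizes over a constant phase shift."""
    inner = sum(a * b.conjugate() for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(abs(a) ** 2 for a in h1))
    n2 = math.sqrt(sum(abs(b) ** 2 for b in h2))
    return abs(inner) / (n1 * n2)

ts = [0.01 * i for i in range(1000)]
h_a = [cmath.exp(1j * 20.0 * t) for t in ts]
h_b = [cmath.exp(1j * (20.0 * t + 0.7)) for t in ts]  # same signal, phase-shifted
h_c = [cmath.exp(1j * 22.0 * t) for t in ts]          # different frequency
```

A phase shift alone leaves the match at unity, while a frequency mismatch drives it down, which is why template families must track the numerical waveforms' phase evolution accurately.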

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark D.; McPherson, Brian J.; Grigg, Reid B.

    Numerical simulation is an invaluable analytical tool for scientists and engineers in making predictions about the fate of carbon dioxide injected into deep geologic formations for long-term storage. Current numerical simulators for assessing storage in deep saline formations have capabilities for modeling strongly coupled processes involving multifluid flow, heat transfer, chemistry, and rock mechanics in geologic media. Except for moderate pressure conditions, numerical simulators for deep saline formations only require the tracking of two immiscible phases and a limited number of phase components, beyond those comprising the geochemical reactive system. The requirements for numerically simulating the utilization and storage of carbon dioxide in partially depleted petroleum reservoirs are more numerous than those for deep saline formations. The minimum number of immiscible phases increases to three, the number of phase components may easily increase fourfold, and the coupled processes of heat transfer, geochemistry, and geomechanics remain. Public and scientific confidence in the ability of numerical simulators used for carbon dioxide sequestration in deep saline formations has advanced via a natural progression of the simulators being proven against benchmark problems, code comparisons, laboratory-scale experiments, pilot-scale injections, and commercial-scale injections. This paper describes a new numerical simulator for the scientific investigation of carbon dioxide utilization and storage in partially depleted petroleum reservoirs, with an emphasis on its unique features for scientific investigations; it documents the numerical simulation of carbon dioxide utilization for enhanced oil recovery in the western section of the Farnsworth Unit, and represents an early stage in the progression of numerical simulators for carbon utilization and storage in depleted oil reservoirs.

  15. Integrating Soft Set Theory and Fuzzy Linguistic Model to Evaluate the Performance of Training Simulation Systems

    PubMed Central

    Chang, Kuei-Hu; Chang, Yung-Chia; Chain, Kai; Chung, Hsiang-Yu

    2016-01-01

    The advancement of high technologies and the arrival of the information age have caused changes to modern warfare. The military forces of many countries have replaced partially real training drills with training simulation systems to achieve combat readiness. However, many types of training simulation systems are used in military settings. In addition, differences in system setup time, functions, the environment, and the competency of system operators, as well as incomplete information, have made it difficult to evaluate the performance of training simulation systems. To address the aforementioned problems, this study integrated analytic hierarchy process, soft set theory, and the fuzzy linguistic representation model to evaluate the performance of various training simulation systems. Furthermore, importance–performance analysis was adopted to examine the influence of cost savings and training safety of training simulation systems. The findings of this study are expected to facilitate applying military training simulation systems, avoiding wasting of resources (e.g., low utility and idle time), and providing data for subsequent applications and analysis. To verify the method proposed in this study, the numerical examples of the performance evaluation of training simulation systems were adopted and compared with the numerical results of an AHP and a novel AHP-based ranking technique. The results verified that not only could expert-provided questionnaire information be fully considered to lower the repetition rate of performance ranking, but a two-dimensional graph could also be used to help administrators allocate limited resources, thereby enhancing the investment benefits and training effectiveness of a training simulation system. PMID:27598390
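
The analytic hierarchy process step referred to above derives criterion weights as the principal eigenvector of a pairwise comparison matrix. A minimal sketch via power iteration, with a hypothetical three-criterion matrix; the criteria and judgment values are illustrative, not taken from the study:

```python
def ahp_weights(pairwise, iters=100):
    """Priority weights for an AHP pairwise comparison matrix,
    computed as the (normalized) principal eigenvector via power
    iteration."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]
    return w

# hypothetical criteria: cost savings vs training safety vs realism,
# with reciprocal judgments on Saaty's 1-9 scale
m = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
weights = ahp_weights(m)
```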

  16. Integrating Soft Set Theory and Fuzzy Linguistic Model to Evaluate the Performance of Training Simulation Systems.

    PubMed

    Chang, Kuei-Hu; Chang, Yung-Chia; Chain, Kai; Chung, Hsiang-Yu

    2016-01-01

    The advancement of high technologies and the arrival of the information age have caused changes to modern warfare. The military forces of many countries have replaced partially real training drills with training simulation systems to achieve combat readiness. However, many types of training simulation systems are used in military settings. In addition, differences in system setup time, functions, the environment, and the competency of system operators, as well as incomplete information, have made it difficult to evaluate the performance of training simulation systems. To address the aforementioned problems, this study integrated analytic hierarchy process, soft set theory, and the fuzzy linguistic representation model to evaluate the performance of various training simulation systems. Furthermore, importance-performance analysis was adopted to examine the influence of cost savings and training safety of training simulation systems. The findings of this study are expected to facilitate applying military training simulation systems, avoiding wasting of resources (e.g., low utility and idle time), and providing data for subsequent applications and analysis. To verify the method proposed in this study, the numerical examples of the performance evaluation of training simulation systems were adopted and compared with the numerical results of an AHP and a novel AHP-based ranking technique. The results verified that not only could expert-provided questionnaire information be fully considered to lower the repetition rate of performance ranking, but a two-dimensional graph could also be used to help administrators allocate limited resources, thereby enhancing the investment benefits and training effectiveness of a training simulation system.

  17. An analytical approach to γ-ray self-shielding effects for radioactive bodies encountered in nuclear decommissioning scenarios.

    PubMed

    Gamage, K A A; Joyce, M J

    2011-10-01

    A novel analytical approach is described that accounts for self-shielding of γ radiation in decommissioning scenarios. The approach is developed with plutonium-239, cobalt-60 and caesium-137 as examples; stainless steel and concrete have been chosen as the media for cobalt-60 and caesium-137, respectively. The analytical methods have been compared with MCNPX 2.6.0 simulations. A simple, linear correction factor relates the analytical results and the simulated estimates. This has the potential to greatly simplify the estimation of self-shielding effects in decommissioning activities. Copyright © 2011 Elsevier Ltd. All rights reserved.
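
For a concrete sense of the effect being modeled: the mean attenuation factor for gamma rays emitted uniformly through the depth of a slab and escaping through one face has a simple closed form. A sketch under idealized normal-incidence geometry; the attenuation coefficient used for concrete at the Cs-137 line is an assumed round value, and this is not the paper's full formulation:

```python
import math

def slab_self_shielding(mu, t):
    """Mean self-shielding (attenuation) factor for gammas emitted
    uniformly throughout a slab of thickness t (cm) with linear
    attenuation coefficient mu (1/cm), escaping through one face at
    normal incidence:  f = (1 - exp(-mu*t)) / (mu*t)."""
    x = mu * t
    return (1.0 - math.exp(-x)) / x if x > 1e-12 else 1.0

# 662 keV photons (Cs-137) in ordinary concrete: mu ~ 0.18 1/cm (assumed)
f_10cm = slab_self_shielding(0.18, 10.0)
```

A thin body (μt → 0) shows no self-shielding (f → 1), while thick bodies emit far less than their activity would suggest, which is why decommissioning assays must correct for it.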

  18. NREL’s Advanced Analytics Research for Energy-Efficient Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutscher, Chuck; Livingood, Bill; Wilson, Eric

    At NREL, we believe in building better buildings: high-performance buildings that can do more and be smarter than ever before. Forty percent of the total energy consumption in the United States comes from buildings. Working together, we can dramatically shrink that number. But first, it starts with the research: our observations, experiments, modeling, analysis, and more. NREL's advanced analytics research has already been shown to reduce energy use, save money, and stabilize the grid.

  19. Recent Advances in Paper-Based Sensors

    PubMed Central

    Liana, Devi D.; Raguse, Burkhard; Gooding, J. Justin; Chow, Edith

    2012-01-01

    Paper-based sensors are a new alternative technology for fabricating simple, low-cost, portable and disposable analytical devices for many application areas including clinical diagnosis, food quality control and environmental monitoring. The unique properties of paper which allow passive liquid transport and compatibility with chemicals/biochemicals are the main advantages of using paper as a sensing platform. Depending on the main goal to be achieved in paper-based sensors, the fabrication methods and the analysis techniques can be tuned to fulfill the needs of the end-user. Current paper-based sensors are focused on microfluidic delivery of solution to the detection site whereas more advanced designs involve complex 3-D geometries based on the same microfluidic principles. Although paper-based sensors are very promising, they still suffer from certain limitations such as accuracy and sensitivity. However, it is anticipated that in the future, with advances in fabrication and analytical techniques, that there will be more new and innovative developments in paper-based sensors. These sensors could better meet the current objectives of a viable low-cost and portable device in addition to offering high sensitivity and selectivity, and multiple analyte discrimination. This paper is a review of recent advances in paper-based sensors and covers the following topics: existing fabrication techniques, analytical methods and application areas. Finally, the present challenges and future outlooks are discussed. PMID:23112667

  20. Semi-analytical solution for the generalized absorbing boundary condition in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Lee, Chung-Shuo; Chen, Yan-Yu; Yu, Chi-Hua; Hsu, Yu-Chuan; Chen, Chuin-Shan

    2017-07-01

    We present a semi-analytical solution of a time-history kernel for the generalized absorbing boundary condition in molecular dynamics (MD) simulations. To facilitate the kernel derivation, the concept of virtual atoms in real space that can conform with an arbitrary boundary in an arbitrary lattice is adopted. The generalized Langevin equation is regularized using eigenvalue decomposition and, consequently, an analytical expression of an inverse Laplace transform is obtained. With construction of dynamical matrices in the virtual domain, a semi-analytical form of the time-history kernel functions for an arbitrary boundary in an arbitrary lattice can be found. The time-history kernel functions for different crystal lattices are derived to show the generality of the proposed method. Non-equilibrium MD simulations in a triangular lattice with and without the absorbing boundary condition are conducted to demonstrate the validity of the solution.

  1. A carrier-based analytical theory for negative capacitance symmetric double-gate field effect transistors and its simulation verification

    NASA Astrophysics Data System (ADS)

    Jiang, Chunsheng; Liang, Renrong; Wang, Jing; Xu, Jun

    2015-09-01

    A carrier-based analytical drain current model for negative capacitance symmetric double-gate field effect transistors (NC-SDG FETs) is proposed by solving the differential equation of the carrier, the Pao-Sah current formulation, and the Landau-Khalatnikov equation. The carrier equation is derived from Poisson’s equation and the Boltzmann distribution law. According to the model, an amplified semiconductor surface potential and a steeper subthreshold slope could be obtained with suitable thicknesses of the ferroelectric film and insulator layer at room temperature. Results predicted by the analytical model agree well with those of the numerical simulation from a 2D simulator without any fitting parameters. The analytical model is valid for all operation regions and captures the transitions between them without any auxiliary variables or functions. This model can be used to explore the operating mechanisms of NC-SDG FETs and to optimize device performance.
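
The negative-capacitance behavior modeled here follows from the steady-state Landau-Khalatnikov relation, in which the ferroelectric voltage drop is a polynomial in charge with a negative linear coefficient, so dV/dQ < 0 near zero charge and the semiconductor surface potential is amplified. A dimensionless sketch; the Landau coefficients are illustrative, not fitted to any device:

```python
def v_ferroelectric(q, alpha, beta, t_fe):
    """Steady-state Landau-Khalatnikov voltage across a ferroelectric
    film of thickness t_fe as a function of charge density q:
        V = t_fe * (2*alpha*q + 4*beta*q**3),
    with alpha < 0 for a ferroelectric below its Curie temperature."""
    return t_fe * (2.0 * alpha * q + 4.0 * beta * q ** 3)

def dv_dq(q, alpha, beta, t_fe):
    """Inverse differential capacitance; negative near q = 0."""
    return t_fe * (2.0 * alpha + 12.0 * beta * q ** 2)

alpha, beta, t_fe = -1.0, 1.0, 1.0   # illustrative, dimensionless
```

In the negative dV/dQ window the ferroelectric contributes an internal voltage step-up, which is the mechanism behind the sub-60 mV/decade subthreshold slope discussed in the abstract.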

  2. Assessing the Clinical Impact of Approximations in Analytical Dose Calculations for Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuemann, Jan, E-mail: jschuemann@mgh.harvard.edu; Giantsoudi, Drosoula; Grassberger, Clemens

    2015-08-01

    Purpose: To assess the impact of approximations in current analytical dose calculation methods (ADCs) on tumor control probability (TCP) in proton therapy. Methods: Dose distributions planned with ADC were compared with delivered dose distributions as determined by Monte Carlo simulations. A total of 50 patients were investigated in this analysis with 10 patients per site for 5 treatment sites (head and neck, lung, breast, prostate, liver). Differences were evaluated using dosimetric indices based on a dose-volume histogram analysis, a γ-index analysis, and estimations of TCP. Results: We found that ADC overestimated the target doses on average by 1% to 2% for all patients considered. The mean dose, D95, D50, and D02 (the dose value covering 95%, 50% and 2% of the target volume, respectively) were predicted within 5% of the delivered dose. The γ-index passing rate for target volumes was above 96% for a 3%/3 mm criterion. Differences in TCP were up to 2%, 2.5%, 6%, 6.5%, and 11% for liver and breast, prostate, head and neck, and lung patients, respectively. Differences in normal tissue complication probabilities for bladder and anterior rectum of prostate patients were less than 3%. Conclusion: Our results indicate that current dose calculation algorithms lead to underdosage of the target by as much as 5%, resulting in differences in TCP of up to 11%. To ensure full target coverage, advanced dose calculation methods like Monte Carlo simulations may be necessary in proton therapy. Monte Carlo simulations may also be required to avoid biases resulting from systematic discrepancies in calculated dose distributions for clinical trials comparing proton therapy with conventional radiation therapy.
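
The γ-index analysis used to compare planned and delivered dose combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; a point passes if γ ≤ 1. A minimal 1-D sketch of the standard global-γ computation (the clinical analysis above is fully 3-D):

```python
import math

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose, dta=3.0, dd=0.03):
    """1-D gamma index of one reference point against an evaluated dose
    profile.  dta is the distance-to-agreement criterion (mm) and dd the
    global dose-difference criterion (fraction of the maximum dose)."""
    d_max = max(eval_dose)
    best = float("inf")
    for x, d in zip(eval_pos, eval_dose):
        g = math.sqrt(((x - ref_pos) / dta) ** 2
                      + ((d - ref_dose) / (dd * d_max)) ** 2)
        best = min(best, g)
    return best

pos = [float(i) for i in range(11)]   # mm
dose = [100.0] * 11                    # flat evaluated profile
```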

  3. New ghost-node method for linking different models with varied grid refinement

    USGS Publications Warehouse

    James, S.C.; Dickinson, J.E.; Mehl, S.W.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Eddebbarh, A.-A.

    2006-01-01

    A flexible, robust method for linking grids of locally refined ground-water flow models constructed with different numerical methods is needed to address a variety of hydrologic problems. This work outlines and tests a new ghost-node model-linking method for a refined "child" model that is contained within a larger and coarser "parent" model that is based on the iterative method of Steffen W. Mehl and Mary C. Hill (2002, Advances in Water Res., 25, p. 497-511; 2004, Advances in Water Res., 27, p. 899-912). The method is applicable to steady-state solutions for ground-water flow. Tests are presented for a homogeneous two-dimensional system that has matching grids (parent cells border an integer number of child cells) or nonmatching grids. The coupled grids are simulated by using the finite-difference and finite-element models MODFLOW and FEHM, respectively. The simulations require no alteration of the MODFLOW or FEHM models and are executed using a batch file on Windows operating systems. Results indicate that when the grids are matched spatially so that nodes and child-cell boundaries are aligned, the new coupling technique has error nearly equal to that when coupling two MODFLOW models. When the grids are nonmatching, model accuracy is slightly increased compared to that for matching-grid cases. Overall, results indicate that the ghost-node technique is a viable means to couple distinct models because the overall head and flow errors relative to the analytical solution are less than if only the regional coarse-grid model was used to simulate flow in the child model's domain.
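
The essential idea of the ghost-node linkage, interpolating boundary heads for the refined child grid from the parent solution, can be illustrated in one dimension. A simplified one-way sketch for -h'' = 2 on matching grids, where the child boundaries coincide with parent nodes; the actual method iterates two ways between MODFLOW and FEHM:

```python
def solve_dirichlet(n_interior, h_left, h_right, dx, source, sweeps=20000):
    """Gauss-Seidel solution of -h'' = source on a uniform grid with
    Dirichlet boundary heads h_left and h_right."""
    h = [h_left] + [0.0] * n_interior + [h_right]
    for _ in range(sweeps):
        for i in range(1, n_interior + 1):
            h[i] = 0.5 * (h[i - 1] + h[i + 1] + source * dx * dx)
    return h

# parent (coarse) grid on [0, 1] with dx = 0.1; the analytic solution of
# -h'' = 2 with h(0) = 1, h(1) = 0 is h(x) = 1 - x**2
parent = solve_dirichlet(9, 1.0, 0.0, 0.1, 2.0)

# ghost-node heads for a child grid on [0.3, 0.7] (dx = 0.05): here the
# child boundaries coincide with parent nodes 3 and 7 (matching grids)
child = solve_dirichlet(7, parent[3], parent[7], 0.05, 2.0)
```

Because the grids match, the interpolated ghost-node heads are exact and the refined child solution reproduces the analytic head; nonmatching grids introduce the small interpolation errors the paper quantifies.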

  4. Numerical Simulation of One- And Two-Phase Flows In Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Gilinsky, Mikhail M.

    2002-01-01

    In this report, we present some results of problems investigated during joint research between the Hampton University (HU) Fluid Mechanics and Acoustics Laboratory (FM&AL), NASA Glenn Research Center (GRC) and the Hyper-X Program of the NASA Langley Research Center (LaRC). This work is supported by joint research between the NASA GRC/HU FM&AL and the Institute of Mechanics at Moscow State University (IM/MSU) in Russia under a Civilian Research and Development Foundation (CRDF) grant, #RE1-2068. The main areas of current scientific interest of the FM&AL include an investigation of the proposed and patented advanced methods for aircraft engine thrust and noise benefits. These methods are based on nontraditional 3D (three dimensional) corrugated and composite nozzle, inlet, propeller and screw designs such as the Bluebell and Telescope nozzles, Mobius-shaped screws, etc. These are the main subject of our other projects, of which one is the NASA MURED's (Minority University Research and Education Division) FAR (Faculty Awards for Research) Award, #NAG-3-2249. Working jointly with this project team, our team also analyzes additional methods for exhaust jet noise reduction. These methods are without essential thrust loss and even offer thrust augmentation. The research is focused on a wide range of problems in the propulsion field as well as in experimental testing and theoretical and numerical simulation analyses for advanced aircraft and rocket engines. The FM&AL team uses analytical methods, numerical simulations and experimental tests at the Hampton University campus, NASA and IM/MSU. The main results obtained by the FM&AL team have been published in papers and patents.

  5. PlanetPack: A radial-velocity time-series analysis tool facilitating exoplanets detection, characterization, and dynamical simulations

    NASA Astrophysics Data System (ADS)

    Baluev, Roman V.

    2013-08-01

    We present PlanetPack, a new software tool that we developed to facilitate and standardize the advanced analysis of radial velocity (RV) data for the goals of exoplanet detection, characterization, and basic dynamical N-body simulations. PlanetPack is a command-line interpreter that can run either in an interactive mode or in a batch mode of automatic script interpretation. Its major abilities include: (i) advanced RV curve fitting with proper maximum-likelihood treatment of unknown RV jitter; (ii) user-friendly multi-Keplerian as well as Newtonian N-body RV fits; (iii) use of more efficient maximum-likelihood periodograms that involve full multi-planet fitting (sometimes called "residual" or "recursive" periodograms); (iv) easily calculable parametric 2D likelihood function level contours, reflecting the asymptotic confidence regions; (v) user-friendly fitting under useful functional constraints; (vi) basic tasks of short- and long-term planetary dynamical simulation using a fast Everhart-type integrator based on Gauss-Legendre spacings; (vii) fitting the data with red noise (auto-correlated errors); (viii) various analytical and numerical methods for determining statistical significance. Further functionality may be added to PlanetPack in the future. During the development of this software, considerable effort was made to improve computational speed, especially for CPU-demanding tasks. PlanetPack was written in pure C++ (1998/2003 standard) and is expected to be compilable and usable on a wide range of platforms.

  6. The Benefits and Complexities of Operating Geographic Information Systems (GIS) in a High Performance Computing (HPC) Environment

    NASA Astrophysics Data System (ADS)

    Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, to include NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting edge infrastructure, to include an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis - thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. 
Additionally, key applications and scientific analyses will be explained, to include the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the NASA GLC Viewer discovery and analysis tool, the DigitalGlobe/NGA Data Discovery Tool, the NASA Disaster Response Group Mapping Platform (https://maps.disasters.nasa.gov), and support for NASA's Arctic - Boreal Vulnerability Experiment (ABoVE).

  7. The detection of problem analytes in a single proficiency test challenge in the absence of the Health Care Financing Administration rule violations.

    PubMed

    Cembrowski, G S; Hackney, J R; Carey, N

    1993-04-01

    The Clinical Laboratory Improvement Amendments of 1988 (CLIA '88) have dramatically changed proficiency testing (PT) practices by mandating (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" for evaluating these PT results are unworkable, as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for identifying PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes (potassium, creatine kinase, and iron) was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated for each analyte and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs, i.e., graphs of the probability of error detection versus the magnitude of error. Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same +/- 1 SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI. 
Significant random error is signaled by one observation exceeding the +/- 3-SDI limit or the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule (having at least two observations exceeding the same +/- 1 SDI limit). Random error can also be signaled by one observation exceeding the +/- 1.5-SDI limit or the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
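The screening and flagging rules recommended above can be sketched in a few lines of code. This is an illustrative reimplementation, not the authors' software; the function names and the five-specimen layout are assumptions, and ratios sg/si below 1.0 are treated as out of scope.

```python
# Hedged sketch of the PT control rules described in the abstract.
# Observations are expressed in interlaboratory SDI units.

def screen(sdi_values):
    """Screening rule: two or more observations exceed the same +/-1 SDI limit."""
    high = sum(1 for v in sdi_values if v > 1.0)
    low = sum(1 for v in sdi_values if v < -1.0)
    return high >= 2 or low >= 2

def flag_errors(sdi_values, sg_over_si):
    """Apply the systematic/random error rules to analytes that fail screening."""
    if not screen(sdi_values):
        return set()
    flags = set()
    mean = sum(sdi_values) / len(sdi_values)
    spread = max(sdi_values) - min(sdi_values)
    if sg_over_si <= 1.5:  # sg/si between 1.0 and 1.5
        if abs(mean) > 1.0:
            flags.add("systematic")
        if any(abs(v) > 3.0 for v in sdi_values) or spread > 4.0:
            flags.add("random")
    else:  # higher sg/si: violating the screening rule itself signals error
        flags.add("systematic-or-random")
        if any(abs(v) > 1.5 for v in sdi_values) or spread > 3.0:
            flags.add("random")
    return flags

# Example: five PT results with a clear positive shift (mean = 1.22 SDI)
print(flag_errors([1.4, 1.2, 0.8, 1.6, 1.1], sg_over_si=1.2))
```

The example set passes the screen (four observations above +1 SDI) and its mean exceeds 1.0 SDI, so only a systematic error is flagged.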

  8. Modeling and simulation of a 2-DOF bidirectional electrothermal microactuator

    NASA Astrophysics Data System (ADS)

    Topaloglu, N.; Elbuken, C.; Nieva, P. M.; Yavuz, M.; Huissoon, J. P.

    2008-03-01

    In this paper, we present the modeling and simulation of a 2 degree-of-freedom (DOF) bidirectional electrothermal actuator. The four-arm microactuator was designed to move along both the horizontal and vertical axes. By tailoring the geometrical parameters of the design, the in-plane and out-of-plane motions were decoupled, resulting in enhanced mobility in both directions. The motion of the actuator was modeled analytically using an electro-thermo-mechanical analysis. To validate the analytical model, finite element simulations were performed using ANSYS. The microactuators were fabricated using the PolyMUMPs process, and experimental results show good agreement with both the analytical model and the simulations. We demonstrated that the 2-DOF bidirectional electrothermal actuator can achieve 3.7 μm in-plane and 13.3 μm out-of-plane deflections with an input voltage of 10 V.

  9. Particle rings and astrophysical accretion discs

    NASA Astrophysics Data System (ADS)

    Lovelace, R. V. E.; Romanova, M. M.

    2016-03-01

    Norman Rostoker had a wide range of interests and a significant impact on plasma physics research at Cornell during his time as a Cornell professor. His interests ranged from the theory of energetic electron and ion beams and strong particle rings to the related topic of astrophysical accretion discs. We outline some of the topics related to rings and discs, including the Rossby wave instability, which leads to the formation of anticyclonic vortices in astrophysical discs. These vortices are regions of high pressure and act to trap dust particles, which in turn may facilitate planetesimal growth in proto-planetary discs and could be important for planet formation. Analytical methods and global 3D magneto-hydrodynamic simulations have led to rapid advances in our understanding of discs in recent years.

  10. Special Issue on a Fault Tolerant Network on Chip Architecture

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Tinati, Melika; Khademzadeh, Ahmad; Ghavibazou, Maryam; Fekr, Atena Roshan

    2010-06-01

    In this paper, a fast and efficient spare-switch selection algorithm is presented for a reliable NoC architecture, called FERNA, based on a specific application mapped onto a mesh topology. Based on the ring concept used in FERNA, this algorithm achieves results equivalent to an exhaustive algorithm with much less run time while improving two parameters. Inputs to the FERNA algorithm for minimizing system response time and extra communication cost are derived from transaction-level simulation using SystemC TLM and from a mathematical formulation, respectively. The results demonstrate that improvement of the above-mentioned parameters leads to an improvement in whole-system reliability, which is calculated analytically. The mapping algorithm has also been investigated as a factor affecting extra bandwidth requirements and system reliability.

  11. Nonlinear dynamics of mini-satellite respinup by weak internal controllable torques

    NASA Astrophysics Data System (ADS)

    Somov, Yevgeny

    2014-12-01

    Contemporary space engineering has posed a new problem for theoretical mechanics and motion control theory: directed spacecraft respinup by weak, restricted internal control forces. The paper presents some results on this problem, which is very topical for the energy supply of information mini-satellites (for communication, geodesy, radio- and opto-electronic observation of the Earth, etc.) with electro-reaction plasma thrusters and a gyro moment cluster based on reaction wheels or control moment gyros. The solution achieved is based on methods for the synthesis of nonlinear robust control and on a rigorous analytical proof of the required spacecraft rotation stability by the Lyapunov function method. These results were verified by computer simulation of strongly nonlinear oscillatory processes during respinup of a flexible spacecraft.

  12. A Simplified, General Approach to Simulating from Multivariate Copula Functions

    Treesearch

    Barry Goodwin

    2012-01-01

    Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses probability…

  13. Numerical Simulation of the Perrin-Like Experiments

    ERIC Educational Resources Information Center

    Mazur, Zygmunt; Grech, Dariusz

    2008-01-01

    A simple model of the random Brownian walk of a spherical mesoscopic particle in viscous liquids is proposed. The model can be solved analytically and simulated numerically. The analytic solution gives the known Einstein-Smoluchowski diffusion law r² = 2Dt, where the diffusion constant D is expressed by the mass and geometry of a…
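The diffusion law quoted above is easy to reproduce numerically. Below is a minimal sketch (not the authors' model) of a 1D random walk whose mean squared displacement should follow r² = 2Dt, with D = δ²/(2τ) for step size δ and unit time step τ; all parameter values are illustrative.

```python
import random

# Minimal random-walk sketch: each walker takes n_steps of +/-delta,
# and we estimate the mean squared displacement over many walkers.

def msd(n_walkers=5000, n_steps=200, delta=1.0, seed=1):
    """Mean squared displacement after n_steps, averaged over n_walkers."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += delta if random.random() < 0.5 else -delta
        total += x * x
    return total / n_walkers

# With delta = 1 and unit time steps, D = 0.5, so the Einstein-Smoluchowski
# law predicts <r^2> = 2 * 0.5 * 200 = 200 at t = 200 (up to sampling noise).
print(msd())
```

The simulated value fluctuates around the analytic prediction, with the relative error shrinking as the number of walkers grows.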

  14. The Development of MST Test Information for the Prediction of Test Performances

    ERIC Educational Resources Information Center

    Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.

    2017-01-01

    The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…

  15. Tutorial in medical decision modeling incorporating waiting lines and queues using discrete event simulation.

    PubMed

    Jahn, Beate; Theurl, Engelbert; Siebert, Uwe; Pfeiffer, Karl-Peter

    2010-01-01

    In most decision-analytic models in health care, it is assumed that there is treatment without delay and availability of all required resources. Therefore, waiting times caused by limited resources, and their impact on treatment effects and costs, often remain unconsidered. Queuing theory enables mathematical analysis and the derivation of several performance measures of queuing systems. Nevertheless, an analytical approach with closed formulas is not always possible. Therefore, simulation techniques such as discrete event simulation are used to evaluate systems that include queuing or waiting. Including queuing in decision-analytic models requires a basic knowledge of queuing theory and of the underlying interrelationships. This tutorial introduces queuing theory. Analysts and decision-makers gain an understanding of queue characteristics, modeling features, and their strengths. Conceptual issues are covered, but the emphasis is on practical issues such as modeling the arrival of patients. The treatment of coronary artery disease with percutaneous coronary intervention including stent placement serves as an illustrative queuing example. Discrete event simulation is applied to explicitly model resource capacities and to incorporate waiting lines and queues in the decision-analytic modeling example.
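The contrast between closed formulas and simulation described above can be illustrated with a minimal discrete event simulation of a single-server queue. This is a generic M/M/1 sketch, not the tutorial's coronary-intervention model; the arrival and service rates are illustrative assumptions.

```python
import random

# M/M/1 queue simulated patient by patient: Poisson arrivals at rate lam,
# one server with exponential service at rate mu. Compared with the
# closed-form mean waiting time Wq = rho / (mu - lam), where rho = lam/mu.

def mm1_mean_wait(lam=0.8, mu=1.0, n_patients=200_000, seed=42):
    random.seed(seed)
    arrival = 0.0      # arrival time of the current patient
    server_free = 0.0  # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_patients):
        arrival += random.expovariate(lam)            # next Poisson arrival
        start = max(arrival, server_free)             # queue if server is busy
        total_wait += start - arrival                 # time spent waiting
        server_free = start + random.expovariate(mu)  # exponential service
    return total_wait / n_patients

lam, mu = 0.8, 1.0
analytic = (lam / mu) / (mu - lam)  # Wq = 4.0 for these rates
print(mm1_mean_wait(lam, mu), analytic)
```

For this simple queue the simulation merely reproduces the known formula; its value, as the tutorial argues, lies in systems where no closed form exists.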

  16. Numerically calibrated model for propagation of a relativistic unmagnetized jet in dense media

    NASA Astrophysics Data System (ADS)

    Harrison, Richard; Gottlieb, Ore; Nakar, Ehud

    2018-06-01

    Relativistic jets reside in high-energy astrophysical systems of all scales. Their interaction with the surrounding media is critical, as it determines the jet evolution, observable signature, and feedback on the environment. During its motion, the interaction of the jet with the ambient medium inflates a highly pressurized cocoon, which under certain conditions collimates the jet and strongly affects its propagation. Recently, Bromberg et al. derived a general simplified (semi-)analytic solution for the evolution of the jet and the cocoon in the case of an unmagnetized jet that propagates in a medium with a range of density profiles. In this work we use a large suite of 2D and 3D relativistic hydrodynamic simulations to test the validity and accuracy of this model. We discuss the similarities and differences between the analytic model and the numerical simulations and also, to some extent, between the 2D and 3D simulations. Our main finding is that although the analytic model is highly simplified, it properly predicts the evolution of the main ingredients of the jet-cocoon system, including its temporal evolution and the transitions between various regimes (e.g., collimated to uncollimated). The analytic solution predicts a jet head velocity that is faster by a factor of about 3 compared to the simulations, as long as the head velocity is Newtonian. We use the results of the simulations to calibrate the analytic model, which significantly increases its accuracy. We provide an applet that calculates semi-analytically the propagation of a jet in an arbitrary density profile defined by the user at http://www.astro.tau.ac.il/˜ore/propagation.html.

  17. Tri-FAST Hardware-in-the-Loop Simulation. Volume I. Tri-FAST Hardware-in-the-Loop Simulation at the Advanced Simulation Center

    DTIC Science & Technology

    1979-03-28

    TECHNICAL REPORT T-79-43, Tri-FAST Hardware-in-the-Loop Simulation, Volume 1: Tri-FAST Hardware-in-the-Loop Simulation at the Advanced Simulation Center. Keywords: Tri-FAST; hardware-in-the-loop; ACSL; Advanced Simulation Center; RF target models. The purpose of this report is to document the Tri-FAST missile simulation development and the seeker hardware-in-the

  18. DEVELOPMENT OF USER-FRIENDLY SIMULATION SYSTEM OF EARTHQUAKE INDUCED URBAN SPREADING FIRE

    NASA Astrophysics Data System (ADS)

    Tsujihara, Osamu; Gawa, Hidemi; Hayashi, Hirofumi

    In the simulation of earthquake-induced urban spreading fire, producing the analytical model of the target area is required, as well as the analysis of the spreading fire and the presentation of the results. To promote the use of the simulation, it is important that the simulation system be non-intrusive and that the analysis results can be demonstrated by realistic presentation. In this study, a simulation system is developed based on the Petri-net algorithm, enabling easy operation in modeling the target area and presentation of the analytical results by realistic 3-D animation.

  19. A methodology for the assessment of manned flight simulator fidelity

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.; Malsbury, Terry N.

    1989-01-01

    A relatively simple analytical methodology for assessing the fidelity of manned flight simulators for specific vehicles and tasks is offered. The methodology is based upon an application of a structural model of the human pilot, including motion cue effects. In particular, predicted pilot/vehicle dynamic characteristics are obtained with and without simulator limitations. A procedure for selecting model parameters can be implemented, given a probable pilot control strategy. In analyzing a pair of piloting tasks for which flight and simulation data are available, the methodology correctly predicted the existence of simulator fidelity problems. The methodology permitted the analytical evaluation of a change in simulator characteristics and indicated that a major source of the fidelity problems was a visual time delay in the simulation.

  20. Information Tailoring Enhancements for Large Scale Social Data

    DTIC Science & Technology

    2016-03-15

    Work performed within this reporting period included the following tasks. Implemented Temporal Analysis Algorithms for Advanced Analytics in Scraawl: we implemented our backend web service design for the temporal analysis, and we created a prototype GUI web service for the Scraawl analytics dashboard. Upgraded the Scraawl computational framework to increase

  1. MIT CSAIL and Lincoln Laboratory Task Force Report

    DTIC Science & Technology

    2016-08-01

    Projects have been very diverse, spanning several areas of CSAIL concentration, including robotics, big data analytics, wireless communications, and computing architectures…to machine learning systems and algorithms, such as recommender systems, and "Big Data" analytics. Advanced computing architectures broadly refer to

  2. Advances in functional brain imaging technology and developmental neuro-psychology: their applications in the Jungian analytic domain.

    PubMed

    Petchkovsky, Leon

    2017-06-01

    Analytical psychology shares with many other psychotherapies the important task of repairing the consequences of developmental trauma. The majority of analytic patients come from compromised early developmental backgrounds: they may have experienced neglect, abuse, or failures of empathic resonance from their carers. Functional brain imaging techniques, including Quantitative Electroencephalography (QEEG) and functional Magnetic Resonance Imaging (fMRI), allow us to track mental processes in ways beyond verbal reportage and introspection. This independent perspective is useful for developing new psychodynamic hypotheses, testing current ones, providing diagnostic markers, and monitoring treatment progress. Jung, with the Word Association Test, grasped these principles 100 years ago. Brain imaging techniques have contributed to powerful recent advances in our understanding of neurodevelopmental processes in the first three years of life. If adequate nurturance is compromised, a range of difficulties may emerge. This has important implications for how we understand and treat our psychotherapy clients. The paper provides an overview of functional brain imaging and advances in developmental neuropsychology, and looks at applications of some of these findings (including neurofeedback) in the Jungian psychotherapy domain. © 2017, The Society of Analytical Psychology.

  3. Analytical and finite element simulation of a three-bar torsion spring

    NASA Astrophysics Data System (ADS)

    Rădoi, M.; Cicone, T.

    2016-08-01

    The present study is dedicated to the innovative 3-bar torsion spring used as the suspension solution, for the first time, on Lunokhod-1, the first autonomous vehicle sent for the exploration of the Moon in the early 1970s by the former USSR. The paper describes a simple analytical model for calculating the spring's static characteristics, taking into account both torsion and bending effects. Closed-form solutions of this model allow quick and elegant parametric analysis. A comparison with a single torsion bar of the same stiffness reveals an increase in the maximum stress of more than 50%. A 3D finite element (FE) simulation is proposed to evaluate the accuracy of the analytical model. The model was meshed in an automated pattern (sweep for hubs and tetrahedrons for bars) with mesh morphing. Very close agreement between the analytical and numerical solutions was found, indicating that the analytical model is accurate. The 3D finite element simulation was also used to evaluate the effects of design details such as the fillet radius of the bars and the contact stresses in the hex hub.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincenti, H.; Vay, J. -L.

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications occur, for instance, when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of an arbitrary-order Maxwell solver with a domain decomposition technique that may, under some conditions, involve stencil truncations at subdomain boundaries, resulting in small spurious errors that eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional, arbitrary-order finite-difference Maxwell solvers with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. They also confirm that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the overall accuracy of the simulation.

  5. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries.

    PubMed

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-15

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs) to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form preconditioner for matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. 
In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the building of preconditioners for other techniques to improve their performance in the future.
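The core idea of a Newton iteration driven by a hand-derived analytical Jacobian can be illustrated on a toy problem. This sketch is not the paper's solver (which targets the incompressible Navier-Stokes equations on overset grids with a Krylov linear solver); the 2x2 system below is an arbitrary stand-in chosen for brevity.

```python
import numpy as np

# Newton's method with an analytical Jacobian on a small nonlinear system.
# In the paper's setting the Jacobian is large and sparse and the direct
# linear solve is replaced by a Krylov method; deriving J analytically is
# what keeps each Newton iteration cheap.

def residual(u):
    x, y = u
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def jacobian(u):
    x, y = u
    # Analytical Jacobian of the residual above, derived by hand
    return np.array([[2.0 * x, 2.0 * y],
                     [y, x]])

u = np.array([2.0, 1.0])  # initial guess
for _ in range(20):
    r = residual(u)
    if np.linalg.norm(r) < 1e-12:
        break
    # Newton update: solve J(u) du = -r(u), then u <- u + du
    u = u - np.linalg.solve(jacobian(u), r)

print(u, np.linalg.norm(residual(u)))
```

The quadratic convergence visible here (a handful of iterations to machine precision) is the property that motivates Newton-Krylov methods over fixed-point iteration in the abstract above.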

  6. A Newton–Krylov method with an approximate analytical Jacobian for implicit solution of Navier–Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    PubMed Central

    Asgharzadeh, Hafez; Borazjani, Iman

    2016-01-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs) to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form preconditioner for matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. 
In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the building of preconditioners for other techniques to improve their performance in the future. PMID:28042172

  7. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    NASA Astrophysics Data System (ADS)

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs depends heavily on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90-degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. The NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta method because it converged with larger time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the construction of preconditioners for other techniques to improve their performance in the future.
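    The cost contrast between an analytical and an approximate Jacobian is easy to see on a small system. The sketch below is a generic toy (not the paper's staggered curvilinear solver): Newton's method with a hand-derived analytical Jacobian; in a true Newton-Krylov method the linear solve would be done matrix-free with GMRES rather than directly.

```python
import numpy as np

def F(x):
    # Toy nonlinear system with a root at (1, 1):
    #   F1 = x0^2 + x1 - 2,   F2 = x0 + x1^2 - 2
    return np.array([x[0]**2 + x[1] - 2.0,
                     x[0] + x[1]**2 - 2.0])

def J_analytic(x):
    # Hand-derived (analytical) Jacobian of F -- cheap to evaluate,
    # unlike automatic differentiation or finite differences.
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def newton(x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x, k
        # In a Newton-Krylov method this solve would be matrix-free
        # (GMRES); the system here is tiny, so solve it directly.
        x = x - np.linalg.solve(J_analytic(x), r)
    return x, max_iter

root, iters = newton([2.0, 0.5])
print("root:", root, "iterations:", iters)
```

With a good starting guess, the analytical Jacobian gives the quadratic convergence that motivates Newton-based schemes in the abstract.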

  8. Analytical investigation of thermal barrier coatings on advanced power generation gas turbines

    NASA Technical Reports Server (NTRS)

    Amos, D. J.

    1977-01-01

    An analytical investigation of present and advanced gas turbine power generation cycles incorporating thermal barrier turbine component coatings was performed. Approximately 50 parametric points covering simple, recuperated, and combined cycles (including gasification) with gas turbine inlet temperatures from current levels through 1644 K (2500 °F) were evaluated. The results indicated that thermal barriers would be an attractive means to improve performance and reduce the cost of electricity for these cycles. A recommended thermal barrier development program has been defined.

  9. The Emergence of Simulation and Gaming.

    ERIC Educational Resources Information Center

    Becker, Henk A.

    1980-01-01

    Describes the historical and international development of simulation and gaming in terms of simulation as analytical models, and games as communicative models; and forecasts possible futures of simulation and gaming. (CMV)

  10. An investigation of the information propagation and entropy transport aspects of Stirling machine numerical simulation

    NASA Technical Reports Server (NTRS)

    Goldberg, Louis F.

    1992-01-01

    Aspects of the information propagation modeling behavior of integral machine computer simulation programs are investigated in terms of a transmission line. In particular, the effects of pressure-linking and temporal integration algorithms on the amplitude ratio and phase angle predictions are compared against experimental and closed-form analytic data. It is concluded that the discretized, first order conservation balances may not be adequate for modeling information propagation effects at characteristic numbers less than about 24. An entropy transport equation suitable for generalized use in Stirling machine simulation is developed. The equation is evaluated by including it in a simulation of an incompressible oscillating flow apparatus designed to demonstrate the effect of flow oscillations on the enhancement of thermal diffusion. Numerical false diffusion is found to be a major factor inhibiting validation of the simulation predictions with experimental and closed-form analytic data. A generalized false diffusion correction algorithm is developed which allows the numerical results to match their analytic counterparts. Under these conditions, the simulation yields entropy predictions which satisfy Clausius' inequality.
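    The false-diffusion effect the author corrects for can be reproduced in a few lines: a first-order upwind discretization of pure advection smears a step profile as if a physical diffusivity of roughly u·Δx·(1−CFL)/2 were present, even though the exact solution only translates the step. This is a generic illustration, not the Stirling simulation itself.

```python
import numpy as np

# Advect a step profile with the first-order upwind scheme.
# Exact advection just translates the step; upwind adds artificial
# ("false") diffusion with effective diffusivity ~ u*dx*(1-CFL)/2.
nx = 200
u = 1.0
dx = 1.0 / nx
cfl = 0.5
dt = cfl * dx / u
phi = np.where(np.arange(nx) * dx < 0.25, 1.0, 0.0)  # step at x = 0.25

nsteps = 100  # advected distance = nsteps * u * dt = 0.25
for _ in range(nsteps):
    phi = phi - cfl * (phi - np.roll(phi, 1))  # upwind update (periodic)

# The exact solution is still a sharp 0/1 step (now at x = 0.5);
# the upwind result contains many intermediate values at each front.
smeared = int(np.sum((phi > 0.05) & (phi < 0.95)))
print("grid points inside smeared fronts:", smeared)
```

The scheme remains conservative and monotone (values stay in [0, 1]), which is exactly why the smearing can masquerade as physical thermal diffusion in oscillating-flow simulations.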

  11. Social networks and smoking: exploring the effects of peer influence and smoker popularity through simulations.

    PubMed

    Schaefer, David R; Adams, Jimi; Haas, Steven A

    2013-10-01

    Adolescent smoking and friendship networks are related in many ways that can amplify smoking prevalence. Understanding and developing interventions within such a complex system requires new analytic approaches. We draw on recent advances in dynamic network modeling to develop a technique that explores the implications of various intervention strategies targeted toward micro-level processes. Our approach begins by estimating a stochastic actor-based model using data from one school in the National Longitudinal Study of Adolescent Health. The model provides estimates of several factors predicting friendship ties and smoking behavior. We then use estimated model parameters to simulate the coevolution of friendship and smoking behavior under potential intervention scenarios. Namely, we manipulate the strength of peer influence on smoking and the popularity of smokers relative to nonsmokers. We measure how these manipulations affect smoking prevalence, smoking initiation, and smoking cessation. Results indicate that both peer influence and smoking-based popularity affect smoking behavior and that their joint effects are nonlinear. This study demonstrates how a simulation-based approach can be used to explore alternative scenarios that may be achievable through intervention efforts and offers new hypotheses about the association between friendship and smoking.

  12. Social Networks and Smoking: Exploring the Effects of Influence and Smoker Popularity through Simulations

    PubMed Central

    Schaefer, David R.; adams, jimi; Haas, Steven A.

    2015-01-01

    Adolescent smoking and friendship networks are related in many ways that can amplify smoking prevalence. Understanding and developing interventions within such a complex system requires new analytic approaches. We draw upon recent advances in dynamic network modeling to develop a technique that explores the implications of various intervention strategies targeted toward micro-level processes. Our approach begins by estimating a stochastic actor-based model using data from one school in the National Longitudinal Study of Adolescent Health. The model provides estimates of several factors predicting friendship ties and smoking behavior. We then use estimated model parameters to simulate the co-evolution of friendship and smoking behavior under potential intervention scenarios. Namely, we manipulate the strength of peer influence on smoking and the popularity of smokers relative to nonsmokers. We measure how these manipulations affect smoking prevalence, smoking initiation, and smoking cessation. Results indicate that both peer influence and smoking-based popularity affect smoking behavior, and that their joint effects are nonlinear. This study demonstrates how a simulation-based approach can be used to explore alternative scenarios that may be achievable through intervention efforts and offers new hypotheses about the association between friendship and smoking. PMID:24084397
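    A heavily simplified sketch of the kind of simulation described above. The real study uses a stochastic actor-based model estimated from Add Health data; the network, parameters, and update rule below are hypothetical stand-ins for the peer-influence strength the authors manipulate.

```python
import random

def simulate(n=100, p_tie=0.05, influence=0.3, steps=50, seed=1):
    """Toy peer-influence model on a fixed random (directed) network.

    Each step, a random agent adopts the majority smoking state of its
    neighbors with probability `influence` -- a crude stand-in for the
    peer-influence parameter varied in the paper's simulations.
    """
    rng = random.Random(seed)
    smokes = [rng.random() < 0.2 for _ in range(n)]  # 20% initial prevalence
    ties = [[j for j in range(n) if j != i and rng.random() < p_tie]
            for i in range(n)]
    for _ in range(steps * n):
        i = rng.randrange(n)
        if ties[i] and rng.random() < influence:
            frac = sum(smokes[j] for j in ties[i]) / len(ties[i])
            smokes[i] = frac > 0.5  # adopt the neighborhood majority state
    return sum(smokes) / n

low = simulate(influence=0.1)
high = simulate(influence=0.9)
print("prevalence at low influence:", low, "at high influence:", high)
```

Comparing prevalence across influence settings is the simulation analogue of the intervention scenarios explored in the paper; a full treatment would also let ties co-evolve with behavior (smoker popularity).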

  13. Development of the CSI phase-3 evolutionary model testbed

    NASA Technical Reports Server (NTRS)

    Gronet, M. J.; Davis, D. A.; Tan, M. K.

    1994-01-01

    This report documents the development effort for the reconfiguration of the Controls-Structures Integration (CSI) Evolutionary Model (CEM) Phase-2 testbed into the CEM Phase-3 configuration. This step responds to the need to develop and test CSI technologies associated with typical planned earth science and remote sensing platforms. The primary objective of the CEM Phase-3 ground testbed is to simulate the overall on-orbit dynamic behavior of the EOS AM-1 spacecraft. Key elements of the objective include approximating the low-frequency appendage dynamic interaction of EOS AM-1, allowing for the changeout of components, and simulating the free-free on-orbit environment using an advanced suspension system. The fundamentals of appendage dynamic interaction are reviewed. A new version of the multiple scaling method is used to design the testbed to have the full-scale geometry and dynamics of the EOS AM-1 spacecraft, but at one-tenth the weight. The testbed design is discussed, along with the testing of the solar array, high gain antenna, and strut components. Analytical performance comparisons show that the CEM Phase-3 testbed simulates the EOS AM-1 spacecraft with good fidelity for the important parameters of interest.

  14. Predictive Capability of the Compressible MRG Equation for an Explosively Driven Particle with Validation

    NASA Astrophysics Data System (ADS)

    Garno, Joshua; Ouellet, Frederick; Koneru, Rahul; Balachandar, Sivaramakrishnan; Rollin, Bertrand

    2017-11-01

    An analytic model to describe the hydrodynamic forces on an explosively driven particle is not currently available. The Maxey-Riley-Gatignol (MRG) particle force equation generalized for compressible flows is well-studied in shock-tube applications, and captures the evolution of the particle force extracted from controlled shock-tube experiments. In these experiments only the shock-particle interaction was examined, and the effects of the contact line were not investigated. In the present work, the predictive capability of this model is considered for the case where a particle is explosively ejected from a rigid barrel into ambient air. Particle trajectory information extracted from simulations is compared with experimental data. This configuration ensures that both the shock and the contact produced by the detonation will influence the motion of the particle. The simulations are carried out using a finite-volume, Euler-Lagrange code with the JWL equation of state to handle the explosive products. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.

  15. Application of Geostatistical Simulation to Enhance Satellite Image Products

    NASA Technical Reports Server (NTRS)

    Hlavka, Christine A.; Dungan, Jennifer L.; Thirulanambi, Rajkumar; Roy, David

    2004-01-01

    With the deployment of Earth Observing System (EOS) satellites that provide daily, global imagery, there is increasing interest in defining the limitations of the data and derived products due to their coarse spatial resolution. Much of the detail, i.e., small fragments and notches in boundaries, is lost with coarse-resolution imagery such as the EOS MODerate-Resolution Imaging Spectroradiometer (MODIS) data. Higher-spatial-resolution data such as EOS Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Landsat, and airborne sensor imagery provide more detailed information but are less frequently available. There is, however, both theoretical and analytical evidence that burn scars and other fragmented types of land cover form self-similar or self-affine patterns, that is, patterns that look similar when viewed at widely differing spatial scales. Therefore small features of the patterns should be predictable, at least in a statistical sense, from knowledge of the large features. Recent developments in fractal modeling for characterizing the spatial distribution of undiscovered petroleum deposits are thus applicable to generating simulations of finer-resolution satellite image products. We will present example EOS products, analysis to investigate self-similarity, and simulation results.
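    Self-similarity of the kind invoked here is commonly quantified by a box-counting dimension. The sketch below estimates it for a synthetic fractal (a Sierpinski triangle generated by the chaos game) standing in for a fragmented land-cover pattern; it is illustrative only, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate points on the Sierpinski triangle with the chaos game:
# repeatedly jump halfway toward a randomly chosen triangle vertex.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p = np.array([0.1, 0.1])
pts = np.empty((20000, 2))
for i in range(len(pts)):
    p = (p + verts[rng.integers(3)]) / 2.0
    pts[i] = p

# Box counting: count occupied boxes at several scales and fit the
# log-log slope, which estimates the fractal dimension.
sizes = [1 / 8, 1 / 16, 1 / 32, 1 / 64]
counts = []
for s in sizes:
    boxes = set(map(tuple, np.floor(pts / s).astype(int)))
    counts.append(len(boxes))
dim = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
print("estimated box-counting dimension:", round(float(dim), 3))
```

For an exact Sierpinski triangle the dimension is log 3 / log 2 ≈ 1.585; a stable, non-integer estimate of this kind is the statistical signature that makes small features predictable from large ones.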

  16. Verification of Modelica-Based Models with Analytical Solutions for Tritium Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rader, Jordan D.; Greenwood, Michael Scott; Humrickhouse, Paul W.

    Here, tritium transport in metal and molten salt fluids combined with diffusion through high-temperature structural materials is an important phenomenon in both magnetic confinement fusion (MCF) and molten salt reactor (MSR) applications. For MCF, tritium is desirable to capture for fusion fuel. For MSRs, uncaptured tritium potentially can be released to the environment. In either application, quantifying the time- and space-dependent tritium concentration in the working fluid(s) and structural components is necessary. Whereas capability exists specifically for calculating tritium transport in such systems (e.g., using TMAP for fusion reactors), it is desirable to unify the calculation of tritium transport with other system variables such as dynamic fluid and structure temperature combined with control systems such as those that might be found in a system code. Some capability for radioactive trace substance transport exists in thermal-hydraulic systems codes (e.g., RELAP5-3D); however, this capability is not coupled to species diffusion through solids. Combined calculations of tritium transport and thermal-hydraulic solution have been demonstrated with TRIDENT, but only for a specific type of MSR. Researchers at Oak Ridge National Laboratory have developed a set of Modelica-based dynamic system modeling tools called the TRANsient Simulation Framework Of Reconfigurable Models (TRANSFORM) that were used previously to model advanced fission reactors and associated systems. In this work, the TRANSFORM library is augmented to include dynamically coupled fluid and solid trace substance transport and diffusion. Results from simulations are compared against analytical solutions for verification.

  17. Verification of Modelica-Based Models with Analytical Solutions for Tritium Diffusion

    DOE PAGES

    Rader, Jordan D.; Greenwood, Michael Scott; Humrickhouse, Paul W.

    2018-03-20

    Here, tritium transport in metal and molten salt fluids combined with diffusion through high-temperature structural materials is an important phenomenon in both magnetic confinement fusion (MCF) and molten salt reactor (MSR) applications. For MCF, tritium is desirable to capture for fusion fuel. For MSRs, uncaptured tritium potentially can be released to the environment. In either application, quantifying the time- and space-dependent tritium concentration in the working fluid(s) and structural components is necessary. Whereas capability exists specifically for calculating tritium transport in such systems (e.g., using TMAP for fusion reactors), it is desirable to unify the calculation of tritium transport with other system variables such as dynamic fluid and structure temperature combined with control systems such as those that might be found in a system code. Some capability for radioactive trace substance transport exists in thermal-hydraulic systems codes (e.g., RELAP5-3D); however, this capability is not coupled to species diffusion through solids. Combined calculations of tritium transport and thermal-hydraulic solution have been demonstrated with TRIDENT, but only for a specific type of MSR. Researchers at Oak Ridge National Laboratory have developed a set of Modelica-based dynamic system modeling tools called the TRANsient Simulation Framework Of Reconfigurable Models (TRANSFORM) that were used previously to model advanced fission reactors and associated systems. In this work, the TRANSFORM library is augmented to include dynamically coupled fluid and solid trace substance transport and diffusion. Results from simulations are compared against analytical solutions for verification.
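    A typical verification of the kind described pits a discretized diffusion model against a closed-form solution. The sketch below (a generic 1-D check, not one of the TRANSFORM/TMAP cases) compares an explicit finite-difference solver with the classical erfc solution for in-diffusion from a surface held at fixed concentration.

```python
import math

# Verify a finite-difference diffusion solver against the analytical
# solution for a semi-infinite medium with fixed surface concentration:
#   C(x, t) = C0 * erfc(x / (2 * sqrt(D * t)))
D, C0 = 1.0e-9, 1.0          # diffusivity (m^2/s), surface concentration
nx, L = 200, 2.0e-3          # grid points, domain depth (m)
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D       # below the explicit stability limit of 0.5
C = [0.0] * nx
C[0] = C0

t, t_end = 0.0, 200.0
while t < t_end:
    # Explicit update; far boundary is frozen, which is acceptable
    # because the concentration there stays negligible over t_end.
    C = ([C0] +
         [C[i] + D * dt / dx**2 * (C[i+1] - 2*C[i] + C[i-1])
          for i in range(1, nx - 1)] +
         [C[-1]])
    t += dt

# Compare against the erfc solution over the first half of the domain.
err = max(abs(C[i] - C0 * math.erfc(i * dx / (2 * math.sqrt(D * t))))
          for i in range(nx // 2))
print("max abs error vs erfc solution:", round(err, 4))
```

Agreement to a fraction of a percent of C0 is the expected outcome of such a verification exercise; systematic deviation would flag a discretization or coupling error.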

  18. Onset of fractional-order thermal convection in porous media

    NASA Astrophysics Data System (ADS)

    Karani, Hamid; Rashtbehesht, Majid; Huber, Christian; Magin, Richard L.

    2017-12-01

    The macroscopic description of buoyancy-driven thermal convection in porous media is governed by advection-diffusion processes, which in the presence of thermophysical heterogeneities fail to predict the onset of thermal convection and the average rate of heat transfer. This work extends the classical model of heat transfer in porous media by including a fractional-order advective-dispersive term to account for the role of thermophysical heterogeneities in shifting the thermal instability point. The proposed fractional-order model overcomes limitations of the common closure approaches for the thermal dispersion term by replacing the diffusive assumption with a fractional-order model. Through a linear stability analysis and Galerkin procedure, we derive an analytical formula for the critical Rayleigh number as a function of the fractional model parameters. The resulting critical Rayleigh number reduces to the classical value in the absence of thermophysical heterogeneities when solid and fluid phases have similar thermal conductivities. Numerical simulations of the coupled flow equation with the fractional-order energy model near the primary bifurcation point confirm our analytical results. Moreover, data from pore-scale simulations are used to examine the potential of the proposed fractional-order model in predicting the amount of heat transfer across the porous enclosure. The linear stability and numerical results show that, unlike the classical thermal advection-dispersion models, the fractional-order model captures the advance and delay in the onset of convection in porous media and provides correct scalings for the average heat transfer in a thermophysically heterogeneous medium.
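    For reference, the classical value recovered in the homogeneous limit is the Horton-Rogers-Lapwood result for a fluid-saturated porous layer heated from below, with the Rayleigh-Darcy number defined in the usual way (notation here is generic, not the paper's):

```latex
\mathrm{Ra}_c = 4\pi^2 \approx 39.48,
\qquad
\mathrm{Ra} = \frac{\rho_f\, g\, \beta\, \Delta T\, K H}{\mu\, \alpha_m},
```

    where $K$ is the permeability, $H$ the layer thickness, $\beta$ the thermal expansion coefficient, $\mu$ the dynamic viscosity, and $\alpha_m$ the effective thermal diffusivity of the medium. The fractional-order model shifts the critical value away from $4\pi^2$ as heterogeneity increases.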

  19. Experimental Validation of the Transverse Shear Behavior of a Nomex Core for Sandwich Panels

    NASA Astrophysics Data System (ADS)

    Farooqi, M. I.; Nasir, M. A.; Ali, H. M.; Ali, Y.

    2017-05-01

    This work deals with the determination of the transverse shear moduli of a Nomex® honeycomb core for sandwich panels. The out-of-plane shear characteristics of such panels depend on the transverse shear moduli of the honeycomb core. These moduli were determined experimentally, numerically, and analytically: numerical simulations were performed using a unit-cell model, and three analytical approaches were evaluated. Two of the analytical approaches provided reasonable predictions of the transverse shear modulus as compared with experimental results; however, the approach based upon classical lamination theory showed large deviations from the experimental data. The numerical simulations showed a trend similar to that of the analytical models.
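    For context, a widely used analytical estimate for one of the honeycomb transverse shear moduli is the Gibson-Ashby cell-wall result (quoted here from standard references on cellular solids; the abstract does not specify which three approaches were compared). For wall thickness $t$, inclined wall length $l$, vertical wall length $h$, cell angle $\theta$, and solid shear modulus $G_s$:

```latex
\frac{G_{13}^{*}}{G_s} = \frac{t}{l}\,\frac{\cos\theta}{\,h/l + \sin\theta\,},
```

    which for regular hexagonal cells ($h = l$, $\theta = 30^\circ$) reduces to $G_{13}^{*} \approx 0.577\,(t/l)\,G_s$; the companion modulus $G_{23}^{*}$ is usually given only as upper and lower bounds.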

  20. Solutions of conformal Israel-Stewart relativistic viscous fluid dynamics

    NASA Astrophysics Data System (ADS)

    Marrochio, Hugo; Noronha, Jorge; Denicol, Gabriel S.; Luzum, Matthew; Jeon, Sangyong; Gale, Charles

    2015-01-01

    We use symmetry arguments developed by Gubser to construct the first radially expanding explicit solutions of the Israel-Stewart formulation of hydrodynamics. Along with a general semi-analytical solution, an exact analytical solution is given which is valid in the cold plasma limit, where viscous effects from shear viscosity and the relaxation time coefficient are important. The radially expanding solutions presented in this paper can be used as nontrivial checks of numerical algorithms employed in hydrodynamic simulations of the quark-gluon plasma formed in ultrarelativistic heavy-ion collisions. We show this explicitly by comparing such analytic and semi-analytic solutions with the corresponding numerical solutions obtained using the MUSIC viscous hydrodynamics simulation code.

  1. On finding the analytic dependencies of the external field potential on the control function when optimizing the beam dynamics

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, A. D.; Kozynchenko, S. A.; Kozynchenko, V. A.

    2017-12-01

    When developing a particle accelerator for generating high-precision beams, the design of the injection system is important because it largely determines the output characteristics of the beam. In the present paper we consider injection systems consisting of electrodes with given potentials. The design of such systems requires simulation of the beam dynamics in electrostatic fields. For external field simulation we use a new approach, proposed by A. D. Ovsyannikov, which is based on analytical approximations, or a finite-difference method, taking into account the real geometry of the injection system. Software for solving beam dynamics simulation and optimization problems in the injection system for non-relativistic beams has been developed. Beam dynamics and electric field simulations of the injection system, using both the analytical approach and the finite-difference method, have been performed, and the results are presented in this paper.
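    A finite-difference electrostatic solve of the kind mentioned can be sketched as Jacobi relaxation of Laplace's equation, with electrodes modeled as fixed-potential (Dirichlet) regions. The geometry and potentials below are hypothetical, for illustration only.

```python
import numpy as np

# Jacobi relaxation for Laplace's equation on a 2-D grid.
# Electrodes are Dirichlet boundaries held at fixed potential.
n = 64
V = np.zeros((n, n))
mask = np.zeros((n, n), dtype=bool)  # True where potential is prescribed

# Hypothetical geometry: a 10 kV electrode plate on the left edge,
# a grounded plate on the right edge, top/bottom edges held fixed.
V[:, 0], mask[:, 0] = 10e3, True
V[:, -1], mask[:, -1] = 0.0, True
mask[0, :] = mask[-1, :] = True

for _ in range(5000):
    # Average of the four neighbors = discrete Laplace update.
    Vn = 0.25 * (np.roll(V, 1, 0) + np.roll(V, -1, 0)
                 + np.roll(V, 1, 1) + np.roll(V, -1, 1))
    Vn[mask] = V[mask]  # re-impose electrode potentials
    if np.max(np.abs(Vn - V)) < 1e-3:
        V = Vn
        break
    V = Vn

print("potential at grid center (V):", round(float(V[n // 2, n // 2]), 1))
```

Jacobi iteration is the simplest such relaxation; production field solvers use faster variants (SOR, multigrid), and beam-dynamics codes then interpolate the resulting potential onto particle positions.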

  2. Supramolecular separation mechanism of pentafluorophenyl column using ibuprofen and omeprazole as markers: LC-MS and simulation study.

    PubMed

    Hussain, Afzal; AlAjmi, Mohamed F; Ali, Imran

    2018-06-01

    The pentafluorophenyl (PFP) column is an emerging advance in separation science for analyzing a wide range of analytes, and its separation mechanism at the supramolecular level is therefore of interest. We developed a mechanism for the separation of ibuprofen and omeprazole using different combinations (ranging from 50:50 to 60:40) of water-acetonitrile containing 0.1% formic acid as the mobile phase. The column used was a Waters Acquity UPLC HSS PFP (75 × 2.1 mm, 1.8 μm). A reverse order of elution was observed in the different mobile phase combinations. A docking study indicated hydrogen bonding between ibuprofen and the PFP stationary phase (binding energy -11.30 kJ/mol). Separation on the PFP stationary phase is controlled by hydrogen bonding along with π-π interactions. This stationary phase may be used to analyze both aromatic and aliphatic analytes. The developed mechanism will be useful for separating various analytes by considering the possible interactions, saving energy, time, and money. In addition, this work will be highly useful in preparative chromatography, where separation at large scale is the major problem. Moreover, the developed LC-MS-QTOF method may be used to analyze ibuprofen and omeprazole in unknown samples owing to its low detection limits. Copyright © 2018 John Wiley & Sons, Ltd.

  3. Nascent RNA kinetics: Transient and steady state behavior of models of transcription

    NASA Astrophysics Data System (ADS)

    Choubey, Sandeep

    2018-02-01

    Regulation of transcription is a vital process in cells, but the mechanistic details of this regulation remain elusive. The dominant approach to unraveling the dynamics of transcriptional regulation is to first develop mathematical models of transcription and then experimentally test the predictions these models make for the distribution of mRNA and protein molecules at the individual-cell level. However, these measurements are affected by a multitude of downstream processes that make them difficult to interpret. Recent experimental advances allow counting the nascent mRNA number of a gene as a function of time at the single-cell level. These measurements closely reflect the dynamics of transcription. In this paper, we consider a general mechanism of transcription with stochastic initiation and deterministic elongation and probe its impact on the temporal behavior of nascent RNA levels. Using techniques from queueing theory, we derive exact analytical expressions for the mean and variance of the nascent RNA distribution as functions of time. We apply these analytical results to obtain the mean and variance of the nascent RNA distribution for specific models of transcription. These models of initiation exhibit qualitatively distinct transient behaviors for both the mean and the variance, which further allows us to discriminate between them. Stochastic simulations confirm these results. Overall, the analytical results presented here provide the necessary tools to connect mechanisms of transcription initiation to single-cell measurements of nascent RNA.
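    The simplest case covered by this framework (Poisson initiation at rate k and strictly deterministic elongation of duration T) is an M/D/∞ queue, whose steady-state nascent count is Poisson with mean and variance both equal to kT. A sketch of that special case, with illustrative parameters not taken from the paper:

```python
import random

# Nascent RNA as an M/D/infinity queue: transcripts initiate as a
# Poisson process (rate k) and each occupies the gene for exactly T
# minutes (deterministic elongation), so the steady-state nascent
# count is Poisson with mean (and variance) k * T.
k, T = 0.5, 4.0          # initiations/min, elongation time (min)
horizon = 20000.0        # minutes of simulated time
rng = random.Random(42)

arrivals = []
t = 0.0
while t < horizon:
    t += rng.expovariate(k)     # exponential inter-initiation gaps
    arrivals.append(t)

# Nascent count at time s = number of initiations in (s - T, s].
samples = []
j0 = 0
for s in range(int(T) + 1, int(horizon)):
    while j0 < len(arrivals) and arrivals[j0] <= s - T:
        j0 += 1
    j1 = j0
    while j1 < len(arrivals) and arrivals[j1] <= s:
        j1 += 1
    samples.append(j1 - j0)

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print("mean:", round(mean, 2), "variance:", round(var, 2), "k*T =", k * T)
```

The mean and variance both settle near kT = 2, the Poisson signature; the paper's general results cover bursty and multi-state initiation, where the transient mean and variance deviate from this baseline in model-specific ways.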

  4. Analytic Simulation of the Elastic Waves Propagation in the Neighborhood of Fluid Filled Wells with Monopole Sources

    NASA Astrophysics Data System (ADS)

    Ávila-Carrera, R.; Sánchez-Sesma, F. J.; Spurlin, James H.; Valle-Molina, C.; Rodríguez-Castellanos, A.

    2014-09-01

    An analytic formulation to understand the scattering, diffraction, and attenuation of elastic waves in the neighborhood of fluid-filled wells is presented. An important, and not widely exploited, technique to carefully investigate wave propagation in exploration wells is the logging of sonic waveforms. Fundamental decisions and production planning in petroleum reservoirs are made by interpreting such recordings. Nowadays, geophysicists and engineers face problems related to acquisition and interpretation under the complex conditions associated with open-hole measurements. A crucial problem that directly affects the response of sonic logs is the eccentricity of the measuring tool with respect to the center of the borehole. Even with the use of centralizers, this simple variation dramatically changes the physical conditions of wave propagation around the well. Recent works in the numerical field report advanced studies in modeling and simulation of acoustic wave propagation around wells, including complex heterogeneities and anisotropy. However, no analytical efforts have been made to formally understand wireline sonic logging measurements acquired with borehole-eccentered tools. In this paper, Graf's addition theorem was used to describe monopole sources in terms of solutions of the wave equation. The formulation was developed from the three-dimensional discrete wave-number method in the frequency domain. The cylindrical Bessel functions of the third kind and order zero were re-derived to obtain a simplified set of equations, projected onto a two-dimensional plane, for displacements and stresses. This new and condensed analytic formulation allows the straightforward calculation of all converted modes and their visualization in the time domain via Fourier synthesis. The main aim was to obtain spectral surfaces of transfer functions and synthetic seismograms that might be useful to understand the wave motion produced by the eccentricity of the source and to explain in detail the newly arising borehole propagation modes. Finally, time histories and amplitude spectra for relevant examples are presented, and the validation of the time traces using the spectral element method is reported.
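    The identity underlying such eccentric-source expansions is Graf's addition theorem. In its Hankel-function form (generic notation, not the paper's), with $w^2 = u^2 + v^2 - 2uv\cos\alpha$:

```latex
H_0^{(1)}(w) = \sum_{m=-\infty}^{\infty} H_m^{(1)}(u)\, J_m(v)\, e^{\mathrm{i} m \alpha},
\qquad v < u,
```

    which re-expands the field of an off-axis monopole as a series of cylindrical modes centered on the borehole axis, so the boundary conditions at the borehole wall can be applied mode by mode.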

  5. Analytical stability and simulation response study for a coupled two-body system

    NASA Technical Reports Server (NTRS)

    Tao, K. M.; Roberts, J. R.

    1975-01-01

    An analytical stability study and a digital simulation response study of two connected rigid bodies are documented. Relative rotation of the bodies at the connection is allowed, thereby providing a model suitable for studying system stability and response during a soft-dock regime. Provisions are made for a docking-port axis-alignment torque and a despin torque capability for encountering spinning payloads. Although the stability analysis is based on linearized equations, the digital simulation is based on nonlinear models.

  6. Recent advancements in chemical luminescence-based lab-on-chip and microfluidic platforms for bioanalysis.

    PubMed

    Mirasoli, Mara; Guardigli, Massimo; Michelini, Elisa; Roda, Aldo

    2014-01-01

    Miniaturization of analytical procedures through microchips, lab-on-a-chip, or micro total analysis systems is one of the most recent trends in chemical and biological analysis. These systems are designed to perform all the steps of an analytical procedure, with the advantages of low sample and reagent consumption, fast analysis, reduced costs, and the possibility of extra-laboratory application. A range of detection technologies has been employed in miniaturized analytical systems, but most applications rely on fluorescence and electrochemical detection. Chemical luminescence (which includes chemiluminescence, bioluminescence, and electrogenerated chemiluminescence) represents an alternative detection principle that offers comparable (or better) analytical performance and easier implementation in miniaturized analytical devices. Nevertheless, chemical luminescence-based devices represent only a small fraction of the microfluidic devices reported in the literature, and until now no review has focused on them. Here we review the most relevant applications (since 2009) of miniaturized analytical devices based on chemical luminescence detection. After a brief overview of the main chemical luminescence systems and of the recent technological advancements regarding their implementation in miniaturized analytical devices, analytical applications are reviewed according to the nature of the device (microfluidic chips, microchip electrophoresis, lateral flow- and paper-based devices) and the type of application (micro-flow injection assays, enzyme assays, immunoassays, gene probe hybridization assays, cell assays, whole-cell biosensors). Copyright © 2013 Elsevier B.V. All rights reserved.

  7. The NASA Reanalysis Ensemble Service - Advanced Capabilities for Integrated Reanalysis Access and Intercomparison

    NASA Astrophysics Data System (ADS)

    Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.

    2017-12-01

    NASA's efforts to advance climate analytics-as-a-service are making new capabilities available to the research community: (1) a full-featured Reanalysis Ensemble Service (RES) comprising monthly means data from multiple reanalysis data sets, accessible through an enhanced set of extraction, analytic, arithmetic, and intercomparison operations, which are exposed through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib; (2) a cloud-based, high-performance Virtual Real-Time Analytics Testbed supporting a select set of climate variables, whose near real-time capability enables advanced technologies like Spark and Hadoop-based MapReduce analytics over native NetCDF files; and (3) a WPS-compliant Web service interface to our climate data analytics service that will enable greater interoperability with next-generation systems such as ESGF. The Reanalysis Ensemble Service includes the following:
    - A new API that supports full temporal, spatial, and grid-based resolution services with sample queries
    - A Docker-ready RES application to deploy across platforms
    - Extended capabilities that enable single- and multiple-reanalysis area average, vertical average, re-gridding, standard deviation, and ensemble averages
    - Convenient, one-stop shopping for commonly used data products from multiple reanalyses, including basic sub-setting and arithmetic operations (e.g., avg, sum, max, min, var, count, anomaly)
    - Full support for the MERRA-2 reanalysis dataset in addition to ECMWF ERA-Interim, NCEP CFSR, JMA JRA-55, and NOAA/ESRL 20CR…
    - A Jupyter notebook-based distribution mechanism designed for client use cases that combines CDSlib documentation with interactive scenarios and personalized project management
    - Supporting analytic services for NASA GMAO Forward Processing datasets
    - Basic uncertainty quantification services that combine heterogeneous ensemble products with comparative observational products (e.g., reanalysis, observational, visualization)
    - The ability to compute and visualize multiple reanalyses for ease of intercomparison
    - Automated tools to retrieve and prepare data collections for analytic processing

  8. Toward Usable Interactive Analytics: Coupling Cognition and Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; North, Chris; Chang, Remco

    Interactive analytics provide users a myriad of computational means to aid in extracting meaningful information from large and complex datasets. Much prior work focuses either on advancing the capabilities of machine-centric approaches by the data mining and machine learning communities, or on human-driven methods by the visualization and CHI communities. However, these methods do not yet support a true human-machine symbiotic relationship in which users and machines work together collaboratively and adapt to each other to advance an interactive analytic process. In this paper we discuss some of the inherent issues, outlining what we believe are the steps toward usable interactive analytics that will ultimately increase the effectiveness of both humans and computers in producing insights.

  9. EarthServer: Cross-Disciplinary Earth Science Through Data Cube Analytics

    NASA Astrophysics Data System (ADS)

    Baumann, P.; Rossi, A. P.

    2016-12-01

    The unprecedented increase of imagery, in-situ measurements, and simulation data produced by Earth (and Planetary) Science observation missions bears a rich, yet unleveraged potential for gaining insight by integrating such diverse datasets and transforming scientific questions into actual queries to data, formulated in a standardized way. The intercontinental EarthServer [1] initiative is demonstrating new directions for flexible, scalable Earth Science services based on innovative NoSQL technology. Researchers from Europe, the US and Australia have teamed up to rigorously implement the concept of the datacube. Such a datacube may have spatial and temporal dimensions (such as a satellite image time series) and may unite an unlimited number of scenes. Independently of whatever efficient data structuring a server network may perform internally, users (scientists, planners, decision makers) will always see just a few datacubes they can slice and dice. EarthServer has established client [2] and server technology for such spatio-temporal datacubes. The underlying scalable array engine, rasdaman [3,4], enables direct interaction, including 3-D visualization, common EO data processing, and general analytics. Services exclusively rely on the open OGC "Big Geo Data" standards suite, the Web Coverage Service (WCS). Conversely, EarthServer has shaped and advanced WCS based on the experience gained. The first phase of EarthServer has advanced scalable array database technology into 150+ TB services. Currently, Petabyte datacubes are being built for ad-hoc and cross-disciplinary querying, e.g. using climate, Earth observation and ocean data. We will present the EarthServer approach, its impact on OGC / ISO / INSPIRE standardization, and its platform technology, rasdaman. References: [1] Baumann, et al. 
(2015) DOI: 10.1080/17538947.2014.1003106 [2] Hogan, P. (2011) NASA World Wind, Proceedings of the 2nd International Conference on Computing for Geospatial Research & Applications, ACM. [3] Baumann, P., et al. (2014) In Proc. 10th ICDM, 194-201. [4] Dumitru, A., et al. (2014) In Proc. ACM SIGMOD Workshop on Data Analytics in the Cloud (DanaC'2014), 1-4.
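
    The slice-and-dice access the abstract describes goes through standard OGC WCS 2.0 GetCoverage requests with per-axis subsets. A minimal sketch of building such a request follows; the endpoint URL and coverage name are hypothetical, while the KVP parameters (`SERVICE`, `VERSION`, `REQUEST`, `COVERAGEID`, `SUBSET`) follow the WCS 2.0 standard.

    ```python
    # Build an OGC WCS 2.0 GetCoverage KVP request that trims a datacube
    # along several axes. Endpoint and coverage id below are hypothetical.
    from urllib.parse import urlencode

    def getcoverage_url(endpoint, coverage_id, subsets, fmt="application/netcdf"):
        params = [
            ("SERVICE", "WCS"),
            ("VERSION", "2.0.1"),
            ("REQUEST", "GetCoverage"),
            ("COVERAGEID", coverage_id),
            ("FORMAT", fmt),
        ]
        # Each subset trims one axis of the datacube: (axis, low, high)
        params += [("SUBSET", f"{ax}({lo},{hi})") for ax, lo, hi in subsets]
        return endpoint + "?" + urlencode(params)

    url = getcoverage_url(
        "https://example.org/rasdaman/ows",      # hypothetical server
        "SeaSurfaceTemperature",                 # hypothetical coverage
        [("Lat", 30, 60), ("Long", -20, 40),
         ("ansi", '"2015-01"', '"2015-12"')],    # temporal trim
    )
    print(url)
    ```

    Each `SUBSET` key-value pair slices one dimension, which is exactly the "users see just a few datacubes they can slice and dice" model described above.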

  10. The Multi-SAG project: filling the MultiDark simulations with semi-analytic galaxies

    NASA Astrophysics Data System (ADS)

    Vega-Martínez, C. A.; Cora, S. A.; Padilla, N. D.; Muñoz Arancibia, A. M.; Orsi, A. A.; Ruiz, A. N.

    2016-08-01

    The semi-analytical model sag is a code of galaxy formation and evolution that is applied to halo catalogs and merger trees extracted from cosmological N-body simulations of dark matter. This contribution describes the project of constructing a catalog of simulated galaxies by adapting and applying the sag model to two publicly available dark matter simulations of the Spanish MultiDark Project. The simulations follow boxes of 1000 Mpc and 400 Mpc on a side, respectively, with Planck cosmological parameters. They cover a large range of masses, and their halo mass resolutions allow each simulation to produce more than 150 million simulated galaxies. A detailed description of the method is given, and the first statistical results are shown.

  11. Quality-assurance results for field pH and specific-conductance measurements, and for laboratory analysis, National Atmospheric Deposition Program and National Trends Network; January 1980-September 1984

    USGS Publications Warehouse

    Schroder, L.J.; Brooks, M.H.; Malo, B.A.; Willoughby, T.C.

    1986-01-01

    Five intersite comparison studies for the field determination of pH and specific conductance, using simulated-precipitation samples, were conducted by the U.S. Geological Survey for the National Atmospheric Deposition Program and National Trends Network. These comparisons were performed to estimate the precision of pH and specific-conductance determinations made by sampling-site operators. Simulated-precipitation samples were prepared from nitric acid and deionized water. The estimated standard deviation for site-operator determination of pH was 0.25 for pH values ranging from 3.79 to 4.64; the estimated standard deviation for specific conductance was 4.6 microsiemens/cm at 25 °C for specific-conductance values ranging from 10.4 to 59.0 microsiemens/cm at 25 °C. Performance-audit samples with known analyte concentrations were prepared by the U.S. Geological Survey and distributed to the National Atmospheric Deposition Program's Central Analytical Laboratory. The differences between the National Atmospheric Deposition Program and National Trends Network-reported analyte concentrations and the known analyte concentrations were calculated, and the bias and precision were determined. For 1983, concentrations of calcium, magnesium, sodium, and chloride were biased at the 99% confidence limit; concentrations of potassium and sulfate were unbiased at the 99% confidence limit. Four analytical laboratories routinely analyzing precipitation were evaluated on their analysis of identical natural- and simulated-precipitation samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple-range test on data produced by these laboratories from the analysis of identical simulated-precipitation samples. Analyte precision for each laboratory was estimated by calculating a pooled variance for each analyte. Interlaboratory comparability results may be used to normalize natural-precipitation chemistry data obtained from two or more of these laboratories. 
(Author's abstract)
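
    The pooled-variance precision estimate mentioned above is a degrees-of-freedom-weighted average of each laboratory's sample variance. A minimal sketch, with purely illustrative replicate values:

    ```python
    # Pooled variance across laboratories, as used to estimate per-analyte
    # precision: weight each lab's sample variance by its degrees of
    # freedom (n_i - 1). The data values below are illustrative only.
    from statistics import variance

    def pooled_variance(groups):
        """groups: list of lists of replicate measurements, one per lab."""
        num = sum((len(g) - 1) * variance(g) for g in groups)
        den = sum(len(g) - 1 for g in groups)
        return num / den

    # Replicate sulfate determinations (mg/L) from three hypothetical labs
    labs = [[1.02, 0.98, 1.00], [1.10, 1.06], [0.95, 1.01, 0.99, 1.05]]
    print(pooled_variance(labs))
    ```

    The pooled estimate is meaningful when the laboratories share a common underlying precision, which is the working assumption of the interlaboratory comparison.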

  12. Supersymmetric quantum mechanics method for the Fokker-Planck equation with applications to protein folding dynamics

    NASA Astrophysics Data System (ADS)

    Polotto, Franciele; Drigo Filho, Elso; Chahine, Jorge; Oliveira, Ronaldo Junio de

    2018-03-01

    This work developed analytical methods to explore the kinetics of time-dependent probability distributions over thermodynamic free energy profiles of protein folding and compared the results with simulation. The Fokker-Planck equation is mapped onto a Schrödinger-type equation due to the well-known solutions of the latter. Through a semi-analytical description, the supersymmetric quantum mechanics formalism is invoked and the time-dependent probability distributions are obtained with numerical calculations by using the variational method. A coarse-grained structure-based model of the two-state protein TmCSP was simulated at a Cα level of resolution, and the thermodynamics and kinetics were fully characterized. Analytical solutions from non-equilibrium conditions were obtained with the simulated double-well free energy potential, and kinetic folding times were calculated. The analytical folding time as a function of temperature agrees quantitatively with simulations and with experiments from the literature on TmCSP, exhibiting the well-known 'U' shape of chevron plots. The simple analytical model developed in this study has the potential to be used by theoreticians and experimentalists wishing to explore, quantitatively, rates and the kinetic behavior of their system from knowledge of the thermally activated barrier. The theory describes a stochastic process and can therefore be applied to a variety of biological as well as condensed-phase two-state systems.
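
    The Fokker-Planck-to-Schrödinger mapping invoked above can be sketched in one dimension. Assuming overdamped dynamics on a free-energy profile F(x) with diffusion coefficient D and β = 1/k_BT (standard notation assumed here, not taken from the paper):

    ```latex
    % Fokker-Planck equation for the probability density P(x,t)
    \frac{\partial P}{\partial t}
      = D\,\frac{\partial}{\partial x}\!\left(\frac{\partial P}{\partial x}
      + \beta F'(x)\,P\right)

    % The substitution P(x,t) = \psi(x,t)\, e^{-\beta F(x)/2} yields a
    % Schrödinger-type (imaginary-time) equation for \psi:
    \frac{\partial \psi}{\partial t}
      = D\!\left(\frac{\partial^{2} \psi}{\partial x^{2}} - V(x)\,\psi\right),
    \qquad
    V(x) = \left(\frac{\beta F'(x)}{2}\right)^{2} - \frac{\beta F''(x)}{2}
    ```

    V(x) has the supersymmetric form W² − W′ with superpotential W(x) = βF′(x)/2, which is what allows the SUSY QM machinery (partner potentials, variational solutions) to be brought to bear on the folding kinetics.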

  13. Advances in NMR Spectroscopy for Lipid Oxidation Assessment

    USDA-ARS?s Scientific Manuscript database

    Although there are many analytical methods developed for the assessment of lipid oxidation, different analytical methods often give different, sometimes even contradictory, results. The reason for this inconsistency is that although there are many different kinds of oxidation products, most methods ...

  14. Model for Atmospheric Propagation of Spatially Combined Laser Beams

    DTIC Science & Technology

    2016-09-01

    thesis modeling tools is discussed. In Chapter 6, the thesis validates the model against analytical computations and simulation results from...using the propagation model. Based on both the analytical computations and the WaveTrain results, the diffraction effects simulated in the propagation model are...NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. THESIS: MODEL FOR ATMOSPHERIC PROPAGATION OF SPATIALLY COMBINED LASER BEAMS, by Kum Leong Lee

  15. A note on a simplified and general approach to simulating from multivariate copula functions

    Treesearch

    Barry K. Goodwin

    2013-01-01

    Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses ‘Probability-...
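
    The probability-integral-transform idea behind the note can be sketched with the standard library alone. A Gaussian copula is used here as a concrete example (the note's general approach is not reproduced): draw correlated normals, map them to uniforms with the normal CDF, then push the uniforms through any marginal inverse CDF, exponential margins below.

    ```python
    # Simulating from a Gaussian copula with exponential margins:
    # correlated normals -> uniforms (normal CDF) -> inverse marginal CDFs.
    # A stdlib-only illustration of simulating from a copula.
    import math
    import random

    def normal_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def gaussian_copula_sample(rho, lam1, lam2, rng):
        """One draw with exponential margins (rates lam1, lam2), dependence rho."""
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        u1, u2 = normal_cdf(z1), normal_cdf(z2)   # uniforms on (0, 1)
        return -math.log(1.0 - u1) / lam1, -math.log(1.0 - u2) / lam2

    rng = random.Random(42)
    draws = [gaussian_copula_sample(0.8, 1.0, 2.0, rng) for _ in range(5000)]
    ```

    The dependence structure (here ρ = 0.8) is set entirely by the copula while each margin keeps its own distribution, which is the separation the note exploits.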

  16. Building America House Simulation Protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendron, Robert; Engebrecht, Cheryn

    2010-09-01

    The House Simulation Protocol document was developed to track and manage progress toward Building America's multi-year, average whole-building energy reduction research goals for new construction and existing homes, using a consistent analytical reference point. This report summarizes the guidelines for developing and reporting these analytical results in a consistent and meaningful manner for all home energy uses using standard operating conditions.

  17. Understanding selective molecular recognition in integrated carbon nanotube-polymer sensors by simulating physical analyte binding on carbon nanotube-polymer scaffolds.

    PubMed

    Lin, Shangchao; Zhang, Jingqing; Strano, Michael S; Blankschtein, Daniel

    2014-08-28

    Macromolecular scaffolds made of polymer-wrapped single-walled carbon nanotubes (SWCNTs) have been explored recently (Zhang et al., Nature Nanotechnology, 2013) as a new class of molecular-recognition motifs. However, selective analyte recognition is still challenging and lacks the underlying fundamental understanding needed for its practical implementation in biological sensors. In this report, we combine coarse-grained molecular dynamics (CGMD) simulations, physical adsorption/binding theories, and photoluminescence (PL) experiments to provide molecular insight into the selectivity of such sensors towards a large set of biologically important analytes. We find that the physical binding affinities of the analytes on a bare SWCNT partially correlate with their distribution coefficients in a bulk water/octanol system, suggesting that the analyte hydrophobicity plays a key role in determining the binding affinities of the analytes considered, along with the various specific interactions between the analytes and the polymer anchor groups. Two distinct categories of analytes are identified to demonstrate a complex picture for the correlation between optical sensor signals and the simulated binding affinities. Specifically, a good correlation was found between the sensor signals and the physical binding affinities of the three hormones (estradiol, melatonin, and thyroxine), the neurotransmitter (dopamine), and the vitamin (riboflavin) to the SWCNT-polymer scaffold. The four amino acids (aspartate, glycine, histidine, and tryptophan) and the two monosaccharides (fructose and glucose) considered were identified as blank analytes which are unable to induce sensor signals. The results indicate great success of our physical adsorption-based model in explaining the ranking in sensor selectivities. The combined framework presented here can be used to screen and select polymers that can potentially be used for creating synthetic molecular recognition motifs.

  18. Advanced LIGO constraints on neutron star mergers and r-process sites

    DOE PAGES

    Côté, Benoit; Belczynski, Krzysztof; Fryer, Chris L.; ...

    2017-02-20

    The role of compact binary mergers as the main production site of r-process elements is investigated by combining stellar abundances of Eu observed in the Milky Way, galactic chemical evolution (GCE) simulations, binary population synthesis models, and gravitational wave measurements from Advanced LIGO. We compiled and reviewed seven recent GCE studies to extract the frequency of neutron star–neutron star (NS–NS) mergers that is needed in order to reproduce the observed [Eu/Fe] versus [Fe/H] relationship. We used our simple chemical evolution code to explore the impact of different analytical delay-time distribution functions for NS–NS mergers. We then combined our metallicity-dependent population synthesis models with our chemical evolution code to bring their predictions, for both NS–NS mergers and black hole–neutron star mergers, into a GCE context. Finally, we convolved our results with the cosmic star formation history to provide a direct comparison with current and upcoming Advanced LIGO measurements. When assuming that NS–NS mergers are the exclusive r-process sites, and that the ejected r-process mass per merger event is 0.01 M☉, the number of NS–NS mergers needed in GCE studies is about 10 times larger than what is predicted by standard population synthesis models. These two distinct fields can only be consistent with each other when assuming optimistic rates, massive NS–NS merger ejecta, and low Fe yields for massive stars. For now, population synthesis models and GCE simulations are in agreement with the current upper limit (O1) established by Advanced LIGO during their first run of observations. Upcoming measurements will provide an important constraint on the actual local NS–NS merger rate, will provide valuable insights on the plausibility of the GCE requirement, and will help to define whether or not compact binary mergers can be the dominant source of r-process elements in the universe.

  19. Detailed analysis of the effects of stencil spatial variations with arbitrary high-order finite-difference Maxwell solver

    DOE PAGES

    Vincenti, H.; Vay, J. -L.

    2015-11-22

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary-order Maxwell solvers with domain decomposition techniques that may, under some conditions, involve stencil truncations at subdomain boundaries, resulting in small spurious errors that eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional, arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the overall accuracy of the simulation.
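
    The accuracy gain from higher-order stencils that motivates such solvers can be illustrated with a toy check (this is not the paper's analysis): compare 2nd- and 4th-order centered first derivatives of a plane-wave component sin(kx) against the exact k·cos(kx).

    ```python
    # Illustrative comparison of 2nd- vs 4th-order centered finite-difference
    # first derivatives on sin(kx); exact derivative is k*cos(kx).
    import math

    def d1_order2(f, x, h):
        return (f(x + h) - f(x - h)) / (2 * h)

    def d1_order4(f, x, h):
        return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

    k, h, x = 2.0, 0.05, 0.3
    f = lambda x: math.sin(k * x)
    exact = k * math.cos(k * x)
    err2 = abs(d1_order2(f, x, h) - exact)
    err4 = abs(d1_order4(f, x, h) - exact)
    print(err2, err4)
    ```

    With kh = 0.1 the 2nd-order error scales as (kh)² and the 4th-order error as (kh)⁴, so the wider stencil is orders of magnitude more accurate per grid point, and truncating it at a subdomain boundary (the situation analyzed above) locally breaks that accuracy.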

  20. Usefulness of Analytical Research: Rethinking Analytical R&D&T Strategies.

    PubMed

    Valcárcel, Miguel

    2017-11-07

    This Perspective is intended to help foster true innovation in Research & Development & Transfer (R&D&T) in Analytical Chemistry in the form of advances that are primarily useful for analytical purposes rather than solely for publishing. Devising effective means to strengthen the crucial contribution of Analytical Chemistry to progress in Chemistry, Science & Technology, and Society requires carefully examining the present status of our discipline and also identifying internal and external driving forces with a potential adverse impact on its development. The diagnostic process should be followed by administration of an effective therapy and supported by adoption of a theragnostic strategy if Analytical Chemistry is to enjoy a better future.

  1. Big Data Analytics for a Smart Green Infrastructure Strategy

    NASA Astrophysics Data System (ADS)

    Barrile, Vincenzo; Bonfa, Stefano; Bilotta, Giuliana

    2017-08-01

    As is well known, Big Data is a term for data sets so large or complex that traditional data processing applications are insufficient to process them. The term "Big Data" often refers to the use of predictive analytics, user behavior analytics, or other advanced data analytics methods that extract value from data, and rarely to a particular size of data set. This is especially true for the huge amount of Earth Observation data that the satellites constantly orbiting the Earth transmit daily.

  2. Solar dynamic power for the Space Station

    NASA Technical Reports Server (NTRS)

    Archer, J. S.; Diamant, E. S.

    1986-01-01

    This paper describes a computer code that provides a significant advance in the systems analysis capabilities for solar dynamic power modules. While the code can be used to advantage in the preliminary analysis of terrestrial solar dynamic modules, its real value lies in the adaptations that make it particularly useful for the conceptualization of optimized power modules for space applications. In particular, as illustrated in the paper, the code can be used to establish optimum values of concentrator diameter, concentrator surface roughness, concentrator rim angle and receiver aperture corresponding to the main heat cycle options - Organic Rankine and Brayton - and for certain receiver design options. The code can also be used to establish system sizing margins to account for the loss of reflectivity in orbit or the seasonal variation of insolation. By simulating the interactions among the major components of a solar dynamic module and through simplified formulations of the major thermal-optic-thermodynamic interactions, the code adds a powerful, efficient and economic analytical tool to the repertory of techniques available for the design of advanced space power systems.
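
    The sizing-margin idea can be sketched as a back-of-envelope calculation (the efficiencies and power level below are illustrative assumptions, not values from the paper): the concentrator diameter needed for a target electrical output grows as the reflectivity degrades in orbit, so end-of-life reflectivity drives the sizing.

    ```python
    # Back-of-envelope concentrator sizing with a reflectivity-loss margin.
    # All numeric values are illustrative assumptions.
    import math

    def concentrator_diameter(p_elec, insolation, reflectivity, cycle_eff):
        """Diameter (m) such that insolation * area * reflectivity * cycle_eff
        delivers the required electrical power p_elec (W)."""
        area = p_elec / (insolation * reflectivity * cycle_eff)
        return math.sqrt(4.0 * area / math.pi)

    S = 1371.0                                            # solar constant, W/m^2
    d_bol = concentrator_diameter(25e3, S, 0.90, 0.30)    # beginning of life
    d_eol = concentrator_diameter(25e3, S, 0.82, 0.30)    # degraded reflectivity
    print(d_bol, d_eol)
    ```

    Sizing to the end-of-life reflectivity is what builds the margin the abstract refers to; the full code trades this against rim angle, surface roughness, and receiver aperture simultaneously.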

  3. Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Ottenstein, Laura; Douglas, Donya

    2008-01-01

    This paper presents the development of the Thermal Loop experiment under NASA's New Millennium Program Space Technology 8 (ST8) Project. The Thermal Loop experiment was originally planned for validating in space an advanced heat transport system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers. Details of the thermal loop concept, technical advances and benefits, Level 1 requirements and the technology validation approach are described. An MLHP breadboard has been built and tested in the laboratory and thermal vacuum environments, and has demonstrated excellent performance that met or exceeded the design requirements. The MLHP retains all features of state-of-the-art loop heat pipes and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. In addition, an analytical model has been developed to simulate the steady state and transient operation of the MLHP, and the model predictions agreed very well with experimental results. A protoflight MLHP has been built and is being tested in a thermal vacuum chamber to validate its performance and technical readiness for a flight experiment.

  4. Recent advances in the analysis of behavioural organization and interpretation as indicators of animal welfare

    PubMed Central

    Asher, Lucy; Collins, Lisa M.; Ortiz-Pelaez, Angel; Drewe, Julian A.; Nicol, Christine J.; Pfeiffer, Dirk U.

    2009-01-01

    While the incorporation of mathematical and engineering methods has greatly advanced in other areas of the life sciences, they have been under-utilized in the field of animal welfare. Exceptions are beginning to emerge and share a common motivation to quantify ‘hidden’ aspects in the structure of the behaviour of an individual, or group of animals. Such analyses have the potential to quantify behavioural markers of pain and stress and quantify abnormal behaviour objectively. This review seeks to explore the scope of such analytical methods as behavioural indicators of welfare. We outline four classes of analyses that can be used to quantify aspects of behavioural organization. The underlying principles, possible applications and limitations are described for: fractal analysis, temporal methods, social network analysis, and agent-based modelling and simulation. We hope to encourage further application of analyses of behavioural organization by highlighting potential applications in the assessment of animal welfare, and increasing awareness of the scope for the development of new mathematical methods in this area. PMID:19740922

  5. Nanoscale surface analysis on second generation advanced high strength steel after hot dip galvanizing.

    PubMed

    Arndt, M; Duchoslav, J; Preis, K; Samek, L; Stifter, D

    2013-09-01

    Second-generation advanced high strength steel is a promising material of choice for modern automotive structural parts because of its outstanding maximal elongation and tensile strength. Nonetheless, corrosion protection for this material is still lacking because cost-efficient hot dip galvanizing cannot be applied. The reason for the insufficient coatability with zinc is the segregation of manganese to the surface during annealing and the formation of manganese oxides prior to coating. This work analyses the structure and chemical composition of the surface oxides on so-called nano-TWIP (twinning induced plasticity) steel at the nanoscopic scale after hot dip galvanizing in a simulator; the analytical methods employed comprise scanning Auger electron spectroscopy (SAES), energy dispersive X-ray spectroscopy (EDX), and focused ion beam (FIB) milling for cross-section preparation. The combination of these methods yielded detailed chemical images, providing a better understanding of exactly which processes occur on the surface of this novel kind of steel and of how galvanic protection might be promoted for this material system in the future.

  6. New test techniques and analytical procedures for understanding the behavior of advanced propellers

    NASA Technical Reports Server (NTRS)

    Stefko, G. L.; Bober, L. J.; Neumann, H. E.

    1983-01-01

    Analytical procedures and experimental techniques were developed to improve the capability to design advanced high speed propellers. Some results from the propeller lifting line and lifting surface aerodynamic analysis codes are compared with propeller force data, probe data and laser velocimeter data. In general, the code comparisons with data indicate good qualitative agreement. A rotating propeller force balance demonstrated good accuracy and reduced test time by 50 percent. Results from three propeller flow visualization techniques are shown which illustrate some of the physical phenomena occurring on these propellers.

  7. Contributions of Analytical Chemistry to the Clinical Laboratory.

    ERIC Educational Resources Information Center

    Skogerboe, Kristen J.

    1988-01-01

    Highlights several analytical techniques that are being used in state-of-the-art clinical labs. Illustrates how other advances in instrumentation may contribute to clinical chemistry in the future. Topics include: biosensors, polarization spectroscopy, chemiluminescence, fluorescence, photothermal deflection, and chromatography in clinical…

  8. Analytical technique characterizes all trace contaminants in water

    NASA Technical Reports Server (NTRS)

    Foster, J. N.; Lysyj, I.; Nelson, K. H.

    1967-01-01

    Properly programmed combination of advanced chemical and physical analytical techniques characterize critically all trace contaminants in both the potable and waste water from the Apollo Command Module. This methodology can also be applied to the investigation of the source of water pollution.

  9. The World Spatiotemporal Analytics and Mapping Project (WSTAMP): Further Progress in Discovering, Exploring, and Mapping Spatiotemporal Patterns Across the World's Largest Open Source Data Sets

    NASA Astrophysics Data System (ADS)

    Piburn, J.; Stewart, R.; Myers, A.; Sorokine, A.; Axley, E.; Anderson, D.; Burdette, J.; Biddle, C.; Hohl, A.; Eberle, R.; Kaufman, J.; Morton, A.

    2017-10-01

    Spatiotemporal (ST) analytics applied to major data sources such as the World Bank and World Health Organization has shown tremendous value in shedding light on the evolution of cultural, health, economic, and geopolitical landscapes on a global level. WSTAMP engages this opportunity by situating analysts, data, and analytics together within a visually rich and computationally rigorous online analysis environment. Since introducing WSTAMP at the First International Workshop on Spatiotemporal Computing, several transformative advances have occurred. Collaboration with human-computer interaction experts led to a complete interface redesign that deeply immerses the analyst within a ST context, significantly increases visual and textual content, provides navigational crosswalks for attribute discovery, substantially reduces mouse and keyboard actions, and supports user data uploads. Secondly, the database has been expanded to include over 16,000 attributes, 50 years of time, and 200+ nation states, and redesigned to support non-annual, non-national, city, and interaction data. Finally, two new analytics are implemented for analyzing large portfolios of multi-attribute data and measuring the behavioral stability of regions along different dimensions. These advances required substantial new approaches in design, algorithmic innovations, and increased computational efficiency. We report on these advances and describe how others may freely access the tool.

  10. How can we probe the atom mass currents induced by synthetic gauge fields?

    NASA Astrophysics Data System (ADS)

    Paramekanti, Arun; Killi, Matthew; Trotzky, Stefan

    2013-05-01

    Ultracold atomic fermions and bosons in an optical lattice can have quantum ground states which support equilibrium currents in the presence of synthetic magnetic fields or spin orbit coupling. As a tool to uncover these mass currents, we propose using an anisotropic quantum quench of the optical lattice which dynamically converts the current patterns into measurable density patterns. Using analytical calculations and numerical simulations, we show that this scheme can probe diverse equilibrium bulk current patterns in Bose superfluids and Fermi fluids induced by synthetic magnetic fields, as well as detect the chiral edge currents in topological states of atomic matter such as quantum Hall and quantum spin Hall insulators. This work is supported by NSERC of Canada and the Canadian Institute for Advanced Research.

  11. Structural controllability of unidirectional bipartite networks

    NASA Astrophysics Data System (ADS)

    Nacher, Jose C.; Akutsu, Tatsuya

    2013-04-01

    The interactions between fundamental life molecules, people and social organisations build complex architectures that often result in undesired behaviours. Despite all of the advances made in our understanding of network structures over the past decade, similar progress has not been achieved in the controllability of real-world networks. In particular, an analytical framework to address the controllability of bipartite networks is still absent. Here, we present a dominating set (DS)-based approach to bipartite network controllability that identifies the topologies that are relatively easy to control with the minimum number of driver nodes. Our theoretical calculations, assisted by computer simulations and an evaluation of real-world networks offer a promising framework to control unidirectional bipartite networks. Our analysis should open a new approach to reverting the undesired behaviours in unidirectional bipartite networks at will.
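
    The dominating-set idea behind this controllability framework can be illustrated with a simple greedy heuristic (a standard approximation, not the paper's exact algorithm): every node must either be a driver node (in the set) or adjacent to one. The graph below is illustrative.

    ```python
    # Greedy minimum-dominating-set heuristic on a small bipartite graph,
    # sketching the DS-based controllability idea: nodes in the set act as
    # driver nodes that dominate all others. Graph and names are made up.
    def greedy_dominating_set(adj):
        """adj: dict node -> set of neighbours (undirected)."""
        undominated = set(adj)
        ds = set()
        while undominated:
            # Pick the node covering the most still-undominated nodes
            best = max(adj, key=lambda v: len((adj[v] | {v}) & undominated))
            ds.add(best)
            undominated -= adj[best] | {best}
        return ds

    # Bipartite graph: top nodes t1..t3 linked to bottom nodes b1..b4
    adj = {
        "t1": {"b1", "b2"}, "t2": {"b2", "b3"}, "t3": {"b3", "b4"},
        "b1": {"t1"}, "b2": {"t1", "t2"}, "b3": {"t2", "t3"}, "b4": {"t3"},
    }
    ds = greedy_dominating_set(adj)
    print(ds)
    ```

    The greedy rule gives a logarithmic approximation guarantee; identifying topologies whose minimum dominating set is small is what makes a bipartite network "easy to control" in the sense above.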

  12. Nonlinear dynamics of mini-satellite respinup by weak internal controllable torques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somov, Yevgeny, E-mail: e-somov@mail.ru

    Contemporary space engineering has posed a new problem for theoretical mechanics and motion control theory: the directed respinup of a spacecraft by weak, restricted internal control forces. The paper presents some results on this problem, which is highly relevant to the energy supply of information mini-satellites (for communication, geodesy, and radio- and opto-electronic observation of the Earth, among others) with electro-reaction plasma thrusters and a gyro moment cluster based on reaction wheels or control moment gyros. The solution achieved is based on methods for the synthesis of nonlinear robust control and on a rigorous analytical proof of the required spacecraft rotation stability by the Lyapunov function method. These results were verified by computer simulation of strongly nonlinear oscillatory processes during respinup of a flexible spacecraft.

  13. Initial alignment method for free space optics laser beam

    NASA Astrophysics Data System (ADS)

    Shimada, Yuta; Tashiro, Yuki; Izumi, Kiyotaka; Yoshida, Koichi; Tsujimura, Takeshi

    2016-08-01

    The authors have newly proposed and constructed an active free-space optics transmission system. It is equipped with a motor-driven laser-emitting mechanism and positioning photodiodes, and it transmits a collimated thin laser beam and accurately steers the laser beam direction. The laser beam must be brought within the sensing range of the receiver before laser-beam tracking control can begin. This paper studies a method of estimating the laser arrival point for initial laser beam alignment. Distributed photodiodes detect the laser luminescence at their respective positions, and the optical axis of the laser beam is estimated analytically based on Gaussian beam optics. Computer simulation evaluates the accuracy of the proposed estimation method, and the results show that it can guide the laser beam to a distant receiver.
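
    The Gaussian-beam estimate can be sketched in one dimension (an illustrative reduction, not the paper's full method): since log I(x) = log I₀ − 2(x − x₀)²/w² is a parabola in the diode position x, three photodiode readings determine the beam centre x₀ in closed form. All values below are simulated.

    ```python
    # Estimate a Gaussian beam's centre from three photodiode readings by
    # fitting a parabola to log-intensity vs position. Illustrative values.
    import math

    def beam_center_1d(readings):
        """readings: three (x, intensity) pairs; returns estimated x0."""
        (x1, i1), (x2, i2), (x3, i3) = readings
        y1, y2, y3 = math.log(i1), math.log(i2), math.log(i3)
        # Fit y = a x^2 + b x + c through the three points; x0 = -b / (2a)
        denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
        a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
        b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
        return -b / (2 * a)

    # Simulated diode readings for a beam centred at x0 = 1.3, waist w = 2.0
    x0, w, i0 = 1.3, 2.0, 5.0
    diodes = [(x, i0 * math.exp(-2 * (x - x0) ** 2 / w**2))
              for x in (-1.0, 0.0, 2.0)]
    print(beam_center_1d(diodes))
    ```

    With noiseless readings the quadratic fit recovers the centre exactly; with real diode noise, more diodes and a least-squares fit would play the same role.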

  14. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1982-01-01

    Models, measures, and techniques for evaluating the effectiveness of aircraft computing systems were developed. By "effectiveness" in this context we mean the extent to which the user, i.e., a commercial air carrier, may expect to benefit from the computational tasks accomplished by a computing system in the environment of an advanced commercial aircraft. Thus, the concept of effectiveness involves aspects of system performance, reliability, and worth (value, benefit) which are appropriately integrated in the process of evaluating system effectiveness. Specifically, the primary objectives are: the development of system models that provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer.

  15. Step to improve neural cryptography against flipping attacks.

    PubMed

    Zhou, Jiantao; Xu, Qinzhen; Pei, Wenjiang; He, Zhenya; Szu, Harold

    2004-12-01

    Synchronization of neural networks by mutual learning has been demonstrated to be possible for constructing a key exchange protocol over a public channel. However, the neural cryptography schemes presented so far are not fully secure under the regular flipping attack (RFA) and are completely insecure under the majority flipping attack (MFA). We propose a scheme that splits the mutual information and the training process to improve the security of the neural cryptosystem against flipping attacks. Both analytical and simulation results show that the success probability of RFA on the proposed scheme can be decreased to the level of a brute force attack (BFA), while the success probability of MFA still decays exponentially with the weights' level L. The synchronization time of the parties also remains polynomial in L. Moreover, we analyze the security under an advanced flipping attack.
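    Neural key exchange of this kind is usually built on two tree parity machines that synchronize by mutual learning. The sketch below shows only the plain (unsplit) protocol, not the authors' split-information variant, and the parameters K, N, L are illustrative:

```python
import numpy as np

K, N, L = 3, 10, 3  # hidden units, inputs per unit, weight bound (illustrative)

def tpm_output(W, X):
    """Hidden-unit signs and overall parity output of a tree parity machine."""
    sigma = np.sign((W * X).sum(axis=1))
    sigma[sigma == 0] = 1                      # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian_update(W, X, sigma, tau):
    """Update only hidden units that agree with the output; clip weights to [-L, L]."""
    for k in range(K):
        if sigma[k] == tau:
            W[k] = np.clip(W[k] + tau * X[k], -L, L)

rng = np.random.default_rng(0)
WA = rng.integers(-L, L + 1, size=(K, N))      # party A's secret weights
WB = rng.integers(-L, L + 1, size=(K, N))      # party B's secret weights

steps = 0
while not np.array_equal(WA, WB) and steps < 50000:
    X = rng.choice([-1, 1], size=(K, N))       # public random input
    sA, tA = tpm_output(WA, X)
    sB, tB = tpm_output(WB, X)
    if tA == tB:                               # only the parity bits are exchanged
        hebbian_update(WA, X, sA, tA)
        hebbian_update(WB, X, sB, tB)
    steps += 1
print("synchronized:", np.array_equal(WA, WB), "after", steps, "rounds")
```

    Once synchronized, the identical weight matrices serve as the shared key; the flipping attacks in the abstract target an eavesdropper's ability to imitate this update process.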

  16. Modeling and advanced sliding mode controls of crawler cranes considering wire rope elasticity and complicated operations

    NASA Astrophysics Data System (ADS)

    Tuan, Le Anh; Lee, Soon-Geul

    2018-03-01

    In this study, a new mathematical model of crawler cranes is developed for heavy working conditions, with payload-lifting and boom-hoisting motions activated simultaneously. The system model is built with full consideration of wind disturbances, geometrical nonlinearities, and the cable elasticities of cargo lifting and boom luffing. On the basis of this dynamic model, three versions of sliding mode control are analyzed and designed to control five system outputs with only two inputs. The effectiveness of the controllers in complicated operations is analyzed using analytical investigation and numerical simulation. Results indicate the effectiveness of the control algorithms and the proposed dynamic model. The control algorithms asymptotically stabilize the system with finite-time convergence, remaining robust amid disturbances and parametric uncertainties.
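    The paper's controllers act on a five-output crane model with two inputs; as a self-contained illustration of the sliding mode idea alone, here is a first-order sliding mode controller for a double integrator with a bounded matched disturbance (the gains, disturbance, and initial state are invented):

```python
import math

# Double integrator x'' = u + d(t); sliding surface s = v + lam*x (regulate x -> 0).
lam, k, dt, T = 1.0, 2.0, 1e-3, 10.0
x, v = 1.0, 0.0                       # initial error and error rate

def sgn(s):
    return 1.0 if s > 0 else (-1.0 if s < 0 else 0.0)

t = 0.0
while t < T:
    d = 0.3 * math.sin(2.0 * t)       # bounded matched disturbance, |d| <= 0.3 < k
    s = v + lam * x
    u = -lam * v - k * sgn(s)         # gives s' = -k*sgn(s) + d: finite-time reaching
    v += (u + d) * dt                 # explicit Euler integration
    x += v * dt
    t += dt
print(abs(x), abs(v))                 # both settle near zero despite the disturbance
```

    The switching gain k need only exceed the disturbance bound; this robustness to matched uncertainty is what motivates sliding mode designs for cranes.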

  17. Advanced reservoir characterization and evaluation of CO{sub 2} gravity drainage in the naturally fractured Spraberry Trend Area, Class III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heckman, Tracy; Schechter, David S.

    2000-04-11

    The overall goal of this project was to assess the economic feasibility of CO{sub 2} flooding the naturally fractured Spraberry Trend Area in West Texas. This objective was accomplished by conducting research in four areas: (1) extensive characterization of the reservoirs, (2) experimental studies of crude oil/brine/rock (COBR) interaction in the reservoirs, (3) analytical and numerical simulation of Spraberry reservoirs, and (4) experimental investigations on CO{sub 2} gravity drainage in Spraberry whole cores. This report provides results of the fourth year of the five-year project for each of the four areas, including a status report of field activities leading up to injection of CO{sub 2}.

  18. Development of comprehensive numerical schemes for predicting evaporating gas-droplets flow processes of a liquid-fueled combustor

    NASA Technical Reports Server (NTRS)

    Chen, C. P.

    1990-01-01

    An existing Computational Fluid Dynamics code for simulating complex turbulent flows inside a liquid rocket combustion chamber was validated and further developed. The Advanced Rocket Injector/Combustor Code (ARICC) is simplified and validated against benchmark flow situations for laminar and turbulent flows. The numerical method used in the ARICC code is re-examined for incompressible flow calculations. For turbulent flows, both the subgrid and the two-equation k-epsilon turbulence models are studied. Cases tested include the idealized Burgers' equation in complex geometries and boundaries, a laminar pipe flow, a high Reynolds number turbulent flow, and a confined coaxial jet with recirculation. The accuracy of the algorithm is examined by comparing the numerical results with analytical solutions as well as experimental data at different grid sizes.

  19. Ionization waves of arbitrary velocity driven by a flying focus

    NASA Astrophysics Data System (ADS)

    Palastro, J. P.; Turnbull, D.; Bahk, S.-W.; Follett, R. K.; Shaw, J. L.; Haberberger, D.; Bromage, J.; Froula, D. H.

    2018-03-01

    A chirped laser pulse focused by a chromatic lens exhibits a dynamic, or flying, focus in which the trajectory of the peak intensity decouples from the group velocity. In a medium, the flying focus can trigger an ionization front that follows this trajectory. By adjusting the chirp, the ionization front can be made to travel at an arbitrary velocity along the optical axis. We present analytical calculations and simulations describing the propagation of the flying focus pulse, the self-similar form of its intensity profile, and ionization wave formation. The ability to control the speed of the ionization wave and, in conjunction, mitigate plasma refraction has the potential to advance several laser-based applications, including Raman amplification, photon acceleration, high-order-harmonic generation, and THz generation.

  20. Particle rings and astrophysical accretion discs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lovelace, R. V. E., E-mail: RVL1@cornell.edu; Romanova, M. M., E-mail: romanova@astro.cornell.edu

    Norman Rostoker had a wide range of interests and significant impact on the plasma physics research at Cornell during the time he was a Cornell professor. His interests ranged from the theory of energetic electron and ion beams and strong particle rings to the related topic of astrophysical accretion discs. We outline some of the topics related to rings and discs, including the Rossby wave instability, which leads to the formation of anticyclonic vortices in astrophysical discs. These vortices are regions of high pressure that act to trap dust particles, which in turn may facilitate planetesimal growth in protoplanetary disks and could be important for planet formation. Analytical methods and global 3D magneto-hydrodynamic simulations have led to rapid advances in our understanding of discs in recent years.

  1. Advanced study of video signal processing in low signal to noise environments

    NASA Technical Reports Server (NTRS)

    Carden, F.; Henry, R.

    1972-01-01

    A nonlinear analysis of a multifilter phase-locked loop (MPLL) using the method of harmonic balance is presented. The particular MPLL considered has a low-pass filter and a band-pass filter in parallel. An analytic expression for the relationship between the input signal phase deviation and the phase error is determined for sinusoidal FM in the absence of noise. The expression is used to determine bounds on the proper operating region for the MPLL and to investigate the jump phenomenon previously observed. From these results, the proper modulation index, modulating frequency, etc., for the design of an MPLL are determined. Data for the loop unlock boundary obtained from the theoretical expression are compared to data obtained from analog computer simulations of the MPLL.
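    The jump phenomenon and unlock boundary are properties of the specific multifilter loop; the basic phase-error dynamics under sinusoidal FM can still be illustrated with a generic first-order PLL. The loop gain and modulation parameters below are made up, and this is not the MPLL of the paper:

```python
import math

# Generic first-order PLL: the phase error phi obeys
#   phi' = d(theta_i)/dt - K*sin(phi)   for input phase theta_i = beta*sin(wm*t).
K, beta, wm = 100.0, 5.0, 10.0    # loop gain, modulation index, modulating frequency
dt, T = 1e-4, 5.0

phi, t, peak = 0.0, 0.0, 0.0
while t < T:
    phi += (beta * wm * math.cos(wm * t) - K * math.sin(phi)) * dt
    t += dt
    if t > 1.0:                   # record the peak phase error after the transient
        peak = max(peak, abs(phi))
print(peak)                       # stays well below pi/2: the loop holds lock
```

    Raising beta*wm toward K pushes the peak phase error toward pi/2, beyond which the loop loses lock; this is the kind of boundary the harmonic-balance expression predicts analytically.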

  2. Surface Wave Cloak from Graded Refractive Index Nanocomposites

    PubMed Central

    La Spada, L.; McManus, T. M.; Dyke, A.; Haq, S.; Zhang, L.; Cheng, Q.; Hao, Y.

    2016-01-01

    Recently, a great deal of interest has re-emerged in the possibility of manipulating surface waves, particularly towards the THz and optical regimes. Both Transformation Optics (TO) and metamaterials have been regarded as key enablers for such applications in applied electromagnetics. In this paper, we experimentally demonstrate for the first time a dielectric surface wave cloak made from engineered gradient index materials, illustrating the possibility of using nanocomposites to control surface wave propagation through advanced additive manufacturing. The device is designed analytically and validated through numerical simulations and measurements, showing good agreement and performance as an effective surface wave cloak. The underlying design approach has much wider applications, spanning from microwaves to optics for the control of surface plasmon polaritons (SPPs) and the radiation of nanoantennas. PMID:27416815

  3. Parachute-deployment-parameter identification based on an analytical simulation of Viking BLDT AV-4

    NASA Technical Reports Server (NTRS)

    Talay, T. A.

    1974-01-01

    A six-degree-of-freedom analytical simulation of parachute deployment dynamics developed at the Langley Research Center is presented. A comparison study was made using flight results from the Viking Balloon Launched Decelerator Test (BLDT) AV-4. Since there are significant voids in the knowledge of vehicle and decelerator aerodynamics and suspension system physical properties, a set of deployment-parameter inputs has been defined which may be used as a basis for future studies of parachute deployment dynamics. The study indicates the analytical model is sufficiently sophisticated to investigate parachute deployment dynamics with reasonable accuracy.

  4. Poster - Thur Eve - 68: Evaluation and analytical comparison of different 2D and 3D treatment planning systems using dosimetry in anthropomorphic phantom.

    PubMed

    Khosravi, H R; Nodehi, Mr Golrokh; Asnaashari, Kh; Mahdavi, S R; Shirazi, A R; Gholami, S

    2012-07-01

    The aim of this study was to evaluate and analytically compare the different calculation algorithms applied in our country's radiotherapy centers, based on the methodology developed by the IAEA for treatment planning system (TPS) commissioning (IAEA TECDOC-1583). A thorax anthropomorphic phantom (002LFC, CIRS Inc.) was used to perform 7 tests that simulate the whole chain of external beam treatment planning. Doses were measured with ion chambers, and the deviations between measured and TPS-calculated doses were reported. This methodology, which employs the same phantom and the same setup test cases, was tested in 4 different hospitals using 5 different algorithms/inhomogeneity correction methods implemented in different TPSs. The algorithms in this study were divided into two groups: correction-based and model-based algorithms. A total of 84 clinical test case datasets for different energies and calculation algorithms were produced. The differences at inhomogeneity points with low density (lung) and high density (bone) decreased meaningfully with more advanced algorithms. The number of deviations outside the agreement criteria increased with beam energy and decreased with the sophistication of the TPS calculation algorithm. Large deviations were seen in some correction-based algorithms, so sophisticated algorithms would be preferred in clinical practice, especially for calculations in inhomogeneous media. Use of model-based algorithms with lateral transport calculation is recommended. Some systematic errors revealed during this study show the necessity of performing periodic audits of TPSs in radiotherapy centers. © 2012 American Association of Physicists in Medicine.

  5. Improving the trust in results of numerical simulations and scientific data analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappello, Franck; Constantinescu, Emil; Hovland, Paul

    This white paper investigates several key aspects of the trust that a user can give to the results of numerical simulations and scientific data analytics. In this document, the notion of trust is related to the integrity of numerical simulations and data analytics applications. This white paper complements the DOE ASCR report on Cybersecurity for Scientific Computing Integrity by (1) exploring the sources of trust loss; (2) reviewing the definitions of trust in several areas; (3) providing numerous cases of result alteration, some of them leading to catastrophic failures; (4) examining the current notion of trust in numerical simulation and scientific data analytics; (5) providing a gap analysis; and (6) suggesting two important research directions and their respective research topics. To simplify the presentation without loss of generality, we consider that trust in results can be lost (or the results’ integrity impaired) because of any form of corruption happening during the execution of the numerical simulation or the data analytics application. In general, the sources of such corruption are threefold: errors, bugs, and attacks. Current applications are already using techniques to deal with different types of corruption. However, not all potential corruptions are covered by these techniques. We firmly believe that the current level of trust that a user has in the results is at least partially founded on ignorance of this issue or the hope that no undetected corruptions will occur during the execution. This white paper explores the notion of trust and suggests recommendations for developing a more scientifically grounded notion of trust in numerical simulation and scientific data analytics. We first formulate the problem and show that it goes beyond previous questions regarding the quality of results such as V&V, uncertainty quantification, and data assimilation.
We then explore the complexity of this difficult problem, and we sketch complementary general approaches to address it. This paper does not focus on the trust that the execution will actually complete. The product of simulation or of data analytics executions is the final element of a potentially long chain of transformations, where each stage has the potential to introduce harmful corruptions. These corruptions may produce results that deviate from the user-expected accuracy without notifying the user of this deviation. There are many potential sources of corruption before and during the execution; consequently, in this white paper we do not focus on the protection of the end result after the execution.

  6. Alternative Stable States, Coral Reefs, and Smooth Dynamics with a Kick.

    PubMed

    Ippolito, Stephen; Naudot, Vincent; Noonburg, Erik G

    2016-03-01

    We consider a computer simulation, which was found to be faithful to time series data for Caribbean coral reefs, and an analytical model to help understand the dynamics of the simulation. The analytical model is a system of ordinary differential equations (ODE), and the authors claim this model demonstrates the existence of alternative stable states. Evidence for an alternative stable state should involve a sudden shift in coral and macroalgae populations while the grazing rate remains constant; the results of such shifts, however, are often confounded by changes in grazing rate. Although the ODE suggest alternative stable states, the ODE need modification to explicitly account for shifts or discrete events such as hurricanes. The goal of this paper is to study the simulation dynamics through a simplified analytical representation. We proceed by modifying the original analytical model through incorporating discrete changes into the ODE. We then analyze the resulting dynamics and their bifurcations with respect to changes in grazing rate and hurricane frequency. In particular, a "kick" enabling the ODE to consider impulse events is added. Beyond adding a "kick," we employ the grazing function that is suggested by the simulation. The extended model was fit to the simulation data to support its use and predicts the existence of cycles depending nonlinearly on grazing rate and hurricane frequency. These cycles may bring new insights to considerations of reef health, restoration, and dynamics.
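    The "smooth dynamics with a kick" idea can be sketched by integrating a smooth ODE between discrete events and applying an instantaneous state jump at each hurricane. The one-variable logistic model and all numbers below are stand-ins for the paper's coral-macroalgae system:

```python
# Logistic growth between storms, multiplicative "kick" at each hurricane.
r, h, period = 0.5, 0.6, 8.0      # growth rate (1/yr), kick fraction, storm interval (yr)
dt, T = 0.01, 200.0
C, t, next_kick = 0.5, 0.0, 8.0   # coral cover fraction, time, first storm time
traj = []
while t < T:
    C += r * C * (1.0 - C) * dt   # smooth ODE step (explicit Euler)
    t += dt
    if t >= next_kick:            # discrete event: hurricane removes a fixed fraction
        C *= (1.0 - h)
        next_kick += period
    traj.append(C)
print(min(traj), max(traj))       # the system settles into a kick-driven cycle
```

    Varying the kick size h and interval relative to the recovery rate r changes whether the cover cycles at high values or is driven persistently low, which is the qualitative bifurcation question the paper studies.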

  7. Overview of Experimental Capabilities - Supersonics

    NASA Technical Reports Server (NTRS)

    Banks, Daniel W.

    2007-01-01

    This viewgraph presentation gives an overview of experimental capabilities applicable to the area of supersonic research. The contents include: 1) EC Objectives; 2) SUP.11: Elements; 3) NRA; 4) Advanced Flight Simulator Flexible Aircraft Simulation Studies; 5) Advanced Flight Simulator Flying Qualities Guideline Development for Flexible Supersonic Transport Aircraft; 6) Advanced Flight Simulator Rigid/Flex Flight Control; 7) Advanced Flight Simulator Rapid Sim Model Exchange; 8) Flight Test Capabilities Advanced In-Flight Infrared (IR) Thermography; 9) Flight Test Capabilities In-Flight Schlieren; 10) Flight Test Capabilities CLIP Flow Calibration; 11) Flight Test Capabilities PFTF Flowfield Survey; 12) Ground Test Capabilities Laser-Induced Thermal Acoustics (LITA); 13) Ground Test Capabilities Doppler Global Velocimetry (DGV); and 14) Ground Test Capabilities EDL Optical Measurement Capability (PIV) for Rigid/Flexible Decelerator Models.

  8. Analytical and numerical simulation of the steady-state hydrologic effects of mining aggregate in hypothetical sand-and-gravel and fractured crystalline-rock aquifers

    USGS Publications Warehouse

    Arnold, L.R.; Langer, William H.; Paschke, Suzanne Smith

    2003-01-01

    Analytical solutions and numerical models were used to predict the extent of steady-state drawdown caused by mining of aggregate below the water table in hypothetical sand-and-gravel and fractured crystalline-rock aquifers representative of hydrogeologic settings in the Front Range area of Colorado. Analytical solutions were used to predict the extent of drawdown under a wide range of hydrologic and mining conditions that assume aquifer homogeneity, isotropy, and infinite extent. Numerical ground-water flow models were used to estimate the extent of drawdown under conditions that consider heterogeneity, anisotropy, and hydrologic boundaries and to simulate complex or unusual conditions not readily simulated using analytical solutions. Analytical simulations indicated that the drawdown radius (or distance) of influence increased as horizontal hydraulic conductivity of the aquifer, mine penetration of the water table, and mine radius increased; radius of influence decreased as aquifer recharge increased. Sensitivity analysis of analytical simulations under intermediate conditions in sand-and-gravel and fractured crystalline-rock aquifers indicated that the drawdown radius of influence was most sensitive to mine penetration of the water table and least sensitive to mine radius. Radius of influence was equally sensitive to changes in horizontal hydraulic conductivity and recharge. Numerical simulations of pits in sand-and-gravel aquifers indicated that the area of influence in a vertically anisotropic sand-and-gravel aquifer of medium size was nearly identical to that in an isotropic aquifer of the same size. Simulated area of influence increased as aquifer size increased and aquifer boundaries were farther away from the pit, and simulated drawdown was greater near the pit when aquifer boundaries were close to the pit. Pits simulated as lined with slurry walls caused mounding to occur upgradient from the pits and drawdown to occur downgradient from the pits. 
Pits simulated as refilled with water and undergoing evaporative losses had little hydrologic effect on the aquifer. Numerical sensitivity analyses for simulations of pits in sand-and-gravel aquifers indicated that simulated head was most sensitive to horizontal hydraulic conductivity and the hydraulic conductance of general-head boundaries in the models. Simulated head was less sensitive to riverbed conductance and recharge and relatively insensitive to vertical hydraulic conductivity. Numerical simulations of quarries in fractured crystalline-rock aquifers indicated that the area of influence in a horizontally anisotropic aquifer was elongated in the direction of higher horizontal hydraulic conductivity and shortened in the direction of lower horizontal hydraulic conductivity compared to the area of influence in a homogeneous, isotropic aquifer. Area of influence was larger in an aquifer with ground-water flow in deep, low-permeability fractures than in a homogeneous, isotropic aquifer. Area of influence was larger for a quarry intersected by a hydraulically conductive fault zone and smaller for a quarry intersected by a low-conductivity fault zone. Numerical sensitivity analyses for simulations of quarries in fractured crystalline-rock aquifers indicated that simulated head was most sensitive to variations in recharge and horizontal hydraulic conductivity, had little sensitivity to vertical hydraulic conductivity and drain cells used to simulate valleys, and was relatively insensitive to drain cells used to simulate the quarry.
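For the homogeneous, isotropic, steady-state case, drawdown around a dewatered pit behaves like the classical Thiem solution, s(r) = Q/(2*pi*T) * ln(R/r), which reproduces the reported trend of drawdown growing with pumping and shrinking with distance. The parameter values below are purely illustrative, not the report's Front Range settings:

```python
import math

# Illustrative parameters only (not the report's Front Range settings).
Q = 500.0      # pit dewatering rate, m^3/day
T = 100.0      # aquifer transmissivity, m^2/day
R = 1000.0     # radius of influence, m (drawdown assumed zero at r = R)

def thiem_drawdown(r):
    """Steady-state drawdown (m) at radial distance r (m) from the pit center."""
    return Q / (2.0 * math.pi * T) * math.log(R / r)

for r in (10.0, 100.0, 1000.0):
    print(r, round(thiem_drawdown(r), 2))   # drawdown shrinks logarithmically with distance
```

    The numerical models in the report go beyond this closed form precisely where its assumptions (homogeneity, isotropy, infinite extent) break down.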

  9. An Antarctic research outpost as a model for planetary exploration.

    PubMed

    Andersen, D T; McKay, C P; Wharton, R A; Rummel, J D

    1990-01-01

    During the next 50 years, human civilization may well begin expanding into the solar system. This colonization of extraterrestrial bodies will most likely begin with the establishment of small research outposts on the Moon and/or Mars. In all probability these facilities, designed primarily for conducting exploration and basic science, will have international participation in their crews, logistical support and funding. High fidelity Earth-based simulations of planetary exploration could help prepare for these expensive and complex operations. Antarctica provides one possible venue for such a simulation. The hostile and remote dry valleys of southern Victoria Land offer a valid analog to the Martian environment but are sufficiently accessible to allow routine logistical support and to assure the relative safety of their inhabitants. An Antarctic research outpost designed as a planetary exploration simulation facility would have great potential as a testbed and training site for the operation of future Mars bases and represents a near-term, relatively low-cost alternative to other precursor activities. Antarctica already enjoys an international dimension, an aspect that is more than symbolically appropriate to an international endeavor of unprecedented scientific and social significance--planetary exploration by humans. Potential uses of such a facility include: 1) studying human factors in an isolated environment (including long-term interactions among an international crew); 2) testing emerging technologies (e.g., advanced life support facilities such as a partial bioregenerative life support system, advanced analytical and sample acquisition instrumentation and equipment, etc.); and 3) conducting basic scientific research similar to the research that will be conducted on Mars, while contributing to the planning for human exploration. (Research of this type is already ongoing in Antarctica).

  10. RICH: OPEN-SOURCE HYDRODYNAMIC SIMULATION ON A MOVING VORONOI MESH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yalinewich, Almog; Steinberg, Elad; Sari, Re’em

    2015-02-01

    We present here RICH, a state-of-the-art two-dimensional hydrodynamic code based on Godunov’s method, on an unstructured moving mesh (the acronym stands for Racah Institute Computational Hydrodynamics). This code is largely based on the code AREPO. It differs from AREPO in the interpolation and time-advancement schemes as well as in a novel parallelization scheme based on Voronoi tessellation. Using our code, we study the pros and cons of a moving mesh (in comparison to a static mesh). We also compare its accuracy to other codes. Specifically, we show that our implementation of external sources and our time-advancement scheme are more accurate and robust than AREPO's when the mesh is allowed to move. We performed a parameter study of the cell rounding mechanism (Lloyd iterations) and its effects. We find that in most cases a moving mesh gives better results than a static mesh, but this is not universally true. In the case where matter moves in one direction and a sound wave travels in the other (such that relative to the grid the wave is not moving), a static mesh gives better results than a moving mesh. We perform an analytic analysis for finite difference schemes that reveals that a Lagrangian simulation is better than an Eulerian simulation in the case of a highly supersonic flow. Moreover, we show that Voronoi-based moving mesh schemes suffer from an error, which is resolution independent, due to inconsistencies between the flux calculation and the change in the area of a cell. Our code is publicly available as open source and designed in an object-oriented, user-friendly way that facilitates incorporation of new algorithms and physical processes.
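    The Lagrangian-versus-Eulerian comparison can be seen in the simplest possible setting: first-order upwind advection on a static grid diffuses a profile, while a grid moving with the flow advects it exactly. This toy 1D example (all numbers invented) is far cruder than RICH's Godunov scheme, but it shows the mechanism:

```python
import math

# Advect u(x) with speed c on a periodic unit domain for one full period.
N, c, steps = 100, 1.0, 200
dx = 1.0 / N
dt = 0.5 * dx / c                            # CFL number 0.5
u0 = [math.sin(2.0 * math.pi * i * dx) for i in range(N)]

# Eulerian: first-order upwind on a static grid (numerically diffusive).
u = u0[:]
for _ in range(steps):
    u = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(N)]

# Lagrangian: cells move with the flow, so advection is exact; after one full
# period (c*dt*steps = 1) that is just a circular shift by a whole number of cells.
shift = round(c * dt * steps / dx)
u_lag = [u0[(i - shift) % N] for i in range(N)]

err_eul = max(abs(a - b) for a, b in zip(u, u0))
err_lag = max(abs(a - b) for a, b in zip(u_lag, u0))
print(err_eul, err_lag)   # the static grid damps the wave; the moving grid does not
```

    The faster the bulk flow relative to the features of interest, the larger the Eulerian advection error, which is why a Lagrangian mesh wins for highly supersonic flows.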

  11. Commissioning of a grid-based Boltzmann solver for cervical cancer brachytherapy treatment planning with shielded colpostats.

    PubMed

    Mikell, Justin K; Klopp, Ann H; Price, Michael; Mourtada, Firas

    2013-01-01

    We sought to commission a gynecologic shielded colpostat analytic model provided in a treatment planning system (TPS) library, and we retrospectively report the dosimetric impact of this applicator model in a cohort of patients. A commercial TPS with a grid-based Boltzmann solver (GBBS) was commissioned for (192)Ir high-dose-rate (HDR) brachytherapy for cervical cancer with stainless steel-shielded colpostats. The colpostat analytic model was verified against a radiograph and vendor schematics. MCNPX v2.6 Monte Carlo simulations were performed to compare dose distributions around the applicator in water with the TPS GBBS dose predictions. Retrospectively, the dosimetric impact was assessed over 24 cervical cancer patients' HDR plans. Applicator (TPS ID #AL13122005) shield dimensions were within 0.4 mm of the independent verification. GBBS profiles in planes bisecting the cap around the applicator agreed with Monte Carlo simulations within 2% at most locations; differing screw representations resulted in differences of up to 9%. For the retrospective study, the GBBS doses differed from TG-43 as follows (mean value ± standard deviation [min, max]): International Commission on Radiation Units [ICRU]rectum (-8.4 ± 2.5% [-14.1, -4.1%]), ICRUbladder (-7.2 ± 3.6% [-15.7, -2.1%]), D2cc-rectum (-6.2 ± 2.6% [-11.9, -0.8%]), D2cc-sigmoid (-5.6 ± 2.6% [-9.3, -2.0%]), and D2cc-bladder (-3.4 ± 1.9% [-7.2, -1.1%]). As brachytherapy TPSs implement advanced model-based dose calculations, the analytic applicator models stored in TPSs should be independently validated before clinical use. For this cohort, clinically meaningful differences (>5%) from TG-43 were observed. Accurate dosimetric modeling of shielded applicators may help to refine organ toxicity studies. Copyright © 2013 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  12. An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Zhou, Ning

    With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability are being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode, which will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision for improving wide-area situational awareness in a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide the capabilities needed for better decision support by utilizing high performance computing (HPC) and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.

  13. Monte-Carlo simulation of OCT structural images of human skin using experimental B-scans and voxel based approach to optical properties distribution

    NASA Astrophysics Data System (ADS)

    Frolov, S. V.; Potlov, A. Yu.; Petrov, D. A.; Proskurin, S. G.

    2017-03-01

    A method for reconstructing optical coherence tomography (OCT) structural images using Monte Carlo simulation is described. The biological object is considered as a set of 3D elements (voxels), which allows simulation of media whose structure cannot be described analytically. Each voxel is characterized by its refractive index, anisotropy parameter, and scattering and absorption coefficients. B-scans of the inner structure are used to reconstruct the simulated image instead of an analytical representation of the boundary geometry. The Henyey-Greenstein scattering function, the Beer-Lambert-Bouguer law, and the Fresnel equations are used to describe photon transport. The efficiency of the described technique is checked by comparing simulated and experimentally acquired A-scans.
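    The photon-transport ingredients listed above can be sketched in isolation: sampling the scattering angle from the Henyey-Greenstein phase function and the free path from the Beer-Lambert-Bouguer law. This is a generic Monte Carlo fragment, not the authors' voxel code, and the values of g and mu_t are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
g, mu_t = 0.9, 10.0   # anisotropy factor and total attenuation (1/mm), illustrative

def hg_cos_theta(xi, g):
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    if g == 0.0:
        return 2.0 * xi - 1.0                 # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

def free_path(xi, mu_t):
    """Sample a photon step length from the Beer-Lambert-Bouguer law."""
    return -np.log(xi) / mu_t

n = 200_000
cos_t = np.array([hg_cos_theta(x, g) for x in rng.random(n)])
steps = free_path(rng.random(n), mu_t)
print(cos_t.mean(), steps.mean())   # sample means approach g and 1/mu_t
```

    In a voxelized simulation, each sampled step is additionally clipped at voxel boundaries, where the local coefficients change and the Fresnel equations decide reflection or refraction.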

  14. An Experimental Introduction to Interlaboratory Exercises in Analytical Chemistry

    ERIC Educational Resources Information Center

    Puignou, L.; Llaurado, M.

    2005-01-01

    An experimental exercise on analytical proficiency studies in collaborative trials is proposed. This practical provides students in advanced undergraduate courses in chemistry, pharmacy, and biochemistry, with the opportunity to improve their quality assurance skills. It involves an environmental analysis, determining the concentration of a…

  15. Strategic analytics: towards fully embedding evidence in healthcare decision-making.

    PubMed

    Garay, Jason; Cartagena, Rosario; Esensoy, Ali Vahit; Handa, Kiren; Kane, Eli; Kaw, Neal; Sadat, Somayeh

    2015-01-01

    Cancer Care Ontario (CCO) has implemented multiple information technology solutions and collected health-system data to support its programs. There is now an opportunity to leverage these data and perform advanced end-to-end analytics that inform decisions around improving health-system performance. In 2014, CCO engaged in an extensive assessment of its current data capacity and capability, with the intent to drive increased use of data for evidence-based decision-making. The breadth and volume of data at CCO uniquely position the organization to contribute not only to system-wide operational reporting, but also to more advanced modelling of current and future state system management and planning. In 2012, CCO established a strategic analytics practice to help the agency's programs contextualize and inform key business decisions and to provide support through innovative predictive analytics solutions. This paper describes the organizational structure, services and supporting operations that have enabled progress to date, and discusses the next steps towards the vision of embedding evidence fully into healthcare decision-making. Copyright © 2014 Longwoods Publishing.

  16. VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure.

    PubMed

    Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S

    2014-12-01

    We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.

  17. Development of advanced, continuous mild gasification process for the production of co-products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ness, R.O. Jr.; Aulich, T.R.

    1991-05-01

    The current objective of the University of North Dakota Energy and Environmental Research Center (EERC) mild gasification project is to optimize reaction char and marketable liquids production on a 100-lb/hr scale using Wyodak subbituminous and Indiana No. 3 bituminous coals. Tests performed using the EERC 100-lb/hr process development unit (PDU) include a refractory cure (Test P001), a test using petroleum coke (Test P002), and tests using Wyodak and Indiana coals. The reactor system used for the 11 PDU tests conducted to date consists of a spouted, fluid-bed carbonizer equipped with an on-line condensation train that yields three boiling-point fractions of coal liquids ranging in volatility from about 77°-750°F (25°-400°C). The September-December 1990 quarterly report described reaction conditions and the bulk of the analytical results for Tests P010 and P011. This report describes further P010 and P011 analytical work, including the generation of simulated distillation curves for liquid samples on the basis of sulfur content, using gas chromatography coupled with atomic emission detection (GC/AED) analysis. 13 figs., 3 tabs.

  18. Boeing Smart Rotor Full-scale Wind Tunnel Test Data Report

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi; Hagerty, Brandon; Salazar, Denise

    2016-01-01

    A full-scale helicopter smart material actuated rotor technology (SMART) rotor test was conducted in the USAF National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel at NASA Ames. The SMART rotor system is a five-bladed MD 902 bearingless rotor with active trailing-edge flaps. The flaps are actuated using piezoelectric actuators. Rotor performance, structural loads, and acoustic data were obtained over a wide range of rotor shaft angles of attack, thrust, and airspeeds. The primary test objective was to acquire unique validation data for the high-performance computing analyses developed under the Defense Advanced Research Project Agency (DARPA) Helicopter Quieting Program (HQP). Other research objectives included quantifying the ability of the on-blade flaps to achieve vibration reduction, rotor smoothing, and performance improvements. This data set of rotor performance and structural loads can be used for analytical and experimental comparison studies with other full-scale rotor systems and for analytical validation of computer simulation models. The purpose of this final data report is to document a comprehensive, high-quality data set that includes only data points where the flap was actively controlled and each of the five flaps behaved in a similar manner.

  19. Workplace Skills Taught in a Simulated Analytical Department

    NASA Astrophysics Data System (ADS)

    Sonchik Marine, Susan

    2001-11-01

    Integration of workplace skills into the academic setting is paramount for any chemical technology program. In addition to the expected chemistry content, courses must build proficiency in oral and written communication skills, computer skills, laboratory safety, and logical troubleshooting. Miami University's Chemical Technology II course is set up as a contract analytical laboratory. Students apply the advanced sampling techniques, quality assurance, standard methods, and statistical analyses they have studied. For further integration of workplace skills, weekly "department meetings" are held where the students, as members of the department, report on their work in progress, present completed projects, and share what they have learned and what problems they have encountered. Information is shared between the experienced members of the department and those encountering problems or starting a new project. The instructor, as department manager, makes announcements, reviews company and department status, and assigns work for the coming week. The department members report results to clients in formal reports or in short memos. Factors affecting the success of the "department meeting" approach include the formality of the meeting room, use of an official agenda, the frequency, time, and duration of the meeting, and accountability of the students.

  20. Atomistic models of vacancy-mediated diffusion in silicon

    NASA Astrophysics Data System (ADS)

    Dunham, Scott T.; Wu, Can Dong

    1995-08-01

    Vacancy-mediated diffusion of dopants in silicon is investigated using Monte Carlo simulations of hopping diffusion, as well as analytic approximations based on atomistic considerations. Dopant/vacancy interaction potentials are assumed to extend out to third-nearest neighbor distances, as required for pair diffusion theories. Analysis focusing on the third-nearest neighbor sites as bridging configurations for uncorrelated hops leads to an improved analytic model for vacancy-mediated dopant diffusion. The Monte Carlo simulations of vacancy motion on a doped silicon lattice verify the analytic results for moderate doping levels. For very high doping (≳2×10²⁰ cm⁻³) the simulations show a very rapid increase in pair diffusivity due to interactions of vacancies with more than one dopant atom. This behavior has previously been observed experimentally for group IV and V atoms in silicon [Nylandsted Larsen et al., J. Appl. Phys. 73, 691 (1993)], and the simulations predict both the point of onset and doping dependence of the experimentally observed diffusivity enhancement.
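
    The hopping-diffusion picture in this record can be illustrated with a minimal lattice Monte Carlo sketch. This is an assumption-laden toy: an unbiased nearest-neighbor walk on a simple cubic lattice, with the dopant/vacancy interaction potentials of the paper's model deliberately omitted.

```python
import random

def vacancy_msd(steps=10000, trials=200, seed=1):
    """Mean-squared displacement of a vacancy performing an unbiased
    nearest-neighbor random walk on a simple cubic lattice (lattice
    constant a = 1). Illustrative only: the paper's model adds
    dopant/vacancy interactions out to third-nearest neighbors,
    which are omitted here."""
    rng = random.Random(seed)
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    total = 0.0
    for _ in range(trials):
        x = y = z = 0
        for _ in range(steps):
            dx, dy, dz = rng.choice(moves)
            x += dx; y += dy; z += dz
        total += x * x + y * y + z * z   # squared displacement
    return total / trials
```

    For an uncorrelated walk the mean-squared displacement grows as N·a², so `vacancy_msd(N)/N` approaches 1; the interaction potentials the paper analyzes introduce exactly the hop correlations this toy lacks.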

  1. SU-G-JeP2-15: Proton Beam Behavior in the Presence of Realistic Magnet Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, D M; Wachowicz, K; Fallone, B G

    2016-06-15

    Purpose: To investigate the effects of magnetic fields on proton therapy beams for integration with MRI. Methods: 3D magnetic fields from an open-bore superconducting MRI model (previously developed by our group) and 3D magnetic fields from an in-house gradient coil design were applied to various monoenergetic proton pencil beam (80 MeV to 250 MeV) simulations. In all simulations, the z-axis of the simulation geometry coincided with the direction of the B0 field and the magnet isocentre. In each simulation, the initial beam trajectory was varied. The first set of simulations was based on analytic magnetic force equations (analytic simulations), which could be rapidly calculated yet were limited to propagating proton beams in vacuum. The second set consisted of full Monte Carlo (MC) simulations using the GEANT4 MC toolkit. Metrics such as the beam position and dose profiles were extracted, and comparisons were made between the cases with and without magnetic fields present. Results: The analytic simulations served as verification checks for the MC simulations when the same simulation geometries were used, and the two agreed for simulations performed in vacuum. The presence of the MRI's static magnetic field causes proton pencil beams to follow a slightly helical trajectory when the initial trajectory has off-axis components. The 80 MeV, 150 MeV, and 250 MeV proton beams rotated by 4.9°, 3.6°, and 2.8°, respectively, when they reached z=0 cm. The deflections caused by the gradient coils' magnetic fields show spatially invariant patterns with a maximum range of 0.5 mm at z=0 cm. Conclusion: This investigation reveals that both the MRI's B0 and gradient magnetic fields can cause small but observable deflections of proton beams at the energies studied. The MRI's static field caused a rotation of the beam, while the gradient coils' field effects were spatially invariant. Disclosure: Dr. B. Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license the Alberta bi-planar linac MR for commercialization).
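
    The energy ordering of the reported rotations (lower energy deflects more) follows from the relativistic gyroradius. A uniform-field sketch, not the paper's method: the study used full 3D field maps, and the field strength and path length below are illustrative placeholders.

```python
import math

# Physical constants (SI)
Q = 1.602176634e-19    # proton charge [C]
M = 1.67262192369e-27  # proton rest mass [kg]
C = 2.99792458e8       # speed of light [m/s]

def proton_speed(kinetic_mev):
    """Relativistic speed and Lorentz factor for a proton of the
    given kinetic energy."""
    gamma = 1.0 + kinetic_mev * 1e6 * Q / (M * C ** 2)
    return C * math.sqrt(1.0 - 1.0 / gamma ** 2), gamma

def deflection_angle(kinetic_mev, b_tesla=0.5, path_m=0.3):
    """In-plane rotation (degrees) of a proton crossing path_m of a
    uniform field b_tesla perpendicular to its velocity:
    theta = L / r_gyro, with r_gyro = gamma*m*v / (q*B).
    b_tesla and path_m are illustrative, not values from the paper."""
    v, gamma = proton_speed(kinetic_mev)
    r_gyro = gamma * M * v / (Q * b_tesla)
    return math.degrees(path_m / r_gyro)
```

    Because the gyroradius grows with momentum, `deflection_angle(80) > deflection_angle(150) > deflection_angle(250)`, consistent with the 4.9°/3.6°/2.8° trend reported above.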

  2. 33 CFR 385.33 - Revisions to models and analytical tools.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Management District, and other non-Federal sponsors shall rely on the best available science including models..., and assessment of projects. The selection of models and analytical tools shall be done in consultation... system-wide simulation models and analytical tools used in the evaluation and assessment of projects, and...

  3. Influence of the track quality and of the properties of the wheel-rail rolling contact on vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Suarez, Berta; Felez, Jesus; Lozano, José Antonio; Rodriguez, Pablo

    2013-02-01

    This work describes an analytical approach to determine what degree of accuracy is required in the definition of the rail vehicle models used for dynamic simulations. This way it would be possible to know in advance how the results of simulations may be altered due to the existence of errors in the creation of rolling stock models, whilst also identifying their critical parameters. This would make it possible to maximise the time available to enhance dynamic analysis and focus efforts on factors that are strictly necessary. In particular, the parameters related both to the track quality and to the rolling contact were considered in this study. With this aim, a sensitivity analysis was performed to assess their influence on the vehicle dynamic behaviour. To do this, 72 dynamic simulations were performed modifying, one at a time, the track quality, the wheel-rail friction coefficient and the equivalent conicity of both new and worn wheels. Three values were assigned to each parameter, and two wear states were considered for each type of wheel, one for new wheels and another one for reprofiled wheels. After processing the results of these simulations, it was concluded that all the parameters considered show very high influence, though the friction coefficient shows the highest influence. Therefore, it is recommended to undertake any future simulation job with measured track geometry and track irregularities, measured wheel profiles and normative values of the wheel-rail friction coefficient.
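
    The one-at-a-time sweep described above (each run changes a single parameter from its baseline value) can be sketched generically. Parameter names below are illustrative stand-ins for the study's track-quality, friction, and conicity inputs.

```python
def one_at_a_time(baseline, levels):
    """Yield parameter dicts for a one-at-a-time sensitivity sweep:
    each run changes exactly one parameter from its baseline value to
    one of its alternative levels, leaving all others at baseline."""
    for name, values in levels.items():
        for v in values:
            if v == baseline[name]:
                continue                 # skip the baseline level
            run = dict(baseline)         # copy baseline settings
            run[name] = v                # perturb one parameter
            yield run
```

    With three levels per parameter, each parameter contributes two perturbed runs; repeating the sweep over wheel profiles and track files builds up a run matrix like the study's 72 simulations.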

  4. Modeling and Simulation Tools for Heavy Lift Airships

    NASA Technical Reports Server (NTRS)

    Hochstetler, Ron; Chachad, Girish; Hardy, Gordon; Blanken, Matthew; Melton, John

    2016-01-01

    For conventional fixed wing and rotary wing aircraft a variety of modeling and simulation tools have been developed to provide designers the means to thoroughly investigate proposed designs and operational concepts. However, lighter-than-air (LTA) airships, hybrid air vehicles, and aerostats have some important aspects that are different from heavier-than-air (HTA) vehicles. In order to account for these differences, modifications are required to the standard design tools to fully characterize the LTA vehicle design and performance parameters. To address these LTA design and operational factors, LTA development organizations have created unique proprietary modeling tools, often at their own expense. An expansion of this limited LTA tool set could be accomplished by leveraging existing modeling and simulation capabilities available in the national laboratories and public research centers. Development of an expanded set of publicly available LTA modeling and simulation tools for LTA developers would mitigate the reliance on proprietary LTA design tools in use today. A set of well researched, open source, high fidelity LTA design modeling and simulation tools would advance LTA vehicle development and also provide the analytical basis for accurate LTA operational cost assessments. This paper will present the modeling and analysis tool capabilities required for LTA vehicle design, analysis of operations, and full life-cycle support. The tools currently available will be surveyed to identify the gaps between their capabilities and the LTA industry's needs. Options for development of new modeling and analysis capabilities to supplement contemporary tools will also be presented.

  5. Compilation of Abstracts for SC12 Conference Proceedings

    NASA Technical Reports Server (NTRS)

    Morello, Gina Francine (Compiler)

    2012-01-01

    1 A Breakthrough in Rotorcraft Prediction Accuracy Using Detached Eddy Simulation; 2 Adjoint-Based Design for Complex Aerospace Configurations; 3 Simulating Hypersonic Turbulent Combustion for Future Aircraft; 4 From a Roar to a Whisper: Making Modern Aircraft Quieter; 5 Modeling of Extended Formation Flight on High-Performance Computers; 6 Supersonic Retropropulsion for Mars Entry; 7 Validating Water Spray Simulation Models for the SLS Launch Environment; 8 Simulating Moving Valves for Space Launch System Liquid Engines; 9 Innovative Simulations for Modeling the SLS Solid Rocket Booster Ignition; 10 Solid Rocket Booster Ignition Overpressure Simulations for the Space Launch System; 11 CFD Simulations to Support the Next Generation of Launch Pads; 12 Modeling and Simulation Support for NASA's Next-Generation Space Launch System; 13 Simulating Planetary Entry Environments for Space Exploration Vehicles; 14 NASA Center for Climate Simulation Highlights; 15 Ultrascale Climate Data Visualization and Analysis; 16 NASA Climate Simulations and Observations for the IPCC and Beyond; 17 Next-Generation Climate Data Services: MERRA Analytics; 18 Recent Advances in High-Resolution Global Atmospheric Modeling; 19 Causes and Consequences of Turbulence in the Earth's Protective Shield; 20 NASA Earth Exchange (NEX): A Collaborative Supercomputing Platform; 21 Powering Deep Space Missions: Thermoelectric Properties of Complex Materials; 22 Meeting NASA's High-End Computing Goals Through Innovation; 23 Continuous Enhancements to the Pleiades Supercomputer for Maximum Uptime; 24 Live Demonstrations of 100-Gbps File Transfers Across LANs and WANs; 25 Untangling the Computing Landscape for Climate Simulations; 26 Simulating Galaxies and the Universe; 27 The Mysterious Origin of Stellar Masses; 28 Hot-Plasma Geysers on the Sun; 29 Turbulent Life of Kepler Stars; 30 Modeling Weather on the Sun; 31 Weather on Mars: The Meteorology of Gale Crater; 32 Enhancing Performance of NASA's High-End Computing Applications; 33 Designing Curiosity's Perfect Landing on Mars; 34 The Search Continues: Kepler's Quest for Habitable Earth-Sized Planets.

  6. Epp: A C++ EGSnrc user code for x-ray imaging and scattering simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lippuner, Jonas; Elbakri, Idris A.; Cui Congwu

    2011-03-15

    Purpose: Easy particle propagation (Epp) is a user code for the EGSnrc code package based on the C++ class library egspp. A main feature of egspp (and Epp) is the ability to use analytical objects to construct simulation geometries. The authors developed Epp to facilitate the simulation of x-ray imaging geometries, especially in the case of scatter studies. While direct use of egspp requires knowledge of C++, Epp requires no programming experience. Methods: Epp's features include calculation of dose deposited in a voxelized phantom and photon propagation to a user-defined imaging plane. Projection images of primary, single Rayleigh scattered, single Compton scattered, and multiple scattered photons may be generated. Epp input files can be nested, allowing for the construction of complex simulation geometries from more basic components. To demonstrate the imaging features of Epp, the authors simulate 38 keV x rays from a point source propagating through a water cylinder 12 cm in diameter, using both analytical and voxelized representations of the cylinder. The simulation generates projection images of primary and scattered photons at a user-defined imaging plane. The authors also simulate dose scoring in the voxelized version of the phantom in both Epp and DOSXYZnrc and examine the accuracy of Epp using the Kawrakow-Fippel test. Results: The results of the imaging simulations with Epp using voxelized and analytical descriptions of the water cylinder agree within 1%. The results of the Kawrakow-Fippel test suggest good agreement between Epp and DOSXYZnrc. Conclusions: Epp provides the user with useful features, including the ability to build complex geometries from simpler ones and the ability to generate images of scattered and primary photons. There is no inherent computational time saving arising from Epp, except for those arising from egspp's ability to use analytical representations of simulation geometries. Epp agrees with DOSXYZnrc in dose calculation, since they are both based on the well-validated standard EGSnrc radiation transport physics model.

  7. Open-Source Integrated Design-Analysis Environment For Nuclear Energy Advanced Modeling & Simulation Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Patrick

    The framework created through the Open-Source Integrated Design-Analysis Environment (IDAE) for Nuclear Energy Advanced Modeling & Simulation grant has simplified and democratized advanced modeling and simulation in the nuclear energy industry across a range of nuclear engineering applications. It leverages millions of investment dollars from the Department of Energy's Office of Nuclear Energy for modeling and simulation of light water reactors and the Office of Nuclear Energy's research and development. The IDAE framework enhanced Kitware's Computational Model Builder (CMB) while leveraging existing open-source toolkits, creating a graphical end-to-end umbrella that guides end-users and developers through the nuclear energy advanced modeling and simulation lifecycle. In addition, the work delivered strategic advancements in meshing and visualization for ensembles.

  8. Parsec-Scale Obscuring Accretion Disk with Large-Scale Magnetic Field in AGNs

    NASA Technical Reports Server (NTRS)

    Dorodnitsyn, A.; Kallman, T.

    2017-01-01

    A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick parsec (pc)-scale torus in active galactic nuclei (AGNs). Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.

  9. A review of the analytical simulation of aircraft crash dynamics

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Carden, Huey D.; Boitnott, Richard L.; Hayduk, Robert J.

    1990-01-01

    A large number of full-scale tests of general aviation aircraft, helicopters, and one unique air-to-ground controlled impact of a transport aircraft were performed. Additionally, research was also conducted on seat dynamic performance, load-limiting seats, load-limiting subfloor designs, and emergency locator transmitters (ELTs). Computer programs were developed to provide designers with methods for predicting accelerations, velocities, and displacements of collapsing structure and for estimating the human response to crash loads. The results of full-scale aircraft and component tests were used to verify and guide the development of analytical simulation tools and to demonstrate impact load attenuating concepts. Analytical simulations of metal and composite aircraft crash dynamics are addressed. Finite element models are examined to determine their degree of corroboration by experimental data and to reveal deficiencies requiring further development.

  10. Validation of an Active Gear, Flexible Aircraft Take-off and Landing analysis (AGFATL)

    NASA Technical Reports Server (NTRS)

    Mcgehee, J. R.

    1984-01-01

    The results of an analytical investigation using a computer program for active gear, flexible aircraft take off and landing analysis (AGFATL) are compared with experimental data from shaker tests, drop tests, and simulated landing tests to validate the AGFATL computer program. Comparison of experimental and analytical responses for both passive and active gears indicates good agreement for shaker tests and drop tests. For the simulated landing tests, the passive and active gears were influenced by large strut binding friction forces. The inclusion of these undefined forces in the analytical simulations was difficult, and consequently only fair to good agreement was obtained. An assessment of the results from the investigation indicates that the AGFATL computer program is a valid tool for the study and initial design of series hydraulic active control landing gear systems.

  11. New analytic results for speciation times in neutral models.

    PubMed

    Gernhard, Tanja

    2008-05-01

    In this paper, we investigate the standard Yule model, and a recently studied model of speciation and extinction, the "critical branching process." We develop an analytic method, as opposed to the common simulation approach, for calculating the speciation times in a reconstructed phylogenetic tree. Simple expressions for the density and the moments of the speciation times are obtained. Methods for dating a speciation event become valuable if no time scale is available for the reconstructed phylogenetic trees. A missing time scale could be due to supertree methods, morphological data, or molecular data which violates the molecular clock. Our analytic approach is, in particular, useful for the model with extinction, since simulations of birth-death processes which are conditioned on obtaining n extant species today are quite delicate. Further, simulations are very time consuming for large n under both models.
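
    The "common simulation approach" the paper contrasts with can be sketched as a forward pure-birth (Yule) simulation: while k lineages exist, the waiting time to the next speciation is exponential with rate k·λ. A minimal sketch with an assumed rate λ; the paper's contribution is the analytic densities that make this sampling unnecessary.

```python
import random

def yule_speciation_times(n, lam=1.0, seed=7):
    """Simulate a pure-birth (Yule) tree forward in time until it has
    n extant lineages, returning the n-1 speciation times. While k
    lineages exist, the next speciation occurs after an Exp(k*lam)
    waiting time."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for k in range(1, n):            # k = current number of lineages
        t += rng.expovariate(k * lam)
        times.append(t)
    return times
```

    Conditioning such forward runs on exactly n extant species today is easy for the pure-birth case but, as the abstract notes, delicate once extinction is added, which is where the analytic expressions pay off.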

  12. Parsec-scale Obscuring Accretion Disk with Large-scale Magnetic Field in AGNs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorodnitsyn, A.; Kallman, T.

    A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick pc-scale torus in AGNs. Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.

  13. SIMULATING LOCAL DENSE AREAS USING PMMA TO ASSESS AUTOMATIC EXPOSURE CONTROL IN DIGITAL MAMMOGRAPHY.

    PubMed

    Bouwman, R W; Binst, J; Dance, D R; Young, K C; Broeders, M J M; den Heeten, G J; Veldkamp, W J H; Bosmans, H; van Engen, R E

    2016-06-01

    Current digital mammography (DM) X-ray systems are equipped with advanced automatic exposure control (AEC) systems, which determine the exposure factors depending on breast composition. In the supplement of the European guidelines for quality assurance in breast cancer screening and diagnosis, a phantom-based test is included to evaluate the AEC response to local dense areas in terms of signal-to-noise ratio (SNR). This study evaluates the proposed test in terms of SNR and dose for four DM systems. The glandular fraction represented by the local dense area was assessed by analytic calculations. It was found that the proposed test simulates adipose to fully glandular breast compositions in attenuation. The doses associated with the phantoms were found to match well with the patient dose distribution. In conclusion, after some small adaptations, the test is valuable for the assessment of the AEC performance in terms of both SNR and dose.
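
    The SNR figure of merit in such AEC tests is commonly taken as the mean pixel value of a region of interest divided by its standard deviation. A minimal sketch under that working definition; the guideline's exact formulation may differ (e.g. offset subtraction), which is not reproduced here.

```python
def snr(pixels):
    """Signal-to-noise ratio of a region of interest: mean pixel
    value over the (population) standard deviation. Assumes a
    non-uniform ROI, i.e. nonzero variance."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return mean / var ** 0.5
```
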

  14. Tailoring the response of Autonomous Reactivity Control (ARC) systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qvist, Staffan A.; Hellesen, Carl; Gradecka, Malwina

    The Autonomous Reactivity Control (ARC) system was developed to ensure inherent safety of Generation IV reactors while having a minimal impact on reactor performance and economic viability. In this study we present the transient response of fast reactor cores to postulated accident scenarios with and without ARC systems installed. Using a combination of analytical methods and numerical simulation, the principles of ARC system design that assure stability and avoid oscillatory behavior have been identified. A comprehensive transient analysis study for ARC-equipped cores, including a series of Unprotected Loss of Flow (ULOF) and Unprotected Loss of Heat Sink (ULOHS) simulations, was performed for Argonne National Laboratory (ANL) Advanced Burner Reactor (ABR) designs. With carefully designed ARC systems installed in the fuel assemblies, the cores exhibit a smooth, non-oscillatory transition to stabilization at acceptable temperatures following all postulated transients. To avoid oscillations in power and temperature, the reactivity introduced per degree of temperature change in the ARC system needs to be kept below a certain threshold, the value of which is system dependent, and the temperature span of actuation needs to be as large as possible.

  15. Biomolecular logic systems: applications to biosensors and bioactuators

    NASA Astrophysics Data System (ADS)

    Katz, Evgeny

    2014-05-01

    The paper presents an overview of recent advances in biosensors and bioactuators based on the biocomputing concept. Novel biosensors digitally process multiple biochemical signals through Boolean logic networks of coupled biomolecular reactions and produce output in the form of a YES/NO response. Compared to traditional single-analyte sensing devices, the biocomputing approach enables high-fidelity multi-analyte biosensing, particularly beneficial for biomedical applications. Multi-signal digital biosensors thus promise advances in rapid diagnosis and treatment of diseases by processing complex patterns of physiological biomarkers. Specifically, they can provide timely detection and alert to medical emergencies, along with an immediate therapeutic intervention. Application of the biocomputing concept has been successfully demonstrated for systems performing logic analysis of biomarkers corresponding to different injuries, particularly exemplified for liver injury. Wide-ranging applications of multi-analyte digital biosensors in medicine, environmental monitoring and homeland security are anticipated. "Smart" bioactuators, for example for signal-triggered drug release, were designed by interfacing switchable electrodes and biocomputing systems. Integration of novel biosensing and bioactuating systems with biomolecular information processing systems holds promise for further scientific advances and numerous practical applications.
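
    The Boolean-logic processing described above can be illustrated as a digitize-then-gate sketch: each analyte concentration is thresholded to a logic level, and a gate combines them into a YES/NO output. Threshold values and the AND-gate choice are illustrative assumptions; real systems implement the gate with coupled enzymatic reactions, not software.

```python
def digital_biosensor(marker_a, marker_b, thresh_a=1.0, thresh_b=1.0):
    """YES/NO output from two biochemical inputs processed through an
    AND gate: both biomarkers must exceed their (illustrative)
    thresholds for a positive diagnosis signal."""
    a = marker_a > thresh_a   # digitize analyte 1 to logic level
    b = marker_b > thresh_b   # digitize analyte 2 to logic level
    return "YES" if (a and b) else "NO"
```

    Requiring both markers to be elevated is what gives multi-analyte logic its higher fidelity than single-analyte thresholding: a spurious rise in one marker alone does not trigger the output.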

  16. Analytical model of flame spread in full-scale room/corner tests (ISO9705)

    Treesearch

    Mark Dietenberger; Ondrej Grexa

    1999-01-01

    A physical, yet analytical, model of fire growth has predicted flame spread and rate of heat release (RHR) for an ISO9705 test scenario using bench-scale data from the cone calorimeter. The test scenario simulated was the propane ignition burner at the corner with a 100/300 kW program and the specimen lining the walls only. Four phases of fire growth were simulated...

  17. Reply to ``Comment on `Free surface Hele-Shaw flows around an obstacle: A random walk simulation' ''

    NASA Astrophysics Data System (ADS)

    Bogoyavlenskiy, Vladislav A.; Cotts, Eric J.

    2007-09-01

    As pointed out by Vasconcelos in his Comment, our computer simulations of Hele-Shaw flows around series of wedges differ from analytical solutions existing for this problem. We attribute the discrepancy to the notion that these analytical solutions correspond to ideal, steady-state flow regimes which are hardly applicable when a rigid obstacle interacts with a moving liquid-gas interface.

  18. Heat and mass transfer in wooden dowels during a simulated fire: an experimental and analytical study

    Treesearch

    J. A. Mardini; A. S. Lavine; V. K. Dhir

    1996-01-01

    An experimental and analytical study of heat and mass transfer in wooden dowels during a simulated fire is presented in this paper. The goal of this study is to understand the processes of heat and mass transfer in wood during wildland fires. A mathematical model is developed to describe the processes of heating, drying and pyrolysis of wood until ignition...

  19. Coronal heating by the resonant absorption of Alfven waves - Importance of the global mode and scaling laws

    NASA Technical Reports Server (NTRS)

    Steinolfson, Richard S.; Davila, Joseph M.

    1993-01-01

    Numerical simulations of the MHD equations for a fully compressible, low-beta, resistive plasma are used to study the resonance absorption process for the heating of coronal active region loops. Comparisons with more approximate analytic models show that the major predictions of the analytic theories are, to a large extent, confirmed by the numerical computations. The simulations demonstrate that the dissipation occurs primarily in a thin resonance layer. Some of the analytically predicted features verified by the simulations are (a) the position of the resonance layer within the initial inhomogeneity; (b) the importance of the global mode for a large range of loop densities; (c) the dependence of the resonance layer thickness and the steady-state heating rate on the dissipation coefficient; and (d) the time required for the resonance layer to form. In contrast with some previous analytic and simulation results, the time for the loop to reach a steady state is found to be the phase-mixing time rather than a dissipation time. This disagreement is shown to result from neglect of the existence of the global mode in some of the earlier analyses. The resonant absorption process is also shown to behave similarly to a classical driven harmonic oscillator.

  20. Theory for the three-dimensional Mercedes-Benz model of water.

    PubMed

    Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A

    2009-11-21

    The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the "right answer," we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim's Ornstein-Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and also the aqueous solvation.
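
    The pair interaction described in this record combines a spherical Lennard-Jones term with a Gaussian hydrogen-bond reward. A minimal sketch in reduced units: the parameter values and the scalar "alignment deviation" argument are illustrative simplifications, since the 3D MB model couples a full tetrahedral arm geometry that is not reproduced here.

```python
import math

def lennard_jones(r, eps=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones pair energy for the spherical part
    of the interaction (reduced units)."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def hbond_energy(r, align_dev, eps_hb=1.0, r_hb=1.0, sr=0.1, sa=0.1):
    """Gaussian hydrogen-bond reward: strongest (-eps_hb) when the
    pair distance equals r_hb and the arm-alignment deviation is
    zero; decays as either deviates. Widths sr, sa are illustrative."""
    return -eps_hb * math.exp(-((r - r_hb) ** 2) / (2 * sr ** 2)
                              - (align_dev ** 2) / (2 * sa ** 2))
```

    In an isothermal-isobaric Monte Carlo run, trial translations and rotations would be accepted with the Metropolis criterion on the total of these two terms plus the pressure-volume contribution.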

  1. Theory for the three-dimensional Mercedes-Benz model of water

    PubMed Central

    Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A.

    2009-01-01

    The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the “right answer,” we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim’s Ornstein–Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and also the aqueous solvation. PMID:19929057

  2. Theory for the three-dimensional Mercedes-Benz model of water

    NASA Astrophysics Data System (ADS)

    Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A.

    2009-11-01

    The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the "right answer," we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim's Ornstein-Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and also the aqueous solvation.

  3. Recent Methodology in Ginseng Analysis

    PubMed Central

    Baek, Seung-Hoon; Bae, Ok-Nam; Park, Jeong Hill

    2012-01-01

    As much as the popularity of ginseng in herbal prescriptions or remedies, ginseng has become the focus of research in many scientific fields. Analytical methodologies for ginseng, referred to as ginseng analysis hereafter, have been developed for bioactive component discovery, phytochemical profiling, quality control, and pharmacokinetic studies. This review summarizes the most recent advances in ginseng analysis in the past half-decade including emerging techniques and analytical trends. Ginseng analysis includes all of the leading analytical tools and serves as a representative model for the analytical research of herbal medicines. PMID:23717112

  4. Visual Analytics 101

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.

    This course will introduce the field of Visual Analytics to HCI researchers and practitioners, highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend to learn how their skills can contribute to advancing the state of the art of visual analytics.

  5. Formulation of advanced consumables management models: Executive summary. [modeling spacecraft environmental control, life support, and electric power supply systems

    NASA Technical Reports Server (NTRS)

    Daly, J. K.; Torian, J. G.

    1979-01-01

    An overview of studies conducted to establish the requirements for advanced subsystem analytical tools is presented. Modifications are defined for updating current computer programs used to analyze environmental control, life support, and electric power supply systems so that consumables for future advanced spacecraft may be managed.

  6. SolarPILOT | Concentrating Solar Power | NREL

    Science.gov Websites

    Unlike exclusively ray-tracing tools, SolarPILOT pairs an analytical simulation engine with a ray-tracing core for more detailed simulations; the ray-tracing core is based on the SolTrace simulation engine.

  7. Predictive simulation of guided-wave structural health monitoring

    NASA Astrophysics Data System (ADS)

    Giurgiutiu, Victor

    2017-04-01

    This paper presents an overview of recent developments on predictive simulation of guided wave structural health monitoring (SHM) with piezoelectric wafer active sensor (PWAS) transducers. The predictive simulation methodology is based on the hybrid global-local (HGL) concept, which allows fast analytical simulation in the undamaged global field and finite element method (FEM) simulation in the local field around and including the damage. The paper reviews the main results obtained in this area by researchers of the Laboratory for Active Materials and Smart Structures (LAMSS) at the University of South Carolina, USA. After a thematic introduction and research motivation, the paper covers four main topics: (i) presentation of the HGL analysis; (ii) analytical simulation in 1D and 2D; (iii) scatter field generation; (iv) HGL examples. The paper ends with a summary, discussion, and suggestions for future work.

  8. An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram

    2018-01-01

    Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume, and incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal. This is particularly efficient if one wishes to simulate a large number of realizations of the signal. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We have demonstrated this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.
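The core idea, drawing many signal realizations directly from a model of the visibility correlation rather than from an N-body simulation, can be sketched as follows. The correlation matrix here is a toy stand-in, not the paper's theoretical prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_visibilities(cov, n_real):
    """Draw Gaussian realizations of complex visibilities whose two-point
    statistics reproduce a model correlation matrix `cov`."""
    L = np.linalg.cholesky(cov)
    # unit-variance complex Gaussian deviates, independent across channels
    z = (rng.standard_normal((cov.shape[0], n_real))
         + 1j * rng.standard_normal((cov.shape[0], n_real))) / np.sqrt(2.0)
    return L @ z

# Toy 2-element correlation matrix (stand-in for the theoretical model):
cov = np.array([[2.0, 0.5], [0.5, 1.0]])
vis = simulate_visibilities(cov, 200_000)
empirical = (vis @ vis.conj().T).real / vis.shape[1]  # recovers cov
```

Because each realization is just a matrix-vector product with pre-drawn Gaussian deviates, generating many realizations is cheap, which is the efficiency advantage the abstract emphasizes.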

  9. Emerging Cyber Infrastructure for NASA's Large-Scale Climate Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Spear, C.; Bowen, M. K.; Thompson, J. H.; Hu, F.; Yang, C. P.; Pierce, D.

    2016-12-01

    The resolution of NASA climate and weather simulations has grown dramatically over the past few years, with the highest-fidelity models reaching down to 1.5 km global resolutions. With each doubling of the resolution, the resulting data sets grow by a factor of eight in size. As the climate and weather models push the envelope even further, a new infrastructure to store data and provide large-scale data analytics is necessary. The NASA Center for Climate Simulation (NCCS) has deployed the Data Analytics Storage Service (DASS) that combines scalable storage with the ability to perform in-situ analytics. Within this system, large, commonly used data sets are stored in a POSIX file system (write once/read many); examples of data stored include Landsat, MERRA2, observing system simulation experiments, and high-resolution downscaled reanalysis. The total size of this repository is on the order of 15 petabytes of storage. In addition to the POSIX file system, the NCCS has deployed file system connectors to enable emerging analytics built on top of the Hadoop File System (HDFS) to run on the same storage servers within the DASS. Coupled with a custom spatiotemporal indexing approach, users can now run emerging analytical operations built on MapReduce and Spark on the same data files stored within the POSIX file system without having to make additional copies. This presentation will discuss the architecture of this system and present benchmark performance measurements from traditional TeraSort and Wordcount to large-scale climate analytical operations on NetCDF data.
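The MapReduce pattern behind benchmarks like Wordcount can be illustrated in a few lines of plain Python. This is a toy sketch of the programming model only, not of the NCCS deployment:

```python
from collections import Counter
from functools import reduce

def map_phase(chunk):
    # map: one text chunk -> partial word counts
    return Counter(chunk.split())

def reduce_phase(left, right):
    # reduce: merge partial counts from two chunks
    return left + right

chunks = ["the quick brown fox", "the lazy dog", "the fox"]
totals = reduce(reduce_phase, map(map_phase, chunks))
```

In a real HDFS/Spark deployment the map and reduce phases run in parallel across storage nodes, which is what lets the analytics operate in situ on the stored files.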

  10. Recent Progress in Understanding the Shock Response of Ferroelectric Ceramics

    NASA Astrophysics Data System (ADS)

    Setchell, R. E.

    2002-07-01

    Ferroelectric ceramics exhibit a permanent remanent polarization, and shock depoling of these materials to achieve pulsed sources of electrical power was proposed in the late 1950s. During the following twenty years, extensive studies were conducted to examine the shock response of ferroelectric ceramics primarily based on lead zirconate titanate (PZT). Under limited conditions, relatively simple analytical models were found to adequately describe the observed electrical behavior. A more complex behavior was indicated over broader conditions, however, resulting in the incorporation of shock-induced conductivity and dielectric relaxation into analytical models. Unfortunately, few experimental studies were undertaken over the next twenty years, and the development of more comprehensive models was inhibited. In recent years, a strong interest in advancing numerical simulation capabilities has motivated new experimental studies and corresponding model development. More than seventy gas gun experiments have examined several ferroelectric ceramics, with most experiments on lead zirconate titanate having a Zr:Ti ratio of 95:5 and modified with 2% niobium (PZT 95/5). This material is nominally ferroelectric but is near an antiferroelectric phase boundary, and depoling results from a shock-driven phase transition. Experiments have examined unpoled, normally poled, and axially poled PZT 95/5 over broad ranges of shock pressure and peak electric field. The extensive base of new data provides quantitative insights into both the stress and field dependencies of depoling kinetics, and the significance of pore collapse at higher stresses. The results are being actively utilized to develop and refine material response models used in numerical simulations of pulsed power devices.

  11. Reliability of stellar inclination estimated from asteroseismology: analytical criteria, mock simulations and Kepler data analysis

    NASA Astrophysics Data System (ADS)

    Kamiaka, Shoya; Benomar, Othman; Suto, Yasushi

    2018-05-01

    Advances in asteroseismology of solar-like stars now provide a unique method to estimate the stellar inclination i⋆. This enables evaluation of the spin-orbit angle of transiting planetary systems, complementing the Rossiter-McLaughlin effect, a well-established method to estimate the projected spin-orbit angle λ. Although the asteroseismic method has been broadly applied to the Kepler data, its reliability has yet to be assessed intensively. In this work, we evaluate the accuracy of i⋆ from asteroseismology of solar-like stars using 3000 simulated power spectra. We find that the low signal-to-noise ratio of the power spectra induces a systematic underestimate (overestimate) bias for stars with high (low) inclinations. We derive analytical criteria for a reliable asteroseismic estimate, which indicate that reliable measurements are possible in the range 20° ≲ i⋆ ≲ 80° only for stars with a high signal-to-noise ratio. We also analyse and measure the stellar inclination of 94 Kepler main-sequence solar-like stars, among which 33 are planetary hosts. According to our reliability criteria, a third of them (9 with planets, 22 without) have accurate stellar inclinations. Comparison of our asteroseismic estimates of v sin i⋆ against spectroscopic measurements indicates that the latter suffer from a large uncertainty, possibly due to the modeling of macro-turbulence, especially for stars with projected rotation speed v sin i⋆ ≲ 5 km/s. This reinforces earlier claims, and stellar inclinations estimated from the combination of spectroscopic measurements and photometric variation for slowly rotating stars need to be interpreted with caution.

  12. Mathematical modeling and SAR simulation multifunction SAR technology efforts

    NASA Technical Reports Server (NTRS)

    Griffin, C. R.; Estes, J. M.

    1981-01-01

    The orbital SAR (synthetic aperture radar) simulation data was used in several simulation efforts directed toward advanced SAR development. Efforts toward simulating an operational radar, simulation of antenna polarization effects, and simulation of SAR images at several different wavelengths are discussed. Avenues for improvements in the orbital SAR simulation and its application to the development of advanced digital radar data processing schemes are indicated.

  13. Apollo: Giving application developers a single point of access to public health models using structured vocabularies and Web services

    PubMed Central

    Wagner, Michael M.; Levander, John D.; Brown, Shawn; Hogan, William R.; Millett, Nicholas; Hanna, Josh

    2013-01-01

    This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem—which we define as a configuration and a query of results—exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services. PMID:24551417

  14. Apollo: giving application developers a single point of access to public health models using structured vocabularies and Web services.

    PubMed

    Wagner, Michael M; Levander, John D; Brown, Shawn; Hogan, William R; Millett, Nicholas; Hanna, Josh

    2013-01-01

    This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem-which we define as a configuration and a query of results-exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services.

  15. Communication — Modeling polymer-electrolyte fuel-cell agglomerates with double-trap kinetics

    DOE PAGES

    Pant, Lalit M.; Weber, Adam Z.

    2017-04-14

    A new semi-analytical agglomerate model is presented for polymer-electrolyte fuel-cell cathodes. The model uses double-trap kinetics for the oxygen-reduction reaction, which can capture the observed potential-dependent coverage and Tafel-slope changes. An iterative semi-analytical approach is used to obtain reaction rate constants from the double-trap kinetics, oxygen concentration at the agglomerate surface, and overall agglomerate reaction rate. The analytical method can predict reaction rates within 2% of the numerically simulated values for a wide range of oxygen concentrations, overpotentials, and agglomerate sizes, while saving simulation time compared to a fully numerical approach.
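The iterative semi-analytical idea, solving self-consistently for the reactant concentration at the agglomerate surface and the resulting reaction rate, can be illustrated with a generic flux-balance fixed point. The first-order kinetics below is a placeholder for the paper's double-trap kinetics, and all constants are assumed:

```python
def surface_balance(c_bulk, k_film, rate, tol=1e-12, max_iter=500):
    """Fixed-point iteration for the surface concentration c_s satisfying
    the flux balance k_film * (c_bulk - c_s) = rate(c_s): transport of
    reactant to the surface must equal its consumption there."""
    c_s = c_bulk
    for _ in range(max_iter):
        c_new = max(c_bulk - rate(c_s) / k_film, 0.0)
        if abs(c_new - c_s) < tol:
            break
        c_s = c_new
    return c_s, rate(c_s)

# Example with first-order kinetics rate = 0.5 * c_s (illustrative only):
c_s, r = surface_balance(c_bulk=1.0, k_film=1.0, rate=lambda c: 0.5 * c)
```

For this example the balance gives c_s = 2/3 and a rate of 1/3; a semi-analytical scheme of this kind avoids resolving the full concentration profile inside the agglomerate numerically, which is where the reported time savings come from.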

  16. Technology advancement for integrative stem cell analyses.

    PubMed

    Jeong, Yoon; Choi, Jonghoon; Lee, Kwan Hyi

    2014-12-01

    Scientists have endeavored to use stem cells for a variety of applications ranging from basic science research to translational medicine. Population-based characterization of such stem cells, while providing an important foundation for further development, often disregards the heterogeneity inherent among individual constituents within a given population. The population-based analysis and characterization of stem cells and the problems associated with such a blanket approach only underscore the need for the development of new analytical technology. In this article, we review current stem cell analytical technologies, along with the advantages and disadvantages of each, followed by applications of these technologies in the field of stem cells. Furthermore, while recent advances in micro/nano technology have led to a growth in the stem cell analytical field, underlying architectural concepts allow only for a vertical analytical approach, in which different desirable parameters are obtained from multiple individual experiments, and there are many technical challenges that limit vertically integrated analytical tools. Therefore, by introducing the concept of vertical and horizontal approaches, we propose that adequate methods are needed for integrating information such that multiple descriptive parameters from a stem cell can be obtained from a single experiment.

  17. Technology Advancement for Integrative Stem Cell Analyses

    PubMed Central

    Jeong, Yoon

    2014-01-01

    Scientists have endeavored to use stem cells for a variety of applications ranging from basic science research to translational medicine. Population-based characterization of such stem cells, while providing an important foundation for further development, often disregards the heterogeneity inherent among individual constituents within a given population. The population-based analysis and characterization of stem cells and the problems associated with such a blanket approach only underscore the need for the development of new analytical technology. In this article, we review current stem cell analytical technologies, along with the advantages and disadvantages of each, followed by applications of these technologies in the field of stem cells. Furthermore, while recent advances in micro/nano technology have led to a growth in the stem cell analytical field, underlying architectural concepts allow only for a vertical analytical approach, in which different desirable parameters are obtained from multiple individual experiments, and there are many technical challenges that limit vertically integrated analytical tools. Therefore, by introducing the concept of vertical and horizontal approaches, we propose that adequate methods are needed for integrating information such that multiple descriptive parameters from a stem cell can be obtained from a single experiment. PMID:24874188

  18. Analytical Verifications in Cryogenic Testing of NGST Advanced Mirror System Demonstrators

    NASA Technical Reports Server (NTRS)

    Cummings, Ramona; Levine, Marie; VanBuren, Dave; Kegley, Jeff; Green, Joseph; Hadaway, James; Presson, Joan; Cline, Todd; Stahl, H. Philip (Technical Monitor)

    2002-01-01

    Ground-based testing is a critical and costly part of component, assembly, and system verifications of large space telescopes. At such tests, however, with integral teamwork by planners, analysts, and test personnel, segments can be included to validate specific analytical parameters and algorithms at relatively low additional cost. This paper opens with the strategy of analytical verification segments added to vacuum cryogenic testing of Advanced Mirror System Demonstrator (AMSD) assemblies. These AMSD assemblies incorporate material and architecture concepts being considered in the Next Generation Space Telescope (NGST) design. The test segments for workmanship testing, cold survivability, and cold operation optical throughput are supplemented by segments for analytical verifications of specific structural, thermal, and optical parameters. Utilizing integrated modeling and separate materials testing, the paper continues with the support plan for analyses, data, and observation requirements during the AMSD testing, currently slated for late calendar year 2002 to mid calendar year 2003. The paper includes anomaly resolution as gleaned by the authors from similar analytical verification support of a previous large space telescope, then closes with a draft of plans for parameter extrapolations, to form a well-verified portion of the integrated modeling being done for NGST performance predictions.

  19. Advanced propeller aerodynamic analysis

    NASA Technical Reports Server (NTRS)

    Bober, L. J.

    1980-01-01

    The analytical approaches as well as the capabilities of three advanced analyses for predicting propeller aerodynamic performance are presented. It is shown that two of these analyses use a lifting line representation for the propeller blades, and the third uses a lifting surface representation.

  20. RLV Turbine Performance Optimization

    NASA Technical Reports Server (NTRS)

    Griffin, Lisa W.; Dorney, Daniel J.

    2001-01-01

    A task was developed at NASA/Marshall Space Flight Center (MSFC) to improve turbine aerodynamic performance through the application of advanced design and analysis tools. There are four major objectives of this task: 1) to develop, enhance, and integrate advanced turbine aerodynamic design and analysis tools; 2) to develop the methodology for application of the analytical techniques; 3) to demonstrate the benefits of the advanced turbine design procedure through its application to a relevant turbine design point; and 4) to verify the optimized design and analysis with testing. Final results of the preliminary design and the results of the two-dimensional (2D) detailed design of the first-stage vane of a supersonic turbine suitable for a reusable launch vehicle (R-LV) are presented. Analytical techniques for obtaining the results are also discussed.

  1. Numerical and analytical simulation of the production process of ZrO2 hollow particles

    NASA Astrophysics Data System (ADS)

    Safaei, Hadi; Emami, Mohsen Davazdah

    2017-12-01

    In this paper, the production process of hollow particles from agglomerated particles is addressed analytically and numerically. The important parameters affecting this process, in particular the initial porosity level of the particles and the plasma gun types, are investigated. The analytical model adopts a combination of quasi-steady thermal equilibrium and mechanical balance. In the analytical model, the possibility of a solid core existing in agglomerated particles is examined. In this model, a range of particle diameters (50 μm ≤ D_{p0} ≤ 160 μm) and various initial porosities (0.2 ≤ p ≤ 0.7) are considered. The numerical model employs the VOF technique for two-phase compressible flows. The production process of hollow particles from agglomerated particles is simulated, considering an initial diameter of D_{p0} = 60 μm and initial porosities of p = 0.3, p = 0.5, and p = 0.7. Results of the analytical model indicate that the solid core diameter is independent of the initial porosity, whereas the thickness of the particle shell strongly depends on the initial porosity. In both models, a hollow particle can hardly develop at small initial porosity values (p < 0.3), while the particle disintegrates at high initial porosity values (p > 0.6).
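The reported dependence of shell thickness on initial porosity follows from simple mass conservation. As an illustrative sketch, assuming the hollow particle keeps the initial outer diameter and all internal voids coalesce into a single core (a simplification, not the paper's full model):

```python
def shell_thickness(d0, porosity):
    """Shell thickness of a hollow particle formed from an agglomerate of
    outer diameter d0 and void fraction `porosity`, assuming the outer
    diameter is preserved: the solid volume (1 - p) * V0 forms the shell,
    so the inner (void) diameter is d0 * p**(1/3)."""
    return 0.5 * d0 * (1.0 - porosity ** (1.0 / 3.0))

# Thicker shells at lower initial porosity, e.g. for a 60 um particle:
t_low_p, t_high_p = shell_thickness(60.0, 0.3), shell_thickness(60.0, 0.7)
```

Under these assumptions the shell thins monotonically as the initial porosity rises, consistent with the analytical result above that high-porosity particles end up with thin shells that are prone to disintegration.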

  2. TERRA: Building New Communities for Advanced Biofuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cornelius, Joe; Mockler, Todd; Tuinstra, Mitch

    ARPA-E’s Transportation Energy Resources from Renewable Agriculture (TERRA) program is bringing together top experts from different disciplines – agriculture, robotics and data analytics – to rethink the production of advanced biofuel crops. ARPA-E Program Director Dr. Joe Cornelius discusses the TERRA program and explains how ARPA-E’s model enables multidisciplinary collaboration among diverse communities. The video focuses on two TERRA projects—Donald Danforth Center and Purdue University—that are developing and integrating cutting-edge remote sensing platforms, complex data analytics tools and plant breeding technologies to tackle the challenge of sustainably increasing biofuel stocks.

  3. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard

    1991-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during an earlier cold compression, or 'coining', fabrication technique that shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.

  4. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Bartos, Karen F.; Fite, E. B.; Sharp, G. R.

    1992-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during an earlier cold compression, or 'coining', fabrication technique that shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The Chemical Technology (CMT) Division is a diverse technical organization with principal emphases in environmental management and development of advanced energy sources. The Division conducts research and development in three general areas: (1) development of advanced power sources for stationary and transportation applications and for consumer electronics, (2) management of high-level and low-level nuclear wastes and hazardous wastes, and (3) electrometallurgical treatment of spent nuclear fuel. The Division also performs basic research in catalytic chemistry involving molecular energy resources, mechanisms of ion transport in lithium battery electrolytes, and the chemistry of technology-relevant materials and electrified interfaces. In addition, the Division operates the Analytical Chemistry Laboratory, which conducts research in analytical chemistry and provides analytical services for programs at Argonne National Laboratory (ANL) and other organizations. Technical highlights of the Division's activities during 1997 are presented.

  6. Electrochemical detection for microscale analytical systems: a review.

    PubMed

    Wang, Joseph

    2002-02-11

    As the field of chip-based microscale systems continues its rapid growth, there are urgent needs for developing compatible detection modes. Electrochemical detection offers considerable promise for such microfluidic systems, with features that include remarkable sensitivity, inherent miniaturization and portability, independence of optical path length or sample turbidity, low cost, low-power requirements and high compatibility with advanced micromachining and microfabrication technologies. This paper highlights recent advances, directions and key strategies in controlled-potential electrochemical detectors for miniaturized analytical systems. Subjects covered include the design and integration of the electrochemical detection system, its requirements and operational principles, common electrode materials, derivatization reactions, electrical-field decouplers, typical applications and future prospects. It is expected that electrochemical detection will become a powerful tool for microscale analytical systems and will facilitate the creation of truly portable (and possibly disposable) devices.

  7. Study and characterization of a MEMS micromirror device

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2004-08-01

In this paper, advances in our study and characterization of a MEMS micromirror device are presented. The micromirror device, of 510 μm characteristic length, operates in a dynamic mode with a maximum displacement on the order of 10 μm along its principal optical axis and oscillation frequencies of up to 1.3 kHz. Developments are carried out by analytical, computational, and experimental methods. Analytical and computational nonlinear geometrical models are developed in order to determine the optimal loading-displacement operational characteristics of the micromirror. Due to the operational mode of the micromirror, the experimental characterization of its loading-displacement transfer function requires utilization of advanced optical metrology methods. Optoelectronic holography (OEH) methodologies based on multiple wavelengths that we are developing to perform such characterization are described. It is shown that the combined analytical, computational, and experimental approach is effective in our developments.

  8. Applications of Optical Microcavity Resonators in Analytical Chemistry

    PubMed Central

    Wade, James H.; Bailey, Ryan C.

    2018-01-01

    Optical resonator sensors are an emerging class of analytical technologies that use recirculating light confined within a microcavity to sensitively measure the surrounding environment. Bolstered by advances in microfabrication, these devices can be configured for a wide variety of chemical or biomolecular sensing applications. The review begins with a brief description of optical resonator sensor operation followed by discussions regarding sensor design, including different geometries, choices of material systems, methods of sensor interrogation, and new approaches to sensor operation. Throughout, key recent developments are highlighted, including advancements in biosensing and other applications of optical sensors. Alternative sensing mechanisms and hybrid sensing devices are then discussed in terms of their potential for more sensitive and rapid analyses. Brief concluding statements offer our perspective on the future of optical microcavity sensors and their promise as versatile detection elements within analytical chemistry. PMID:27049629

  9. Applying Behavior Analytic Procedures to Effectively Teach Literacy Skills in the Classroom

    ERIC Educational Resources Information Center

    Joseph, Laurice M.; Alber-Morgan, Sheila; Neef, Nancy

    2016-01-01

    The purpose of this article is to discuss the application of behavior analytic procedures for advancing and evaluating methods for teaching literacy skills in the classroom. Particularly, applied behavior analysis has contributed substantially to examining the relationship between teacher behavior and student literacy performance. Teacher…

  10. Protein Quantification by Elemental Mass Spectrometry: An Experiment for Graduate Students

    ERIC Educational Resources Information Center

    Schwarz, Gunnar; Ickert, Stefanie; Wegner, Nina; Nehring, Andreas; Beck, Sebastian; Tiemann, Ruediger; Linscheid, Michael W.

    2014-01-01

    A multiday laboratory experiment was designed to integrate inductively coupled plasma-mass spectrometry (ICP-MS) in the context of protein quantification into an advanced practical course in analytical and environmental chemistry. Graduate students were familiar with the analytical methods employed, whereas the combination of bioanalytical assays…

  11. Let's Not Forget: Learning Analytics Are about Learning

    ERIC Educational Resources Information Center

    Gaševic, Dragan; Dawson, Shane; Siemens, George

    2015-01-01

    The analysis of data collected from the interaction of users with educational and information technology has attracted much attention as a promising approach for advancing our understanding of the learning process. This promise motivated the emergence of the new research field, learning analytics, and its closely related discipline, educational…

  12. ENVIRONMENTAL RESEARCH BRIEF : ANALYTIC ELEMENT MODELING OF GROUND-WATER FLOW AND HIGH PERFORMANCE COMPUTING

    EPA Science Inventory

    Several advances in the analytic element method have been made to enhance its performance and facilitate three-dimensional ground-water flow modeling in a regional aquifer setting. First, a new public domain modular code (ModAEM) has been developed for modeling ground-water flow ...

  13. Analytics to Literacies: The Development of a Learning Analytics Framework for Multiliteracies Assessment

    ERIC Educational Resources Information Center

    Dawson, Shane; Siemens, George

    2014-01-01

    The rapid advances in information and communication technologies, coupled with increased access to information and the formation of global communities, have resulted in interest among researchers and academics to revise educational practice to move beyond traditional "literacy" skills towards an enhanced set of…

  14. Computer simulations for bioequivalence trials: Selection of analyte in BCS class II and IV drugs with first-pass metabolism, two metabolic pathways and intestinal efflux transporter.

    PubMed

    Mangas-Sanjuan, Victor; Navarro-Fontestad, Carmen; García-Arieta, Alfredo; Trocóniz, Iñaki F; Bermejo, Marival

    2018-05-30

A semi-physiological two-compartment pharmacokinetic model with two active metabolites (a primary (PM) and a secondary metabolite (SM)), saturable and non-saturable pre-systemic efflux transport, and intestinal and hepatic metabolism has been developed. The aim of this work is to explore, in several scenarios, which analyte (parent drug or any of the metabolites) is the most sensitive to changes in drug product performance (i.e. differences in in vivo dissolution) and to make recommendations based on the simulation outcomes. A total of 128 scenarios (2 Biopharmaceutics Classification System (BCS) drug types and 2 levels of the P-gp Michaelis constant (K_M,Pgp), in 4 metabolic scenarios, at 2 dose levels, and in 4 quality levels of the drug product) were simulated for BCS class II and IV drugs. Monte Carlo simulations of all bioequivalence studies were performed in NONMEM 7.3. Results showed the parent drug (PD) was the most sensitive analyte for bioequivalence trials in all the studied scenarios. PM and SM revealed less than or the same sensitivity to detect differences in pharmaceutical quality as the PD. Another relevant result is that the mean point estimates of C_max and AUC from Monte Carlo simulations allow the most sensitive analyte to be selected more accurately than the criterion based on the percentage of failed or successful BE studies, even for metabolites, which frequently show greater variability than the PD. Copyright © 2018 Elsevier B.V. All rights reserved.
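The abstract's Monte Carlo bioequivalence machinery can be illustrated without the authors' semi-physiological NONMEM model. The sketch below covers only the trial-level decision step (90% confidence interval of the test/reference geometric mean ratio within 0.80-1.25), assuming a simple paired log-AUC model; all parameter values and function names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def be_trial_passes(gmr, cv_w, n_subj, rng):
    """One simulated 2x2 crossover trial: returns True when the 90% CI of
    the test/reference geometric mean ratio lies within 0.80-1.25."""
    sd_log = np.sqrt(np.log(1.0 + cv_w**2))        # within-subject SD on log scale
    # per-subject log-difference (test - reference); sqrt(2) for two periods
    d = np.log(gmr) + sd_log * np.sqrt(2.0) * rng.standard_normal(n_subj)
    m, se = d.mean(), d.std(ddof=1) / np.sqrt(n_subj)
    # one-sided 95% t critical value, hard-coded for df = 23 (n_subj = 24)
    # to keep the sketch dependency-free; use scipy.stats.t.ppf in practice
    tcrit = 1.7139
    lo, hi = np.exp(m - tcrit * se), np.exp(m + tcrit * se)
    return (lo >= 0.80) and (hi <= 1.25)

def pass_rate(gmr, cv_w=0.25, n_subj=24, n_trials=2000):
    """Fraction of simulated trials declared bioequivalent."""
    return np.mean([be_trial_passes(gmr, cv_w, n_subj, rng)
                    for _ in range(n_trials)])

rate_equivalent = pass_rate(1.00)   # truly equivalent product
rate_deficient = pass_rate(0.70)    # 30% lower exposure; should fail
```

A sensitive analyte is one for which the pass rate collapses toward zero for a deficient product while staying high for an equivalent one; the paper ranks the parent drug and its metabolites by exactly this kind of discrimination, albeit through a full pharmacokinetic model.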

  15. Cascaded analysis of signal and noise propagation through a heterogeneous breast model.

    PubMed

    Mainprize, James G; Yaffe, Martin J

    2010-10-01

The detectability of lesions in radiographic images can be impaired by patterns caused by the surrounding anatomic structures. The presence of such patterns is often referred to as anatomic noise. Others have previously extended signal and noise propagation theory to include variable background structure as an additional noise term, and used it in simulations for analysis by human and ideal observers. Here, the analytic forms of the signal and noise transfer are derived to obtain an exact expression for any input random distribution and the "power law" filter used to generate the texture of the tissue distribution. A cascaded analysis of propagation through a heterogeneous model is derived for x-ray projection through simulated heterogeneous backgrounds. This is achieved by considering transmission through the breast as a correlated amplification point process. The analytic forms of the cascaded analysis were compared to monoenergetic Monte Carlo simulations of x-ray propagation through power law structured backgrounds. As expected, it was found that although the quantum noise power component scales linearly with the x-ray signal, the anatomic noise will scale with the square of the x-ray signal. There was good agreement between results obtained using analytic expressions for the noise power and those from Monte Carlo simulations for different background textures, random input functions, and x-ray fluence. Analytic equations for the signal and noise properties of heterogeneous backgrounds were derived. These may be used in direct analysis or as a tool to validate simulations in evaluating detectability.
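The central scaling claim of this abstract (quantum noise power linear in the x-ray signal, anatomic noise power quadratic) can be checked numerically. The fragment below is not the authors' cascaded model; it is a minimal numpy sketch assuming a simple multiplicative power-law texture and Poisson quantum noise, with all parameter values invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_law_background(n, beta, rng):
    """Filter white noise so its noise power spectrum falls as 1/f^beta,
    a common model for anatomic texture in breast imaging."""
    white = rng.standard_normal((n, n))
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    radial = np.hypot(fx, fy)
    radial[0, 0] = radial[0, 1]          # placeholder to avoid divide-by-zero
    filt = radial ** (-beta / 2.0)       # amplitude filter = sqrt(power filter)
    filt[0, 0] = 0.0                     # zero the DC term: zero-mean texture
    return np.real(np.fft.ifft2(np.fft.fft2(white) * filt))

bg = power_law_background(256, beta=3.0, rng=rng)
bg = bg / bg.std()                       # unit-variance texture

anat_ratio, quant_ratio = [], []
for q in (1000.0, 4000.0):               # mean fluence (quanta/pixel)
    signal = q * (1.0 + 0.05 * bg)       # texture modulates transmission
    image = rng.poisson(signal)          # add quantum (Poisson) noise
    anat_ratio.append(np.var(signal) / q**2)        # anatomic noise ~ q^2
    quant_ratio.append(np.var(image - signal) / q)  # quantum noise  ~ q
```

Both ratios stay (statistically) constant across the two fluence levels, consistent with the quadratic-versus-linear scaling derived analytically in the paper.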

  16. Nonparametric density estimation and optimal bandwidth selection for protein unfolding and unbinding data

    NASA Astrophysics Data System (ADS)

    Bura, E.; Zhmurov, A.; Barsegov, V.

    2009-01-01

Dynamic force spectroscopy and steered molecular simulations have become powerful tools for analyzing the mechanical properties of proteins, and the strength of protein-protein complexes and aggregates. Probability density functions of the unfolding forces and unfolding times for proteins, and rupture forces and bond lifetimes for protein-protein complexes, allow quantification of the forced unfolding and unbinding transitions, and mapping of the biomolecular free energy landscape. The inference of the unknown probability distribution functions from the experimental and simulated forced unfolding and unbinding data, as well as the assessment of analytically tractable models of protein unfolding and unbinding, requires the use of a bandwidth. The choice of this quantity is typically subjective as it draws heavily on the investigator's intuition and past experience. We describe several approaches for selecting the "optimal bandwidth" for nonparametric density estimators, such as the traditionally used histogram and the more advanced kernel density estimators. The performance of these methods is tested on unimodal and multimodal skewed, long-tailed data, as typically observed in force spectroscopy experiments and in molecular pulling simulations. The results of these studies can serve as a guideline for selecting the optimal bandwidth to resolve the underlying distributions from the forced unfolding and unbinding data for proteins.
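The bandwidth problem the abstract describes is easy to make concrete. The sketch below is not the authors' estimator suite; it shows only the classic Silverman rule-of-thumb bandwidth applied to a Gaussian kernel density estimate, on a synthetic bimodal, skewed "rupture force" sample whose parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bimodal, skewed sample (two Gumbel modes), qualitatively like
# forced-unbinding data; all parameters are invented for illustration.
forces = np.concatenate([rng.gumbel(100.0, 10.0, 500),
                         rng.gumbel(180.0, 15.0, 500)])

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel,
    using the robust min(std, IQR/1.349) spread estimate."""
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    sigma = min(x.std(ddof=1), iqr / 1.349)
    return 0.9 * sigma * x.size ** (-0.2)

def gaussian_kde(x, grid, h):
    """Evaluate a Gaussian kernel density estimate on `grid`."""
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (x.size * h * np.sqrt(2.0 * np.pi))

h = silverman_bandwidth(forces)
grid = np.linspace(forces.min() - 4 * h, forces.max() + 4 * h, 2000)
density = gaussian_kde(forces, grid, h)
area = density.sum() * (grid[1] - grid[0])   # a valid density integrates to ~1
n_modes = int(np.sum((density[1:-1] > density[:-2]) &
                     (density[1:-1] > density[2:])))
```

With well-separated modes the rule of thumb resolves both peaks; for the strongly skewed, overlapping distributions discussed in the paper, cross-validated bandwidth selectors become preferable precisely because a fixed rule like this can oversmooth.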

  17. Use of Simulation Technology in Dental Education.

    ERIC Educational Resources Information Center

    Buchanan, Judith Ann

    2001-01-01

    Discusses the impact of current simulation laboratories on dental education and reviews advanced technology simulation that has recently become available or is in the developmental stage. Addresses the abilities of advanced technology simulation, its advantages and disadvantages, and its potential to affect dental education. (EV)

  18. Hierarchical analytical and simulation modelling of human-machine systems with interference

    NASA Astrophysics Data System (ADS)

    Braginsky, M. Ya; Tarakanov, D. V.; Tsapko, S. G.; Tsapko, I. V.; Baglaeva, E. A.

    2017-01-01

The article considers the principles of building the analytical and simulation model of the human operator and the industrial control system hardware and software. E-networks, as an extension of Petri nets, are used as the mathematical apparatus. This approach allows simulating complex parallel distributed processes in human-machine systems. The structural and hierarchical approach is used as the building method for the mathematical model of the human operator. The upper level of the human operator model is represented by a logical dynamic model of decision making based on E-networks. The lower level reflects the psychophysiological characteristics of the human operator.

  19. Evaluation of air traffic control models and simulations.

    DOT National Transportation Integrated Search

    1971-06-01

    Approximately two hundred reports were identified as describing Air Traffic Control (ATC) modeling and simulation efforts. Of these, about ninety analytical and simulation models dealing with virtually all aspects of ATC were formally evaluated. The ...

  20. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  1. An Evaluative Review of Simulated Dynamic Smart 3d Objects

    NASA Astrophysics Data System (ADS)

    Romeijn, H.; Sheth, F.; Pettit, C. J.

    2012-07-01

Three-dimensional (3D) modelling of plants can be an asset for creating agriculture-based visualisation products. The continuum of 3D plant models ranges from static to dynamic objects, also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages to applications in agricultural research, particularly in simulating plant behaviour and the influences of external environmental factors. Approaches to 3D plant object visualisation range from billboarded photographic images of plants to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model physical reactions of plants to external factors and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of plant-based object simulation programs currently available, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, the features and use of these programs and the possible opportunities in deploying these for creating smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that will attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.

  2. Analytical and numerical analysis of charge carriers extracted by linearly increasing voltage in a metal-insulator-semiconductor structure relevant to bulk heterojunction organic solar cells

    NASA Astrophysics Data System (ADS)

    Yumnam, Nivedita; Hirwa, Hippolyte; Wagner, Veit

    2017-12-01

Analysis of charge extraction by linearly increasing voltage is conducted on metal-insulator-semiconductor capacitors in a structure relevant to organic solar cells. For this analysis, an analytical model is developed and used to determine the conductivity of the active layer. Numerical simulations of the transient current were performed as a way to confirm the applicability of our analytical model and other analytical models existing in the literature. Our analysis is applied to poly(3-hexylthiophene) (P3HT):phenyl-C61-butyric acid methyl ester (PCBM), which allows the electron and hole mobilities to be determined independently. A combination of experimental data analysis and numerical simulations reveals the effect of trap states on the transient current, a contribution that is crucial for data analysis.

  3. Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources

    NASA Astrophysics Data System (ADS)

    Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi

    2017-01-01

Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random injection powers (like wind power) makes PPF difficult to calculate. Monte Carlo simulation (MCS) and analytical methods are two commonly used approaches to solve PPF. MCS has high accuracy but is very time consuming. An analytical method like the cumulants method (CM) has high computing efficiency, but calculating the cumulants is inconvenient when the wind power output does not obey any typical distribution, especially when correlated wind sources are considered. In this paper, an Improved Monte Carlo simulation method (IMCS) is proposed. The joint empirical distribution is applied to model different wind power outputs. This method combines the advantages of both MCS and the analytical method. It not only has high computing efficiency, but also provides solutions with enough accuracy, which is very suitable for on-line analysis.
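The abstract's key idea, drawing correlated wind-power samples from a joint empirical distribution rather than from independent marginals, can be sketched in a few lines. This is not the IMCS implementation; the "historical" records and the two-farm setup below are synthetic and invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "historical" outputs of two wind farms driven by a shared wind
# resource, hence strongly correlated (Weibull wind is a common assumption).
n_hist = 5000
common = rng.weibull(2.0, n_hist)
hist = np.column_stack([
    np.clip(common + 0.1 * rng.standard_normal(n_hist), 0.0, None),
    np.clip(0.8 * common + 0.1 * rng.standard_normal(n_hist), 0.0, None),
])

def sample_joint_empirical(hist, n, rng):
    """Draw whole historical records at once: the marginals AND the
    cross-farm correlation are both preserved (joint empirical dist.)."""
    idx = rng.integers(0, hist.shape[0], n)
    return hist[idx]

def sample_independent(hist, n, rng):
    """Resample each farm's column independently: marginals are kept
    but the cross-farm correlation is destroyed."""
    cols = [col[rng.integers(0, hist.shape[0], n)] for col in hist.T]
    return np.column_stack(cols)

joint = sample_joint_empirical(hist, 20000, rng)
indep = sample_independent(hist, 20000, rng)

rho_hist = np.corrcoef(hist.T)[0, 1]
rho_joint = np.corrcoef(joint.T)[0, 1]    # ~ rho_hist: correlation preserved
rho_indep = np.corrcoef(indep.T)[0, 1]    # ~ 0: correlation lost
```

Feeding the jointly drawn samples into deterministic power-flow solves then yields output distributions that honor the wind correlation, which is exactly where plain marginal sampling and cumulant-based methods struggle.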

  4. Research study on stabilization and control: Modern sampled-data control theory. Continuous and discrete describing function analysis of the LST system. [with emphasis on the control moment gyroscope control loop

    NASA Technical Reports Server (NTRS)

    Kuo, B. C.; Singh, G.

    1974-01-01

    The dynamics of the Large Space Telescope (LST) control system were studied in order to arrive at a simplified model for computer simulation without loss of accuracy. The frictional nonlinearity of the Control Moment Gyroscope (CMG) Control Loop was analyzed in a model to obtain data for the following: (1) a continuous describing function for the gimbal friction nonlinearity; (2) a describing function of the CMG nonlinearity using an analytical torque equation; and (3) the discrete describing function and function plots for CMG functional linearity. Preliminary computer simulations are shown for the simplified LST system, first without, and then with analytical torque expressions. Transfer functions of the sampled-data LST system are also described. A final computer simulation is presented which uses elements of the simplified sampled-data LST system with analytical CMG frictional torque expressions.

5. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    PubMed

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient’s pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (“SBM”) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or “QCP”), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient’s physiologic state. This platform allows the use of predictive analytic techniques to identify early changes in a patient’s condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.0083 Hz), with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to diverge from the simulated biosignals in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe that would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
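SBM itself is a proprietary method, but its core idea, reconstructing each observed state from a memory of "healthy" reference states and alerting when the reconstruction residual grows, can be sketched generically. Everything below (the kernel form, the two vital-sign signals, the drift, and all parameter values) is invented for illustration and is not the QCP experiment.

```python
import numpy as np

rng = np.random.default_rng(3)

def similarity_estimate(memory, x, h):
    """Reconstruct state x as a similarity-weighted blend of reference
    states (rows of `memory`); a generic stand-in for SBM."""
    w = np.exp(-np.sum((memory - x) ** 2, axis=1) / (2.0 * h**2))
    return (w / w.sum()) @ memory

# Reference "healthy" physiology: heart rate and mean arterial pressure
# oscillating around a set point (units: bpm, mmHg; values invented).
t = np.linspace(0.0, 10.0, 400)
healthy = np.column_stack([70.0 + 3.0 * np.sin(2 * np.pi * t),
                           90.0 + 2.0 * np.cos(2 * np.pi * t)])
memory = healthy + 0.5 * rng.standard_normal(healthy.shape)

# Monitored sequence: normal at first, then a slow tamponade-like drift
# (HR creeps up, MAP creeps down) while both stay near 'normal' ranges.
drift = np.clip(np.linspace(-5.0, 5.0, 400), 0.0, None)
monitored = healthy + np.column_stack([2.0 * drift, -1.5 * drift])

residual = np.array([np.linalg.norm(x - similarity_estimate(memory, x, h=2.0))
                     for x in monitored])
baseline = residual[:150].mean()    # before the drift begins: small residual
alerting = residual[-50:].mean()    # drift established: residual inflated
```

The residual inflates well before either signal crosses a conventional alarm threshold, which is the behavior the abstract reports for SBM against fixed-threshold monitoring.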

  6. GENETIC-BASED ANALYTICAL METHODS FOR BACTERIA AND FUNGI

    EPA Science Inventory

In the past two decades, advances in high-throughput sequencing technologies have led to a veritable explosion in the generation of nucleic acid sequence information (1). While these advances are illustrated most prominently by the successful sequencing of the human genome, they...

  7. Determination of the Performance Parameters of a Spectrophotometer: An Advanced Experiment.

    ERIC Educational Resources Information Center

    Cope, Virgil W.

    1978-01-01

Describes an advanced analytical chemistry laboratory experiment developed for the determination of the performance parameters of a spectrophotometer. Among the parameters are the baseline linearity with wavelength, wavelength accuracy and repeatability, stray light, noise level and pen response time. (HM)

  8. Analytical Chemistry Division annual progress report for period ending December 31, 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

The Analytical Chemistry Division of Oak Ridge National Laboratory (ORNL) is a large and diversified organization. As such, it serves a multitude of functions for a clientele that exists both in and outside of ORNL. These functions fall into the following general categories: (1) Analytical Research, Development, and Implementation. The division maintains a program to conceptualize, investigate, develop, assess, improve, and implement advanced technology for chemical and physicochemical measurements. Emphasis is on problems and needs identified with ORNL and Department of Energy (DOE) programs; however, attention is also given to advancing the analytical sciences themselves. (2) Programmatic Research, Development, and Utilization. The division carries out a wide variety of chemical work that typically involves analytical research and/or development plus the utilization of analytical capabilities to expedite programmatic interests. (3) Technical Support. The division performs chemical and physicochemical analyses of virtually all types. The Analytical Chemistry Division is organized into four major sections, each of which may carry out any of the three types of work mentioned above. Chapters 1 through 4 of this report highlight progress within the four sections during the period January 1 to December 31, 1988. A brief discussion of the division's role in an especially important environmental program is given in Chapter 5. Information about quality assurance, safety, and training programs is presented in Chapter 6, along with a tabulation of analyses rendered. Publications, oral presentations, professional activities, educational programs, and seminars are cited in Chapters 7 and 8.

  9. Charging and Heating Dynamics of Nanoparticles in Nonthermal Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kortshagen, Uwe R.

    2014-08-15

The focus of this award was to understand the interactions of nanometer-sized particles with ionized gases, also called plasmas. Plasmas are widely used in the fabrication of electronic circuits such as microprocessors and memory devices, in plasma display panels, as well as in medical applications. Recently, these ionized gases are finding applications in the synthesis of advanced nanomaterials with novel properties, which are based on nanometer-sized particulate (nanoparticle) building blocks. As these nanoparticles grow in the plasma environment, they interact with the plasma species such as electrons and ions, which critically determines the nanoparticle properties. The University of Minnesota researchers conducting this project performed numerical simulations and developed analytical models that described the interaction of plasma-bound nanoparticles with the plasma ions. The plasma ions bombard the nanoparticle surface with substantial energy, which can result in the rearrangement of the nanoparticles’ atoms, giving them often desirable structures at the atomic scale. Being able to tune the ion energies allows control of the properties of the nanoparticles produced, in order to tailor their attributes for certain applications. For instance, when used in high efficiency light emitting devices, nanoparticles produced under high fluxes of highly energetic ions may show superior light emission to particles produced under low fluxes of less energetic ions. The analytical models developed by the University of Minnesota researchers enable the research community to easily determine the energy of ions bombarding the nanoparticles. The researchers extensively tested the validity of the analytical models by comparing them to sophisticated computer simulations based on stochastic particle modeling, also called Monte Carlo modeling, which simulated the motion of hundreds of thousands of ions and their interaction with the nanoparticle surfaces.
Beyond the scientific intellectual merits, this award had significant broader impacts. Two graduate students received their doctoral degrees and both have joined a U.S. manufacturer of plasma-based semiconductor processing equipment. Four undergraduate students participated in research conducted under this grant and gained valuable hands-on laboratory experience. A middle school science teacher observed research conducted under this grant and developed three new course modules that introduce middle school students to the concepts of nanometer scale, the atomic structure of matter, and the composition of matter of different chemical elements.

  10. Application of the correlation constrained multivariate curve resolution alternating least-squares method for analyte quantitation in the presence of unexpected interferences using first-order instrumental data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà

    2010-03-01

Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. For both simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in the calibration samples, the proposed multivariate calibration approach including the correlation constraint achieves the so-called second-order advantage for the analyte of interest, a property normally associated with more complex, information-richer higher-order instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.

  11. Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap Progress Review

    NASA Technical Reports Server (NTRS)

    Antonsson, Erik; Gombosi, Tamas

    2005-01-01

Contents include the following: NASA capability roadmap activity. Advanced modeling, simulation, and analysis overview. Scientific modeling and simulation. Operations modeling. Multi-spectral sensing (UV-gamma). System integration. M and S Environments and Infrastructure.

  12. Advanced methods of structural and trajectory analysis for transport aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.

    1995-01-01

    This report summarizes the efforts in two areas: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of trajectory optimization. The majority of the effort was spent in the structural weight area. A draft of 'Analytical Fuselage and Wing Weight Estimation of Transport Aircraft', resulting from this research, is included as an appendix.

  13. Recent Advances in Bioprinting and Applications for Biosensing

    PubMed Central

    Dias, Andrew D.; Kingsley, David M.; Corr, David T.

    2014-01-01

    Future biosensing applications will require high performance, including real-time monitoring of physiological events, incorporation of biosensors into feedback-based devices, detection of toxins, and advanced diagnostics. Such functionality will necessitate biosensors with increased sensitivity, specificity, and throughput, as well as the ability to simultaneously detect multiple analytes. While these demands have yet to be fully realized, recent advances in biofabrication may allow sensors to achieve the high spatial sensitivity required, and bring us closer to achieving devices with these capabilities. To this end, we review recent advances in biofabrication techniques that may enable cutting-edge biosensors. In particular, we focus on bioprinting techniques (e.g., microcontact printing, inkjet printing, and laser direct-write) that may prove pivotal to biosensor fabrication and scaling. Recent biosensors have employed these fabrication techniques with success, and further development may enable higher performance, including multiplexing multiple analytes or cell types within a single biosensor. We also review recent advances in 3D bioprinting, and explore their potential to create biosensors with live cells encapsulated in 3D microenvironments. Such advances in biofabrication will expand biosensor utility and availability, with impact realized in many interdisciplinary fields, as well as in the clinic. PMID:25587413

  14. A Review of Numerical Simulation and Analytical Modeling for Medical Devices Safety in MRI

    PubMed Central

    Kabil, J.; Belguerras, L.; Trattnig, S.; Pasquier, C.; Missoffe, A.

    2016-01-01

Objectives: To review past and present challenges and ongoing trends in numerical simulation for MRI (magnetic resonance imaging) safety evaluation of medical devices. Methods: A wide literature review of numerical and analytical simulation of simple and complex medical devices in MRI electromagnetic fields shows the evolution over time and a growing concern for MRI safety over the years. Major issues and achievements are described, as well as current trends and perspectives in this research field. Results: Numerical simulation of medical devices is constantly evolving, supported by now well-established calculation methods. Implants with simple geometry can often be simulated in a computational human model, but one remaining issue is the experimental validation of these human models. A major concern is assessing RF heating of implants too complex to be simulated traditionally, such as pacemaker leads. Thus, ongoing research focuses on alternative hybrid methods, both numerical and experimental, such as the transfer function method. For the static field and gradient fields, analytical models can be used for dimensioning simple implant shapes, but they are limited for complex geometries that cannot be studied with simplifying assumptions. Conclusions: Numerical simulation is an essential tool for MRI safety testing of medical devices. The main issues remain the accuracy of simulations compared with real life and the study of complex devices; but as the research field is constantly evolving, some promising ideas are now under investigation to take up these challenges. PMID:27830244

  15. Adjustable internal structure for reconstructing gradient index profile of crystalline lens.

    PubMed

    Bahrami, Mehdi; Goncharov, Alexander V; Pierscionek, Barbara K

    2014-03-01

    Employing advanced technologies in studying the crystalline lens of the eye has improved our understanding of the refractive index gradient of the lens. Reconstructing and studying such a complex structure requires models with adaptable internal geometry that can be altered to simulate geometrical and optical changes of the lens with aging. In this Letter, we introduce an optically well-defined, geometrical structure for modeling the gradient refractive index profile of the crystalline lens with the advantage of an adjustable internal structure that is not available with existing models. The refractive index profile assigned to this rotationally symmetric geometry is calculated numerically, yet it is shown that this does not limit the model. The study provides a basis for developing lens models with sophisticated external and internal structures without the need for analytical solutions to calculate refractive index profiles.

  16. Ionization waves of arbitrary velocity driven by a flying focus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palastro, J. P.; Turnbull, D.; Bahk, S. -W.

A chirped laser pulse focused by a chromatic lens exhibits a dynamic, or flying, focus in which the trajectory of the peak intensity decouples from the group velocity. In a medium, the flying focus can trigger an ionization front that follows this trajectory. By adjusting the chirp, the ionization front can be made to travel at an arbitrary velocity along the optical axis. For this study, we present analytical calculations and simulations describing the propagation of the flying focus pulse, the self-similar form of its intensity profile, and ionization wave formation. The ability to control the speed of the ionization wave and, in conjunction, mitigate plasma refraction has the potential to advance several laser-based applications, including Raman amplification, photon acceleration, high-order-harmonic generation, and THz generation.

  17. Airborne Forward-Looking Interferometer for the Detection of Terminal-Area Hazards

    NASA Technical Reports Server (NTRS)

    West, Leanne; Gimmestad, Gary; Lane, Sarah; Smith, Bill L.; Kireev, Stanislav; Daniels, Taumi S.; Cornman, Larry; Sharman, Bob

    2014-01-01

The Forward Looking Interferometer (FLI) program was a multi-year cooperative research effort to investigate the use of imaging radiometers with high spectral resolution, using both modeling/simulation and field experiments, along with sophisticated data analysis techniques that were originally developed for analysis of data from space-based radiometers and hyperspectral imagers. This investigation advanced the state of knowledge in this technical area, and the FLI program developed a greatly improved understanding of the radiometric signal strength of aviation hazards in a wide range of scenarios, in addition to a much better understanding of the real-world functionality requirements for hazard detection instruments. The project conducted field experiments on three hazards (turbulence, runway conditions, and wake vortices) and analytical studies on several others, including volcanic ash, reduced-visibility conditions, and in-flight icing conditions.

  18. Ionization waves of arbitrary velocity driven by a flying focus

    DOE PAGES

    Palastro, J. P.; Turnbull, D.; Bahk, S. -W.; ...

    2018-03-01

A chirped laser pulse focused by a chromatic lens exhibits a dynamic, or flying, focus in which the trajectory of the peak intensity decouples from the group velocity. In a medium, the flying focus can trigger an ionization front that follows this trajectory. By adjusting the chirp, the ionization front can be made to travel at an arbitrary velocity along the optical axis. For this study, we present analytical calculations and simulations describing the propagation of the flying focus pulse, the self-similar form of its intensity profile, and ionization wave formation. The ability to control the speed of the ionization wave and, in conjunction, mitigate plasma refraction has the potential to advance several laser-based applications, including Raman amplification, photon acceleration, high-order-harmonic generation, and THz generation.

  19. Spacing distribution functions for 1D point island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto

    2011-03-01

We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_xy^n(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_xy^n(x, y) provides an excellent description of the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).
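The point island model described in this record can be illustrated with a toy Monte Carlo simulation. The sketch below is not the authors' analytical model: it is a minimal, hypothetical 1D growth simulation in which monomers land on a periodic lattice, random-walk, nucleate a point island when two meet, and attach irreversibly when they step onto an island; the gaps between islands are then collected. All parameters (lattice size, flux, diffusion rate) are illustrative assumptions.

```python
import random

def simulate_point_islands(L=1000, depositions=200, moves_per_dep=50, seed=1):
    """Toy 1D point-island growth with irreversible attachment.

    Simplified dynamics: monomers land at random sites and random-walk;
    two monomers meeting nucleate a point island, and a monomer stepping
    onto an island is captured.  Returns sorted island positions and the
    (periodic) gaps between consecutive islands.
    """
    random.seed(seed)
    islands = set()
    monomers = []                       # positions of free monomers
    for _ in range(depositions):
        monomers.append(random.randrange(L))
        for _ in range(moves_per_dep):
            if not monomers:
                break
            i = random.randrange(len(monomers))
            monomers[i] = (monomers[i] + random.choice((-1, 1))) % L
            pos = monomers[i]
            if pos in islands:          # capture by an existing island
                monomers.pop(i)
            elif monomers.count(pos) > 1:   # two monomers meet: nucleate
                islands.add(pos)
                monomers = [m for m in monomers if m != pos]
    positions = sorted(islands)
    gaps = ([(positions[(k + 1) % len(positions)] - positions[k]) % L
             for k in range(len(positions))] if positions else [])
    return positions, gaps

positions, gaps = simulate_point_islands()
```

A histogram of `gaps` from such a run is the kind of empirical gap distribution the record's analytical p_xy^n(x, y) model is fitted against.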

  20. INTEGRATING DATA ANALYTICS AND SIMULATION METHODS TO SUPPORT MANUFACTURING DECISION MAKING

    PubMed Central

    Kibira, Deogratias; Hatim, Qais; Kumara, Soundar; Shao, Guodong

    2017-01-01

Modern manufacturing systems are installed with smart devices such as sensors that monitor system performance and collect data to manage uncertainties in their operations. However, multiple parameters and variables affect system performance, making it impossible for a human to make informed decisions without systematic methodologies and tools. Further, the large volume and variety of streaming data collected are beyond simulation analysis alone; simulation models are run with well-prepared data. Novel approaches, combining different methods, are needed to use this data for making guided decisions. This paper proposes a methodology whereby the parameters that most affect system performance are extracted from the data using data analytics methods. These parameters are used to develop scenarios for simulation inputs; system optimizations are performed on simulation data outputs. A case study of a machine shop demonstrates the proposed methodology. This paper also reviews candidate standards for data collection, simulation, and systems interfaces. PMID:28690363
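The "extract the parameters that most affect performance" step can be sketched in a few lines. This is a hypothetical stand-in for the paper's analytics stage, not its actual method: it ranks candidate parameters by the absolute Pearson correlation of each with a performance metric, on synthetic shop-floor data where the names (`speed`, `temp`, `humidity`) and the generating model are invented for illustration.

```python
import random

def rank_parameters(samples, metric, names):
    """Rank input parameters by |Pearson correlation| with a performance
    metric -- a simple proxy for selecting which parameters to vary in
    simulation scenarios."""
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0
    scores = {name: abs(pearson([row[j] for row in samples], metric))
              for j, name in enumerate(names)}
    return sorted(scores, key=scores.get, reverse=True)

# Synthetic sensor data: throughput depends strongly on 'speed',
# weakly on 'temp', and not at all on 'humidity'.
random.seed(0)
data = [[random.random(), random.random(), random.random()] for _ in range(500)]
throughput = [10 * row[0] + 2 * row[1] + random.gauss(0, 0.1) for row in data]
ranking = rank_parameters(data, throughput, ["speed", "temp", "humidity"])
```

The top-ranked names in `ranking` would then define the scenario axes for the simulation runs.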

  1. Applying advanced analytics to guide emergency department operational decisions: A proof-of-concept study examining the effects of boarding.

    PubMed

    Andrew Taylor, R; Venkatesh, Arjun; Parwani, Vivek; Chekijian, Sharon; Shapiro, Marc; Oh, Andrew; Harriman, David; Tarabar, Asim; Ulrich, Andrew

    2018-01-04

Emergency Department (ED) leaders are increasingly confronted with large amounts of data with the potential to inform and guide operational decisions, and routine use of advanced analytic methods may provide additional insights. The objective was to examine the practical application of available advanced analytic methods to guide operational decision making around patient boarding. We performed a retrospective analysis of the effect of boarding on ED operational metrics from a single site between 1/2015 and 1/2017. Time series were visualized through decompositional techniques accounting for seasonal trends, to determine the effect of boarding on ED performance metrics and to determine the impact of boarding "shocks" to the system on operational metrics over several days. There were 226,461 visits, and the mean (IQR) number of visits per day was 273 (258-291). Decomposition of the boarding-count time series illustrated an upward trend in the last 2-3 quarters as well as clear seasonal components. All performance metrics were significantly affected (p<0.05) by boarding count, except for overall Press Ganey scores (p<0.65). For every additional boarder, overall length of stay (LOS) increased by 1.55 min (0.68, 1.50); smaller effects were seen for waiting-room LOS and treat-and-release LOS. The impulse responses indicate that boarding shocks produce changes in the performance metrics within the first day that fade out after 4-5 days. In this study of the use of advanced analytics in daily ED operations, time-series analysis provided multiple useful insights into boarding and its impact on performance metrics. Copyright © 2018. Published by Elsevier Inc.
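The seasonal decomposition technique mentioned in this record can be sketched with a classical additive decomposition: a centered moving-average trend, a period-averaged seasonal component, and a remainder. This is a simplified illustration (odd periods only; standard treatments use a 2xm moving average for even periods), applied here to invented daily counts with a weekly pattern, not to the study's data.

```python
def decompose(series, period):
    """Classical additive decomposition: centered moving-average trend
    (odd period only), period-averaged seasonal component, remainder."""
    n, half = len(series), period // 2
    trend = [None] * n
    for i in range(half, n - half):
        trend[i] = sum(series[i - half:i + half + 1]) / period
    detrended = [series[i] - trend[i] for i in range(n) if trend[i] is not None]
    seasonal_means = []
    for phase in range(period):
        # detrended index k corresponds to original index i = k + half
        vals = [v for k, v in enumerate(detrended) if (k + half) % period == phase]
        seasonal_means.append(sum(vals) / len(vals) if vals else 0.0)
    centre = sum(seasonal_means) / period
    seasonal = [seasonal_means[i % period] - centre for i in range(n)]
    resid = [series[i] - trend[i] - seasonal[i] if trend[i] is not None else None
             for i in range(n)]
    return trend, seasonal, resid

# Synthetic daily boarding counts: flat level 10 plus a weekly pattern.
weekly = [3, -1, 0, 2, -2, -1, -1]          # sums to zero
counts = [10 + weekly[d % 7] for d in range(70)]
trend, seasonal, resid = decompose(counts, period=7)
```

On real data the recovered `trend` would expose the upward drift and `seasonal` the weekly cycle the abstract describes.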

  2. Fast solver for large scale eddy current non-destructive evaluation problems

    NASA Astrophysics Data System (ADS)

    Lei, Naiguang

Eddy current testing plays a very important role in the non-destructive evaluation of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material property or defects in the test specimen, the induced eddy current paths are perturbed, and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of test specimens and inspection environments, theoretical simulation models are extremely valuable for studying basic field/flaw interactions and obtaining a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for training and validating automated defect detection systems, since they generate defect signatures that are expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, such solutions are generally not obtainable, largely due to complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, because large-scale problems are extremely time-consuming, accelerated or fast solvers are needed to enhance numerical models. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. The accuracy of this model is validated via comparison with experimental measurements of steam generator tube wall defects. These simulations, which generate two-dimensional raster-scan data, typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems, with a GPU-based implementation, is also investigated in this research to reduce the computational time.

  3. The Effects of Time Advance Mechanism on Simple Agent Behaviors in Combat Simulations

    DTIC Science & Technology

    2011-12-01

modeling packages that illustrate the differences between discrete-time simulation (DTS) and discrete-event simulation (DES) methodologies. Many combat... DES models, often referred to as "next-event" (Law and Kelton 2000), or discrete-time simulation (DTS), commonly referred to as "time-step." DTS... discrete-time simulation (DTS) and discrete-event simulation (DES) methodologies. Many combat models use DTS as their simulation time advance mechanism
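The DTS-versus-DES distinction this record studies can be shown in a few lines: a time-step simulator does work on every clock tick regardless of activity, while a next-event simulator jumps straight to the next scheduled event. The sketch below is a generic, hypothetical illustration of the two advance mechanisms (the arrival times and horizon are invented), not code from any combat model.

```python
import heapq

def run_dts(arrivals, horizon, dt=1.0):
    """Discrete-time ('time-step') advance: scan every tick, serving any
    arrival that falls inside the current step."""
    t, served, ticks = 0.0, 0, 0
    pending = sorted(arrivals)
    while t < horizon:
        ticks += 1
        while pending and pending[0] < t + dt:
            pending.pop(0)
            served += 1
        t += dt
    return served, ticks

def run_des(arrivals, horizon):
    """Discrete-event ('next-event') advance: jump directly to the next
    scheduled event, so work scales with events, not clock ticks."""
    events = list(arrivals)
    heapq.heapify(events)
    served, steps = 0, 0
    while events and events[0] < horizon:
        heapq.heappop(events)
        served += 1
        steps += 1
    return served, steps

arrivals = [0.5, 3.2, 97.4]
dts_served, dts_ticks = run_dts(arrivals, horizon=100.0)
des_served, des_steps = run_des(arrivals, horizon=100.0)
```

Both mechanisms serve the same three arrivals, but the DTS run spins through 100 ticks while the DES run takes only 3 steps, which is the efficiency trade-off the snippet above alludes to.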

  4. An overview of city analytics

    PubMed Central

    Higham, Desmond J.; Batty, Michael; Bettencourt, Luís M. A.; Greetham, Danica Vukadinović; Grindrod, Peter

    2017-01-01

    We introduce the 14 articles in the Royal Society Open Science themed issue on City Analytics. To provide a high-level, strategic, overview, we summarize the topics addressed and the analytical tools deployed. We then give a more detailed account of the individual contributions. Our overall aims are (i) to highlight exciting advances in this emerging, interdisciplinary field, (ii) to encourage further activity and (iii) to emphasize the variety of new, public-domain, datasets that are available to researchers. PMID:28386454

  5. Simulator design for advanced ISDN satellite design and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerald R.

    1992-01-01

This simulation design task completion report documents the simulation techniques associated with the network models of both the Interim Service ISDN (integrated services digital network) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures. The ISIS network model design represents satellite systems like the Advanced Communication Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) program, moves all control and switching functions on board the next-generation ISDN communication satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from the simulation of ISIS and FSIS engineering software models for their major subsystems. Discrete-event simulation experiments will be performed with these models using various traffic scenarios, design parameters, and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.

  6. The Exponential Expansion of Simulation: How Simulation has Grown as a Research Tool

    DTIC Science & Technology

    2012-09-01

    exponential growth of computing power. Although other analytic approaches also benefit from this trend, keyword searches of several scholarly search ... engines reveal that the reliance on simulation is increasing more rapidly. A descriptive analysis paints a compelling picture: simulation is frequently

  7. Conceptual design study for an advanced cab and visual system, volume 2

    NASA Technical Reports Server (NTRS)

    Rue, R. J.; Cyrus, M. L.; Garnett, T. A.; Nachbor, J. W.; Seery, J. A.; Starr, R. L.

    1980-01-01

    The performance, design, construction and testing requirements are defined for developing an advanced cab and visual system. The rotorcraft system integration simulator is composed of the advanced cab and visual system and the rotorcraft system motion generator, and is part of an existing simulation facility. User's applications for the simulator include rotorcraft design development, product improvement, threat assessment, and accident investigation.

  8. Analytical bond order potential for simulations of BeO 1D and 2D nanostructures and plasma-surface interactions

    NASA Astrophysics Data System (ADS)

    Byggmästar, J.; Hodille, E. A.; Ferro, Y.; Nordlund, K.

    2018-04-01

    An analytical interatomic bond order potential for the Be-O system is presented. The potential is fitted and compared to a large database of bulk BeO and point defect properties obtained using density functional theory. Its main applications include simulations of plasma-surface interactions involving oxygen or oxide layers on beryllium, as well as simulations of BeO nanotubes and nanosheets. We apply the potential in a study of oxygen irradiation of Be surfaces, and observe the early stages of an oxide layer forming on the Be surface. Predicted thermal and elastic properties of BeO nanotubes and nanosheets are simulated and compared with published ab initio data.

  9. Polybrominated Diphenyl Ethers in Dryer Lint: An Advanced Analysis Laboratory

    ERIC Educational Resources Information Center

    Thompson, Robert Q.

    2008-01-01

An advanced analytical chemistry laboratory experiment is described that involves environmental analysis and gas chromatography-mass spectrometry. Students analyze lint from clothes dryers for traces of flame-retardant chemicals, polybrominated diphenyl ethers (PBDEs), compounds that have received much attention recently. In a typical experiment, ng/g…

  10. The Aggregate Exposure Pathway (AEP): A conceptual framework for advancing exposure science research and applications

    EPA Science Inventory

    Historically, risk assessment has relied upon toxicological data to obtain hazard-based reference levels, which are subsequently compared to exposure estimates to determine whether an unacceptable risk to public health may exist. Recent advances in analytical methods, biomarker ...

  11. Analytical Ultrasonics in Materials Research and Testing

    NASA Technical Reports Server (NTRS)

    Vary, A.

    1986-01-01

Research results in analytical ultrasonics for characterizing structural materials from metals and ceramics to composites are presented. General topics covered by the conference included: status and advances in analytical ultrasonics for characterizing material microstructures and mechanical properties; status and prospects for ultrasonic measurements of microdamage, degradation, and underlying morphological factors; status and problems in precision measurements of frequency-dependent velocity and attenuation for materials analysis; procedures and requirements for automated, digital signal acquisition, processing, analysis, and interpretation; incentives for analytical ultrasonics in materials research and materials processing, testing, and inspection; and examples of progress in ultrasonics for interrelating microstructure, mechanical properties, and dynamic response.

  12. Bridging Numerical and Analytical Models of Transient Travel Time Distributions: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high-frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity, at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every time and at every location in the domain, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches requiring calibration of such functions. Steady-state results reveal that the assumption of a random age sampling scheme may hold only in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is, however, violated when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times. 
We further found that a larger (smaller) magnitude of effective precipitation shifts the scale of the TTD towards younger (older) travel times, while the shape of the TTD remains unchanged. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTDs to study their assumptions and limitations, providing physical inferences for empirical parameters.
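The exponential TTD that this record associates with random age sampling in a well-mixed, saturated store can be illustrated with inverse-CDF sampling. This is a generic, hypothetical sketch of the benchmark case (the volume and flow values are invented), not output from ParFlow or SLIM: under random sampling, travel times are exponential with mean equal to the turnover time V/Q.

```python
import math
import random

def sample_exponential_ttd(volume, flow, n, seed=42):
    """Sample travel times from the exponential TTD implied by random
    age sampling in a well-mixed store: mean turnover time tau = V/Q."""
    random.seed(seed)
    tau = volume / flow
    # Inverse-CDF sampling: T = -tau * ln(1 - U), U ~ Uniform[0, 1)
    return [-tau * math.log(1.0 - random.random()) for _ in range(n)]

# Hypothetical store: V = 1000 volume units, Q = 10 per time unit -> tau = 100.
ages = sample_exponential_ttd(volume=1000.0, flow=10.0, n=50_000)
mean_age = sum(ages) / len(ages)
young_fraction = sum(1 for a in ages if a < 100.0) / len(ages)
```

Comparing particle-tracked age distributions against this exponential reference is one way to test the random-sampling assumption the abstract discusses; here the sample mean recovers tau and roughly 1 - 1/e of the water is younger than tau.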

  13. A fast analytical undulator model for realistic high-energy FEL simulations

    NASA Astrophysics Data System (ADS)

    Tatchyn, R.; Cremer, T.

    1997-02-01

    A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.

  14. Let's Talk Learning Analytics: A Framework for Implementation in Relation to Student Retention

    ERIC Educational Resources Information Center

    West, Deborah; Heath, David; Huijser, Henk

    2016-01-01

    This paper presents a dialogical tool for the advancement of learning analytics implementation for student retention in Higher Education institutions. The framework was developed as an outcome of a project commissioned and funded by the Australian Government's "Office for Learning and Teaching". The project took a mixed-method approach…

  15. Learning Analytics in Higher Education Development: A Roadmap

    ERIC Educational Resources Information Center

    Adejo, Olugbenga; Connolly, Thomas

    2017-01-01

The increase in education data and advances in technology are bringing about enhanced teaching and learning methodologies. The emerging field of Learning Analytics (LA) continues to seek ways to improve the different methods of gathering, analysing, managing and presenting learners' data with the sole aim of using it to improve the student learning…

  16. Learning Analytics as a Counterpart to Surveys of Student Experience

    ERIC Educational Resources Information Center

    Borden, Victor M. H.; Coates, Hamish

    2017-01-01

    Analytics derived from the student learning environment provide new insights into the collegiate experience; they can be used as a supplement to or, to some extent, in place of traditional surveys. To serve this purpose, however, greater attention must be paid to conceptual frameworks and to advancing institutional systems, activating new…

  17. An Attenuated Total Reflectance Sensor for Copper: An Experiment for Analytical or Physical Chemistry

    ERIC Educational Resources Information Center

    Shtoyko, Tanya; Zudans, Imants; Seliskar, Carl J.; Heineman, William R.; Richardson, John N.

    2004-01-01

A sensor experiment that can be used in an advanced undergraduate laboratory course in physical or analytical chemistry is described, along with concepts such as the demonstration of chemical sensing, preparation of thin films on a substrate, microtitration, optical determination of complex-ion stoichiometry, and the isosbestic point. It is seen…

  18. Juicing the Juice: A Laboratory-Based Case Study for an Instrumental Analytical Chemistry Course

    ERIC Educational Resources Information Center

    Schaber, Peter M.; Dinan, Frank J.; St. Phillips, Michael; Larson, Renee; Pines, Harvey A.; Larkin, Judith E.

    2011-01-01

    A young, inexperienced Food and Drug Administration (FDA) chemist is asked to distinguish between authentic fresh orange juice and suspected reconstituted orange juice falsely labeled as fresh. In an advanced instrumental analytical chemistry application of this case, inductively coupled plasma (ICP) spectroscopy is used to distinguish between the…

  19. Advances in organic-inorganic hybrid sorbents for the extraction of organic and inorganic pollutants in different types of food and environmental samples.

    PubMed

    Ng, Nyuk-Ting; Kamaruddin, Amirah Farhan; Wan Ibrahim, Wan Aini; Sanagi, Mohd Marsin; Abdul Keyon, Aemi S

    2018-01-01

The efficiency of the extraction and removal of pollutants from food and the environment has been an important issue in analytical science. By incorporating inorganic species into an organic matrix, a new material known as an organic-inorganic hybrid material is formed. As they possess high selectivity, permeability, and mechanical and chemical stability, organic-inorganic hybrid materials constitute an emerging research field and have become popular sorbents in various separation science methods. Here, we review recent significant advances in analytical solid-phase extraction employing organic-inorganic composite/nanocomposite sorbents for the extraction of organic and inorganic pollutants from various types of food and environmental matrices. The physicochemical characteristics, extraction properties, and analytical performances of the sorbents are discussed, including morphology and surface characteristics, types of functional groups, interaction mechanisms, selectivity and sensitivity, accuracy, and regeneration abilities. Organic-inorganic hybrid sorbents combined with extraction techniques are highly promising for sample preparation of various food and environmental matrices with analytes at trace levels. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Simulation Credibility: Advances in Verification, Validation, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B. (Editor); Eklund, Dean R.; Romero, Vicente J.; Pearce, Jeffrey A.; Keim, Nicholas S.

    2016-01-01

Decision makers and other users of simulations need to know quantified simulation credibility in order to make simulation-based critical decisions and to use simulations effectively. The credibility of a simulation is quantified by its accuracy in terms of uncertainty, and the responsibility for establishing credibility lies with the creator of the simulation. In this volume, we present some state-of-the-art philosophies, principles, and frameworks. The contributing authors involved in this publication have been dedicated to advancing simulation credibility. They detail and provide examples of key advances over the last 10 years in the processes used to quantify simulation credibility: verification, validation, and uncertainty quantification. The philosophies and assessment methods presented here are anticipated to be useful to other technical communities conducting continuum physics-based simulations; for example, issues related to the establishment of simulation credibility in the discipline of propulsion are discussed. We envision that simulation creators will find this volume very useful to guide and assist them in quantitatively conveying the credibility of their simulations.
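One basic ingredient of the uncertainty quantification this record surveys is forward propagation of input uncertainty through a model. The sketch below is a generic, hypothetical Monte Carlo example (the drag-force model and the distributions for `Cd` and `v` are invented for illustration), not a method from the volume itself.

```python
import random
import statistics

def propagate_uncertainty(model, dists, n=20_000, seed=7):
    """Monte Carlo uncertainty propagation: sample uncertain inputs from
    normal distributions, push them through the model, and summarize the
    output spread."""
    random.seed(seed)
    outputs = []
    for _ in range(n):
        x = {name: random.gauss(mu, sigma) for name, (mu, sigma) in dists.items()}
        outputs.append(model(x))
    return statistics.fmean(outputs), statistics.stdev(outputs)

# Hypothetical model: drag force F = 0.5 * rho * Cd * A * v**2
# with fixed rho = 1.2 kg/m^3, A = 0.5 m^2 and uncertain Cd and v.
force_mean, force_std = propagate_uncertainty(
    lambda x: 0.5 * 1.2 * x["Cd"] * 0.5 * x["v"] ** 2,
    {"Cd": (0.30, 0.02), "v": (40.0, 2.0)},
)
```

Reporting `force_mean` with the spread `force_std` (or a full output distribution) is the kind of quantified-accuracy statement the record argues simulation creators owe their decision makers.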

  1. Upset due to a single particle caused propagated transients in a bulk CMOS microprocessor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leavy, J.F.; Hoffmann, L.F.; Shoran, R.W.

    1991-12-01

This paper reports on data pattern advances observed in preset, single event upset (SEU) hardened clocked flip-flops, during static Cf-252 exposures on a bulk CMOS microprocessor, that were attributable to particle-caused anomalous clock signals, or propagated transients. SPICE simulations established that particle strikes in the output nodes of a clock control logic flip-flop could produce transients of sufficient amplitude and duration to be accepted as legitimate pulses by clock buffers fed by the flip-flop's output nodes. The buffers would then output false clock pulses, thereby advancing the state of the preset flip-flops. Masking the clock logic on one of the test chips made the flip-flop data advance cease, confirming the clock logic as the source of the SEU. By introducing N₂ gas, at reduced pressures, into the SEU test chamber to attenuate Cf-252 particle LETs, a 24-26 MeV-cm²/mg LET threshold was deduced. Subsequent tests, at the 88-inch cyclotron at Berkeley, established an LET threshold of 30 MeV-cm²/mg (283 MeV Cu at 0°) for the generation of false clocks. Cyclotron SEU tests are considered definitive, while Cf-252 data usually are not. However, in this instance Cf-252 tests proved analytically useful, providing SEU characterization data that were both timely and inexpensive.

  2. Predictive analytics and child protection: constraints and opportunities.

    PubMed

    Russell, Jesse

    2015-08-01

    This paper considers how predictive analytics might inform, assist, and improve decision making in child protection. Predictive analytics represents recent increases in data quantity and data diversity, along with advances in computing technology. While the use of data and statistical modeling is not new to child protection decision making, its use in child protection is experiencing growth, and efforts to leverage predictive analytics for better decision-making in child protection are increasing. Past experiences, constraints and opportunities are reviewed. For predictive analytics to make the most impact on child protection practice and outcomes, it must embrace established criteria of validity, equity, reliability, and usefulness. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Distributed data networks: a blueprint for Big Data sharing and healthcare analytics.

    PubMed

    Popovic, Jennifer R

    2017-01-01

    This paper defines the attributes of distributed data networks and outlines the data and analytic infrastructure needed to build and maintain a successful network. We use examples from one successful implementation of a large-scale, multisite, healthcare-related distributed data network, the U.S. Food and Drug Administration-sponsored Sentinel Initiative. Analytic infrastructure-development concepts are discussed from the perspective of promoting six pillars of analytic infrastructure: consistency, reusability, flexibility, scalability, transparency, and reproducibility. This paper also introduces one use case for machine learning algorithm development to fully utilize and advance the portfolio of population health analytics, particularly those using multisite administrative data sources. © 2016 New York Academy of Sciences.

  4. Recent advances in immunosensor for narcotic drug detection

    PubMed Central

    Gandhi, Sonu; Suman, Pankaj; Kumar, Ashok; Sharma, Prince; Capalash, Neena; Suri, C. Raman

    2015-01-01

    Introduction: Immunosensors for illicit drugs have gained immense interest and have found several applications in drug abuse monitoring. This technology offers low-cost detection of narcotics, thereby providing a confirmatory platform to complement existing analytical methods. Methods: In this minireview, we define the basic concept of a transducer for immunosensor development that utilizes antibodies and low molecular mass hapten (opiate) molecules. Results: This article emphasizes recent advances in immunoanalytical techniques for the monitoring of opiate drugs. Our results demonstrate that high quality antibodies can be used to develop immunosensors against a target analyte with greater sensitivity, specificity and precision than other available analytical methods. Conclusion: In this review we highlight the fundamentals of the different transducer technologies and their applications for immunosensor development currently being pursued in our laboratory: rapid screening via an immunochromatographic kit, and label-free optical detection via enzyme-, fluorescence-, gold nanoparticle- and carbon nanotube-based immunosensing for sensitive and specific monitoring of opiates. PMID:26929925

  5. Analytical screening of low emissions, high performance duct burners for supersonic cruise aircraft engines

    NASA Technical Reports Server (NTRS)

    Lohmann, R. A.; Riecke, G. T.

    1977-01-01

    An analytical screening study was conducted to identify duct burner concepts capable of providing low emissions and high performance in advanced supersonic engines. Duct burner configurations ranging from current augmenter technology to advanced concepts such as premix-prevaporized burners were defined. Aerothermal and mechanical design studies provided the basis for screening these configurations using the criteria of emissions, performance, engine compatibility, cost, weight and relative risk. Technology levels derived from recently defined experimental low emissions main burners are required to achieve both low emissions and high performance goals. A configuration based on the Vorbix (Vortex burning and mixing) combustor concept was analytically determined to meet the performance goals and is consistent with the fan duct envelope of a variable cycle engine. The duct burner configuration has a moderate risk level compatible with the schedule of anticipated experimental programs.

  6. Using Big Data Analytics to Advance Precision Radiation Oncology.

    PubMed

    McNutt, Todd R; Benedict, Stanley H; Low, Daniel A; Moore, Kevin; Shpitser, Ilya; Jiang, Wei; Lakshminarayanan, Pranav; Cheng, Zhi; Han, Peijin; Hui, Xuan; Nakatsugawa, Minoru; Lee, Junghoon; Moore, Joseph A; Robertson, Scott P; Shah, Veeraj; Taylor, Russ; Quon, Harry; Wong, John; DeWeese, Theodore

    2018-06-01

    Big clinical data analytics as a primary component of precision medicine is discussed, identifying where these emerging tools fit in the spectrum of genomics and radiomics research. A learning health system (LHS) is conceptualized that uses clinically acquired data with machine learning to advance the initiatives of precision medicine. The LHS is comprehensive and can be used for clinical decision support, discovery, and hypothesis derivation. These developing uses can positively impact the ultimate management and therapeutic course for patients. The conceptual model for each use of clinical data, however, is different, and an overview of the implications is discussed. With advancements in technologies and culture to improve the efficiency, accuracy, and breadth of measurements of the patient condition, the concept of an LHS may be realized in precision radiation therapy. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. A comparison of tripolar concentric ring electrode and spline Laplacians on a four-layer concentric spherical model.

    PubMed

    Liu, Xiang; Makeyev, Oleksandr; Besio, Walter

    2011-01-01

    We have simulated a four-layer concentric spherical head model. We calculated the spline and tripolar Laplacian estimates and compared them to the analytical Laplacian on the spherical surface. In the simulations we used five different dipole groups and two electrode configurations. The comparison shows that the tripolar Laplacian has a higher correlation coefficient with the analytical Laplacian in both electrode configurations tested (19 electrodes at the standard 10/20 locations, and 64 electrodes).
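    The figure of merit in this comparison is the correlation coefficient between each Laplacian estimate and the analytical Laplacian. A minimal sketch of that metric (the electrode values below are synthetic placeholders, not the four-layer spherical-model fields):

```python
import numpy as np

# Illustrative sketch, not the authors' code: score two surface-Laplacian
# estimates against an analytical reference using the correlation coefficient.
rng = np.random.default_rng(0)

n_electrodes = 19  # e.g., the standard 10/20 montage size
analytical = rng.standard_normal(n_electrodes)                    # reference values
tripolar = analytical + 0.1 * rng.standard_normal(n_electrodes)   # low-error estimate
spline = analytical + 0.5 * rng.standard_normal(n_electrodes)     # higher-error estimate

r_tripolar = np.corrcoef(analytical, tripolar)[0, 1]
r_spline = np.corrcoef(analytical, spline)[0, 1]
print(f"tripolar r = {r_tripolar:.3f}, spline r = {r_spline:.3f}")
```

    With real model fields, the same two `np.corrcoef` calls would reproduce the study's comparison for each dipole group and montage.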

  8. Consistent Yokoya-Chen Approximation to Beamstrahlung (LCC-0010)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peskin, M

    2004-04-22

    I reconsider the Yokoya-Chen approximate evolution equation for beamstrahlung and modify it slightly to generate simple, consistent analytical approximations for the electron and photon energy spectra. I compare these approximations to previous ones, and to simulation data.

  9. Power combining in an array of microwave power rectifiers

    NASA Technical Reports Server (NTRS)

    Gutmann, R. J.; Borrego, J. M.

    1979-01-01

    This work analyzes the resultant efficiency degradation when identical rectifiers operate at different RF power levels as caused by the power beam taper. Both a closed-form analytical circuit model and a detailed computer-simulation model are used to obtain the output dc load line of the rectifier. The efficiency degradation is nearly identical with series and parallel combining, and the closed-form analytical model provides results which are similar to the detailed computer-simulation model.

  10. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Advanced Simulation H Appendix H to Part... Simulation Link to an amendment published at 78 FR 67846, Nov. 12, 2013. This appendix provides guidelines... Simulation Training Program For an operator to conduct Level C or D training under this appendix all required...

  11. Machine Learning Technologies Translates Vigilant Surveillance Satellite Big Data into Predictive Alerts for Environmental Stressors

    NASA Astrophysics Data System (ADS)

    Johnson, S. P.; Rohrer, M. E.

    2017-12-01

    The application of scientific research pertaining to satellite imaging and data processing has facilitated the development of dynamic methodologies and tools that utilize nanosatellites and analytical platforms to address the increasing scope, scale, and intensity of emerging environmental threats to national security. While the use of remotely sensed data to monitor the environment at local and global scales is not a novel proposition, the application of advances in nanosatellites and analytical platforms are capable of overcoming the data availability and accessibility barriers that have historically impeded the timely detection, identification, and monitoring of these stressors. Commercial and university-based applications of these technologies were used to identify and evaluate their capacity as security-motivated environmental monitoring tools. Presently, nanosatellites can provide consumers with 1-meter resolution imaging, frequent revisits, and customizable tasking, allowing users to define an appropriate temporal scale for high resolution data collection that meets their operational needs. Analytical platforms are capable of ingesting increasingly large and diverse volumes of data, delivering complex analyses in the form of interpretation-ready data products and solutions. The synchronous advancement of these technologies creates the capability of analytical platforms to deliver interpretable products from persistently collected high-resolution data that meet varying temporal and geographic scale requirements. In terms of emerging environmental threats, these advances translate into customizable and flexible tools that can respond to and accommodate the evolving nature of environmental stressors. 
    This presentation will demonstrate the capability of nanosatellites and analytical platforms to provide timely, relevant, and actionable information that enables environmental analysts and stakeholders to make informed decisions regarding the prevention, intervention, and prediction of emerging environmental threats.

  12. Comparison of simulator fidelity model predictions with in-simulator evaluation data

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.

    1983-01-01

    A full-factorial, in-simulator experiment on a single axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed loop model of a real time digital simulation facility. The results of the experiment, encompassing various simulation fidelity factors such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than was predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern are the large sensitivity difference for one control loader condition and the model/in-simulator mismatch in the magnitude of the plant states when the other states match.

  13. A computer program for the simulation of heat and moisture flow in soils

    NASA Technical Reports Server (NTRS)

    Camillo, P.; Schmugge, T. J.

    1981-01-01

    A computer program that simulates the flow of heat and moisture in soils is described. The space-time dependence of temperature and moisture content is described by a set of diffusion-type partial differential equations. The simulator uses a predictor/corrector to numerically integrate them, giving wetness and temperature profiles as a function of time. The simulator was used to generate solutions to diffusion-type partial differential equations for which analytical solutions are known. These equations include both constant and variable diffusivities, and both flux and constant concentration boundary conditions. In all cases, the simulated and analytic solutions agreed to within the error bounds which were imposed on the integrator. Simulations of heat and moisture flow under actual field conditions were also performed. Ground truth data were used for the boundary conditions and soil transport properties. The qualitative agreement between simulated and measured profiles is an indication that the model equations are reasonably accurate representations of the physical processes involved.
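    The validation step described above can be sketched for the simplest case, constant diffusivity with fixed-concentration boundaries, where the analytic solution is a decaying sine mode. This is an illustrative stand-in, not the NASA simulator's predictor/corrector integrator:

```python
import numpy as np

# Integrate the constant-diffusivity equation u_t = D * u_xx with an explicit
# (FTCS) scheme on [0, 1] with u(0) = u(1) = 0, and compare against the known
# analytic solution u(x, t) = exp(-D * pi^2 * t) * sin(pi * x).
D = 1.0
nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D          # satisfies the explicit stability limit dt <= dx^2 / (2 D)
t_end = 0.05

u = np.sin(np.pi * x)          # initial condition
t = 0.0
while t < t_end:
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    t += dt

exact = np.exp(-D * np.pi**2 * t) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
print(f"max error = {err:.2e}")
```

    As in the paper's checks, the simulated and analytic solutions agree to within the error bound set by the integration scheme.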

  14. TERRA: Building New Communities for Advanced Biofuels

    ScienceCinema

    Cornelius, Joe; Mockler, Todd; Tuinstra, Mitch

    2018-01-16

    ARPA-E’s Transportation Energy Resources from Renewable Agriculture (TERRA) program is bringing together top experts from different disciplines – agriculture, robotics and data analytics – to rethink the production of advanced biofuel crops. ARPA-E Program Director Dr. Joe Cornelius discusses the TERRA program and explains how ARPA-E’s model enables multidisciplinary collaboration among diverse communities. The video focuses on two TERRA projects—Donald Danforth Center and Purdue University—that are developing and integrating cutting-edge remote sensing platforms, complex data analytics tools and plant breeding technologies to tackle the challenge of sustainably increasing biofuel stocks.

  15. Engine Structures Modeling Software System (ESMOSS)

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Engine Structures Modeling Software System (ESMOSS) project is developing a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components, and substructures which can be transferred to finite element analysis programs such as NASTRAN. The NASA Lewis Engine Structures Program is concerned with the development of technology for the rational structural design and analysis of advanced gas turbine engines, with emphasis on advanced structural analysis, structural dynamics, structural aspects of aeroelasticity, and life prediction. Fundamental and common to all of these developments is the need for geometric and analytical model descriptions at various engine assembly levels, which are generated using ESMOSS.

  16. Energy geotechnics: Advances in subsurface energy recovery, storage, exchange, and waste management

    DOE PAGES

    McCartney, John S.; Sanchez, Marcelo; Tomac, Ingrid

    2016-02-17

    Energy geotechnics involves the use of geotechnical principles to understand and engineer the coupled thermo-hydro-chemo-mechanical processes encountered in collecting, exchanging, storing, and protecting energy resources in the subsurface. In addition to research on these fundamental coupled processes and characterization of relevant material properties, applied research is being performed to develop analytical tools for the design and analysis of different geo-energy applications. The aims of this paper are to discuss the fundamental physics and constitutive models that are common to these different applications, and to summarize recent advances in the development of relevant analytical tools.

  17. Energy geotechnics: Advances in subsurface energy recovery, storage, exchange, and waste management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCartney, John S.; Sanchez, Marcelo; Tomac, Ingrid

    Energy geotechnics involves the use of geotechnical principles to understand and engineer the coupled thermo-hydro-chemo-mechanical processes encountered in collecting, exchanging, storing, and protecting energy resources in the subsurface. In addition to research on these fundamental coupled processes and characterization of relevant material properties, applied research is being performed to develop analytical tools for the design and analysis of different geo-energy applications. The aims of this paper are to discuss the fundamental physics and constitutive models that are common to these different applications, and to summarize recent advances in the development of relevant analytical tools.

  18. Advanced Design Features of APR1400 and Realization in Shin Kori Construction Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OH, S.J.; Park, K.C.; Kim, H.G.

    2006-07-01

    APR1400 adopted several advanced design features. To ensure their proper operation as part of the ShinKori 3,4 project, both experimental and analytical work are continuing. In this paper, work on the advanced design features related to enhanced safety is examined. The APR1400 safety injection system consists of four independent trains, which include four safety injection pumps and tanks. A passive flow regulating device called a fluidic device is installed in the safety injection tanks. Separate effect tests, including full scale fluidic device tests, have been conducted. Integral system tests are in progress. Combining this work with the analytical work using RELAP5/Mod3 will ensure the proper operation of the new safety injection systems. To mitigate severe accidents, a hydrogen mitigation system using PARs and igniters is adopted. Also, an active injection system and a streamlined insulation design are adopted to enhance the in-vessel retention capability within the external cooling of RPV strategy. Analytic work with supporting experiments is performed. We are certain that this preparatory work will help the successful adoption of these advanced design features in the ShinKori project. (authors)

  19. Analytical expression for position sensitivity of linear response beam position monitor having inter-electrode cross talk

    NASA Astrophysics Data System (ADS)

    Kumar, Mukesh; Ojha, A.; Garg, A. D.; Puntambekar, T. A.; Senecha, V. K.

    2017-02-01

    According to the quasi-electrostatic model of a linear response capacitive beam position monitor (BPM), the position sensitivity of the device depends only on the aperture of the device and is independent of processing frequency and load impedance. In practice, however, due to the inter-electrode capacitive coupling (cross talk), the actual position sensitivity of the device decreases with increasing frequency and load impedance. We have taken the inter-electrode capacitance into account to derive and propose a new analytical expression for the position sensitivity as a function of frequency and load impedance. The sensitivity of a linear response shoe-box type BPM has been obtained through simulation using CST Studio Suite to verify and confirm the validity of the new analytical equation. Good agreement between the simulation results and the new analytical expression suggests that this method can be exploited for the proper design of BPMs.
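    The trend the abstract describes can be illustrated with a hedged lumped-circuit sketch (the two-node model and element values below are assumptions for illustration, not the paper's expression): with an inter-electrode capacitance Cc coupling two pickups, the difference-over-sum signal, and hence the position sensitivity, falls as frequency and load impedance grow.

```python
import numpy as np

# Two pickup electrodes, each with capacitance Ce to ground and load R,
# coupled by inter-electrode capacitance Cc. The beam drives each electrode
# with a current proportional to its proximity (i1, i2). Solving the 2x2
# nodal admittance system gives the electrode voltages; the difference-over-
# sum ratio is the position signal.
def diff_over_sum(freq_hz, R, Ce=10e-12, Cc=2e-12, i1=1.1, i2=0.9):
    w = 2.0 * np.pi * freq_hz
    y_self = 1j * w * (Ce + Cc) + 1.0 / R   # diagonal admittance of each node
    y_mut = -1j * w * Cc                    # mutual (coupling) admittance
    Y = np.array([[y_self, y_mut], [y_mut, y_self]])
    v = np.linalg.solve(Y, np.array([i1, i2]))
    return abs(v[0] - v[1]) / abs(v[0] + v[1])

low = diff_over_sum(1e6, R=50.0)        # low frequency: near the ideal 0.1
high = diff_over_sum(100e6, R=50.0)     # higher frequency: reduced signal
high_Z = diff_over_sum(100e6, R=1e4)    # high load impedance: reduced further
print(low, high, high_Z)
```

    In the high-frequency, high-impedance limit the ratio tends to Ce/(Ce + 2 Cc) times the ideal value, which is the qualitative cross-talk degradation the paper models analytically.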

  20. Analytical modeling and numerical simulation of the short-wave infrared electron-injection detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Movassaghi, Yashar; Fathipour, Morteza; Fathipour, Vala

    2016-03-21

    This paper describes comprehensive analytical and simulation models for the design and optimization of electron-injection based detectors. The electron-injection detectors evaluated here operate in the short-wave infrared range and utilize a type-II band alignment in the InP/GaAsSb/InGaAs material system. The unique geometry of the detectors, along with an inherent negative-feedback mechanism in the device, allows for achieving high internal avalanche-free amplification without any excess noise. Physics-based closed-form analytical models are derived for the detector rise time and dark current. Our optical gain model takes into account the drop in the optical gain at high optical power levels. Furthermore, numerical simulation studies of the electrical characteristics of the device show good agreement with our analytical models as well as with experimental data. Performance comparison between devices with different injector sizes shows that enhancement in the gain and speed is anticipated by reducing the injector size. Sensitivity analysis for the key detector parameters shows the relative importance of each parameter. The results of this study may provide useful information and guidelines for the development of future electron-injection based detectors as well as other heterojunction photodetectors.

  1. Developing Healthcare Data Analytics APPs with Open Data Science Tools.

    PubMed

    Hao, Bibo; Sun, Wen; Yu, Yiqin; Xie, Guotong

    2017-01-01

    Recent advances in big data analytics provide more flexible, efficient, and open tools for researchers to gain insight from healthcare data. However, many tools require researchers to develop programs in languages such as Python or R, a skill set that many researchers in the healthcare data analytics area have not acquired. To make data science more approachable, we explored existing tools and developed a practice that helps data scientists convert existing analytics pipelines into user-friendly analytics APPs with rich interactions and real-time analysis features. With this practice, data scientists can develop customized analytics pipelines as APPs in Jupyter Notebook and disseminate them to other researchers easily, and researchers can benefit from the shared notebooks to perform analysis tasks or reproduce research results much more easily.
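    A minimal sketch of this practice, assuming ipywidgets as the interaction layer and a toy pipeline (neither is the authors' code): wrap the analysis in a parameterized function, then expose it as an interactive APP inside the notebook.

```python
# Hypothetical example pipeline: filter a patient cohort by age and report a
# summary statistic. The data and function names are illustrative assumptions.
def cohort_summary(min_age=18, max_age=65):
    """Toy analytics pipeline: filter a cohort, return (size, mean blood pressure)."""
    patients = [{"age": a, "bp": 100 + a // 2} for a in range(1, 91)]
    cohort = [p for p in patients if min_age <= p["age"] <= max_age]
    mean_bp = sum(p["bp"] for p in cohort) / len(cohort)
    return len(cohort), round(mean_bp, 1)

print(cohort_summary())  # runs anywhere, e.g. in a shared notebook

# Inside Jupyter, one line turns the same function into an interactive APP
# with sliders for the parameters:
try:
    from ipywidgets import interact
    interact(cohort_summary, min_age=(0, 90), max_age=(0, 90))
except ImportError:
    pass  # ipywidgets is only needed for the interactive front end
```

    Because the APP is just a function in a notebook, disseminating it is a matter of sharing the notebook file, which is the reproducibility benefit the paper emphasizes.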

  2. The design and evaluation of three advanced daylighting systems: Light shelves, light pipes and skylights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran, L.O.; Lee, E.S.; Papmichael, K.M.

    1994-03-01

    We present results from the design and evaluation of three advanced daylighting systems: a light shelf, a light pipe, and a skylight. These systems use optical films and an optimized geometry to passively intercept and redirect sunlight further into the building. The objectives of these designs are to increase daylighting illuminance levels at distances of 4.6-9.1 m (15-30 ft) from the window, and to improve the uniformity of the daylight distribution and the luminance gradient across the room under variable sun and sky conditions throughout the year. The designs were developed through a series of computer-assisted ray-tracing studies, photometric measurements, and observations using physical scale models. Comprehensive sets of laboratory measurements in combination with analytical routines were then used to simulate daylight performance for any solar position. Results show increased daylight levels and an improved luminance gradient throughout the year, indicating that lighting energy consumption and cooling energy due to lighting can be substantially reduced with improvements to visual comfort. Future development of the designs may further improve the daylighting performance of these systems.

  3. Recent Advances in Solar Sail Propulsion at NASA

    NASA Technical Reports Server (NTRS)

    Johnson, Les; Young, Roy M.; Montgomery, Edward E., IV

    2006-01-01

    Supporting NASA's Science Mission Directorate, the In-Space Propulsion Technology Program is developing solar sail propulsion for use in robotic science and exploration of the solar system. Solar sail propulsion will provide longer on-station operation, increased scientific payload mass fraction, and access to previously inaccessible orbits for multiple potential science missions. Two different 20-meter solar sail systems were produced and successfully completed functional vacuum testing last year in NASA Glenn's Space Power Facility at Plum Brook Station, Ohio. The sails were designed and developed by ATK Space Systems and L'Garde, respectively. These sail systems consist of a central structure with four deployable booms that support the sails. These sail designs are robust enough for deployment in a one-atmosphere, one-gravity environment and are scalable to much larger solar sails, perhaps as much as 150 meters on a side. In addition, computational modeling and analytical simulations have been performed to assess the scalability of the technology to the large sizes (>150 meters) required for first-generation solar sail missions. Life and space environmental effects testing of sail and component materials is also nearly complete. This paper summarizes recent technology advancements in solar sails and their successful ambient and vacuum testing.

  4. Design and process development of a photonic crystal polymer biosensor for point-of-care diagnostics

    NASA Astrophysics Data System (ADS)

    Dortu, F.; Egger, H.; Kolari, K.; Haatainen, T.; Furjes, P.; Fekete, Z.; Bernier, D.; Sharp, G.; Lahiri, B.; Kurunczi, S.; Sanchez, J.-C.; Turck, N.; Petrik, P.; Patko, D.; Horvath, R.; Eiden, S.; Aalto, T.; Watts, S.; Johnson, N. P.; De La Rue, R. M.; Giannone, D.

    2011-07-01

    In this work, we report advances in the fabrication and anticipated performance of a polymer biosensor photonic chip developed in the European Union project P3SENS (FP7-ICT4-248304). Due to the low cost requirements of point-of-care applications, the photonic chip is fabricated from nanocomposite polymeric materials using highly scalable nanoimprint lithography (NIL). A suitable microfluidic structure transporting the analyte solutions to the sensor area is also fabricated in polymer and adequately bonded to the photonic chip. We first discuss the design and the simulated performance of a high-Q resonant cavity photonic crystal sensor made of a high refractive index polyimide core waveguide on a low index polymer cladding. We then report the advances in doped and undoped polymer thin film processing and characterization for fabricating the photonic sensor chip. Finally, the development of the microfluidic chip is presented in detail, including the characterisation of the fluidic behaviour, the technological and material aspects of the 3D polymer structuring, and the stable adhesion strategies for bonding the fluidic and photonic chips, with regard to the constraints imposed by the bioreceptors assumed to be already present on the sensors.

  5. Nanoporous membranes enable concentration and transport in fully wet paper-based assays.

    PubMed

    Gong, Max M; Zhang, Pei; MacDonald, Brendan D; Sinton, David

    2014-08-19

    Low-cost paper-based assays are emerging as the platform for diagnostics worldwide. Paper does not, however, readily enable advanced functionality required for complex diagnostics, such as analyte concentration and controlled analyte transport. That is, after the initial wetting, no further analyte manipulation is possible. Here, we demonstrate active concentration and transport of analytes in fully wet paper-based assays by leveraging nanoporous material (mean pore diameter ≈ 4 nm) and ion concentration polarization. Two classes of devices are developed, an external stamp-like device with the nanoporous material separate from the paper-based assay, and an in-paper device patterned with the nanoporous material. Experimental results demonstrate up to 40-fold concentration of a fluorescent tracer in fully wet paper, and directional transport of the tracer over centimeters with efficiencies up to 96%. In-paper devices are applied to concentrate protein and colored dye, extending their limits of detection from ∼10 to ∼2 pmol/mL and from ∼40 to ∼10 μM, respectively. This approach is demonstrated in nitrocellulose membrane as well as paper, and the added cost of the nanoporous material is very low at ∼0.015 USD per device. The result is a major advance in analyte concentration and manipulation for the growing field of low-cost paper-based assays.

  6. The Exponential Expansion of Simulation in Research

    DTIC Science & Technology

    2012-12-01

    exponential growth of computing power. Although other analytic approaches also benefit from this trend, keyword searches of several scholarly search ... engines reveal that the reliance on simulation is increasing more rapidly. A descriptive analysis paints a compelling picture: simulation is frequently

  7. Analytical Study on Thermal and Mechanical Design of Printed Circuit Heat Exchanger

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Su-Jong; Sabharwall, Piyush; Kim, Eung-Soo

    2013-09-01

    The analytical methodologies for the thermal design, mechanical design, and cost estimation of printed circuit heat exchangers are presented in this study. Three flow arrangements are taken into account: parallel flow, countercurrent flow, and crossflow. For each flow arrangement, an analytical solution for the temperature profile of the heat exchanger is introduced. The size and cost of printed circuit heat exchangers for advanced small modular reactors, which employ various coolants such as sodium, molten salts, helium, and water, are also presented.
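    For the countercurrent arrangement, the analytical treatment reduces to the textbook effectiveness-NTU relation; a brief sketch with assumed inlet conditions (illustrative values, not the paper's specific PCHE design):

```python
import math

# Counterflow heat exchanger effectiveness from the standard epsilon-NTU
# relation, where cr = C_min / C_max is the heat-capacity-rate ratio.
def effectiveness_counterflow(ntu, cr):
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)          # limiting case C_min = C_max
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# Assumed example: hot gas cooled by a higher-capacity-rate stream.
ntu, cr = 3.0, 0.5
eps = effectiveness_counterflow(ntu, cr)
t_hot_in, t_cold_in = 500.0, 30.0         # deg C, assumed inlets
hot_drop = eps * (t_hot_in - t_cold_in)   # temperature drop of the C_min stream
print(f"effectiveness = {eps:.3f}, C_min-stream temperature drop = {hot_drop:.1f} K")
```

    Sizing a core then amounts to inverting this relation for NTU given a target effectiveness, which fixes the required UA and hence the channel-plate count.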

  8. Analytical solutions and particle simulations of cross-field plasma sheaths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerver, M.J.; Parker, S.E.; Theilhaber, K.

    1989-08-30

    Particle simulations have been made of an infinite plasma slab, bounded by absorbing conducting walls, with a magnetic field parallel to the walls. The simulations have been either 1-D or 2-D, with the magnetic field normal to the simulation plane. Initially, the plasma has a uniform density between the walls, and there is a uniform source of ions and electrons to replace particles lost to the walls. In the 1-D case, there is no diffusion of the particle guiding centers, and the plasma remains uniform in density and potential over most of the slab, with sheaths about a Debye length wide where the potential rises to the wall potential. In the 2-D case, the density profile becomes parabolic, going almost to zero at the walls, and there is a quasineutral presheath in the bulk of the plasma, in addition to sheaths near the walls. Analytic expressions are found for the density and potential profiles in both cases, including, in the 2-D case, the magnetic presheath due to finite ion Larmor radius, and the effects of the guiding center diffusion rate being either much less than or much greater than the energy diffusion rate. These analytic expressions are shown to agree with the simulations. A 1-D simulation with Monte Carlo guiding center diffusion included gives results that are in good agreement with the much more expensive 2-D simulation. 17 refs., 10 figs.

  9. Analysis of high-speed rotating flow inside gas centrifuge casing

    NASA Astrophysics Data System (ADS)

    Pradhan, Sahadev, Dr.

    2017-10-01

    The generalized analytical model for the radial boundary layer inside the gas centrifuge casing, in which the inner cylinder rotates at a constant angular velocity Ω_i while the outer one is stationary, is formulated to study the secondary gas flow field due to wall thermal forcing, inflow/outflow of light gas along the boundaries, and the combination of these two external forcings. The analytical model includes a sixth order differential equation for the radial boundary layer at the cylindrical curved surface in terms of the master potential (χ), which is derived from the equations of motion in an axisymmetric (r - z) plane. The linearization approximation is used, where the equations of motion are truncated at linear order in the velocity and pressure disturbances to the base flow, which is a solid-body rotation. Additional approximations in the analytical model include constant temperature in the base state (isothermal compressible Couette flow), high aspect ratio (length large compared to the annular gap), and high Reynolds number, but there is no limitation on the Mach number. The discrete eigenvalues and eigenfunctions of the linear operators (sixth-order in the radial direction for the generalized analytical equation) are obtained. The solutions for the secondary flow are determined in terms of these eigenvalues and eigenfunctions. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations, and excellent agreement (a difference of less than 15%) is found between the predictions of the analytical model and the DSMC simulations, provided the boundary conditions in the analytical model are accurately specified.

  10. Analysis of high-speed rotating flow inside gas centrifuge casing

    NASA Astrophysics Data System (ADS)

    Pradhan, Sahadev, Dr.

    2017-09-01

    The generalized analytical model for the radial boundary layer inside the gas centrifuge casing, in which the inner cylinder rotates at a constant angular velocity Ωi while the outer one is stationary, is formulated to study the secondary gas flow field due to wall thermal forcing, inflow/outflow of light gas along the boundaries, and the combination of these two external forcings. The analytical model includes a sixth order differential equation for the radial boundary layer at the cylindrical curved surface in terms of the master potential (χ), which is derived from the equations of motion in an axisymmetric (r - z) plane. The linearization approximation is used, where the equations of motion are truncated at linear order in the velocity and pressure disturbances to the base flow, which is a solid-body rotation. Additional approximations in the analytical model include constant temperature in the base state (isothermal compressible Couette flow), high aspect ratio (length large compared to the annular gap), and high Reynolds number, but there is no limitation on the Mach number. The discrete eigenvalues and eigenfunctions of the linear operators (sixth-order in the radial direction for the generalized analytical equation) are obtained. The solutions for the secondary flow are determined in terms of these eigenvalues and eigenfunctions. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations, and excellent agreement (a difference of less than 15%) is found between the predictions of the analytical model and the DSMC simulations, provided the boundary conditions in the analytical model are accurately specified.

  11. Analysis of high-speed rotating flow inside gas centrifuge casing

    NASA Astrophysics Data System (ADS)

    Pradhan, Sahadev

    2017-11-01

    A generalized analytical model for the radial boundary layer inside the gas centrifuge casing, in which the inner cylinder rotates at a constant angular velocity Ω_i while the outer one is stationary, is formulated to study the secondary gas flow field driven by wall thermal forcing, by inflow/outflow of light gas along the boundaries, and by the combination of these two external forcings. The model comprises a sixth-order differential equation for the radial boundary layer at the cylindrical curved surface in terms of a master potential (χ), derived from the equations of motion in an axisymmetric (r - z) plane. A linearization approximation is used: the equations of motion are truncated at linear order in the velocity and pressure disturbances to the base flow, which is a solid-body rotation. Additional approximations include a constant-temperature base state (isothermal compressible Couette flow), a high aspect ratio (length large compared with the annular gap), and a high Reynolds number; there is no restriction on the Mach number. The discrete eigenvalues and eigenfunctions of the linear operator (sixth order in the radial direction for the generalized analytical equation) are obtained, and the solutions for the secondary flow are expressed in terms of these eigenvalues and eigenfunctions. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations, and excellent agreement (a difference of less than 15%) is found between the predictions of the analytical model and the DSMC simulations, provided the boundary conditions in the analytical model are accurately specified.

  12. Neural control of magnetic suspension systems

    NASA Technical Reports Server (NTRS)

    Gray, W. Steven

    1993-01-01

    The purpose of this research program is to design, build and test (in cooperation with NASA personnel from the NASA Langley Research Center) neural controllers for two different small air-gap magnetic suspension systems. The general objective of the program is to study neural network architectures for the purpose of control in an experimental setting and to demonstrate the feasibility of the concept. The specific objectives of the research program are: (1) to demonstrate through simulation and experimentation the feasibility of using neural controllers to stabilize a nonlinear magnetic suspension system; (2) to investigate through simulation and experimentation the performance of neural controller designs under various types of parametric and nonparametric uncertainty; (3) to investigate through simulation and experimentation various types of neural architectures for real-time control with respect to performance and complexity; and (4) to benchmark in an experimental setting the performance of neural controllers against other types of existing linear and nonlinear compensator designs. To date, the first one-dimensional, small air-gap magnetic suspension system has been built, tested and delivered to the NASA Langley Research Center. The device is currently being stabilized with a digital linear phase-lead controller. The neural controller hardware is under construction. Two different neural network paradigms are under consideration, one based on hidden-layer feedforward networks trained via backpropagation and one based on Gaussian radial basis functions trained by analytical methods related to stability conditions. Advanced nonlinear control algorithms using feedback linearization and sliding-mode control are also being examined in simulation studies.

  13. Experimentally valid predictions of muscle force and EMG in models of motor-unit function are most sensitive to neural properties.

    PubMed

    Keenan, Kevin G; Valero-Cuevas, Francisco J

    2007-09-01

    Computational models of motor-unit populations are the objective implementations of the hypothesized mechanisms by which neural and muscle properties give rise to electromyograms (EMGs) and force. However, the variability/uncertainty of the parameters used in these models--and how they affect predictions--confounds assessing these hypothesized mechanisms. We perform a large-scale computational sensitivity analysis on the state-of-the-art computational model of surface EMG, force, and force variability by combining a comprehensive review of published experimental data with Monte Carlo simulations. To exhaustively explore model performance and robustness, we ran numerous simulations, each using a random set of values for nine commonly measured motor neuron and muscle parameters, with parameter values sampled across their reported experimental ranges. After 439 simulations, convergence analysis found that only 3 met our two fitness criteria: approximating the well-established experimental relations for the scaling of EMG amplitude and force variability with mean force. An additional 424 simulations, preferentially sampling the neighborhood of those 3 valid simulations, revealed 65 additional sets of parameter values for which the model predictions approximate the experimentally known relations. We find the model is not sensitive to muscle properties but very sensitive to several motor neuron properties--especially peak discharge rates and recruitment ranges. Therefore, to advance our understanding of EMG and muscle force, it is critical to evaluate the hypothesized neural mechanisms as implemented in today's state-of-the-art models of motor-unit function. We discuss experimental and analytical avenues to do so, as well as new features that may be added in future implementations of motor-unit models to improve their experimental validity.
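The Monte Carlo procedure described above (sample each parameter uniformly over its reported experimental range, run the model, and keep only parameter sets that satisfy fitness criteria) can be sketched as follows. The parameter names, ranges, and acceptance test below are illustrative placeholders, not the paper's actual values or its motor-unit model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter ranges for the uniform sampling step (illustrative).
ranges = {
    "peak_discharge_rate_hz": (20.0, 60.0),
    "recruitment_range_pct":  (10.0, 70.0),
    "specific_tension_n_cm2": (10.0, 60.0),
}

def sample_params(rng, ranges):
    """Draw one random parameter set, uniform over each reported range."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}

def is_valid(params):
    """Placeholder standing in for the paper's two fitness criteria
    (EMG-amplitude and force-variability scaling with mean force)."""
    return (params["peak_discharge_rate_hz"] > 35.0
            and params["recruitment_range_pct"] < 50.0)

samples = [sample_params(rng, ranges) for _ in range(439)]
valid = [p for p in samples if is_valid(p)]
print(f"{len(valid)} of {len(samples)} parameter sets pass the criteria")
```

The paper's follow-up step, preferentially resampling the neighborhood of the valid sets, would replace the uniform draw with one centered on each surviving parameter set.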

  14. Hydrocarbon-Fueled Rocket Engine Plume Diagnostics: Analytical Developments and Experimental Results

    NASA Technical Reports Server (NTRS)

    Tejwani, Gopal D.; McVay, Gregory P.; Langford, Lester A.; St. Cyr, William W.

    2006-01-01

    A viewgraph presentation describing experimental results and analytical developments about plume diagnostics for hydrocarbon-fueled rocket engines is shown. The topics include: 1) SSC Plume Diagnostics Background; 2) Engine Health Monitoring Approach; 3) Rocket Plume Spectroscopy Simulation Code; 4) Spectral Simulation for 10 Atomic Species and for 11 Diatomic Molecular Electronic Bands; 5) "Best" Lines for Plume Diagnostics for Hydrocarbon-Fueled Rocket Engines; 6) Experimental Set Up for the Methane Thruster Test Program and Experimental Results; and 7) Summary and Recommendations.

  15. Simulation and modeling of the temporal performance of path-based restoration schemes in planar mesh networks

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Manish; McCaughan, Leon; Olkhovets, Anatoli; Korotky, Steven K.

    2006-12-01

    We formulate an analytic framework for the restoration performance of path-based restoration schemes in planar mesh networks. We analyze various switch architectures and signaling schemes and model their total restoration interval. We also evaluate the network global expectation value of the time to restore a demand as a function of network parameters. We analyze a wide range of nominally capacity-optimal planar mesh networks and find our analytic model to be in good agreement with numerical simulation data.
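A back-of-envelope version of such a restoration-interval model, with purely illustrative timing constants (the paper's model is more detailed, covering switch architectures and signaling schemes), might look like:

```python
# Illustrative path-restoration timing model: total time is failure detection,
# plus per-hop signaling along the backup path, plus cross-connect setup.
# All constants are assumptions for the sketch, not the paper's fitted values.

def restoration_interval(n_hops, t_detect=0.010, t_per_hop=0.0005, t_xc=0.005):
    """Restoration time in seconds for a demand restored over n_hops hops."""
    return t_detect + n_hops * t_per_hop + t_xc

def network_mean(hop_counts):
    """Network-global expectation of the restoration time over all demands."""
    return sum(restoration_interval(h) for h in hop_counts) / len(hop_counts)

print(network_mean([4, 6, 8]))
```

The "network global expectation value" in the abstract plays the role of `network_mean` here: an average over the demand set, expressed in terms of a few aggregate network parameters rather than an explicit hop list.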

  16. Roadmap to an Engineering-Scale Nuclear Fuel Performance & Safety Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, John A; Clarno, Kevin T; Hansen, Glen A

    2009-09-01

    Developing new fuels and qualifying them for large-scale deployment in power reactors is a lengthy and expensive process, typically spanning a period of two decades from concept to licensing. Nuclear fuel designers serve an indispensable role in the process, at the initial exploratory phase as well as in analysis of the testing results. In recent years fuel performance capabilities based on first principles have been playing more of a role in what has traditionally been an empirically dominated process. Nonetheless, nuclear fuel behavior is based on the interaction of multiple complex phenomena, and recent evolutionary approaches are being applied more on a phenomenon-by-phenomenon basis, targeting localized problems, as opposed to a systematic approach based on a fundamental understanding of all interacting parameters. Advanced nuclear fuels are generally more complex, and less understood, than the traditional fuels used in existing reactors (ceramic UO₂ with burnable poisons and other minor additives). The added challenges are primarily caused by a less complete empirical database and, in the case of recycled fuel, the inherent variability in fuel compositions. It is clear that using the traditional approach to develop and qualify fuels over the entire range of variables pertinent to the U.S. Department of Energy (DOE) Office of Nuclear Energy on a timely basis with available funds would be very challenging, if not impossible. As a result, the DOE Office of Nuclear Energy has launched the Nuclear Energy Advanced Modeling and Simulation (NEAMS) approach to revolutionize fuel development. This new approach is predicated upon transferring the recent advances in computational sciences and computer technologies into the fuel development program.
The effort will couple computational science with recent advances in the fundamental understanding of physical phenomena through ab initio modeling and targeted phenomenological testing to leapfrog many fuel-development activities. Realizing the full benefits of this approach will likely take some time. However, it is important that the developmental activities for modeling and simulation be tightly coupled with the experimental activities to maximize feedback effects and accelerate both the experimental and analytical elements of the program toward a common objective. The close integration of modeling and simulation and experimental activities is key to developing a useful fuel performance simulation capability, providing a validated design and analysis tool, and understanding the uncertainties within the models and design process. The efforts of this project are integrally connected to the Transmutation Fuels Campaign (TFC), which maintains as a primary objective to formulate, fabricate, and qualify a transuranic-based fuel with added minor actinides for use in future fast reactors. Additional details of the TFC scope can be found in the Transmutation Fuels Campaign Execution Plan. This project is an integral component of the TFC modeling and simulation effort, and this multiyear plan borrowed liberally from the Transmutation Fuels Campaign Modeling and Simulation Roadmap. This document provides the multiyear staged development plan to develop a continuum-level Integrated Performance and Safety Code (IPSC) to predict the behavior of the fuel and cladding during normal reactor operations and anticipated transients up to the point of clad breach.

  17. Advanced Simulation in Undergraduate Pilot Training: Systems Integration. Final Report (February 1972-March 1975).

    ERIC Educational Resources Information Center

    Larson, D. F.; Terry, C.

    The Advanced Simulator for Undergraduate Pilot Training (ASUPT) was designed to investigate the role of simulation in the future Undergraduate Pilot Training (UPT) program. The problem addressed in this report was one of integrating two unlike components into one synchronized system. These two components were the Basic T-37 Simulators and their…

  18. Capillary Zone Electrophoresis for the Analysis of Peptides: Fostering Students' Problem-Solving and Discovery Learning in an Undergraduate Laboratory Experiment

    ERIC Educational Resources Information Center

    Albright, Jessica C.; Beussman, Douglas J.

    2017-01-01

    Capillary electrophoresis is an important analytical separation method used to study a wide variety of samples, including those of biological origin. Capillary electrophoresis may be covered in the classroom, especially in advanced analytical courses, and while many students are exposed to gel electrophoresis in biology or biochemistry…

  19. FIELD ANALYTICAL METHODS: ADVANCED FIELD MONITORING METHODS DEVELOPMENT AND EVALUATION OF NEW AND INNOVATIVE TECHNOLOGIES THAT SUPPORT THE SITE CHARACTERIZATION AND MONITORING REQUIREMENTS OF THE SUPERFUND PROGRAM.

    EPA Science Inventory

    The overall goal of this task is to help reduce the uncertainties in the assessment of environmental health and human exposure by better characterizing hazardous wastes through cost-effective analytical methods. Research projects are directed towards the applied development and ...

  20. Advances in Adaptive Control Methods

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2009-01-01

    This poster presentation describes recent advances in adaptive control technology developed by NASA. Optimal Control Modification is a novel adaptive law that can improve performance and robustness of adaptive control systems. A new technique has been developed to provide an analytical method for computing time delay stability margin for adaptive control systems.
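The time-delay stability margin mentioned above has a standard frequency-domain reading: for a loop transfer function L(s), the delay margin is the phase margin divided by the gain-crossover frequency. A minimal worked example with an illustrative integrator loop L(s) = K/s (not NASA's Optimal Control Modification law itself):

```python
import numpy as np

# For L(s) = K/s: |L(jw)| = 1 at w_c = K, and the phase there is -90 deg,
# so the phase margin is 180 - 90 = 90 deg = pi/2 rad.
K = 2.0
w_c = K
phase_margin = np.pi / 2.0
delay_margin = phase_margin / w_c  # seconds of pure delay before instability
print(delay_margin)
```

For this toy loop the delay margin is π/(2K); an analytical formula of this general kind, specialized to the adaptive closed loop, is what the abstract's "analytical method for computing time delay stability margin" refers to.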
