Science.gov

Sample records for multi-physic problems application

  1. Partitioned coupling strategies for multi-physically coupled radiative heat transfer problems

    NASA Astrophysics Data System (ADS)

    Wendt, Gunnar; Erbts, Patrick; Düster, Alexander

    2015-11-01

    This article proposes new aspects of a partitioned solution strategy for multi-physically coupled fields that include the physics of thermal radiation. In particular, we focus on the partitioned treatment of electro-thermo-mechanical problems with an additional fourth field, thermal radiation. One of the main goals is to exploit the flexibility of the partitioned approach to enable combinations of different simulation software and solvers. Within the scope of this article, we limit ourselves to the case of nonlinear thermoelasticity at finite strains, using temperature-dependent material parameters. For the thermal radiation field, diffuse radiating surfaces and gray participating media are assumed. Moreover, we present a robust and fast partitioned coupling strategy for the four-field problem. Stability and efficiency of the implicit coupling algorithm are improved by drawing on several methods to stabilize and to accelerate the convergence. To review the effectiveness and the advantages of including the additional thermal radiation field, several numerical examples are considered to study the proposed algorithm. In particular, we focus on an industrial application: the electro-thermo-mechanical modeling of the field-assisted sintering technology.
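    The flavor of such a partitioned coupling strategy can be sketched in a few lines. The sketch below is hypothetical (two scalar model fields, not the article's electro-thermo-mechanical-radiation problem) and uses Aitken relaxation, one widely used technique for stabilizing and accelerating implicit partitioned iterations:

```python
import numpy as np

def solve_field_a(b):
    # Hypothetical field solver: returns field a for a given field b.
    return 0.5 * np.cos(b)

def solve_field_b(a):
    # Hypothetical field solver: returns field b for a given field a.
    return 0.5 * np.sin(a) + 1.0

def coupled_solve(b0, tol=1e-10, max_iter=100):
    """Partitioned (Gauss-Seidel) iteration with Aitken relaxation."""
    b = np.asarray(b0, dtype=float)
    omega = 1.0            # relaxation factor, adapted every iteration
    r_prev = None
    for k in range(max_iter):
        a = solve_field_a(b)
        r = solve_field_b(a) - b        # interface residual
        if np.linalg.norm(r) < tol:
            return b, k
        if r_prev is not None:          # Aitken update of omega
            dr = r - r_prev
            dn = np.dot(dr, dr)
            if dn > 0.0:
                omega = -omega * np.dot(r_prev, dr) / dn
        b = b + omega * r               # relaxed update
        r_prev = r
    return b, max_iter
```

    Each field is solved by its own "code" and only the interface value is exchanged; the adaptively updated relaxation factor is what keeps the fixed-point iteration stable and fast.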

  2. Specification of the Advanced Burner Test Reactor Multi-Physics Coupling Demonstration Problem

    SciTech Connect

    Shemon, E. R.; Grudzinski, J. J.; Lee, C. H.; Thomas, J. W.; Yu, Y. Q.

    2015-12-21

    This document specifies the multi-physics nuclear reactor demonstration problem using the SHARP software package developed by NEAMS. The SHARP toolset simulates the key coupled physics phenomena inside a nuclear reactor. The PROTEUS neutronics code models the neutron transport within the system, the Nek5000 computational fluid dynamics code models the fluid flow and heat transfer, and the DIABLO structural mechanics code models structural and mechanical deformation. The three codes are coupled to the MOAB mesh framework which allows feedback from neutronics, fluid mechanics, and mechanical deformation in a compatible format.

  3. Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit

    SciTech Connect

    Merzari, E.; Shemon, E. R.; Yu, Y. Q.; Thomas, J. W.; Obabko, A.; Jain, Rajeev; Mahadevan, Vijay; Tautges, Timothy; Solberg, Jerome; Ferencz, Robert Mark; Whitesides, R.

    2015-12-21

    This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in a sodium-cooled fast reactor (SFR). This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully-integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor (ABTR) have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which provides a demonstration of the feasibility of the fully-integrated simulation.

  4. TerraFERMA: The Transparent Finite Element Rapid Model Assembler for multi-physics problems in the solid Earth sciences

    NASA Astrophysics Data System (ADS)

    Spiegelman, M. W.; Wilson, C. R.; Van Keken, P. E.

    2013-12-01

    We announce the release of a new software infrastructure, TerraFERMA, the Transparent Finite Element Rapid Model Assembler, for the exploration and solution of coupled multi-physics problems. The design of TerraFERMA is driven by two overarching computational needs in the Earth sciences. The first is the need for increased flexibility in both problem description and solution strategies for coupled problems, where small changes in model assumptions can often lead to dramatic changes in physical behavior. The second is the need for software and models that are more transparent, so that results can be verified, reproduced and modified in a manner such that the best ideas in computation and Earth science can be more easily shared and reused. TerraFERMA leverages three advanced open-source libraries for scientific computation that provide high-level problem description (FEniCS), composable solvers for coupled multi-physics problems (PETSc) and a science-neutral options-handling system (SPuD) that allows the hierarchical management of all model options. TerraFERMA integrates these libraries into an easier-to-use interface that organizes the scientific and computational choices required in a model into a single options file, from which a custom compiled application is generated and run. Because all models share the same infrastructure, models become more reusable and reproducible. TerraFERMA inherits much of its functionality from the underlying libraries. It currently solves partial differential equations (PDEs) using finite element methods on simplicial meshes of triangles (2D) and tetrahedra (3D). The software is particularly well suited for non-linear problems with complex coupling between components. We demonstrate the design and utility of TerraFERMA through examples of thermal convection and magma dynamics. TerraFERMA has been tested successfully against over 45 benchmark problems from 7 publications in incompressible and compressible convection, magmatic solitary waves

  5. Final report on LDRD project : coupling strategies for multi-physics applications.

    SciTech Connect

    Hopkins, Matthew Morgan; Moffat, Harry K.; Carnes, Brian; Hooper, Russell Warren; Pawlowski, Roger P.

    2007-11-01

    Many current and future modeling applications at Sandia, including ASC milestones, will critically depend on the simultaneous solution of vastly different physical phenomena. Issues due to code coupling are often not addressed, understood, or even recognized. The objectives of this LDRD have spanned both theory and code development. We show that we have provided a fundamental analysis of coupling, i.e., when strong coupling versus a successive-substitution strategy is needed. We have enabled the implementation of tighter coupling strategies through additions to the NOX and Sierra code suites to make coupling strategies available now, leveraging existing functionality to do so. Specifically, we have built into NOX the capability to handle fully coupled simulations from multiple codes, as well as the capability to handle Jacobian-Free Newton-Krylov simulations that link multiple applications. We show how this capability may be accessed from within the Sierra Framework as well as from outside of Sierra. The critical impact from this LDRD is that we have shown how, and have delivered strategies for, enabling strong Newton-based coupling while respecting the modularity of existing codes. This will facilitate the use of these codes in a coupled manner to solve multi-physics applications.

  6. Keeping it Together: Advanced algorithms and software for magma dynamics (and other coupled multi-physics problems)

    NASA Astrophysics Data System (ADS)

    Spiegelman, M.; Wilson, C. R.

    2011-12-01

    A quantitative theory of magma production and transport is essential for understanding the dynamics of magmatic plate boundaries, intra-plate volcanism and the geochemical evolution of the planet. It also provides one of the most challenging computational problems in solid Earth science, as it requires consistent coupling of fluid and solid mechanics together with the thermodynamics of melting and reactive flows. Considerable work on these problems over the past two decades shows that small changes in assumptions of coupling (e.g. the relationship between melt fraction and solid rheology) can have profound effects on the behavior of these systems, which in turn affects critical computational choices such as discretizations, solvers and preconditioners. To make progress in exploring and understanding this physically rich system requires a computational framework that allows a more flexible, high-level description of multi-physics problems as well as increased flexibility in composing efficient algorithms for solution of the full non-linear coupled system. Fortunately, recent advances in available computational libraries and algorithms provide a platform for implementing such a framework. We present results from a new model building system that leverages functionality from both the FEniCS project (www.fenicsproject.org) and PETSc libraries (www.mcs.anl.gov/petsc) along with a model-independent options system and GUI, Spud (amcg.ese.ic.ac.uk/Spud). Key features from FEniCS include fully unstructured FEM with a wide range of elements; a high-level language (UFL) and code generation compiler (FFC) for describing the weak forms of residuals; and automatic differentiation for calculation of exact and approximate Jacobians. The overall strategy is to monitor/calculate residuals and Jacobians for the entire non-linear system of equations within a global non-linear solve based on PETSc's SNES routines. PETSc already provides a wide range of solvers and preconditioners, from

  7. Module-based Hybrid Uncertainty Quantification for Multi-physics Applications: Theory and Software

    SciTech Connect

    Tong, Charles; Chen, Xiao; Iaccarino, Gianluca; Mittal, Akshay

    2013-10-08

    In this project we proposed to develop an innovative uncertainty quantification methodology that captures the best of the two competing approaches in UQ, namely, intrusive and non-intrusive approaches. The idea is to develop the mathematics and the associated computational framework and algorithms to facilitate the use of intrusive or non-intrusive UQ methods in different modules of a multi-physics multi-module simulation model, in a way that physics code developers for different modules are shielded (as much as possible) from the chores of accounting for the uncertainties introduced by the other modules. As a result of our research and development, we have produced a number of publications, conference presentations, and a software product.

  8. A theory manual for multi-physics code coupling in LIME.

    SciTech Connect

    Belcourt, Noel; Bartlett, Roscoe Ainsworth; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren

    2011-03-01

    The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is the case in which computer codes are already available to solve different parts of a multi-physics problem and now need to be coupled with other such codes. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with multi-physics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time-dependent coupled systems. Example couplings are also demonstrated.

  9. Modeling and simulation of multi-physics multi-scale transport phenomenain bio-medical applications

    NASA Astrophysics Data System (ADS)

    Kenjereš, Saša

    2014-08-01

    We present a short overview of some of our most recent work that combines mathematical modeling, advanced computer simulations and state-of-the-art experimental techniques for physical transport phenomena in various bio-medical applications. In the first example, we tackle predictions of complex blood flow patterns in a patient-specific vascular system (carotid artery bifurcation) and transfer of the so-called "bad" cholesterol (low-density lipoprotein, LDL) within the multi-layered artery wall. This two-way coupling between the blood flow and the corresponding mass transfer of LDL within the artery wall is essential for predicting regions where atherosclerosis can develop. It is demonstrated that a recently developed mathematical model, which takes into account the complex multi-layer arterial-wall structure, produced LDL profiles within the artery wall in good agreement with in-vivo experiments in rabbits, and it can be used for predictions of locations where the initial stage of development of atherosclerosis may take place. The second example includes a combination of pulsating blood flow and medical drug delivery and deposition controlled by external magnetic field gradients in the patient-specific carotid artery bifurcation. The results of numerical simulations are compared with our own PIV (Particle Image Velocimetry) and MRI (Magnetic Resonance Imaging) measurements in a PDMS (silicon-based organic polymer) phantom. A very good agreement between simulations and experiments is obtained for different stages of the pulsating cycle. Application of magnetic drug targeting resulted in an up to tenfold increase in the efficiency of local deposition of the medical drug at desired locations. Finally, an LES (Large Eddy Simulation) of the aerosol distribution within the human respiratory system that includes up to eight bronchial generations is performed. A very good agreement between simulations and MRV (Magnetic Resonance Velocimetry) measurements is obtained.

  10. Optimization and Parallelization of the Thermal-Hydraulic Sub-channel Code CTF for High-Fidelity Multi-physics Applications

    SciTech Connect

    Salko, Robert K; Schmidt, Rodney; Avramova, Maria N

    2014-01-01

    This paper describes major improvements to the computational infrastructure of the CTF sub-channel code so that full-core sub-channel-resolved simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy (DOE) Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations--including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices--are first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a Single Program Multiple Data (SPMD) parallelization strategy targeting distributed-memory Multiple Instruction Multiple Data (MIMD) platforms and utilizing domain decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard MPI calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pre-processor utility that takes a greatly reduced set of user input compared to the traditional CTF input file. To run CTF in parallel, two additional libraries are currently needed: MPI, for inter-processor message passing, and the Parallel Extensible Toolkit for Scientific Computation (PETSc), which is leveraged to solve the global pressure matrix in parallel.
Results presented include a set of testing and verification calculations and performance tests assessing parallel scaling characteristics up to a full core, sub-channel-resolved model of Watts Bar Unit 1 under hot full-power conditions (193 17x17
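    The assembly-to-rank mapping described above can be sketched without any MPI machinery (hypothetical layout logic only; the real CTF implementation inserts MPI calls and uses PETSc for the global pressure solve):

```python
# Sketch of a CTF-style per-assembly domain decomposition: one MPI rank
# per fuel assembly on a rectangular core lattice, plus the neighbor
# ranks each rank must exchange lateral boundary data with.

def build_decomposition(n_rows, n_cols):
    """Return rank_of[(i, j)] for each assembly position and, for each
    rank, the list of neighbor ranks (N, S, W, E) it communicates with."""
    rank_of = {}
    rank = 0
    for i in range(n_rows):
        for j in range(n_cols):
            rank_of[(i, j)] = rank
            rank += 1
    neighbors = {}
    for (i, j), r in rank_of.items():
        adj = []
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nb = (i + di, j + dj)
            if nb in rank_of:       # skip positions outside the core
                adj.append(rank_of[nb])
        neighbors[r] = adj
    return rank_of, neighbors
```

    With one rank per assembly, interior ranks exchange data with four neighbors while edge and corner ranks have fewer exchange partners.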

  11. Mechanics: Ideas, problems, applications

    NASA Astrophysics Data System (ADS)

    Ishlinskii, A. Iu.

    The book contains the published articles and reports by academician Ishlinskii which deal with the concepts and ideas of modern mechanics, its role in providing a general understanding of the natural phenomena, and its applications to various problems in science and engineering. Attention is given to the methodological aspects of mechanics, to the history of the theories of plasticity, friction, gyroscopic and inertial systems, and inertial navigation, and to mathematical methods in mechanics. The book also contains essays on some famous scientists and engineers.

  12. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High-fidelity simulation of nuclear reactors entails large-scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high-fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high-fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large-scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced-order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced-order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower-dimensional subspace construction have been developed for single-physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large-scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large-scale, high-dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL
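    The subspace-construction step described above can be illustrated with a truncated Karhunen-Loeve (proper orthogonal decomposition) basis built from model response snapshots. This is a generic sketch of the idea, not the dissertation's actual algorithm:

```python
import numpy as np

def reduced_subspace(snapshots, energy=0.99):
    """Truncated Karhunen-Loeve / POD basis from response snapshots.

    snapshots: (n_dof, n_samples) array of centered model responses.
    Returns U_r whose columns span the dominant subspace capturing the
    requested fraction of the snapshot variance."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    var = s**2 / np.sum(s**2)          # variance fraction per mode
    r = int(np.searchsorted(np.cumsum(var), energy)) + 1
    return U[:, :r]
```

    Once the basis is in hand, forward UQ and inverse analyses operate on the few retained coefficients instead of the full set of degrees of freedom.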

  13. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    SciTech Connect

    Cetiner, Mustafa Sacit; none,; Flanagan, George F.; Poore III, Willis P.; Muhlheim, Michael David

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster-than-real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C++, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  14. Modelling transport phenomena in a multi-physics context

    NASA Astrophysics Data System (ADS)

    Marra, Francesco

    2015-01-01

    Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on areas which include the assessment of processing time, evaluation of heating uniformity, study of the impact on quality attributes of the final product, as well as the energy efficiency of these heating processes. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwave - MW and ohmic - OH) have gained wide interest in industrial food processing, and many applications using the above-mentioned technologies have been developed with the aim of reducing processing time, improving process efficiency and, in many cases, heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications, and can be subdivided into either direct electro-heating (as in the case of OH heating), where electrical current is applied directly to the food, or indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation which subsequently generates heat within the product. New software packages, which ease the solution of PDE-based mathematical models, and new computers, with larger RAM and more efficient CPUs, have enabled a growing interest in modelling transport phenomena in systems and processes - such as those encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in the case of electro-heating-assisted applications - in terms of interaction with other physical phenomena such as the distribution of electric or magnetic fields. This paper describes the approaches used in modelling transport phenomena in a multi-physics context such as RF-, MW- and OH-assisted heating.

  15. Modelling transport phenomena in a multi-physics context

    SciTech Connect

    Marra, Francesco

    2015-01-22

    Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on areas which include the assessment of processing time, evaluation of heating uniformity, study of the impact on quality attributes of the final product, as well as the energy efficiency of these heating processes. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwave - MW and ohmic - OH) have gained wide interest in industrial food processing, and many applications using the above-mentioned technologies have been developed with the aim of reducing processing time, improving process efficiency and, in many cases, heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications, and can be subdivided into either direct electro-heating (as in the case of OH heating), where electrical current is applied directly to the food, or indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation which subsequently generates heat within the product. New software packages, which ease the solution of PDE-based mathematical models, and new computers, with larger RAM and more efficient CPUs, have enabled a growing interest in modelling transport phenomena in systems and processes - such as those encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in the case of electro-heating-assisted applications - in terms of interaction with other physical phenomena such as the distribution of electric or magnetic fields. This paper describes the approaches used in modelling transport phenomena in a multi-physics context such as RF-, MW- and OH-assisted heating.

  16. Wind-Turbine Gear-Box Roller-Bearing Premature-Failure Caused by Grain-Boundary Hydrogen Embrittlement: A Multi-physics Computational Investigation

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Chenna, V.; Galgalikar, R.; Snipes, J. S.; Ramaswami, S.; Yavari, R.

    2014-11-01

    To help overcome the problem of horizontal-axis wind-turbine (HAWT) gear-box roller-bearing premature-failure, the root causes of this failure are currently being investigated using mainly laboratory and field-test experimental approaches. In the present work, an attempt is made to develop complementary computational methods and tools which can provide additional insight into the problem at hand (and do so with a substantially shorter turn-around time). Toward that end, a multi-physics computational framework has been developed which combines: (a) quantum-mechanical calculations of the grain-boundary hydrogen-embrittlement phenomenon and hydrogen bulk/grain-boundary diffusion (the two phenomena currently believed to be the main contributors to the roller-bearing premature-failure); (b) atomic-scale kinetic Monte Carlo-based calculations of the hydrogen-induced embrittling effect ahead of the advancing crack-tip; and (c) a finite-element analysis of the damage progression in, and the final failure of a prototypical HAWT gear-box roller-bearing inner raceway. Within this approach, the key quantities which must be calculated using each computational methodology are identified, as well as the quantities which must be exchanged between different computational analyses. The work demonstrates that the application of the present multi-physics computational framework enables prediction of the expected life of the most failure-prone HAWT gear-box bearing elements.

  17. Multi-Physics Analysis of the Fermilab Booster RF Cavity

    SciTech Connect

    Awida, M.; Reid, J.; Yakovlev, V.; Lebedev, V.; Khabiboulline, T.; Champion, M.; /Fermilab

    2012-05-14

    After about 40 years of operation, the RF accelerating cavities in the Fermilab Booster need an upgrade to improve their reliability and to increase the repetition rate in order to support a future experimental program. An increase in the repetition rate from 7 to 15 Hz entails increasing the power dissipation in the RF cavities, their ferrite-loaded tuners, and HOM dampers. The increased duty factor requires careful modelling of the RF heating effects in the cavity. A multi-physics analysis investigating both the RF and thermal properties of the Booster cavity under various operating conditions is presented in this paper.

  18. Welded metal bellows seals: Applications and problems

    SciTech Connect

    Smith

    1983-01-01

    Describes seals which avoid certain problems encountered with conventional mechanical shaft seals but have some characteristics which, if not recognized, can lead to premature failure. Discusses applications in which the seals excel, and potential problems which can be avoided by following certain guidelines. Advantages are wide temperature extremes, high pressures, high speeds and concentrated slurries. Potential problems include fatigue, chemically induced cracking or corrosion, pressure rupture, and faulty installation procedures. Fatigue is caused by misalignment between rotating and stationary pieces; vibration related to shaft rotation; and resonance vibration. Bellows seal installation dimensions should be provided by the vendor. Some factors such as cleanliness, seal face flatness, flush purity, care in shipment, storage and handling will affect bellows and conventional seals similarly.

  19. A flexible uncertainty quantification method for linearly coupled multi-physics systems

    SciTech Connect

    Chen, Xiao; Ng, Brenda; Sun, Yunwei; Tong, Charles

    2013-09-01

    Highlights:
    •We propose a “modularly hybrid” UQ methodology suitable for independent development of module-based multi-physics simulation.
    •Our algorithmic framework allows each module to have its own UQ method (either intrusive or non-intrusive).
    •Information from each module is combined systematically to propagate “global” uncertainty.
    •Our proposed approach allows for easy swapping of new methods for any module without the need to address incompatibilities.
    •We demonstrate the proposed framework on a practical application involving a multi-species reactive transport model.

    Abstract: This paper presents a novel approach to building an integrated uncertainty quantification (UQ) methodology suitable for the modern-day component-based approach to multi-physics simulation development. Our “hybrid” UQ methodology supports independent development of the most suitable UQ method, intrusive or non-intrusive, for each physics module by providing an algorithmic framework to couple these “stochastic” modules for propagating “global” uncertainties. We address algorithmic and computational issues associated with the construction of this hybrid framework. We demonstrate the utility of such a framework on a practical application involving a linearly coupled multi-species reactive transport model.
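    A toy version of the hybrid propagation idea (hypothetical modules: module A is treated non-intrusively by sampling it as a black box, while module B is linear, so its output moments follow in closed form, standing in for an intrusive treatment):

```python
import numpy as np

rng = np.random.default_rng(42)

def module_a(x):
    # Non-intrusive module: a black box evaluated sample by sample.
    return np.exp(0.1 * x)

def module_b_moments(mu_in, var_in):
    # Linearly coupled module y = 3*u + 1: for a linear map the output
    # mean and variance are known in closed form (no sampling needed).
    return 3.0 * mu_in + 1.0, 9.0 * var_in

# Global propagation: sample the uncertain input through module A,
# summarize its output, then push the moments through module B.
x = rng.normal(0.0, 1.0, 20000)    # uncertain global input
u = module_a(x)
mu_b, var_b = module_b_moments(u.mean(), u.var())
```

    Each module keeps its own UQ treatment; only summary statistics cross the module boundary, which is the sense in which the global uncertainty is propagated without forcing one method on every module.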

  20. Solid Oxide Fuel Cell - Multi-Physics and GUI

    SciTech Connect

    2013-10-10

    SOFC-MP is a simulation tool developed at PNNL to evaluate the tightly coupled multi-physical phenomena in SOFCs. The purpose of the tool is to allow SOFC manufacturers to numerically test changes in planar stack design to meet DOE technical targets. The SOFC-MP 2D module is designed for computational efficiency to enable rapid engineering evaluations for the operation of tall symmetric stacks. It can quickly compute distributions of the current density, voltage, temperature, and species composition in tall stacks with co-flow or counter-flow orientations. The 3D module computes distributions over the entire 3D domain and handles all planar configurations: co-flow, counter-flow, and cross-flow. The detailed data from a 3D simulation can be used as input for structural analysis. The SOFC-MP GUI integrates both the 2D and 3D modules, and it provides user-friendly pre-processing and post-processing capabilities.
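    A heavily simplified sketch of the kind of channel-level energy balance such a module marches along the stack (a hypothetical lumped co-flow model with made-up coefficients; SOFC-MP additionally couples electrochemistry and species transport):

```python
import numpy as np

def coflow_temperatures(n=100, T_fuel_in=1000.0, T_air_in=900.0,
                        ntu_fuel=0.02, ntu_air=0.04):
    """March a discretized co-flow channel pair: both streams enter at
    the same end and exchange heat through the shared interconnect."""
    Tf = np.empty(n)
    Ta = np.empty(n)
    Tf[0], Ta[0] = T_fuel_in, T_air_in
    for i in range(1, n):
        q = Tf[i - 1] - Ta[i - 1]          # local driving temperature gap
        Tf[i] = Tf[i - 1] - ntu_fuel * q   # hotter stream cools
        Ta[i] = Ta[i - 1] + ntu_air * q    # cooler stream heats
    return Tf, Ta
```

    In a counter-flow orientation the inlet of one stream coincides with the outlet of the other, so the same balance must be solved iteratively rather than by a single forward march, which is one reason the flow orientation matters for the 2D module.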

  1. Problems of applicability of statistical methods in cosmology

    SciTech Connect

    Levin, S. F.

    2015-12-15

    The problems arising from the incorrect formulation of measuring problems of identification for cosmological models and violations of conditions of applicability of statistical methods are considered.

  2. Lithium-Ion Battery Safety Study Using Multi-Physics Internal Short-Circuit Model (Presentation)

    SciTech Connect

    Kim, G-.H.; Smith, K.; Pesaran, A.

    2009-06-01

    This presentation outlines NREL's multi-physics simulation study to characterize an internal short by linking and integrating electrochemical cell, electro-thermal, and abuse reaction kinetics models.

  3. Application of Performance Problem-Solving to Educational Problems

    ERIC Educational Resources Information Center

    Bullock, Donald H.

    1973-01-01

    The relevance of performance problem-solving for education is discussed in terms of its effect on the marketability of graduates, the cost-effectiveness of educational programs, and the drop/push/failout rate. (Author)

  4. Multi-physics optimization of three-dimensional microvascular polymeric components

    NASA Astrophysics Data System (ADS)

    Aragón, Alejandro M.; Saksena, Rajat; Kozola, Brian D.; Geubelle, Philippe H.; Christensen, Kenneth T.; White, Scott R.

    2013-01-01

    This work discusses the computational design of microvascular polymeric materials, which aim at mimicking the behavior found in some living organisms that contain a vascular system. The optimization of the topology of the embedded three-dimensional microvascular network is carried out by coupling a multi-objective constrained genetic algorithm with a finite-element based physics solver, the latter validated through experiments. The optimization is carried out on multiple conflicting objective functions, namely the void volume fraction left by the network, the energy required to drive the fluid through the network and the maximum temperature when the material is subjected to thermal loads. The methodology presented in this work results in a viable alternative for the multi-physics optimization of these materials for active-cooling applications.

  5. Solid Oxide Fuel Cell - Multi-Physics and GUI

    2013-10-10

    SOFC-MP is a simulation tool developed at PNNL to evaluate the tightly coupled multi-physical phenomena in SOFCs. The purpose of the tool is to allow SOFC manufacturers to numerically test changes in planar stack design to meet DOE technical targets. The SOFC-MP 2D module is designed for computational efficiency to enable rapid engineering evaluations for the operation of tall symmetric stacks. It can quickly compute distributions for the current density, voltage, temperature, and species composition in tall stacks with co-flow or counter-flow orientations. The 3D module computes distributions in the entire 3D domain and handles all planar configurations: co-flow, counter-flow, and cross-flow. The detailed data from the 3D simulation can be used as input for structural analysis. The SOFC-MP GUI integrates both the 2D and 3D modules, and it provides user-friendly pre-processing and post-processing capabilities.

  6. Recent research in network problems with applications

    NASA Technical Reports Server (NTRS)

    Thompson, G. L.

    1980-01-01

    The capabilities of network codes and their extensions are surveyed in regard to specially structured integer programming problems which are solved by using the solutions of a series of ordinary network problems.

  7. Application of boundary integral equations to elastoplastic problems

    NASA Technical Reports Server (NTRS)

    Mendelson, A.; Albers, L. U.

    1975-01-01

    The application of boundary integral equations to elastoplastic problems is reviewed. Details of the analysis as applied to torsion problems and to plane problems are discussed. Results are presented for the elastoplastic torsion of a square cross-section bar and for the plane problem of notched beams. A comparison of different formulations as well as comparisons with experimental results are presented.

  8. Applications of NASTRAN to nuclear problems

    NASA Technical Reports Server (NTRS)

    Spreeuw, E.

    1972-01-01

    The extent to which suitable solutions may be obtained for one physics problem and two engineering-type problems is traced. NASTRAN appears to be a practical tool to solve one-group steady-state neutron diffusion equations. Transient diffusion analysis may be performed after new levels that allow time-dependent temperature calculations are developed. NASTRAN piecewise linear analysis may be applied to solve those plasticity problems for which a smooth stress-strain curve can be used to describe the nonlinear material behavior. The accuracy decreases when sharp transitions in the stress-strain relations are involved. Improved NASTRAN usefulness will be obtained when nonlinear material capabilities are extended to axisymmetric elements and to include provisions for time-dependent material properties and creep analysis. Rigid formats 3 and 5 proved to be very convenient for the buckling and normal-mode analysis of a nuclear fuel element.

  9. Multi-physics computational grains (MPCGs) for direct numerical simulation (DNS) of piezoelectric composite/porous materials and structures

    NASA Astrophysics Data System (ADS)

    Bishay, Peter L.; Dong, Leiting; Atluri, Satya N.

    2014-11-01

    Conceptually simple and computationally efficient polygonal computational grains with voids/inclusions are proposed for the direct numerical simulation of the micromechanics of piezoelectric composite/porous materials with non-symmetrical arrangements of voids/inclusions. These are named "Multi-Physics Computational Grains" (MPCGs) because each "mathematical grain" is geometrically similar to the irregular shapes of the physical grains of the material at the micro-scale. Each MPCG element therefore represents a grain of the matrix of the composite and can include a pore or an inclusion. MPCG is based on assuming independent displacements and electric potentials in each cell. The trial solutions in each MPCG do not need to satisfy the governing differential equations; however, they are still complete and can efficiently model concentrations of electric and mechanical fields. MPCGs can be used to model any generally anisotropic material as well as nonlinear problems. The essential idea can also be easily applied to accurately solve other multi-physical problems, such as the modeling of complex thermal-electro-magnetic-mechanical materials. Several examples are presented to show the capabilities of the proposed MPCGs and their accuracy.

  10. Multi-Scale, Multi-Physics Membrane Technology

    SciTech Connect

    Henshaw, W D

    2009-02-19

    Our objectives for this 10-week feasibility study were to gain an initial theoretical understanding of the numerical issues involved in modeling fluid-structure interface problems and to develop a prototype software infrastructure based on deforming composite grids to test the new approach on simple problems. For our first test case we considered a two-dimensional fluid-solid piston problem in which one half of the domain is occupied by fluid and the other half by a solid. We determined the exact solution to this problem using the method of characteristics and d'Alembert's solution to the wave equation. We solved this problem using our new numerical approximations and verified the results against the exact solution. As a second test case we considered a two-dimensional problem consisting of a shock in a fluid that strikes a cylindrically shaped solid.

  11. An RCM multi-physics ensemble over Europe: multi-variable evaluation to avoid error compensation

    NASA Astrophysics Data System (ADS)

    García-Díez, Markel; Fernández, Jesús; Vautard, Robert

    2015-12-01

    Regional Climate Models are widely used tools to add detail to the coarse resolution of global simulations. However, they are known to be affected by biases. Usually, published model evaluations use a reduced number of variables, frequently precipitation and temperature. Due to the complexity of the models, this may not be enough to assess their physical realism (e.g. to enable a fair comparison when weighting ensemble members). Furthermore, looking at only a few variables makes it difficult to trace model errors. Thus, in many previous studies, these biases are described but their underlying causes and mechanisms are often left unknown. In this work the ability of a multi-physics ensemble to reproduce the observed climatologies of many variables over Europe is analysed: temperature, precipitation, cloud cover, radiative fluxes and total soil moisture content. It is found that, during winter, the model suffers a significant cold bias over snow-covered regions. This is shown to be related to a poor representation of the snow-atmosphere interaction, and is amplified by an albedo feedback. Two members of the ensemble are able to alleviate this bias, but only by generating too large a cloud cover. During summer, a large sensitivity to the cumulus parameterization is found, related to large differences in cloud cover and the short-wave radiation flux. Results also show that small errors in one variable are sometimes a result of error compensation, so the high dimensionality of the model evaluation problem cannot be disregarded.

  12. Data-driven prognosis: a multi-physics approach verified via balloon burst experiment

    PubMed Central

    Chandra, Abhijit; Kar, Oliva

    2015-01-01

    A multi-physics formulation for data-driven prognosis (DDP) is developed. Unlike traditional predictive strategies that require controlled offline measurements or ‘training’ for determination of constitutive parameters to derive the transitional statistics, the proposed DDP algorithm relies solely on in situ measurements. It uses a deterministic mechanics framework, but the stochastic nature of the solution arises naturally from the underlying assumptions regarding the order of the conservation potential as well as the number of dimensions involved. The proposed DDP scheme is capable of predicting onset of instabilities. Because the need for offline testing (or training) is obviated, it can be easily implemented for systems where such a priori testing is difficult or even impossible to conduct. The prognosis capability is demonstrated here via a balloon burst experiment where the instability is predicted using only online visual observations. The DDP scheme never failed to predict the incipient failure, and no false-positives were issued. The DDP algorithm is applicable to other types of datasets. Time horizons of DDP predictions can be adjusted by using memory over different time windows. Thus, a big dataset can be parsed in time to make a range of predictions over varying time horizons. PMID:27547071

  13. Fractal applications to complex crustal problems

    NASA Technical Reports Server (NTRS)

    Turcotte, Donald L.

    1989-01-01

    Complex scale-invariant problems obey fractal statistics. The basic definition of a fractal distribution is that the number of objects with a characteristic linear dimension greater than r satisfies the relation N ~ r^(-D), where D is the fractal dimension. Fragmentation often satisfies this relation, as does the distribution of earthquakes. The classic relationship between the length of a rocky coastline and the step length can be derived from this relation. Power-law relations for spectra can also be related to fractal dimensions; topography and gravity are examples. Spectral techniques can be used to obtain maps of fractal dimension and roughness amplitude, which provide a quantitative measure for texture analysis. It is argued that the distribution of stress and strength in a complex crustal region, such as the Alps, is fractal. Based on this assumption, the observed frequency-magnitude relation for the seismicity in the region can be derived.
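
The counting relation above invites a quick numerical illustration. The sketch below is entirely hypothetical (the fragment sizes are synthetic Pareto-distributed samples, not data from the paper): it estimates the fractal dimension D by fitting the log-log slope of the exceedance count N(>r) against the scale r.

```python
import math
import random

def fractal_dimension(sizes, scales):
    """Estimate D in N(>r) ~ r^(-D) via a least-squares fit in log-log space."""
    xs, ys = [], []
    for r in scales:
        n = sum(1 for s in sizes if s > r)  # number of objects larger than r
        if n > 0:
            xs.append(math.log(r))
            ys.append(math.log(n))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    # ordinary least-squares slope; D is its negation
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope

# Synthetic fragment sizes drawn from a Pareto law with exponent 2
random.seed(0)
sizes = [random.paretovariate(2.0) for _ in range(100_000)]
scales = [1.5, 2.0, 3.0, 5.0, 8.0]
print(round(fractal_dimension(sizes, scales), 2))  # close to 2
```

Because the samples follow a Pareto law of exponent 2, the recovered dimension should come out near D = 2, illustrating how a power-law census of object sizes yields the fractal dimension.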

  14. Development of a multi-physics calculation platform dedicated to irradiation devices in a material testing reactor

    SciTech Connect

    Bonaccorsi, T.; Di Salvo, J.; Aggery, A.; D'Aletto, C.; Doederlein, C.; Sireta, P.; Willermoz, G.; Daniel, M.

    2006-07-01

    The physical phenomena involved in irradiation devices within material testing reactors are complex (neutron and photon interactions, nuclear heating, thermal hydraulics, ...). However, the simulation of these phenomena requires high precision in order to control the conditions of the experiment and the development of predictive models. Until now, physicists have used different tools with several approximations at each interface. The aim of this work is to develop a calculation platform dedicated to numerical multi-physics simulations of irradiation devices in the future European Jules Horowitz Reactor [1]. This platform is based on a multi-physics data model which describes the geometries, materials and state parameters associated with a sequence of thematic (neutronics, thermal hydraulics...) computations of these devices. Once a computation is carried out, the results can be returned to the data model (DM). The DM is encapsulated in a dedicated module of the SALOME platform [2] and exchanges data with SALOME native modules. This method allows a parametric description of a study, independent of the code used to perform the simulation. The application proposed in this paper concerns a neutronic calculation of a fuel irradiation device with the new method of characteristics implemented in the APOLLO2 code [3]. The device is located at the periphery of the OSIRIS core. This choice is motivated by the possibility of comparing the calculation with experimental results, which cannot be done for the Jules Horowitz Reactor, currently in its design study phase. (authors)

  15. CT perfusion: principles, applications, and problems

    NASA Astrophysics Data System (ADS)

    Lee, Ting-Yim

    2004-10-01

    The fast scanning speed of current slip-ring CT scanners has enabled the development of perfusion imaging techniques with intravenous injection of contrast medium. In a typical CT perfusion study, contrast medium is injected and rapid scanning at a frequency of 1-2 Hz is used to monitor the first circulation of the injected contrast medium through a 1-2 cm thick slab of tissue. From the acquired time-series of CT images, arteries can be identified within the tissue slab to derive the arterial contrast concentration curve, Ca(t) while each individual voxel produces a tissue residue curve, Q(t) for the corresponding tissue region. Deconvolution between the measured Ca(t) and Q(t) leads to the determination of cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) in brain studies. In this presentation, an important application of CT perfusion in acute stroke studies - the identification of the ischemic penumbra via the CBF/CBV mismatch and factors affecting the quantitative accuracy of deconvolution, including partial volume averaging, arterial delay and dispersion are discussed.
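
As a rough illustration of the deconvolution step described above, the sketch below builds synthetic Ca(t) and Q(t) curves and recovers the flow-scaled residue function by forward substitution, exploiting the lower-triangular Toeplitz structure of the discrete convolution. This is a hypothetical toy, not the clinical algorithm (which typically uses regularized SVD deconvolution to cope with noise): CBF is read off as the peak of the deconvolved curve, CBV as the area ratio, and MTT from the central volume principle MTT = CBV/CBF.

```python
# Illustrative sketch of deconvolution in CT perfusion (synthetic curves).
dt = 1.0  # scan interval in seconds

# Synthetic arterial input Ca(t) and a boxcar residue R(t) with MTT = 4 s
ca = [0.0, 2.0, 5.0, 3.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0]
cbf_true = 0.6  # flow in arbitrary units
residue = [1.0 if t < 4 else 0.0 for t in range(len(ca))]

# Forward model: Q(t) = CBF * (Ca convolved with R)(t) * dt
q = [cbf_true * dt * sum(ca[j] * residue[i - j] for j in range(i + 1))
     for i in range(len(ca))]

# Deconvolution: the convolution matrix is lower-triangular Toeplitz in Ca,
# so k(t) = CBF * R(t) is recovered by forward substitution once the leading
# sample is nonzero; our synthetic Ca starts at 0, so shift to the first peak.
start = next(i for i, v in enumerate(ca) if v != 0)
a, b = ca[start:], q[start:]
k = []
for i in range(len(a)):
    acc = b[i] - dt * sum(a[i - j] * k[j] for j in range(i))
    k.append(acc / (dt * a[0]))

cbf = max(k)            # CBF = peak of the deconvolved curve
cbv = sum(q) / sum(ca)  # CBV from the area ratio (central volume principle)
print(round(cbf, 3), round(cbv / cbf, 2))  # → 0.6 4.0  (CBF, then MTT)
```

With noise-free synthetic data the forward substitution is exact; real studies need regularization because Q(t) is noisy and Ca(t) is delayed and dispersed, exactly the factors the abstract discusses.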

  16. Applications of Genetic Methods to NASA Design and Operations Problems

    NASA Technical Reports Server (NTRS)

    Laird, Philip D.

    1996-01-01

    We review four recent NASA-funded applications in which evolutionary/genetic methods are important. In the process we survey: the kinds of problems being solved today with these methods; techniques and tools used; problems encountered; and areas where research is needed. The presentation slides are annotated briefly at the top of each page.

  17. AI techniques for a space application scheduling problem

    NASA Technical Reports Server (NTRS)

    Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.

    1991-01-01

    Scheduling is a very complex optimization problem which can be categorized as NP-complete. NP-complete problems are quite diverse, as are the algorithms used in searching for an optimal solution. In most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. Some of the factors which make space application scheduling problems difficult are examined, and a fairly new AI-based technique called tabu search is presented as applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment which produces minimum impact on the other instruments and maximizes target observation times. The SOLSTICE instrument will fly on board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the Earth Observing System (Eos).

  18. Application of remote sensing to water resources problems

    NASA Technical Reports Server (NTRS)

    Clapp, J. L.

    1972-01-01

    The following conclusions were reached concerning the applications of remote sensing to water resources problems: (1) Remote sensing methods provide the most practical method of obtaining data for many water resources problems; (2) the multi-disciplinary approach is essential to the effective application of remote sensing to water resource problems; (3) there is a correlation between the amount of suspended solids in an effluent discharged into a water body and reflected energy; (4) remote sensing provides for more effective and accurate monitoring, discovery and characterization of the mixing zone of effluent discharged into a receiving water body; and (5) it is possible to differentiate between blue and blue-green algae.

  19. Scalable parallel solution coupling for multi-physics reactor simulation.

    SciTech Connect

    Tautges, T. J.; Caceres, A.; Mathematics and Computer Science

    2009-01-01

    Reactor simulation depends on the coupled solution of various physics types, including neutronics, thermal/hydraulics, and structural mechanics. This paper describes the formulation and implementation of a parallel solution coupling capability being developed for reactor simulation. The coupling process consists of mesh and coupler initialization, point location, field interpolation, and field normalization. We report here our test of this capability on an example problem, namely, a reflector assembly from an advanced burner test reactor. Performance of this coupler in parallel is reasonable for the chosen problem size and range of processor counts. The runtime is dominated by startup costs, which amortize over the entire coupled simulation. Future efforts will include adding more sophisticated interpolation and normalization methods, to accommodate different numerical solvers used in various physics modules and to obtain better conservation properties for certain field types.
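
The coupling steps named above (point location, field interpolation, field normalization) can be sketched in one dimension. This is a hypothetical illustration of the idea, not the actual MOAB-based coupler: each target point is located in the source mesh, the field is interpolated linearly, and the result is rescaled so the integral of the transferred field matches the source, a simple global conservation fix-up.

```python
import bisect

def transfer(src_x, src_f, tgt_x):
    """Transfer a nodal field from a source mesh to a non-matching target mesh."""
    tgt_f = []
    for x in tgt_x:
        # point location: find the source cell containing x
        i = min(max(bisect.bisect_right(src_x, x) - 1, 0), len(src_x) - 2)
        t = (x - src_x[i]) / (src_x[i + 1] - src_x[i])
        # linear interpolation within the cell
        tgt_f.append((1 - t) * src_f[i] + t * src_f[i + 1])
    # normalization: rescale so trapezoidal integrals match (conservation)
    def integral(xs, fs):
        return sum(0.5 * (fs[i] + fs[i + 1]) * (xs[i + 1] - xs[i])
                   for i in range(len(xs) - 1))
    scale = integral(src_x, src_f) / integral(tgt_x, tgt_f)
    return [f * scale for f in tgt_f]

src_x = [0.0, 1.0, 2.0, 3.0]
src_f = [0.0, 2.0, 2.0, 0.0]       # e.g. a power-density profile
tgt_x = [0.0, 0.5, 1.5, 2.5, 3.0]  # a non-matching thermal-hydraulics mesh
print([round(f, 2) for f in transfer(src_x, src_f, tgt_x)])
# → [0.0, 1.14, 2.29, 1.14, 0.0]
```

The global rescaling is the crudest possible normalization; the abstract's closing remark about "more sophisticated interpolation and normalization methods" refers to doing this conservatively per element rather than with a single scalar.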

  20. Application of remote sensing to solution of ecological problems

    NASA Technical Reports Server (NTRS)

    Adelman, A.

    1972-01-01

    The application of remote sensing techniques to solving ecological problems is discussed. The three phases of environmental ecological management are examined. The differences between discovery and exploitation of natural resources and their ecological management are described. The specific application of remote sensing to water management is developed.

  1. Multi-scale/multi-physical modeling in head/disk interface of magnetic data storage

    NASA Astrophysics Data System (ADS)

    Chung, Pil Seung; Smith, Robert; Vemuri, Sesha Hari; Jhon, Young In; Tak, Kyungjae; Moon, Il; Biegler, Lorenz T.; Jhon, Myung S.

    2012-04-01

    The model integration of the head-disk interface (HDI) in the hard disk drive system, which includes the hierarchy of highly interactive layers (magnetic layer, carbon overcoat (COC), lubricant, and air bearing system (ABS)), has recently become a focus of efforts to resolve technical barriers and enhance reliability. Heat-assisted magnetic recording especially demands that models simultaneously incorporate thermal and mechanical phenomena by considering the enormous number of combinatorial cases of materials and multi-scale/multi-physical phenomena. In this paper, we explore multi-scale/multi-physical simulation methods for the HDI, which holistically integrate the magnetic layers, COC, lubricants, and ABS under non-isothermal conditions.

  2. Osiris: A Modern, High-Performance, Coupled, Multi-Physics Code For Nuclear Reactor Core Analysis

    SciTech Connect

    Procassini, R J; Chand, K K; Clouse, C J; Ferencz, R M; Grandy, J M; Henshaw, W D; Kramer, K J; Parsons, I D

    2007-02-26

    To meet the simulation needs of the GNEP program, LLNL is leveraging a suite of high-performance codes to be used in the development of a multi-physics tool for modeling nuclear reactor cores. The Osiris code project, which began last summer, is employing modern computational science techniques in the development of the individual physics modules and the coupling framework. Initial development is focused on coupling thermal-hydraulics and neutral-particle transport, while later phases of the project will add thermal-structural mechanics and isotope depletion. Osiris will be applicable to the design of existing and future reactor systems through the use of first-principles, coupled physics models with fine-scale spatial resolution in three dimensions and fine-scale particle-energy resolution. Our intent is to replace an existing set of legacy, serial codes which require significant approximations and assumptions, with an integrated, coupled code that permits the design of a reactor core using a first-principles physics approach on a wide range of computing platforms, including the world's most powerful parallel computers. A key research activity of this effort deals with the efficient and scalable coupling of physics modules which utilize rather disparate mesh topologies. Our approach allows each code module to use a mesh topology and resolution that is optimal for the physics being solved, and employs a mesh-mapping and data-transfer module to effect the coupling. Additional research is planned in the area of scalable, parallel thermal-hydraulics, high-spatial-accuracy depletion and coupled-physics simulation using Monte Carlo transport.

  3. Research on TRIZ and CAIs Application Problems for Technology Innovation

    NASA Astrophysics Data System (ADS)

    Li, Xiangdong; Li, Qinghai; Bai, Zhonghang; Geng, Lixiao

    To realize the application of the theory of inventive problem solving (TRIZ) and computer-aided innovation software (CAIs), several key problems must be solved, such as the choice of technology innovation mode, the establishment of a technology innovation organization network (TION), and the achievement of an innovation process based on TRIZ and CAIs. This paper describes the demands for TRIZ and CAIs arising from the characteristics and existing problems of manufacturing enterprises. It explains that manufacturing enterprises need to set up an open, enterprise-led TION and pursue cooperative innovation with institutions of higher learning. A process of technology innovation based on TRIZ and CAIs is established from a research-and-development point of view. The application of TRIZ and CAIs in FY Company is summarized, and their effect is illustrated using the technology innovation of the close goggle valve product.

  4. Coupling multi-physics models to cardiac mechanics.

    PubMed

    Nordsletten, D A; Niederer, S A; Nash, M P; Hunter, P J; Smith, N P

    2011-01-01

    We outline and review the mathematical framework for representing mechanical deformation and contraction of the cardiac ventricles, and how this behaviour integrates with other processes crucial for understanding and modelling heart function. Building on general conservation principles of space, mass and momentum, we introduce an arbitrary Eulerian-Lagrangian framework governing the behaviour of both fluid and solid components. Exploiting the natural alignment of cardiac mechanical properties with the tissue microstructure, finite deformation measures and myocardial constitutive relations are referred to embedded structural axes. Coupling approaches for solving this large deformation mechanics framework with three dimensional fluid flow, coronary hemodynamics and electrical activation are described. We also discuss the potential of cardiac mechanics modelling for clinical applications.

  5. Overview of Krylov subspace methods with applications to control problems

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace and to seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
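
The core idea, building an orthonormal basis of the Krylov subspace span{b, Ab, A²b, …}, is usually realized with the Arnoldi process. Below is a minimal hypothetical sketch in pure Python (no handling of the "lucky breakdown" case H[j+1][j] = 0), not code from the report:

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def arnoldi(A, b, m):
    """Build an orthonormal basis Q of span{b, Ab, ..., A^m b} together with
    the (m+1) x m upper-Hessenberg matrix H satisfying A Q_m = Q_{m+1} H."""
    beta = math.sqrt(dot(b, b))
    Q = [[x / beta for x in b]]
    H = [[0.0] * m for _ in range(m + 1)]
    for j in range(m):
        w = matvec(A, Q[j])
        for i in range(j + 1):  # modified Gram-Schmidt orthogonalization
            H[i][j] = dot(Q[i], w)
            w = [x - H[i][j] * q for x, q in zip(w, Q[i])]
        H[j + 1][j] = math.sqrt(dot(w, w))
        Q.append([x / H[j + 1][j] for x in w])
    return Q, H

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 0.0, 0.0]
Q, H = arnoldi(A, b, 2)
print(round(dot(Q[0], Q[1]), 10), round(dot(Q[1], Q[2]), 10))  # → 0.0 0.0
```

Projecting the original problem onto Q reduces it to a small problem in H, which is the step the abstract describes as approximating a size-N problem by one of dimension m.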

  6. An application of the matching law to severe problem behavior.

    PubMed Central

    Borrero, John C; Vollmer, Timothy R

    2002-01-01

    We evaluated problem behavior and appropriate behavior using the matching equation with 4 individuals with developmental disabilities. Descriptive observations were conducted during interactions between the participants and their primary care providers in either a clinical laboratory environment (3 participants) or the participant's home (1 participant). Data were recorded on potential reinforcers, problem behavior, and appropriate behavior. After identifying the reinforcers that maintained each participant's problem behavior by way of functional analysis, the descriptive data were analyzed retrospectively, based on the matching equation. Results showed that the proportional rate of problem behavior relative to appropriate behavior approximately matched the proportional rate of reinforcement for problem behavior for all participants. The results extend prior research because a functional analysis was conducted and because multiple sources of reinforcement (other than attention) were evaluated. Methodological constraints were identified, which may limit the application of the matching law on both practical and conceptual levels. PMID:11936543
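
The matching equation used in such analyses compares relative response rates with relative reinforcement rates. A hypothetical numerical illustration (the counts below are invented, not the participants' data):

```python
# Matching-law check: the proportion of responding allocated to problem
# behavior should approximate the proportion of reinforcement it produced.
problem_responses, appropriate_responses = 42, 18
reinf_problem, reinf_appropriate = 28, 12

behavior_prop = problem_responses / (problem_responses + appropriate_responses)
reinf_prop = reinf_problem / (reinf_problem + reinf_appropriate)
print(round(behavior_prop, 2), round(reinf_prop, 2))  # → 0.7 0.7
```

When the two proportions agree, as in this contrived example, responding is said to "match" the obtained reinforcement, which is the pattern the study reports for all four participants.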

  7. Generalized Approaches to the Maxbet Problem and the Maxdiff Problem, with Applications to Canonical Correlations.

    ERIC Educational Resources Information Center

    ten Berge, Jos M. F.

    1988-01-01

    A summary and a unified treatment of fully general computational solutions for two criteria for transforming two or more matrices to maximal agreement are provided. The two criteria--Maxdiff and Maxbet--have applications in the rotation of factor loading or configuration matrices to maximal agreement and the canonical correlation problem. (SLD)

  8. Problem solving in magnetic field: Animation in mobile application

    NASA Astrophysics Data System (ADS)

    Najib, A. S. M.; Othman, A. P.; Ibarahim, Z.

    2014-09-01

    This paper is focused on the development of a mobile application for smartphones and tablets (Android, iPhone, and iPad) as a problem-solving tool in magnetic field topics. The application designs consist of animations that were created using the Flash 8 software and imported into slides on prezi.com. The Prezi slides were then duplicated in PowerPoint format, and a question bank with a complete answer scheme was also generated as a menu in the application. The published mobile application can be viewed and downloaded at the Infinite Monkey website or from the Google Play Store on your gadgets. Statistics from the Google Play Developer Console show the high impact of the application's usage all over the world.

  9. Innovative Applications of Genetic Algorithms to Problems in Accelerator Physics

    SciTech Connect

    Hofler, Alicia; Terzic, Balsa; Kramer, Matthew; Zvezdin, Anton; Morozov, Vasiliy; Roblin, Yves; Lin, Fanglei; Jarvis, Colin

    2013-01-01

    The genetic algorithm (GA) is a relatively new technique that implements the principles nature uses in biological evolution in order to optimize a multidimensional nonlinear problem. The GA works especially well for problems with a large number of local extrema, where traditional methods (such as conjugate gradient, steepest descent, and others) fail or, at best, underperform. The field of accelerator physics, among others, abounds with problems which lend themselves to optimization via GAs. In this paper, we report on the successful application of GAs in several problems related to the existing CEBAF facility, the proposed MEIC at Jefferson Lab, and a radio frequency (RF) gun based injector. These encouraging results are a step forward in optimizing accelerator design and provide an impetus for the application of GAs to other problems in the field. To that end, we discuss the details of the GAs used, including a newly devised enhancement which leads to improved convergence to the optimum, and we make recommendations for future GA developments and accelerator applications.
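
As a flavor of the technique, the sketch below is a minimal GA (a hypothetical illustration, not the authors' accelerator code): tournament selection, blend crossover, and Gaussian mutation applied to a one-dimensional rugged fitness landscape where gradient-based methods can stall in a local maximum.

```python
import math
import random

def fitness(x):
    # Rugged landscape: a parabola with many cosine-induced local maxima
    return -(x - 2.0) ** 2 + 0.5 * math.cos(10.0 * x)

random.seed(1)
pop = [random.uniform(-5.0, 5.0) for _ in range(40)]
for generation in range(60):
    # tournament selection: keep the best of 3 randomly chosen individuals
    parents = [max(random.sample(pop, 3), key=fitness) for _ in range(40)]
    pop = []
    for i in range(0, 40, 2):
        a, b = parents[i], parents[i + 1]
        w = random.random()
        # blend crossover followed by Gaussian mutation
        for child in (w * a + (1.0 - w) * b, (1.0 - w) * a + w * b):
            pop.append(child + random.gauss(0.0, 0.1))
best = max(pop, key=fitness)
print(round(best, 2))  # the population should settle near the global maximum at x ≈ 2
```

Gradient methods started in the wrong basin would converge to one of the nearby local maxima; the GA's population-based search is what lets it escape them, which is the property the abstract highlights.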

  10. Conceptions of Efficiency: Applications in Learning and Problem Solving

    ERIC Educational Resources Information Center

    Hoffman, Bobby; Schraw, Gregory

    2010-01-01

    The purpose of this article is to clarify conceptions, definitions, and applications of learning and problem-solving efficiency. Conceptions of efficiency vary within the field of educational psychology, and there is little consensus as to how to define, measure, and interpret the efficiency construct. We compare three diverse models that differ…

  11. An Application of Calculus: Optimum Parabolic Path Problem

    ERIC Educational Resources Information Center

    Atasever, Merve; Pakdemirli, Mehmet; Yurtsever, Hasan Ali

    2009-01-01

    A practical and technological application of calculus problem is posed to motivate freshman students or junior high school students. A variable coefficient of friction is used in modelling air friction. The case in which the coefficient of friction is a decreasing function of altitude is considered. The optimum parabolic path for a flying object…

  12. The Application of Acceptance and Commitment Therapy to Problem Anger

    ERIC Educational Resources Information Center

    Eifert, Georg H.; Forsyth, John P.

    2011-01-01

    The goal of this paper is to familiarize clinicians with the use of Acceptance and Commitment Therapy (ACT) for problem anger by describing the application of ACT to a case of a 45-year-old man struggling with anger. ACT is an approach and set of intervention technologies that support acceptance and mindfulness processes linked with commitment and…

  13. Applications and Problems of Computer Assisted Education in Turkey

    ERIC Educational Resources Information Center

    Usun, Salih

    2006-01-01

    This paper focuses on Computer Assisted Education (CAE) in Turkey. It reviews the related literature; examines the projects, applications, and problems of CAE in Turkey in comparison with the rest of the world; exposes the positive and negative aspects of the projects; and presents a number of suggestions on the effective use of…

  14. Assessing student written problem solutions: A problem-solving rubric with application to introductory physics

    NASA Astrophysics Data System (ADS)

    Docktor, Jennifer L.; Dornfeld, Jay; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Jackson, Koblar Alan; Mason, Andrew; Ryan, Qing X.; Yang, Jie

    2016-06-01

    Problem solving is a complex process valuable in everyday life and crucial for learning in the STEM fields. To support the development of problem-solving skills it is important for researchers and curriculum developers to have practical tools that can measure the difference between novice and expert problem-solving performance in authentic classroom work. It is also useful if such tools can be employed by instructors to guide their pedagogy. We describe the design, development, and testing of a simple rubric to assess written solutions to problems given in undergraduate introductory physics courses. In particular, we present evidence for the validity, reliability, and utility of the instrument. The rubric identifies five general problem-solving processes and defines the criteria to attain a score in each: organizing problem information into a Useful Description, selecting appropriate principles (Physics Approach), applying those principles to the specific conditions in the problem (Specific Application of Physics), using Mathematical Procedures appropriately, and displaying evidence of an organized reasoning pattern (Logical Progression).

  15. Fifth international conference on hyperbolic problems -- theory, numerics, applications: Abstracts

    SciTech Connect

    1994-12-31

    The conference demonstrated that hyperbolic problems and conservation laws play an important role in many areas including industrial applications and the studying of elasto-plastic materials. Among the various topics covered in the conference, the authors mention: the big bang theory, general relativity, critical phenomena, deformation and fracture of solids, shock wave interactions, numerical simulation in three dimensions, the level set method, multidimensional Riemann problem, application of the front tracking in petroleum reservoir simulations, global solution of the Navier-Stokes equations in high dimensions, recent progress in granular flow, and the study of elastic plastic materials. The authors believe that the new ideas, tools, methods, problems, theoretical results, numerical solutions and computational algorithms presented or discussed at the conference will benefit the participants in their current and future research.

  16. A Memory-Based Reasoning Applicable to Business Problems

    NASA Astrophysics Data System (ADS)

    Maeda, Kazuho; Yaginuma, Yoshinori

    Recently, data mining is remarkable as a practical solution for huge accumulated data. The classification, the goal of which is that a new data is classified into one of given groups, is one of the most generally used data mining techniques. In this paper, we discuss advantages of Memory-Based Reasoning (MBR), one of classification methods, and point out some problems to use it practically. To solve them, we propose a MBR applicable to business problems, with self-determination of proper number of neighbors, proper feature weights, normalized distance metric between categorical values, high accuracy despite dependent features, and high speed prediction. We experimentally compare our MBR with usual MBR and C5.0, one of the most popular classification methods. We also discuss the fitness of our MBR to business problems, through an application study of our MBR to the financial credit management.

  17. Applications of polymeric smart materials to environmental problems.

    PubMed Central

    Gray, H N; Bergbreiter, D E

    1997-01-01

    New methods for the reduction and remediation of hazardous wastes like carcinogenic organic solvents, toxic materials, and nuclear contamination are vital to environmental health. Procedures for effective waste reduction, detection, and removal are important components of any such methods. Toward this end, polymeric smart materials are finding useful applications. Polymer-bound smart catalysts are useful in waste minimization, catalyst recovery, and catalyst reuse. Polymeric smart coatings have been developed that are capable of both detecting and removing hazardous nuclear contaminants. Such applications of smart materials involving catalysis chemistry, sensor chemistry, and chemistry relevant to decontamination methodology are especially applicable to environmental problems. PMID:9114277

  18. Problems in classical potential theory with applications to mathematical physics

    NASA Astrophysics Data System (ADS)

    Lundberg, Erik

    In this thesis we are interested in some problems regarding harmonic functions. The topics are divided into three chapters. Chapter 2 concerns singularities developed by solutions of the Cauchy problem for a holomorphic elliptic equation, especially Laplace's equation. The principal motivation is to locate the singularities of the Schwarz potential. The results have direct applications to Laplacian growth (or the Hele-Shaw problem). Chapter 3 concerns the Dirichlet problem when the boundary is an algebraic set and the data is a polynomial or a real-analytic function. We pursue some questions related to the Khavinson-Shapiro conjecture. A main topic of interest is analytic continuability of the solution outside its natural domain. Chapter 4 concerns certain complex-valued harmonic functions and their zeros. The special cases we consider apply directly in astrophysics to the study of multiple-image gravitational lenses.

  19. [Problems and countermeasures in the application of constructed wetlands].

    PubMed

    Huang, Jin-Lou; Chen, Qin; Xu, Lian-Huang

    2013-01-01

    Constructed wetlands as a wastewater eco-treatment technology are developed in recent decades. It combines sewage treatment with the eco-environment in an efficient way. It treats the sewage effectively, and meanwhile beautifies the environment, creates ecological landscape, and brings benefits to the environment and economics. The unique advantages of constructed wetlands have attracted intensive attention since developed. Constructed wetlands are widely used in treatment of domestic sewage, industrial wastewater, and wastewater from mining and petroleum production. However, many problems are found in the practical application of constructed wetland, e. g. they are vulnerable to changes in climatic conditions and temperature, their substrates are easily saturated and plugged, they are readily affected by plant species, they often occupy large areas, and there are other problems including irrational management, non-standard design, and a single function of ecological service. These problems to a certain extent influence the efficiency of constructed wetlands in wastewater treatment, shorten the life of the artificial wetland, and hinder the application of artificial wetland. The review presents correlation analysis and countermeasures for these problems, in order to improve the efficiency of constructed wetland in wastewater treatment, and provide reference for the application and promotion of artificial wetland.

  20. [The Abbreviated Injury Scale (AIS). Options and problems in application].

    PubMed

    Haasper, C; Junge, M; Ernstberger, A; Brehme, H; Hannawald, L; Langer, C; Nehmzow, J; Otte, D; Sander, U; Krettek, C; Zwipp, H

    2010-05-01

    The new AIS (Abbreviated Injury Scale) was released with an update by the AAAM (Association for the Advancement of Automotive Medicine) in 2008. It is a universal scoring system in the field of trauma applicable in clinic and research. In engineering it is used as a classification system for vehicle safety. The AIS can therefore be considered as an international, interdisciplinary and universal code of injury severity. This review focuses on a historical overview, potential applications and new coding options in the current version and also outlines the associated problems. PMID:20376615

  1. Progress on PRONGHORN Application to NGNP Related Problems

    SciTech Connect

    Dana A. Knoll

    2009-08-01

    We are developing a multiphysics simulation tool for Very High-Temperature gascooled Reactors (VHTR). The simulation tool, PRONGHORN, takes advantages of the Multiphysics Object-Oriented Simulation library, and is capable of solving multidimensional thermal-fluid and neutronics problems implicitly in parallel. Expensive Jacobian matrix formation is alleviated by the Jacobian-free Newton-Krylov method, and physics-based preconditioning is applied to improve the convergence. The initial development of PRONGHORN has been focused on the pebble bed corec concept. However, extensions required to simulate prismatic cores are underway. In this progress report we highlight progress on application of PRONGHORN to PBMR400 benchmark problems, extension and application of PRONGHORN to prismatic core reactors, and progress on simulations of 3-D transients.

  2. SIAM conference on inverse problems: Geophysical applications. Final technical report

    SciTech Connect

    1995-12-31

    This conference was the second in a series devoted to a particular area of inverse problems. The theme of this series is to discuss problems of major scientific importance in a specific area from a mathematical perspective. The theme of this symposium was geophysical applications. In putting together the program we tried to include a wide range of mathematical scientists and to interpret geophysics in as broad a sense as possible. Our speaker came from industry, government laboratories, and diverse departments in academia. We managed to attract a geographically diverse audience with participation from five continents. There were talks devoted to seismology, hydrology, determination of the earth`s interior on a global scale as well as oceanographic and atmospheric inverse problems.

  3. Application of the boundary integral method to immiscible displacement problems

    SciTech Connect

    Masukawa, J.; Horne, R.N.

    1988-08-01

    This paper presents an application of the boundary integral method (BIM) to fluid displacement problems to demonstrate its usefulness in reservoir simulation. A method for solving two-dimensional (2D), piston-like displacement for incompressible fluids with good accuracy has been developed. Several typical example problems with repeated five-spot patterns were solved for various mobility ratios. The solutions were compared with the analytical solutions to demonstrate accuracy. Singularity programming was found to be a major advantage in handling flow in the vicinity of wells. The BIM was found to be an excellent way to solve immiscible displacement problems. Unlike analytic methods, it can accommodate complex boundary shapes and does not suffer from numerical dispersion at the front.

  4. Space Life Support Technology Applications to Terrestrial Environmental Problems

    NASA Technical Reports Server (NTRS)

    Schwartzkopf, Steven H.; Sleeper, Howard L.

    1993-01-01

    Many of the problems now facing the human race on Earth are, in fact, life support issues. Decline of air Quality as a result of industrial and automotive emissions, pollution of ground water by organic pesticides or solvents, and the disposal of solid wastes are all examples of environmental problems that we must solve to sustain human life. The technologies currently under development to solve the problems of supporting human life for advanced space missions are extraordinarily synergistic with these environmental problems. The development of these technologies (including both physicochemical and bioregenerative types) is increasingly focused on closing the life support loop by removing and recycling contaminants and wastes to produce the materials necessary to sustain human life. By so doing, this technology development effort also focuses automatically on reducing resupply logistics requirements and increasing crew safety through increased self-sufficiency. This paper describes several technologies that have been developed to support human life in space and illustrates the applicability of the technologies to environmental problems including environmental remediation and pollution prevention.

  5. Application of traditional CFD methods to nonlinear computational aeroacoustics problems

    NASA Technical Reports Server (NTRS)

    Chyczewski, Thomas S.; Long, Lyle N.

    1995-01-01

    This paper describes an implementation of a high order finite difference technique and its application to the category 2 problems of the ICASE/LaRC Workshop on Computational Aeroacoustics (CAA). Essentially, a popular Computational Fluid Dynamics (CFD) approach (central differencing, Runge-Kutta time integration and artificial dissipation) is modified to handle aeroacoustic problems. The changes include increasing the order of the spatial differencing to sixth order and modifying the artificial dissipation so that it does not significantly contaminate the wave solution. All of the results were obtained from the CM5 located at the Numerical Aerodynamic Simulation Laboratory. lt was coded in CMFortran (very similar to HPF), using programming techniques developed for communication intensive large stencils, and ran very efficiently.

  6. On the Application of the Energy Method to Stability Problems

    NASA Technical Reports Server (NTRS)

    Marguerre, Karl

    1947-01-01

    Since stability problems have come into the field of vision of engineers, energy methods have proved to be one of the most powerful aids in mastering them. For finding the especially interesting critical loads special procedures have evolved that depart somewhat from those customary in the usual elasticity theory. A clarification of the connections seemed desirable,especially with regard to the post-critical region, for the treatment of which these special methods are not suited as they are. The present investigation discusses this question-complex (made important by shell construction in aircraft) especially in the classical example of the Euler strut, because in this case - since the basic features are not hidden by difficulties of a mathematical nature - the problem is especially clear. The present treatment differs from that appearing in the Z.f.a.M.M. (1938) under the title "Uber die Behandlung von Stabilittatsproblemen mit Hilfe der energetischen Methode" in that, in order to work out the basic ideas still more clearly,it dispenses with the investigation of behavior at large deflections and of the elastic foundation;in its place the present version gives an elaboration of the 6th section and (in its 7 th and 8th secs.)a new example that shows the applicability of the general criterion to a stability problem that differs from that of Euler in many respects.

  7. Application of inverse heat conduction problem on temperature measurement

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Zhou, G.; Dong, B.; Li, Q.; Liu, L. Q.

    2013-09-01

    For regenerative cooling devices, such as G-M refrigerator, pulse tube cooler or thermoacoustic cooler, the gas oscillating bring about temperature fluctuations inevitably, which is harmful in many applications requiring high stable temperatures. To find out the oscillating mechanism of the cooling temperature and improve the temperature stability of cooler, the inner temperature of the cold head has to be measured. However, it is difficult to measure the inner oscillating temperature of the cold head directly because the invasive temperature detectors may disturb the oscillating flow. Fortunately, the outer surface temperature of the cold head can be measured accurately by invasive temperature measurement techniques. In this paper, a mathematical model of inverse heat conduction problem is presented to identify the inner surface oscillating temperature of cold head according to the measured temperature of the outer surface in a GM cryocooler. Inverse heat conduction problem will be solved using control volume approach. Outer surface oscillating temperature could be used as input conditions of inverse problem and the inner surface oscillating temperature of cold head can be inversely obtained. A simple uncertainty analysis of the oscillating temperature measurement also will be provided.

  8. Application of gradient elasticity to benchmark problems of beam vibrations

    NASA Astrophysics Data System (ADS)

    Kateb, K. M.; Almitani, K. H.; Alnefaie, K. A.; Abu-Hamdeh, N. H.; Papadopoulos, P.; Askes, H.; Aifantis, E. C.

    2016-04-01

    The gradient approach, specifically gradient elasticity theory, is adopted to revisit certain typical configurations on mechanical vibrations. New results on size effects and scale-dependent behavior not captured by classical elasticity are derived, aiming at illustrating the usefulness of this approach to applications in advanced technologies. In particular, elastic prismatic straight beams in bending are discussed using two different governing equations: the gradient elasticity bending moment equation (fourth order) and the gradient elasticity deflection equation (sixth order). Different boundary/support conditions are examined. One problem considers the free vibrations of a cantilever beam loaded by an end force. A second problem is concerned with a simply supported beam disturbed by a concentrated force in the middle of the beam. Both problems are solved analytically. Exact free vibration frequencies and mode shapes are derived and presented. The difference between the gradient elasticity solution and its classical counterpart is revealed. The size ratio c/L (c denotes internal length and L is the length of the beam) induces significant effects on vibration frequencies. For both beam configurations, it turns out that as the ratio c/L increases, the vibration frequencies decrease, a fact which implies lower beam stiffness. Numerical examples show this behavior explicitly and recover the classical vibration behavior for vanishing size ratio c/L.

  9. The application of artificial intelligence to astronomical scheduling problems

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.

    1992-01-01

    Efficient utilization of expensive space- and ground-based observatories is an important goal for the astronomical community; the cost of modern observing facilities is enormous, and the available observing time is much less than the demand from astronomers around the world. The complexity and variety of scheduling constraints and goals has led several groups to investigate how artificial intelligence (AI) techniques might help solve these kinds of problems. The earliest and most successful of these projects was started at Space Telescope Science Institute in 1987 and has led to the development of the Spike scheduling system to support the scheduling of Hubble Space Telescope (HST). The aim of Spike at STScI is to allocate observations to timescales of days to a week observing all scheduling constraints and maximizing preferences that help ensure that observations are made at optimal times. Spike has been in use operationally for HST since shortly after the observatory was launched in Apr. 1990. Although developed specifically for HST scheduling, Spike was carefully designed to provide a general framework for similar (activity-based) scheduling problems. In particular, the tasks to be scheduled are defined in the system in general terms, and no assumptions about the scheduling timescale are built in. The mechanisms for describing, combining, and propagating temporal and other constraints and preferences are quite general. The success of this approach has been demonstrated by the application of Spike to the scheduling of other satellite observatories: changes to the system are required only in the specific constraints that apply, and not in the framework itself. In particular, the Spike framework is sufficiently flexible to handle both long-term and short-term scheduling, on timescales of years down to minutes or less. 
This talk will discuss recent progress made in scheduling search techniques, the lessons learned from early HST operations, the application of Spike

  10. Multi-physics and multi-scale characterization of shale anisotropy

    NASA Astrophysics Data System (ADS)

    Sarout, J.; Nadri, D.; Delle Piane, C.; Esteban, L.; Dewhurst, D.; Clennell, M. B.

    2012-12-01

    Shales are the most abundant sedimentary rock type in the Earth's shallow crust. In the past decade or so, they have attracted increased attention from the petroleum industry as reservoirs, as well as more traditionally for their sealing capacity for hydrocarbon/CO2 traps or underground waste repositories. The effectiveness of both fundamental and applied shale research is currently limited by (i) the extreme variability of physical, mechanical and chemical properties observed for these rocks, and by (ii) the scarce data currently available. The variability in observed properties is poorly understood due to many factors that are often irrelevant for other sedimentary rocks. The relationships between these properties and the petrophysical measurements performed at the field and laboratory scales are not straightforward, translating to a scale dependency typical of shale behaviour. In addition, the complex and often anisotropic micro-/meso-structures of shales give rise to a directional dependency of some of the measured physical properties that are tensorial by nature such as permeability or elastic stiffness. Currently, fundamental understanding of the parameters controlling the directional and scale dependency of shale properties is far from complete. Selected results of a multi-physics laboratory investigation of the directional and scale dependency of some critical shale properties are reported. In particular, anisotropic features of shale micro-/meso-structures are related to the directional-dependency of elastic and fluid transport properties: - Micro-/meso-structure (μm to cm scale) characterization by electron microscopy and X-ray tomography; - Estimation of elastic anisotropy parameters on a single specimen using elastic wave propagation (cm scale); - Estimation of the permeability tensor using the steady-state method on orthogonal specimens (cm scale); - Estimation of the low-frequency diffusivity tensor using NMR method on orthogonal specimens (<

  11. The maximum clique enumeration problem: algorithms, applications, and implementations

    PubMed Central

    2012-01-01

    Background The maximum clique enumeration (MCE) problem asks that we identify all maximum cliques in a finite, simple graph. MCE is closely related to two other well-known and widely-studied problems: the maximum clique optimization problem, which asks us to determine the size of a largest clique, and the maximal clique enumeration problem, which asks that we compile a listing of all maximal cliques. Naturally, these three problems are NP-hard, given that they subsume the classic version of the NP-complete clique decision problem. MCE can be solved in principle with standard enumeration methods due to Bron, Kerbosch, Kose and others. Unfortunately, these techniques are ill-suited to graphs encountered in our applications. We must solve MCE on instances deeply seeded in data mining and computational biology, where high-throughput data capture often creates graphs of extreme size and density. MCE can also be solved in principle using more modern algorithms based in part on vertex cover and the theory of fixed-parameter tractability (FPT). While FPT is an improvement, these algorithms too can fail to scale sufficiently well as the sizes and densities of our datasets grow. Results An extensive testbed of benchmark graphs are created using publicly available transcriptomic datasets from the Gene Expression Omnibus (GEO). Empirical testing reveals crucial but latent features of such high-throughput biological data. In turn, it is shown that these features distinguish real data from random data intended to reproduce salient topological features. In particular, with real data there tends to be an unusually high degree of maximum clique overlap. Armed with this knowledge, novel decomposition strategies are tuned to the data and coupled with the best FPT MCE implementations. Conclusions Several algorithmic improvements to MCE are made which progressively decrease the run time on graphs in the testbed. Frequently the final runtime improvement is several orders of magnitude

  12. A novel phenomenological multi-physics model of Li-ion battery cells

    NASA Astrophysics Data System (ADS)

    Oh, Ki-Yong; Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.; Stefanopoulou, Anna G.; Epureanu, Bogdan I.

    2016-09-01

    A novel phenomenological multi-physics model of Lithium-ion battery cells is developed for control and state estimation purposes. The model can capture electrical, thermal, and mechanical behaviors of battery cells under constrained conditions, e.g., battery pack conditions. Specifically, the proposed model predicts the core and surface temperatures and reaction force induced from the volume change of battery cells because of electrochemically- and thermally-induced swelling. Moreover, the model incorporates the influences of changes in preload and ambient temperature on the force considering severe environmental conditions electrified vehicles face. Intensive experimental validation demonstrates that the proposed multi-physics model accurately predicts the surface temperature and reaction force for a wide operational range of preload and ambient temperature. This high fidelity model can be useful for more accurate and robust state of charge estimation considering the complex dynamic behaviors of the battery cell. Furthermore, the inherent simplicity of the mechanical measurements offers distinct advantages to improve the existing power and thermal management strategies for battery management.

  13. A Geospatial Integrated Problem Solving Environment for Homeland Security Applications

    SciTech Connect

    Koch, Daniel B

    2010-01-01

    Effective planning, response, and recovery (PRR) involving terrorist attacks or natural disasters come with a vast array of information needs. Much of the required information originates from disparate sources in widely differing formats. However, one common attribute the information often possesses is physical location. The organization and visualization of this information can be critical to the success of the PRR mission. Organizing information geospatially is often the most intuitive for the user. In the course of developing a field tool for the U.S. Department of Homeland Security (DHS) Office for Bombing Prevention, a geospatial integrated problem solving environment software framework was developed by Oak Ridge National Laboratory. This framework has proven useful as well in a number of other DHS, Department of Defense, and Department of Energy projects. An overview of the software architecture along with application examples are presented.

  14. Solution of the Traffic Jam Problem through Fuzzy Applications

    NASA Astrophysics Data System (ADS)

    Fernandez, Shery

    2010-11-01

    The major hurdle of a city planning council is to handle the traffic jam problem. The number of vehicles on roads increases day by day. Also the number of vehicles is directly proportional to the width of the road (including that of parallel roads). But it is not always possible to make roads or to increase width of the road corresponding to the increase in the number of vehicles. Also we cannot tell a person not to buy a vehicle. So trying to minimise the traffic jam is the only possible way to overcome this hurdle. Here we try to develop a method to avoid traffic jam through a mathematical approach (through fuzzy applications). This method helps to find a suitable route from an origin to a destination with lesser time than other routes.

  15. Scalable Adaptive Multilevel Solvers for Multiphysics Problems

    SciTech Connect

    Xu, Jinchao

    2014-12-01

    In this project, we investigated adaptive, parallel, and multilevel methods for numerical modeling of various real-world applications, including Magnetohydrodynamics (MHD), complex fluids, Electromagnetism, Navier-Stokes equations, and reservoir simulation. First, we have designed improved mathematical models and numerical discretizaitons for viscoelastic fluids and MHD. Second, we have derived new a posteriori error estimators and extended the applicability of adaptivity to various problems. Third, we have developed multilevel solvers for solving scalar partial differential equations (PDEs) as well as coupled systems of PDEs, especially on unstructured grids. Moreover, we have integrated the study between adaptive method and multilevel methods, and made significant efforts and advances in adaptive multilevel methods of the multi-physics problems.

  16. Multi-physics nuclear reactor simulator for advanced nuclear engineering education

    SciTech Connect

    Yamamoto, A.

    2012-07-01

    Multi-physics nuclear reactor simulator, which aims to utilize for advanced nuclear engineering education, is being introduced to Nagoya Univ.. The simulator consists of the 'macroscopic' physics simulator and the 'microscopic' physics simulator. The former performs real time simulation of a whole nuclear power plant. The latter is responsible to more detail numerical simulations based on the sophisticated and precise numerical models, while taking into account the plant conditions obtained in the macroscopic physics simulator. Steady-state and kinetics core analyses, fuel mechanical analysis, fluid dynamics analysis, and sub-channel analysis can be carried out in the microscopic physics simulator. Simulation calculations are carried out through dedicated graphical user interface and the simulation results, i.e., spatial and temporal behaviors of major plant parameters are graphically shown. The simulator will provide a bridge between the 'theories' studied with textbooks and the 'physical behaviors' of actual nuclear power plants. (authors)

  17. Multi-Physics Markov Chain Monte Carlo Methods for Subsurface Flows

    NASA Astrophysics Data System (ADS)

    Rigelo, J.; Ginting, V.; Rahunanthan, A.; Pereira, F.

    2014-12-01

    For CO2 sequestration in deep saline aquifers, contaminant transport in subsurface, and oil or gas recovery, we often need to forecast flow patterns. Subsurface characterization is a critical and challenging step in flow forecasting. To characterize subsurface properties we establish a statistical description of the subsurface properties that are conditioned to existing dynamic and static data. A Markov Chain Monte Carlo (MCMC) algorithm is used in a Bayesian statistical description to reconstruct the spatial distribution of rock permeability and porosity. The MCMC algorithm requires repeatedly solving a set of nonlinear partial differential equations describing displacement of fluids in porous media for different values of permeability and porosity. The time needed for the generation of a reliable MCMC chain using the algorithm can be too long to be practical for flow forecasting. In this work we develop fast and effective computational methods for generating MCMC chains in the Bayesian framework for the subsurface characterization. Our strategy consists of constructing a family of computationally inexpensive preconditioners based on simpler physics as well as on surrogate models such that the number of fine-grid simulations is drastically reduced in the generated MCMC chains. In particular, we introduce a huff-puff technique as screening step in a three-stage multi-physics MCMC algorithm to reduce the number of expensive final stage simulations. The huff-puff technique in the algorithm enables a better characterization of subsurface near wells. We assess the quality of the proposed multi-physics MCMC methods by considering Monte Carlo simulations for forecasting oil production in an oil reservoir.

  18. Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis

    SciTech Connect

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout reactor geometry on a non- orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  19. Ensemble Smoother implemented in parallel for groundwater problems applications

    NASA Astrophysics Data System (ADS)

    Leyva, E.; Herrera, G. S.; de la Cruz, L. M.

    2013-05-01

    Data assimilation is a process that links forecasting models and measurements, drawing on the strengths of both sources. The Ensemble Kalman Filter (EnKF) is a sequential data-assimilation method designed to address two of the main problems with using the Extended Kalman Filter (EKF) on nonlinear models in large state spaces, i.e., the closure problem and the massive computational requirements associated with the storage and subsequent integration of the error covariance matrix. The EnKF has gained popularity because of its simple conceptual formulation and relative ease of implementation. It has been used successfully in various applications of meteorology and oceanography and, more recently, in petroleum engineering and hydrogeology. The Ensemble Smoother (ES) is a method similar to the EnKF; it was proposed by Van Leeuwen and Evensen (1996). Herrera (1998) proposed a version of the ES which we call the Ensemble Smoother of Herrera (ESH) to distinguish it from the former. It was introduced for space-time optimization of groundwater monitoring networks. In recent years, this method has been used for data assimilation and parameter estimation in groundwater flow and transport models. The ES method uses Monte Carlo simulation, which consists of generating repeated realizations of the random variable considered, using a flow and transport model. However, a large number of model runs is often required for the moments of the variable to converge. Therefore, depending on the complexity of the problem, a serial computer may require many hours of continuous use to apply the ES. For this reason, the process must be parallelized so it can be carried out in a reasonable time. In this work we present the results of a parallelization strategy to reduce the execution time for a high number of realizations. The software GWQMonitor by Herrera (1998) implements all the algorithms required for the ESH in Fortran 90. We develop a script in Python using mpi4py, in
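
    The parallelization the abstract describes — farming independent Monte Carlo realizations out to workers and gathering the ensemble moments — can be sketched as below. The model function and all numbers are hypothetical stand-ins; the actual implementation drives the Fortran GWQMonitor code via mpi4py, whereas this self-contained sketch uses a thread pool in the same scatter/gather role.

```python
from concurrent.futures import ThreadPoolExecutor
import random
import statistics

def run_realization(seed):
    """Stand-in for a single flow-and-transport model run.

    A real realization would invoke the Fortran model; here we draw a
    pseudo-random 'simulated head' so the sketch is self-contained.
    """
    rng = random.Random(seed)
    return rng.gauss(1.0, 0.1)

def ensemble_moments(n_realizations, n_workers=4):
    """Distribute independent realizations across workers, then reduce.

    With mpi4py, seed ranges would be scattered across MPI ranks and
    the moments gathered on rank 0 in exactly the same pattern.
    """
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(run_realization, range(n_realizations)))
    return statistics.mean(results), statistics.pstdev(results)

mean, std = ensemble_moments(2000)
```

    Because the realizations are independent, the speed-up is close to linear until the per-run cost no longer dominates the gather step.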

  20. Inverse Problems in Complex Models and Applications to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Bosch, M. E.

    2015-12-01

    The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via direct acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied
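
    The factorizations sketched in this abstract can be written compactly. In notation assumed here (not necessarily the author's), with independent survey datasets d_1, …, d_n and a model m split into lithology and physical-property layers:

```latex
% Posterior combining the prior with independent survey likelihoods
p(m \mid d_1,\dots,d_n) \;\propto\; p(m)\,\prod_{i=1}^{n} p(d_i \mid m),
\qquad
% Hierarchical prior: the lithology layer conditions the physical properties
p(m) \;=\; p\big(m_{\mathrm{phys}} \mid m_{\mathrm{lith}}\big)\; p\big(m_{\mathrm{lith}}\big).
```

    The directed acyclic graph mentioned in the abstract is exactly the bookkeeping device for which of these conditional factors appear.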

  1. Assessing Student Written Problem Solutions: A Problem-Solving Rubric with Application to Introductory Physics

    ERIC Educational Resources Information Center

    Docktor, Jennifer L.; Dornfeld, Jay; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Jackson, Koblar Alan; Mason, Andrew; Ryan, Qing X.; Yang, Jie

    2016-01-01

    Problem solving is a complex process valuable in everyday life and crucial for learning in the STEM fields. To support the development of problem-solving skills it is important for researchers and curriculum developers to have practical tools that can measure the difference between novice and expert problem-solving performance in authentic…

  2. Jacobi elliptic functions: A review of nonlinear oscillatory application problems

    NASA Astrophysics Data System (ADS)

    Kovacic, Ivana; Cveticanin, Livija; Zukovic, Miodrag; Rakaric, Zvonko

    2016-10-01

    This review paper is concerned with the applications of Jacobi elliptic functions to nonlinear oscillators whose restoring force has a monomial or binomial form that involves cubic and/or quadratic nonlinearity. First, geometric interpretations of three basic Jacobi elliptic functions are given and their characteristics are discussed. It is shown then how their different forms can be utilized to express exact solutions for the response of certain free conservative oscillators. These forms are subsequently used as a starting point for a presentation of different quantitative techniques for obtaining an approximate response for free perturbed nonlinear oscillators. An illustrative example is provided. Further, two types of externally forced nonlinear oscillators are reviewed: (i) those that are excited by elliptic-type excitations with different exact and approximate solutions; (ii) those that are damped and excited by harmonic excitations, but their approximate response is expressed in terms of Jacobi elliptic functions. Characteristics of the steady-state response are discussed and certain qualitative differences with respect to the classical Duffing oscillator excited harmonically are pointed out. Parametric oscillations of the oscillators excited by an elliptic-type forcing are considered as well, and the differences with respect to the stability chart of the classical Mathieu equation are emphasized. The adjustment of the Melnikov method to derive the general condition for the onset of homoclinic bifurcations in a system parametrically excited by an elliptic-type forcing is provided and compared with those corresponding to harmonic excitations. Advantages and disadvantages of the use of Jacobi elliptic functions in nonlinear oscillatory application problems are discussed and some suggestions for future work are given.
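
    As a concrete instance of the exact solutions the review surveys: the undamped, unforced oscillator with cubic nonlinearity, x'' + αx + βx³ = 0 with x(0) = A, x'(0) = 0, has the closed-form response x(t) = A cn(ωt | m) with ω² = α + βA² and m = βA²/(2ω²). The sketch below (parameter values are arbitrary, not taken from the paper) cross-checks this against direct numerical integration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj

# Duffing-type oscillator: x'' + alpha*x + beta*x**3 = 0, x(0)=A, x'(0)=0.
alpha, beta, A = 1.0, 0.5, 1.2
w2 = alpha + beta * A**2          # squared elliptic frequency
m = beta * A**2 / (2.0 * w2)      # elliptic parameter m = k**2
w = np.sqrt(w2)

t = np.linspace(0.0, 10.0, 201)
_, cn, _, _ = ellipj(w * t, m)    # scipy's ellipj takes the parameter m
x_exact = A * cn

# Cross-check against a high-accuracy numerical solution of the ODE.
sol = solve_ivp(lambda t, y: [y[1], -alpha * y[0] - beta * y[0]**3],
                (0.0, 10.0), [A, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)
err = float(np.max(np.abs(sol.y[0] - x_exact)))
```

    Note that SciPy's `ellipj` takes the parameter m = k², not the modulus k; mixing the two conventions is a common source of error in this area.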

  3. COAMPS Application to Dispersion Scavenging Problem: Heavy Precipitation Simulation

    SciTech Connect

    Chin, H; Cederwall, R

    2004-05-05

    Precipitation scavenging can effectively remove particulates from the atmosphere. This process is therefore important in the real-time modeling of atmospheric transport of hazardous materials. To account for the rainfall effect in the LLNL operational dispersion model, a modified version of a standard below-cloud aerosol scavenging model has been developed to handle emergency response in this scenario (Loosmore and Cederwall, 2003, hereafter referred to as LC). Two types of rain data can be used to incorporate precipitation scavenging in the dispersion model: real-time measurements (rain gauge and radar), and model prediction. The former approach was adopted in LC's study for the below-cloud scavenging problem, based on surface rain measurements. However, the in-cloud scavenging effect remains unresolved because available real-time measurements are limited in providing the vertical structure of precipitation systems. The objective of this study is to explore the possibility of incorporating the three-dimensional precipitation structure of forecast data into the dispersion model, so that both in-cloud and below-cloud scavenging effects can be included in the LLNL aerosol scavenging model. To this end, a mesoscale model (the Naval Research Laboratory 3-D weather forecast model, COAMPS) is used to demonstrate this application for a mid-west severe storm case that occurred on July 18, 1997.

  4. Application of Problem Based Learning through Research Investigation

    ERIC Educational Resources Information Center

    Beringer, Jason

    2007-01-01

    Problem-based learning (PBL) is a teaching technique that uses problem-solving as the basis for student learning. The technique is student-centred with teachers taking the role of a facilitator. Its general aims are to construct a knowledge base, develop problem-solving skills, teach effective collaboration and provide the skills necessary to be a…

  5. Problem Based Learning: Application to Technology Education in Three Countries

    ERIC Educational Resources Information Center

    Williams, P. John; Iglesias, Juan; Barak, Moshe

    2008-01-01

    An increasing variety of professional educational and training disciplines are now problem based (e.g., medicine, nursing, engineering, community health), and they may have a corresponding variety of educational objectives. However, they all have in common the use of problems in the instructional sequence. The problems may be as diverse as a…

  6. Multi-physics modelling approach for oscillatory microengines: application for a microStirling generator design

    NASA Astrophysics Data System (ADS)

    Formosa, F.; Fréchette, L. G.

    2015-12-01

    An electrical circuit equivalent (ECE) approach has been set up allowing elementary oscillatory microengine components to be modelled. They cover gas channel/chamber thermodynamics, viscosity and thermal effects, mechanical structure and electromechanical transducers. The proposed tool has been validated on a centimeter-scale Free Piston membrane Stirling engine [1]. We propose here new developments taking into account scaling effects to establish models suitable for any microengine. They are based on simplifications derived from comparing the hydraulic radius with the viscous and thermal penetration depths, respectively.

  7. Periodically specified satisfiability problems with applications: An alternative to domino problems

    SciTech Connect

    Marathe, M.V.; Hunt, H.B., III; Rosenkrantz, D.J.; Stearns, R.E.; Radhakrishnann, V.

    1995-12-31

    We characterize the complexities of several basic generalized CNF satisfiability problems SAT(S), when instances are specified using various kinds of 1- and 2-dimensional periodic specifications. We outline how this characterization can be used to prove a number of new hardness results for the complexity classes DSPACE(n), NSPACE(n), DEXPTIME, NEXPTIME, EXPSPACE, etc. The hardness results presented significantly extend the known hardness results for periodically specified problems. Several advantages of using periodically specified satisfiability problems over domino problems in proving both hardness and easiness results are outlined. As one corollary, we show that a number of basic NP-hard problems become EXPSPACE-hard when inputs are represented using 1-dimensional infinite periodic wide specifications. This answers a long-standing open question posed by Orlin.

  8. Application of the FETI Method to ASCI Problems: Scalability Results on One Thousand Processors and Discussion of Highly Heterogeneous Problems

    SciTech Connect

    Bhardwaj, M.; Day, D.; Farhat, C.; Lesoinne, M; Pierson, K.; Rixen, D.

    1999-04-01

    We report on the application of the one-level FETI method to the solution of a class of substructural problems associated with the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). We focus on numerical and parallel scalability issues, and on preliminary performance results obtained on the ASCI Option Red supercomputer configured with as many as one thousand processors, for problems with as many as 5 million degrees of freedom.

  9. Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes

    NASA Astrophysics Data System (ADS)

    Piro, Markus Hans Alexander

    Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. In effect, this undertaking collects the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and licensing entanglements associated with code distribution. The development of the software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system

  10. Application of generalized separation of variables to solving mixed problems with irregular boundary conditions

    NASA Astrophysics Data System (ADS)

    Gasymov, E. A.; Guseinova, A. O.; Gasanova, U. N.

    2016-07-01

    One of the methods for solving mixed problems is the classical separation of variables (the Fourier method). If the boundary conditions of the mixed problem are irregular, this method, generally speaking, is not applicable. In the present paper, a generalized separation of variables and a way of application of this method to solving some mixed problems with irregular boundary conditions are proposed. Analytical representation of the solution to this irregular mixed problem is obtained.

  11. Application of TRIZ approach to machine vibration condition monitoring problems

    NASA Astrophysics Data System (ADS)

    Cempel, Czesław

    2013-12-01

    Up to now, machine condition monitoring has not been seriously approached by TRIZ (TRIZ is the Russian acronym for the Inventive Problem Solving System, created by G. Altshuller ca. 50 years ago) users, and TRIZ methodology has not been applied there intensively. However, there are some introductory papers by the present author presented at the Diagnostic Congress in Cracow (Cempel, in press [11]) and in the Diagnostyka Journal. But there seems to be a further need to approach the subject from different sides in order to see if some new knowledge and technology will emerge. In doing this we need at first to define the ideal final result (IFR) of our innovation problem. Next, we need a set of parameters to describe the problems of system condition monitoring (CM) in terms of the TRIZ language, and a set of inventive principles possible to apply on the way to the IFR. This means we should present the machine CM problem by means of contradictions and the contradiction matrix. When specifying the problem parameters and inventive principles, one should use analogy and metaphorical thinking, which by definition is not exact but fuzzy, and sometimes leads to unexpected results and outcomes. The paper undertakes this important problem again and brings some new insight into system and machine CM problems. This may mean, for example, the minimal dimensionality of the TRIZ engineering parameter set for the description of machine CM problems, and the set of most useful inventive principles applied to a given engineering parameter and the contradictions of TRIZ.

  12. Applications of decision analysis and related techniques to industrial engineering problems at KSC

    NASA Technical Reports Server (NTRS)

    Evans, Gerald W.

    1995-01-01

    This report provides: (1) a discussion of the origination of decision analysis problems (well-structured problems) from ill-structured problems; (2) a review of the various methodologies and software packages for decision analysis and related problem areas; (3) a discussion of how the characteristics of a decision analysis problem affect the choice of modeling methodologies, thus providing a guide as to when to choose a particular methodology; and (4) examples of applications of decision analysis to particular problems encountered by the IE Group at KSC. With respect to the specific applications at KSC, particular emphasis is placed on the use of the Demos software package (Lumina Decision Systems, 1993).

  13. Coupling between a multi-physics workflow engine and an optimization framework

    NASA Astrophysics Data System (ADS)

    Di Gallo, L.; Reux, C.; Imbeaux, F.; Artaud, J.-F.; Owsiak, M.; Saoutic, B.; Aiello, G.; Bernardi, P.; Ciraolo, G.; Bucalossi, J.; Duchateau, J.-L.; Fausser, C.; Galassi, D.; Hertout, P.; Jaboulay, J.-C.; Li-Puma, A.; Zani, L.

    2016-03-01

    A generic coupling method between a multi-physics workflow engine and an optimization framework is presented in this paper. The coupling architecture has been developed in order to preserve the integrity of the two frameworks. The objective is to make it possible to replace a framework, a workflow or an optimizer by another one without changing the whole coupling procedure or modifying the main content in each framework. The coupling is achieved by using a socket-based communication library for exchanging data between the two frameworks. Among the algorithms provided by optimization frameworks, Genetic Algorithms (GAs) have demonstrated their efficiency on single- and multiple-criteria optimization. In addition to their robustness, GAs can handle non-valid data which may appear during the optimization. Consequently, GAs work in the most general cases. A parallelized framework has been developed to reduce the time spent on optimizations and on the evaluation of large samples. A test has shown a good scaling efficiency of this parallelized framework. This coupling method has been applied to the case of SYCOMORE (SYstem COde for MOdeling tokamak REactor), which is a system code developed in the form of a modular workflow for designing magnetic fusion reactors. The coupling of SYCOMORE with the optimization platform URANIE enables design optimization with respect to various figures of merit and constraints.
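
    The socket-based exchange between optimizer and workflow can be illustrated with a minimal sketch. Everything here (the JSON message format, the toy figure of merit) is a hypothetical stand-in; it only shows the pattern of candidate-out / merit-back over a local socket, with the two sides as decoupled processes in the roles the abstract assigns to URANIE and SYCOMORE.

```python
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 0   # port 0: let the OS pick a free port

def workflow_server(sock):
    """Toy stand-in for the workflow side: receive candidate design
    parameters, evaluate a figure of merit, send it back."""
    conn, _ = sock.accept()
    with conn:
        request = json.loads(conn.recv(4096).decode())
        x, y = request["params"]
        merit = (x - 3.0) ** 2 + (y + 1.0) ** 2   # hypothetical objective
        conn.sendall(json.dumps({"merit": merit}).encode())

server = socket.socket()
server.bind((HOST, PORT))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=workflow_server, args=(server,), daemon=True).start()

# Optimizer side: one fitness evaluation, e.g. for a GA individual.
client = socket.socket()
client.connect((HOST, port))
client.sendall(json.dumps({"params": [3.0, -1.0]}).encode())
reply = json.loads(client.recv(4096).decode())
client.close()
```

    Because only serialized messages cross the boundary, either side can be swapped out without touching the other, which is the integrity property the paper emphasizes.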

  14. Multi-physical model of cation and water transport in ionic polymer-metal composite sensors

    NASA Astrophysics Data System (ADS)

    Zhu, Zicai; Chang, Longfei; Horiuchi, Tetsuya; Takagi, Kentaro; Aabloo, Alvo; Asaka, Kinji

    2016-03-01

    Ion-migration-based electrical potentials exist widely, not only in natural systems but also in ionic polymer materials. In this paper we present a multi-physical model and investigate the transport process of cation and water in ionic polymer-metal composites, based on our understanding of the ionic sensing mechanisms. The whole transport process is depicted by transport equations covering the convection flux under the total pressure gradient, electrical migration by the built-in electrical field, and the inter-coupling effect between cation and water. With numerical analysis, we investigated the influence of critical material parameters, namely the elastic modulus Ewet, the hydraulic permeability coefficient K, the diffusion coefficients of cation dII and water dWW, and the drag coefficient of water ndW, on the distribution of cation and water. We determined how these parameters correlate to the voltage characteristics (both magnitude and response speed) under a step bending. Additionally, it was found that the effective relative dielectric constant ɛr has little influence on the voltage but is positively correlated with the current. With a series of optimized parameters, the predicted voltage agreed well with the experimental results, which validated our model. Based on our physical model, it is suggested that an ionic polymer sensor can benefit from a higher modulus Ewet, a higher coefficient K, a lower coefficient dII, and a higher constant ɛr.

  15. Multi-physics model of a thermo-magnetic energy harvester

    NASA Astrophysics Data System (ADS)

    Joshi, Keyur B.; Priya, Shashank

    2013-05-01

    Harvesting small thermal gradients effectively to generate electricity still remains a challenge. Ujihara et al (2007 Appl. Phys. Lett. 91 093508) have recently proposed a thermo-magnetic energy harvester that incorporates a combination of hard and soft magnets on a vibrating beam structure and two opposing heat transfer surfaces. This design has many advantages and could present an optimum solution to harvest energy in low temperature gradient conditions. In this paper, we describe a multi-physics numerical model for this harvester configuration that incorporates all the relevant parameters, including heat transfer, magnetic force, beam vibration, contact surface and piezoelectricity. The model was used to simulate the complete transient behavior of the system. Results are presented for the evolution of the magnetic force, changes in the internal temperature of the soft magnet (gadolinium (Gd)), thermal contact conductance, contact pressure and heat transfer over a complete cycle. Variation of the vibration frequency with contact stiffness and gap distance was also modeled. Limit cycle behavior and its bifurcations are illustrated as a function of device parameters. The model was extended to include a piezoelectric energy harvesting mechanism and, using a piezoelectric bimorph as spring material, a maximum power of 318 μW was predicted across a 100 kΩ external load.

  16. Research on Structural Safety of the Stratospheric Airship Based on Multi-Physics Coupling Calculation

    NASA Astrophysics Data System (ADS)

    Ma, Z.; Hou, Z.; Zang, X.

    2015-09-01

    As a large-scale flexible structure inflated by a huge inner lifting-gas volume of several hundred thousand cubic meters, the stratospheric airship has thermal characteristics of its inner gas that play an important role in its structural performance. During floating flight, the day-night variation of the combined thermal condition leads to fluctuation of the flow field inside the airship, which remarkably affects the pressure acting on the skin and the structural safety of the stratospheric airship. According to the multi-physics coupling mechanism mentioned above, a numerical procedure for structural safety analysis of stratospheric airships is developed, integrating the thermal model, the CFD model, the finite element code and a criterion of structural strength. Based on the computational models, the distributions of the deformations and stresses of the skin are calculated as they vary over the day-night cycle. The effects of load conditions and structural configurations on the structural safety of stratospheric airships in the floating condition are evaluated. The numerical results can be referenced for the structural design of stratospheric airships.

  17. Application of remote sensing to state and regional problems

    NASA Technical Reports Server (NTRS)

    Bouchillon, C. W.; Miller, W. F.; Landphair, H.; Zitta, V. L.

    1974-01-01

    The use of remote sensing techniques to help the state of Mississippi recognize and solve its environmental, resource, and socio-economic problems through inventory, analysis, and monitoring is suggested.

  18. Application of nonlinear Krylov acceleration to radiative transfer problems

    SciTech Connect

    Till, A. T.; Adams, M. L.; Morel, J. E.

    2013-07-01

    The iterative solution technique used for radiative transfer is normally nested, with outer thermal iterations and inner transport iterations. We implement a nonlinear Krylov acceleration (NKA) method in the PDT code for radiative transfer problems that breaks nesting, resulting in more thermal iterations but significantly fewer total inner transport iterations. Using the metric of total inner transport iterations, we investigate a crooked-pipe-like problem and a pseudo-shock-tube problem. Using only sweep preconditioning, we compare NKA against a typical inner/outer method employing GMRES/Newton and find NKA to be comparable or superior. Finally, we demonstrate the efficacy of applying diffusion-based preconditioning to grey problems in conjunction with NKA. (authors)
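
    NKA belongs to the same family as Anderson mixing: it accelerates a fixed-point iteration by recombining a window of recent residual differences. The sketch below is a generic windowed Anderson-type accelerator applied to a toy fixed-point problem, not the PDT radiative-transfer iteration; `g` stands in for one outer (thermal) update.

```python
import numpy as np

def g(x):
    # Toy fixed-point map standing in for one outer iteration.
    return np.cos(x)

def picard(x0, tol=1e-10, maxit=200):
    """Plain fixed-point (Picard) iteration, for comparison."""
    x, it = x0, 0
    while abs(g(x) - x) > tol and it < maxit:
        x, it = g(x), it + 1
    return x, it

def anderson(x0, m=3, tol=1e-10, maxit=200):
    """Anderson-type acceleration with history window m."""
    xs, fs = [np.atleast_1d(np.asarray(x0, float))], []
    for it in range(maxit):
        fk = np.atleast_1d(g(xs[-1])) - xs[-1]        # residual g(x) - x
        fs.append(fk)
        if np.linalg.norm(fk) <= tol:
            return xs[-1], it
        mk = min(m, len(xs) - 1)
        if mk == 0:
            xs.append(xs[-1] + fk)                    # first step: plain update
            continue
        # Differences of recent iterates and residuals (newest first).
        dX = np.column_stack([xs[-i] - xs[-i - 1] for i in range(1, mk + 1)])
        dF = np.column_stack([fs[-i] - fs[-i - 1] for i in range(1, mk + 1)])
        gamma, *_ = np.linalg.lstsq(dF, fk, rcond=None)
        xs.append(xs[-1] + fk - (dX + dF) @ gamma)    # accelerated update
    return xs[-1], maxit

x_p, n_p = picard(1.0)
x_a, n_a = anderson(1.0)
```

    On this toy map the accelerated iteration converges in far fewer steps than Picard, mirroring the paper's metric of total inner iterations.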

  19. Inequalities for Means of Chords, with Application to Isoperimetric Problems

    NASA Astrophysics Data System (ADS)

    Exner, Pavel; Harrell, Evans M.; Loss, Michael

    2006-03-01

    We consider a pair of isoperimetric problems arising in physics. The first concerns a Schrödinger operator in L²(ℝ²) with an attractive interaction supported on a closed curve Γ, formally given by −Δ − αδ(x − Γ); we ask which curve of a given length maximizes the ground state energy. In the second problem we have a loop-shaped thread Γ in ℝ³, homogeneously charged but not conducting, and we ask about the (renormalized) potential-energy minimizer. Both problems reduce to purely geometric questions about inequalities for mean values of chords of Γ. We prove an isoperimetric theorem for p-means of chords of curves when p ≤ 2, which implies in particular that the global extrema for the physical problems are always attained when Γ is a circle. The letter concludes with a discussion of the p-means of chords when p > 2.
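
    The chord inequality at the heart of the paper can be stated as follows (our transcription in generic notation; see the published version for the precise hypotheses): for a closed curve Γ of length L parametrized by arclength, an arclength offset u ∈ (0, L), and 0 < p ≤ 2,

```latex
\int_0^L \big|\Gamma(s+u)-\Gamma(s)\big|^{p}\,\mathrm{d}s
\;\le\; \frac{L^{1+p}}{\pi^{p}}\,\sin^{p}\!\Big(\frac{\pi u}{L}\Big),
```

    with equality exactly for the circle, whose chord at offset u has length (L/π) sin(πu/L). For p > 2 the circle is no longer the global maximizer, which is the regime the letter's closing discussion addresses.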

  20. Application of the hybrid method to inverse heat conduction problems

    NASA Astrophysics Data System (ADS)

    Chen, Han-Taw; Chang, Shiuh-Ming

    1990-04-01

    The hybrid method, combining the Laplace transform with the finite element method (FEM), is considerably powerful for solving one-dimensional linear heat conduction problems. In the present method, the time-dependent terms are removed from the problem using the Laplace transform, and the FEM is then applied to the space domain. The transformed temperature is inverted numerically to obtain the result in the physical domain. The estimation of the surface heat flux or temperature from transient temperatures measured inside the solid agrees well with the analytical solution of the direct problem, without requiring Beck's sensitivity analysis or a least-squares criterion. Because no time stepping is involved, the present method can calculate the surface conditions of an inverse problem directly, without step-by-step computation in the time domain until the specific time is reached.
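
    The key numerical step the abstract mentions, inverting the transformed temperature back to the time domain, can be done with the Gaver-Stehfest algorithm (one common choice for real-valued Laplace inversion; the paper may well use a different scheme). In the hybrid method, F(s) would come from an FEM solve at each transform variable s; here a closed-form transform stands in so the sketch is self-contained.

```python
import math

def stehfest_coeffs(N):
    """Gaver-Stehfest weights V_k for even N (standard textbook formula)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j) /
                  (math.factorial(N // 2 - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=14):
    """Approximate f(t) from its Laplace transform F(s) at a single time t."""
    a = math.log(2.0) / t
    V = stehfest_coeffs(N)
    return a * sum(Vk * F(k * a) for k, Vk in enumerate(V, start=1))

# Known pair for checking: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
f1 = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
```

    Stehfest inversion evaluates F only at real s values, which fits the hybrid method's pattern of one spatial FEM solve per transform point; its accuracy degrades for non-smooth f(t), which is a known trade-off of the scheme.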

  1. Neurolinguistic Applications for the Remediation of Reading Problems.

    ERIC Educational Resources Information Center

    Arnold, Diane G.; Swaby, Barbara

    1984-01-01

    Notes that neurolinguistic programing, recently introduced in education, helps teachers analyze and overcome barriers to student achievement in reading. Presents applications and implications of the concept. (FL)

  2. Application of remote sensing to hydrological problems and floods

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.

    1983-01-01

    The main applications of remote sensors to hydrology are identified as well as the principal spectral bands and their advantages and disadvantages. Some examples of LANDSAT data applications to flooding-risk evaluation are cited. Because hydrology studies the amount of moisture and water involved in each phase of hydrological cycle, remote sensing must be emphasized as a technique for hydrological data acquisition.

  3. Publication misrepresentation among neurosurgery residency applicants: an increasing problem.

    PubMed

    Kistka, Heather M; Nayeri, Arash; Wang, Li; Dow, Jamie; Chandrasekhar, Rameela; Chambless, Lola B

    2016-01-01

    OBJECT Misrepresentation of scholarly achievements is a recognized phenomenon, well documented in numerous fields, yet the accuracy of reporting remains dependent on the honor principle. Therefore, honest self-reporting is of paramount importance to maintain scientific integrity in neurosurgery. The authors had observed a trend toward increasing numbers of publications among applicants for neurosurgery residency at Vanderbilt University and undertook this study to determine whether this change was a result of increased academic productivity, inflated reporting, or both. They also aimed to identify application variables associated with inaccurate citations. METHODS The authors retrospectively reviewed the residency applications submitted to their neurosurgery department in 2006 (n = 148) and 2012 (n = 194). The applications from 2006 were made via SF Match and those from 2012 were made using the Electronic Residency Application Service. Publications reported as "accepted" or "in press" were verified via online search of Google Scholar, PubMed, journal websites, and direct journal contact. Works were considered misrepresented if they did not exist, incorrectly listed the applicant as first author, or were incorrectly listed as peer reviewed or published in a printed journal rather than an online only or non-peer-reviewed publication. Demographic data were collected, including applicant sex, medical school ranking and country, advanced degrees, Alpha Omega Alpha membership, and USMLE Step 1 score. Zero-inflated negative binomial regression was used to identify predictors of misrepresentation. RESULTS Using univariate analysis, between 2006 and 2012 the percentage of applicants reporting published works increased significantly (47% vs 97%, p < 0.001). However, the percentage of applicants with misrepresentations (33% vs 45%) also increased. In 2012, applicants with a greater total of reported works (p < 0.001) and applicants from unranked US medical schools (those not

  4. The Inverse Problem of Klein-Gordon Equation Boundary Value Problem and Its Application in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Mu, Xiyu; Cheng, Hao; Liu, Guoqing

    2016-04-01

    It is often difficult to provide the exact boundary condition in the practical use of the variational method. The Euler equation derived from the variational method cannot be solved without a boundary condition. However, in some application problems, such as the assimilation of remote sensing data, values can easily be obtained in the inner region of the domain. Since the solution of an elliptic partial differential equation depends continuously on the boundary condition, the boundary condition can be retrieved using partial solutions in the inner area. In this paper, the variational problem of remote sensing data assimilation within a circular area is first established. The Klein-Gordon elliptic equation is derived from the Euler method of variational problems with an assumed boundary condition. Secondly, a computer-friendly Green function is constructed for the Dirichlet problem of the two-dimensional Klein-Gordon equation, with the formal solution given by Green's formula. Thirdly, boundary values are retrieved by solving the optimization problem constructed from the smoothness of the boundary value function and the best approximation between the formal solutions and high-accuracy measurements in the interior of the domain. Finally, the assimilation problem is solved by substituting the retrieved boundary values into the Klein-Gordon equation. This is a type of inverse problem in mathematics. The advantage of our method lies in that it needs no assumptions about the boundary condition, thus alleviating the error introduced by artificial boundary conditions in past data-fusion work using the variational method.
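
    In generic notation (ours, not necessarily the authors'), the boundary-value problem and the Green-function representation the abstract refers to take the form:

```latex
% Dirichlet problem for the elliptic Klein-Gordon equation on the disc \Omega:
-\Delta u + \lambda^{2} u = f \quad \text{in } \Omega,
\qquad u = \varphi \quad \text{on } \partial\Omega,
% with G(x,y) the Dirichlet Green function (G = 0 for y \in \partial\Omega):
u(x) \;=\; \int_{\Omega} G(x,y)\, f(y)\,\mathrm{d}y
\;-\; \oint_{\partial\Omega} \varphi(y)\,
      \frac{\partial G}{\partial n_y}(x,y)\,\mathrm{d}s_y .
```

    The inverse step is then to choose the boundary data φ, regularized for smoothness, so that this formal solution u best fits the high-accuracy interior measurements.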

  5. Identification of weakly coupled multiphysics problems. Application to the inverse problem of electrocardiography

    NASA Astrophysics Data System (ADS)

    Corrado, Cesare; Gerbeau, Jean-Frédéric; Moireau, Philippe

    2015-02-01

    This work addresses the inverse problem of electrocardiography from a new perspective, by combining electrical and mechanical measurements. Our strategy relies on the definition of a model of the electromechanical contraction which is registered on ECG data but also on measured mechanical displacements of the heart tissue typically extracted from medical images. In this respect, we establish in this work the convergence of a sequential estimator which combines, for such coupled problems, various state-of-the-art sequential data assimilation methods in a unified, consistent and efficient framework. Indeed, we aggregate a Luenberger observer for the mechanical state with a Reduced-Order Unscented Kalman Filter applied to the parameters to be identified and a POD projection of the electrical state. Then, using synthetic data, we show the benefits of our approach for the estimation of the electrical state of the ventricles along the heart beat compared with more classical strategies which only consider an electrophysiological model with ECG measurements. Our numerical results actually show that the mechanical measurements improve the identifiability of the electrical problem, allowing us to reconstruct the electrical state of the coupled system more precisely. Therefore, this work is intended to be a first proof of concept, with theoretical justifications and numerical investigations, of the advantage of using available multi-modal observations for the estimation and identification of an electromechanical model of the heart.

  6. Application of University Resources to Local Government Problems. Final Report.

    ERIC Educational Resources Information Center

    Shamblin, James E.; And Others

    The report details the results of a unique experimental demonstration of applying university resources to local government problems. Faculty-student teams worked with city and county personnel on projects chosen by mutual agreement, including work in areas of traffic management, law enforcement, waste heat utilization, solid waste conversion, and…

  7. Application of firefly algorithm to the dynamic model updating problem

    NASA Astrophysics Data System (ADS)

    Shabbir, Faisal; Omenzetter, Piotr

    2015-04-01

    Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention in the previous decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem. To the authors' best knowledge, this is the first time FA has been applied to model updating. The working of FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model by using FA. The algorithm aimed at minimizing the difference between the natural frequencies and mode shapes of the structure. The performance of the algorithm is analyzed in finding the optimal solution in a multidimensional search space. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
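    The attraction-based update the abstract describes can be sketched in a few lines. This is a generic firefly algorithm (in the style of Yang's original formulation), not the authors' implementation; the sphere objective stands in for the frequency/mode-shape mismatch, and all parameter values are illustrative:

```python
import math
import random

random.seed(1)

def sphere(x):
    # Surrogate objective; the paper minimizes modal-property mismatch instead.
    return sum(v * v for v in x)

def firefly(obj, dim=2, n=15, iters=100, beta0=1.0, gamma=0.01, alpha=0.3):
    """Minimal firefly algorithm: brighter (lower-cost) fireflies attract dimmer ones.
    gamma is scaled to the search domain; brightness is refreshed once per generation."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        cost = [obj(x) for x in pop]
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:  # j is brighter, so i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
        alpha *= 0.95  # damp the random walk over time
    return min(pop, key=obj)

best = firefly(sphere)
print(sphere(best))
```

    The exponential attractiveness term `beta0 * exp(-gamma * r2)` is what distinguishes FA from a plain particle swarm: attraction weakens with distance, so subgroups can explore different basins.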

  8. Applications of parallel global optimization to mechanics problems

    NASA Astrophysics Data System (ADS)

    Schutte, Jaco Francois

    Global optimization of complex engineering problems, with a high number of variables and local minima, requires sophisticated algorithms with global search capabilities and high computational efficiency. With the growing availability of parallel processing, it makes sense to address these requirements by increasing the parallelism in optimization strategies. This study proposes three methods of concurrent processing. The first method entails exploiting the structure of population-based global algorithms such as the stochastic Particle Swarm Optimization (PSO) algorithm and the Genetic Algorithm (GA). As a demonstration of how such an algorithm may be adapted for concurrent processing we modify and apply the PSO to several mechanical optimization problems on a parallel processing machine. Desirable PSO algorithm features such as insensitivity to design variable scaling and modest sensitivity to algorithm parameters are demonstrated. A second approach to parallelism and improving algorithm efficiency is by utilizing multiple optimizations. With this method a budget of fitness evaluations is distributed among several independent sub-optimizations in place of a single extended optimization. Under certain conditions this strategy obtains a higher combined probability of converging to the global optimum than a single optimization which utilizes the full budget of fitness evaluations. The third and final method of parallelism addressed in this study is the use of quasiseparable decomposition, which is applied to decompose loosely coupled problems. This yields several sub-problems of lesser dimensionality which may be concurrently optimized with reduced effort.
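    The second strategy above (splitting a fixed evaluation budget among independent sub-optimizations) is easy to demonstrate with a deliberately simple local searcher that can get trapped, here greedy hill climbing on a 1-D Rastrigin function rather than the PSO used in the dissertation; all names and parameters are illustrative:

```python
import math
import random

def rastrigin(x):
    # 1-D Rastrigin: many local minima, global minimum 0 at x = 0.
    return x * x - 10 * math.cos(2 * math.pi * x) + 10

def hill_climb(start, budget, step=0.05):
    """Greedy local search that cannot escape a local basin."""
    x, fx = start, rastrigin(start)
    for _ in range(budget):
        cand = x + random.uniform(-step, step)
        fc = rastrigin(cand)
        if fc < fx:
            x, fx = cand, fc
    return fx

def success_rate(n_subruns, total_budget, trials=100):
    """Fraction of trials whose best sub-run reaches the global basin (f < 0.5)."""
    wins = 0
    for _ in range(trials):
        best = min(hill_climb(random.uniform(-5, 5), total_budget // n_subruns)
                   for _ in range(n_subruns))
        if best < 0.5:
            wins += 1
    return wins / trials

random.seed(3)
single = success_rate(1, 1000)   # one long run, full budget
multi = success_rate(10, 1000)   # ten independent sub-runs, same total budget
print(single, multi)
```

    With the same total budget, the multi-start variant succeeds far more often because each restart is a fresh chance to land in the global basin, which is the probabilistic argument sketched in the abstract. The sub-runs are independent, so in practice they would be farmed out to parallel workers.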

  9. The Application of Physical Organic Chemistry to Biochemical Problems.

    ERIC Educational Resources Information Center

    Westheimer, Frank

    1986-01-01

    Presents the synthesis of the science of enzymology from application of the concepts of physical organic chemistry from a historical perspective. Summarizes enzyme and coenzyme mechanisms elucidated prior to 1963. (JM)

  10. Application of remote sensing to state and regional problems

    NASA Technical Reports Server (NTRS)

    Miller, W. F.; Clark, J. R.; Solomon, J. L.; Duffy, B.; Minchew, K.; Wright, L. H. (Principal Investigator)

    1981-01-01

    The objectives, accomplishments, and future plans of several LANDSAT applications projects in Mississippi are discussed. The applications include land use planning in Lowndes County, strip mine inventory and reclamation, white tailed deer habitat evaluation, data analysis support systems, discrimination of forest habitats in potential lignite areas, changes in gravel operations, and determination of freshwater wetlands for inventory and monitoring. In addition, a conceptual design for a LANDSAT based information system is discussed.

  11. Application of remote sensing to state and regional problems. [Mississippi

    NASA Technical Reports Server (NTRS)

    Miller, W. F.; Carter, B. D.; Solomon, J. L.; Williams, S. G.; Powers, J. S.; Clark, J. R. (Principal Investigator)

    1980-01-01

    Progress is reported in the following areas: remote sensing applications to land use planning in Lowndes County, applications of LANDSAT data to strip mine inventory and reclamation, white tailed deer habitat evaluation using LANDSAT data, remote sensing data analysis support system, and discrimination of unique forest habitats in potential lignite areas of Mississippi. Other projects discussed include LANDSAT change discrimination in gravel operations, environmental impact modeling for highway corridors, and discrimination of fresh water wetlands for inventory and monitoring.

  12. Towards a multi-physics modelling framework for thrombolysis under the influence of blood flow

    PubMed Central

    Piebalgs, Andris

    2015-01-01

    Thrombolytic therapy is an effective means of treating thromboembolic diseases but can also give rise to life-threatening side effects. The infusion of a high drug concentration can provoke internal bleeding while an insufficient dose can lead to artery reocclusion. It is hoped that mathematical modelling of the process of clot lysis can lead to a better understanding and improvement of thrombolytic therapy. To this end, a multi-physics continuum model has been developed to simulate the dissolution of clot over time upon the addition of tissue plasminogen activator (tPA). The transport of tPA and other lytic proteins is modelled by a set of reaction–diffusion–convection equations, while blood flow is described by volume-averaged continuity and momentum equations. The clot is modelled as a fibrous porous medium with its properties being determined as a function of the fibrin fibre radius and voidage of the clot. A unique feature of the model is that it is capable of simulating the entire lytic process from the initial phase of lysis of an occlusive thrombus (diffusion-limited transport), the process of recanalization, to post-canalization thrombolysis under the influence of convective blood flow. The model has been used to examine the dissolution of a fully occluding clot in a simplified artery at different pressure drops. Our predicted lytic front velocities during the initial stage of lysis agree well with experimental and computational results reported by others. Following canalization, clot lysis patterns are strongly influenced by local flow patterns, which are symmetric at low pressure drops, but asymmetric at higher pressure drops, which give rise to larger recirculation regions and extended areas of intense drug accumulation. PMID:26655469
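    The reaction-diffusion-convection transport at the heart of the model can be illustrated with a toy 1-D explicit finite-difference scheme for a single drug concentration. This is a hedged sketch with illustrative parameters, not the paper's multi-protein, volume-averaged model:

```python
# Minimal 1-D reaction-diffusion-convection sketch for a lytic drug concentration.
N = 100          # grid cells
dx = 0.01        # cell size (m)
D = 1e-5         # diffusivity (m^2/s)
u = 5e-4         # convective velocity (m/s)
k = 0.05         # first-order consumption rate by binding/reaction (1/s)
dt = 0.5 * min(dx * dx / (2 * D), dx / u)  # explicit stability limit

c = [0.0] * N
c[0] = 1.0       # normalized drug concentration held fixed at the inlet

for _ in range(2000):
    new = c[:]
    for i in range(1, N - 1):
        diff = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / (dx * dx)
        adv = -u * (c[i] - c[i - 1]) / dx      # first-order upwind (u > 0)
        new[i] = c[i] + dt * (diff + adv - k * c[i])
    new[-1] = new[-2]                          # zero-gradient outflow boundary
    c = new
    c[0] = 1.0

print(max(c), min(c))
```

    In the full model the velocity field comes from the coupled momentum equations and the clot enters as a porous medium whose permeability evolves as fibrin is lysed; here the flow is prescribed, which is the key simplification.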

  13. Variability of West African monsoon patterns generated by a WRF multi-physics ensemble

    NASA Astrophysics Data System (ADS)

    Klein, Cornelia; Heinzeller, Dominikus; Bliefernicht, Jan; Kunstmann, Harald

    2015-11-01

    The credibility of regional climate simulations over West Africa stands or falls with the ability to reproduce the West African monsoon (WAM), whose precipitation plays a pivotal role for people's livelihood. In this study, we simulate the WAM for the wet year 1999 with a 27-member multi-physics ensemble of the Weather Research and Forecasting (WRF) model. We investigate the inter-member differences in a process-based manner in order to extract generalizable information on the behavior of the tested cumulus (CU), microphysics (MP), and planetary boundary layer (PBL) schemes. Precipitation, temperature and atmospheric dynamics are analyzed in comparison to the Tropical Rainfall Measuring Mission (TRMM) rainfall estimates, the Global Precipitation Climatology Centre (GPCC) gridded gauge-analysis, the Global Historical Climatology Network (GHCN) gridded temperature product and the forcing data (ERA-Interim) to explore interdependencies of processes leading to a certain WAM regime. We find that MP and PBL schemes contribute most to the ensemble spread (147 mm month-1) for monsoon precipitation over the study region. Furthermore, PBL schemes have a strong influence on the movement of the WAM rainband because of their impact on the cloud fraction, which ranges from 8 to 20% at 600 hPa during August. More low- and mid-level clouds result in less incoming radiation and a weaker monsoon. Ultimately, we identify the differing intensities of the moist Hadley-type meridional circulation that connects the monsoon winds to the Tropical Easterly Jet as the main source of inter-member differences. The ensemble spread of Sahel precipitation and associated dynamics for August 1999 is comparable to the observed inter-annual spread (1979-2010) between dry and wet years, emphasizing the strong potential impact of regional processes and the need for a careful selection of model parameterizations.

  14. Towards a multi-physics modelling framework for thrombolysis under the influence of blood flow.

    PubMed

    Piebalgs, Andris; Xu, X Yun

    2015-12-01

    Thrombolytic therapy is an effective means of treating thromboembolic diseases but can also give rise to life-threatening side effects. The infusion of a high drug concentration can provoke internal bleeding while an insufficient dose can lead to artery reocclusion. It is hoped that mathematical modelling of the process of clot lysis can lead to a better understanding and improvement of thrombolytic therapy. To this end, a multi-physics continuum model has been developed to simulate the dissolution of clot over time upon the addition of tissue plasminogen activator (tPA). The transport of tPA and other lytic proteins is modelled by a set of reaction-diffusion-convection equations, while blood flow is described by volume-averaged continuity and momentum equations. The clot is modelled as a fibrous porous medium with its properties being determined as a function of the fibrin fibre radius and voidage of the clot. A unique feature of the model is that it is capable of simulating the entire lytic process from the initial phase of lysis of an occlusive thrombus (diffusion-limited transport), the process of recanalization, to post-canalization thrombolysis under the influence of convective blood flow. The model has been used to examine the dissolution of a fully occluding clot in a simplified artery at different pressure drops. Our predicted lytic front velocities during the initial stage of lysis agree well with experimental and computational results reported by others. Following canalization, clot lysis patterns are strongly influenced by local flow patterns, which are symmetric at low pressure drops, but asymmetric at higher pressure drops, which give rise to larger recirculation regions and extended areas of intense drug accumulation.

  15. Development of a multi-physics simulation framework for semiconductor materials and devices

    NASA Astrophysics Data System (ADS)

    Almeida, Nuno Sucena

    Modern day semiconductor technology devices face the ever increasing issue of accounting for quantum mechanics effects in their modeling and performance assessment. The objective of this work is to create a user-friendly, extensible and powerful multi-physics simulation blackbox for nano-scale semiconductor devices. By using a graphical device modeller this work provides a friendly environment where a user without deep knowledge of device physics can create a device, simulate it and extract optical and electrical characteristics deemed of interest to his engineering occupation. Resorting to advanced template C++ object-oriented design from the start, this work was able to implement algorithms to simulate 1D, 2D and 3D devices which, along with scripting using the well-known Python language, enables the user to create batch simulations to better optimize device performance. Higher-dimensional semiconductors, like wires and dots, require a huge computational cost. MPI parallel libraries enable the software to tackle complex geometries which otherwise would be unfeasible on a small single-CPU computer. Quantum mechanical phenomena are described by Schrödinger's equation, which must be solved self-consistently with Poisson's equation for the electrostatic charge and, if required, make use of piezoelectric charge terms from elasticity constraints. Since the software implements a generic n-dimensional FEM engine, virtually any kind of partial differential equation can be solved, and in the future other required solvers besides the ones already implemented will also be included for ease of use. In particular for the semiconductor device physics, we solve the quantum mechanics effective mass conduction-valence band k·p approximation to the Schrödinger-Poisson problem, in any crystal growth orientation (C, M, A and semi-polar planes or any user-defined angle) and also include piezoelectric effects caused by strain in lattice-mismatched layers, where the implemented software
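    The Schrödinger half of such a self-consistent Schrödinger-Poisson loop can be illustrated in isolation. A hedged 1-D sketch (shooting method for the ground state of an infinite square well in units where hbar^2/2m = 1, not the framework's FEM k·p solver):

```python
import math

def shoot(E, n=2000):
    """March psi'' = -E * psi across a unit infinite well with psi(0) = 0,
    using a simple central-difference scheme; return psi at x = 1."""
    h = 1.0 / n
    psi_prev, psi = 0.0, h  # psi(0) = 0; a small first step sets the slope
    for _ in range(n - 1):
        psi_prev, psi = psi, 2 * psi - psi_prev - h * h * E * psi
    return psi

# Bisection on E: an eigenvalue is where psi(1) crosses zero.
lo, hi = 5.0, 15.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(mid)  # close to pi^2 ~ 9.8696, the exact ground-state eigenvalue
```

    In the full problem the potential inside the well would come from Poisson's equation, and the resulting charge density would feed back into the potential until the loop converges.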

  16. Hybrid Ant Algorithm and Applications for Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Xiao, Zhang; Jiang-qing, Wang

    Ant colony optimization (ACO) is a metaheuristic method inspired by the behavior of real ant colonies. ACO has been successfully applied to several combinatorial optimization problems, but it has some shortcomings such as slow computing speed and local convergence. For solving the Vehicle Routing Problem (VRP), we propose the Hybrid Ant Algorithm (HAA) in order to improve both the performance of the algorithm and the quality of solutions. The proposed algorithm takes advantage of the Nearest Neighbor (NN) heuristic and ACO for solving the VRP; it also expands the scope of the solution space and improves the global search ability of the algorithm by introducing a mutation operation, combining 2-opt heuristics and adjusting the configuration of parameters dynamically. Computational results indicate that the hybrid ant algorithm can obtain optimal solutions to the VRP effectively.
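    Two of the hybrid's ingredients, NN construction and 2-opt improvement, can be shown on a plain travelling-salesman tour (the pheromone update and mutation operation of the full HAA are omitted; all names and data here are illustrative):

```python
import math
import random

random.seed(5)

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(pts, start=0):
    """Greedy construction heuristic, used to seed the search."""
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, pts):
    """Local improvement: reverse segments while any reversal shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

pts = [(random.random(), random.random()) for _ in range(15)]
nn = nearest_neighbor(pts)
better = two_opt(nn, pts)
print(tour_length(nn, pts), tour_length(better, pts))
```

    In the hybrid, tours built by the ants (biased by pheromone trails) would replace the single NN tour, with 2-opt polishing each candidate.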

  17. Common Problems of Mobile Applications for Foreign Language Testing

    ERIC Educational Resources Information Center

    Garcia Laborda, Jesus; Magal-Royo, Teresa; Lopez, Jose Luis Gimenez

    2011-01-01

    As the use of mobile learning educational applications has become more common anywhere in the world, new concerns have appeared in the classroom, human interaction in software engineering and ergonomics. New tests of foreign languages for a number of purposes have become more and more common recently. However, studies interrelating language tests…

  18. The Application of Geocoded Data to Educational Problems.

    ERIC Educational Resources Information Center

    McIsaac, Donald N.; And Others

    The papers presented at a symposium on geocoding describe the preparation of a geocoded data file, some basic applications for education planning, and its use in trend analysis to produce contour maps for any desired characteristic. Geocoding data involves locating each entity, such as students or schools, in terms of grid coordinates on a…

  19. Application of Genetic Algorithms in Nonlinear Heat Conduction Problems

    PubMed Central

    Khan, Waqar A.

    2014-01-01

    Genetic algorithms are employed to optimize dimensionless temperature in nonlinear heat conduction problems. Three common geometries are selected for the analysis and the concept of minimum entropy generation is used to determine the optimum temperatures under the same constraints. The thermal conductivity is assumed to vary linearly with temperature while internal heat generation is assumed to be uniform. The dimensionless governing equations are obtained for each selected geometry and the dimensionless temperature distributions are obtained using MATLAB. It is observed that GA gives the minimum dimensionless temperature in each selected geometry. PMID:24695517
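    A minimal genetic algorithm in the spirit of the one described, reduced to a single real-valued design variable; the quadratic objective is a stand-in for the entropy-generation criterion, and every parameter here is illustrative:

```python
import random

random.seed(11)

def fitness(x):
    # Surrogate objective standing in for the entropy-generation criterion.
    return (x - 0.7) ** 2

def genetic_algorithm(pop_size=30, gens=60, lo=0.0, hi=2.0,
                      mut_rate=0.3, mut_scale=0.1):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)               # arithmetic crossover
            if random.random() < mut_rate:
                child += random.gauss(0, mut_scale)
            children.append(min(max(child, lo), hi))
        pop = parents + children
    return min(pop, key=fitness)

best = genetic_algorithm()
print(best)
```

    Because the top half of each generation survives unchanged, the best-so-far value is monotone, while crossover and Gaussian mutation keep exploring around it.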

  20. Application of computational fluid mechanics to atmospheric pollution problems

    NASA Technical Reports Server (NTRS)

    Hung, R. J.; Liaw, G. S.; Smith, R. E.

    1986-01-01

    One of the most noticeable effects of air pollution on the properties of the atmosphere is the reduction in visibility. This paper reports the results of investigations of the fluid dynamical and microphysical processes involved in the formation of advection fog on aerosols from combustion-related pollutants acting as condensation nuclei. The effects of a polydisperse aerosol distribution on the condensation/nucleation processes which cause the reduction in visibility are studied. This study demonstrates how computational fluid mechanics and heat transfer modeling can be applied to simulate the life cycle of atmospheric pollution problems.

  1. Application of clustering global optimization to thin film design problems.

    PubMed

    Lemarchand, Fabien

    2014-03-10

    Refinement techniques usually calculate an optimized local solution, which is strongly dependent on the initial formula used for the thin film design. In the present study, a clustering global optimization method is used which can iteratively change this initial formula, thereby progressing further than in the case of local optimization techniques. A wide panel of local solutions is found using this procedure, resulting in a large range of optical thicknesses. The efficiency of this technique is illustrated by two thin film design problems, in particular an infrared antireflection coating, and a solar-selective absorber coating. PMID:24663856
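    The core idea, local refinement launched from many different starting designs so that the best of several local optima is kept, can be sketched on a toy merit function. This is plain multistart rather than the clustering variant described in the paper, and all functions and values are illustrative:

```python
import random

random.seed(2)

def merit(x):
    # Stand-in merit function with two local minima near x = -1 and x = 1;
    # the 0.1*x tilt makes the left one the global optimum.
    return (x * x - 1.0) ** 2 + 0.1 * x

def refine(x, iters=200, step=0.1):
    """Local refinement: shrinking-step coordinate search."""
    fx = merit(x)
    for _ in range(iters):
        for cand in (x - step, x + step):
            fc = merit(cand)
            if fc < fx:
                x, fx = cand, fc
        step *= 0.97
    return x, fx

# Multistart: refine from many initial "formulas" and keep the best local optimum.
starts = [random.uniform(-2, 2) for _ in range(20)]
best_x, best_f = min((refine(s) for s in starts), key=lambda t: t[1])
print(best_x, best_f)
```

    A clustering method goes further: it groups nearby starting points so that only one local search is spent per basin, avoiding redundant refinements.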

  2. Application of clustering global optimization to thin film design problems.

    PubMed

    Lemarchand, Fabien

    2014-03-10

    Refinement techniques usually calculate an optimized local solution, which is strongly dependent on the initial formula used for the thin film design. In the present study, a clustering global optimization method is used which can iteratively change this initial formula, thereby progressing further than in the case of local optimization techniques. A wide panel of local solutions is found using this procedure, resulting in a large range of optical thicknesses. The efficiency of this technique is illustrated by two thin film design problems, in particular an infrared antireflection coating, and a solar-selective absorber coating.

  3. Application of partial sliding mode in guidance problem.

    PubMed

    Shafiei, M H; Binazadeh, T

    2013-03-01

    In this paper, the problem of 3-dimensional guidance law design is considered and a new guidance law based on the partial sliding mode technique is presented. The approach is based on the classification of the state variables within the guidance system dynamics with respect to their required stabilization properties. In the proposed law, by using a partial sliding mode technique, only trajectories of a part of the state variables are forced to reach the partial sliding surfaces and slide on them. The resulting guidance law enables the missile to intercept highly maneuvering targets within a finite interception time. Effectiveness of the proposed guidance law is demonstrated through analysis and simulations.
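    The basic sliding-mode mechanism behind such laws, driving a sliding variable to zero in finite time despite a bounded disturbance, can be shown on a scalar double integrator. This is a generic textbook sketch, not the paper's 3-D partial sliding mode guidance law, and all gains are illustrative:

```python
import math

# Double integrator e'' = u + d(t): drive s = e' + lam*e to zero, then e decays.
lam, k, dt = 2.0, 3.0, 0.001
e, edot = 1.0, 0.0
for step in range(10000):                        # 10 s of simulated time
    t = step * dt
    s = edot + lam * e
    u = -lam * edot - k * (1 if s > 0 else -1)   # switching control
    d = 0.5 * math.sin(5 * t)                    # bounded disturbance, |d| < k
    edot += (u + d) * dt
    e += edot * dt
print(abs(e), abs(edot + lam * e))
```

    On the surface s = 0 the closed loop obeys e' = -lam*e regardless of d, which is the robustness property exploited against maneuvering targets; the "partial" variant applies this only to the subset of states that actually needs it.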

  4. A theory of information with special application to search problems.

    PubMed

    Wilbur, W J; Neuwald, A F

    2000-01-01

    Classical information theory concerns itself with communication through a noisy channel and how much one can infer about the channel input from a knowledge of the channel output. Because the channel is noisy the input and output are only related statistically and the rate of information transmission is a statistical concept with little meaning for the individual symbol used in transmission. Here we develop a more intuitive notion of information that is concerned with asking the right questions--that is, with finding those questions whose answer conveys the most information. We call this confirmatory information. In the first part of the paper we develop the general theory, show how it relates to classical information theory, and how in the special case of search problems it allows us to quantify the efficacy of information transmission regarding individual events. That is, confirmatory information measures how well a search for items having certain observable properties retrieves items having some unobserved property of interest. Thus confirmatory information facilitates a useful analysis of search problems and contrasts with classical information theory, which quantifies the efficiency of information transmission but is indifferent to the nature of the particular information being transmitted. The last part of the paper presents several examples where confirmatory information is used to quantify protein structural properties in a search setting. PMID:10642878

  5. Signature neural networks: definition and application to multidimensional sorting problems.

    PubMed

    Latorre, Roberto; de Borja Rodriguez, Francisco; Varona, Pablo

    2011-01-01

    In this paper we present a self-organizing neural network paradigm that is able to discriminate information locally using a strategy for information coding and processing inspired by recent findings in living neural systems. The proposed neural network uses: 1) neural signatures to identify each unit in the network; 2) local discrimination of input information during the processing; and 3) a multicoding mechanism for information propagation regarding the who and the what of the information. The local discrimination implies a distinct processing as a function of the neural signature recognition and a local transient memory. In the context of artificial neural networks none of these mechanisms has been analyzed in detail, and our goal is to demonstrate that they can be used to efficiently solve some specific problems. To illustrate the proposed paradigm, we apply it to the problem of multidimensional sorting, which can take advantage of the local information discrimination. In particular, we compare the results of this new approach with traditional methods to solve jigsaw puzzles and we analyze the situations where the new paradigm improves the performance.

  6. An application of GMRES to indefinite linear problems in meteorology

    NASA Astrophysics Data System (ADS)

    Navarra, Antonio

    1989-05-01

    A preliminary investigation of a Krylov subspace method (GMRES) has been performed on a set of representative problems that can be encountered in geophysical fluid dynamics. Though in the majority of the numerical experiments practical convergence was correlated with the confinement of the eigenvalue spectrum to one complex half plane, it appears that there are cases in which this fact may not be enough to guarantee a practical rate of convergence. However, in the cases that did converge, results seem to indicate that convergence of the iterative GMRES can be obtained when the eigenvalues of the linear operator are all confined to a complex half plane (in agreement with Saad and Schultz). Simple shifts and scale-selective dissipation are very effective in controlling convergence. A substantial improvement can be achieved by using preconditioning suggested by the physical nature of the problem. It appears that this is the best way to accelerate convergence. Even with preconditioning, however, it remains important that most of the eigenvalues be confined to one half plane.
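    Full GMRES builds an Arnoldi basis of the Krylov subspace; with restart length 1 it reduces to the minimal-residual iteration below, which already illustrates the half-plane condition: the step is well defined and convergent when the symmetric part of the operator is positive definite (eigenvalues in the right half plane). A pure-Python sketch on a small nonsymmetric system, not the meteorological operators of the paper:

```python
A = [[3.0, 1.0], [-1.0, 2.0]]   # nonsymmetric, symmetric part positive definite
b = [1.0, 1.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

x = [0.0, 0.0]
for _ in range(100):
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]   # residual
    Ar = matvec(A, r)
    denom = dot(Ar, Ar)
    if denom == 0.0:
        break
    alpha = dot(r, Ar) / denom    # step minimizing ||b - A(x + alpha*r)||
    x = [xi + alpha * ri for xi, ri in zip(x, r)]

res = [bi - axi for bi, axi in zip(b, matvec(A, x))]
print(x, dot(res, res) ** 0.5)
```

    A physically motivated preconditioner, as advocated in the abstract, would replace A by M^{-1}A with a spectrum clustered deeper inside the half plane, shrinking the residual contraction factor per step.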

  7. Application of remote sensing to state and regional problems

    NASA Technical Reports Server (NTRS)

    Miller, W. F. (Principal Investigator); Tingle, J.; Wright, L. H.; Tebbs, B.

    1984-01-01

    Progress was made in the hydroclimatology, habitat modeling and inventory, computer analysis, wildlife management, and data comparison programs that utilize LANDSAT and SEASAT data provided to Mississippi researchers through the remote sensing applications program. Specific topics include water runoff in central Mississippi, habitat models for the endangered gopher tortoise, coyote, and turkey Geographic Information Systems (GIS) development, forest inventory along the Mississipppi River, and the merging of LANDSAT and SEASAT data for enhanced forest type discrimination.

  8. Remote sensing applications to resource problems in South Dakota

    NASA Technical Reports Server (NTRS)

    Myers, V. I. (Principal Investigator)

    1981-01-01

    The procedures used as well as the results obtained and conclusions derived are described for the following applications of remote sensing in South Dakota: (1) sage grouse management; (2) censusing Canada geese; (3) monitoring grasshopper infestation in rangeland; (4) detecting Dutch elm disease in an urban environment; (5) determining water usage from the Belle Fourche River; (6) resource management of the Lower James River; and (7) the National Model Implementation Program: Lake Herman watershed.

  9. Inference of Stochastic Nonlinear Oscillators with Applications to Physiological Problems

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Luchinsky, Dmitry G.

    2004-01-01

    A new method of inferencing of coupled stochastic nonlinear oscillators is described. The technique does not require extensive global optimization, provides optimal compensation for noise-induced errors and is robust in a broad range of dynamical models. We illustrate the main ideas of the technique by inferencing a model of five globally and locally coupled noisy oscillators. Specific modifications of the technique for inferencing hidden degrees of freedom of coupled nonlinear oscillators are discussed in the context of physiological applications.

  10. Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement.

    PubMed

    Yang, Bo; Hu, Di; Wu, Lei

    2016-07-08

    A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for the acceleration or the air flow sense, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation, respectively. All the above frequencies fall within a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor of the acceleration is 12.35 Hz/g, the scale factor of the angular velocity is 0.404 nm/deg/s and the sensitivity of the air flow is 1.075 Hz/(m/s)², which verifies the multifunction sensitive characteristics of the hair sensor. Besides, the structural optimization of the hair post is used to improve the sensitivity of the air flow rate and the acceleration. The analysis results illustrate that the hollow circular hair post can increase the sensitivity of the air flow and the II-shape hair post can increase the sensitivity of the acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer can effectively eliminate the temperature influences on the measurement accuracy. The air flow analysis indicates that the surface area increase of the hair post is significantly beneficial for the efficiency improvement of the signal transmission. In summary, the structure of the new hair sensor is proved to be feasible by comprehensive

  11. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Parallel scalabilities show almost linear speedup against the number of processors up to over ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million grids in a practical time (e.g., less than a second per time step).

  12. Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement.

    PubMed

    Yang, Bo; Hu, Di; Wu, Lei

    2016-01-01

    A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for the acceleration or the air flow sense, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation, respectively. All the above frequencies fall within a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor of the acceleration is 12.35 Hz/g, the scale factor of the angular velocity is 0.404 nm/deg/s and the sensitivity of the air flow is 1.075 Hz/(m/s)², which verifies the multifunction sensitive characteristics of the hair sensor. Besides, the structural optimization of the hair post is used to improve the sensitivity of the air flow rate and the acceleration. The analysis results illustrate that the hollow circular hair post can increase the sensitivity of the air flow and the II-shape hair post can increase the sensitivity of the acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer can effectively eliminate the temperature influences on the measurement accuracy. The air flow analysis indicates that the surface area increase of the hair post is significantly beneficial for the efficiency improvement of the signal transmission. In summary, the structure of the new hair sensor is proved to be feasible by comprehensive

  13. Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement

    PubMed Central

    Yang, Bo; Hu, Di; Wu, Lei

    2016-01-01

A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for the acceleration or the air flow sense, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation, respectively. All of the above frequencies fall in a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor of the acceleration is 12.35 Hz/g, the scale factor of the angular velocity is 0.404 nm/deg/s and the sensitivity of the air flow is 1.075 Hz/(m/s)², which verifies the multifunctional sensing characteristics of the hair sensor. In addition, structural optimization of the hair post is used to improve the sensitivity to the air flow rate and the acceleration. The analysis results illustrate that the hollow circular hair post can increase the sensitivity to the air flow and the II-shape hair post can increase the sensitivity to the acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer can largely eliminate the temperature influences on the measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post significantly improves the efficiency of signal transmission. In summary, the structure of the new hair sensor is shown to be feasible by comprehensive analysis.
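The quoted scale factors can be used directly to convert measured resonator outputs back to physical quantities. A minimal sketch, assuming an ideal linear response with the scale factors stated in the abstract (12.35 Hz/g for acceleration, 1.075 Hz/(m/s)² for air flow); the function names are illustrative, not from the paper:

```python
# Convert resonator frequency shifts back to physical inputs, assuming the
# ideal linear scale factors quoted in the abstract (illustrative only).

ACCEL_SCALE_HZ_PER_G = 12.35     # Hz of frequency shift per g of acceleration
FLOW_SCALE_HZ_PER_MPS2 = 1.075   # Hz per (m/s)^2; response assumed quadratic in speed


def acceleration_from_shift(delta_f_hz: float) -> float:
    """Acceleration in g inferred from a measured frequency shift."""
    return delta_f_hz / ACCEL_SCALE_HZ_PER_G


def airflow_from_shift(delta_f_hz: float) -> float:
    """Air flow speed in m/s inferred from a frequency shift.

    The quoted sensitivity is per (m/s)^2, so the speed is recovered as the
    square root of the shift divided by the scale factor (an assumption
    about how the unit is meant to be read).
    """
    return (delta_f_hz / FLOW_SCALE_HZ_PER_MPS2) ** 0.5
```
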

  14. Application of wave mechanics theory to fluid dynamics problems: Fundamentals

    NASA Technical Reports Server (NTRS)

    Krzywoblocki, M. Z. V.

    1974-01-01

The application of the basic formalistic elements of wave mechanics theory is discussed. The theory is used to describe the physical phenomena on the microscopic level, the fluid dynamics of gases and liquids, and the analysis of physical phenomena on the macroscopic (visually observable) level. The approach is motivated by the practical advantages of relating the two fields of wave mechanics and fluid mechanics through the use of the Schroedinger equation. Some of the subjects include: (1) fundamental aspects of wave mechanics theory, (2) laminarity of flow, (3) velocity potential, (4) disturbances in fluids, (5) introductory elements of the bifurcation theory, and (6) physiological aspects in fluid dynamics.

  15. Application of remote sensing to state and regional problems. [mississippi

    NASA Technical Reports Server (NTRS)

    Miller, W. F.; Powers, J. S.; Clark, J. R.; Solomon, J. L.; Williams, S. G. (Principal Investigator)

    1981-01-01

    The methods and procedures used, accomplishments, current status, and future plans are discussed for each of the following applications of LANDSAT in Mississippi: (1) land use planning in Lowndes County; (2) strip mine inventory and reclamation; (3) white-tailed deer habitat evaluation; (4) remote sensing data analysis support systems; (5) discrimination of unique forest habitats in potential lignite areas; (6) changes in gravel operations; and (7) determining freshwater wetlands for inventory and monitoring. The documentation of all existing software and the integration of the image analysis and data base software into a single package are now considered very high priority items.

  16. Applications of vacuum technology to novel accelerator problems

    SciTech Connect

    Garwin, E.L.

    1983-01-01

Vacuum requirements for electron storage rings are most demanding to fulfill, due to the presence of gas desorption caused by large quantities of synchrotron radiation, the very limited area accessible for pumping ports, the need for 10^-9 torr pressures in the ring, and for pressures a decade lower in the interaction regions. Design features of a wide variety of distributed ion sublimation pumps (DIP) developed at SLAC to meet these requirements are discussed, as well as NEG (non-evaporable getter) pumps tested for use in the Large Electron Positron Collider at CERN. The application of DIP to much higher pressures in electron damping rings for the Stanford Linear Collider is also discussed.

  17. [Current problems of information technologies application for forces medical service].

    PubMed

    Ivanov, V V; Korneenkov, A A; Bogomolov, V D; Borisov, D N; Rezvantsev, M V

    2013-06-01

Modern information technologies are key factors in upgrading the forces medical service. The aim of this article is to analyze prospective applications of information technologies for this upgrade. On the basis of data about information technology applications in foreign armed forces, analysis of the regulatory background, the prospects of the military medical service, and the accumulated experience of specialists, the authors suggest three concepts of information support for Russian military health care: development of a united telecommunication network of the medical service of the Armed Forces of the Russian Federation; working out and implementation of standard medical information systems for medical units and establishments; and monitoring of military personnel health state and military medical service resources. It is noted that, assuming sufficient centralized financing and industrial implementation of the prospective information technologies, by the year 2020 the united information space of the military medical service will be created and the target information support effectiveness will be achieved.

  18. Multi-physics design and analyses of long life reactors for lunar outposts

    NASA Astrophysics Data System (ADS)

    Schriener, Timothy M.

event of a launch abort accident. Increasing the amount of fuel in the reactor core, and hence its operational life, would be possible by launching the reactor unfueled and fueling it on the Moon. Such a reactor would, thus, not be subject to launch criticality safety requirements. However, loading the reactor with fuel on the Moon presents a challenge, requiring special designs of the core and the fuel elements, which lend themselves to fueling on the lunar surface. This research investigates examples of both a solid core reactor that would be fueled at launch and an advanced concept which could be fueled on the Moon. Increasing the operational life of a reactor fueled at launch is exercised for the NaK-78 cooled Sectored Compact Reactor (SCoRe). A multi-physics design and analyses methodology is developed which iteratively couples detailed Monte Carlo neutronics simulations with 3-D Computational Fluid Dynamics (CFD) and thermal-hydraulics analyses. Using this methodology, the operational life of this compact, fast spectrum reactor is increased by reconfiguring the core geometry to reduce neutron leakage and parasitic absorption, for the same amount of HEU in the core, while meeting launch safety requirements. The multi-physics analyses determine the impacts of the various design changes on the reactor's neutronics and thermal-hydraulics performance. The option of increasing the operational life of a reactor by loading it on the Moon is exercised for the Pellet Bed Reactor (PeBR). The PeBR uses spherical fuel pellets and is cooled by He-Xe gas, allowing the reactor core to be loaded with fuel pellets and charged with working fluid on the lunar surface. The neutronics analyses performed ensure that the PeBR design achieves a long operational life, and safe launch canister designs are developed to transport the spherical fuel pellets to the lunar surface.
The research also investigates loading the PeBR core with fuel pellets on the Moon using a transient Discrete
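The iterative coupling of neutronics and thermal-hydraulics solvers described above is, at its core, a fixed-point (Picard) iteration between physics codes. A toy sketch of that coupling pattern, with entirely made-up linearized feedback coefficients (none of the numbers come from the SCoRe or PeBR analyses):

```python
# Toy Picard iteration between a "neutronics" solve and a "thermal" solve:
# power depends on temperature through a negative reactivity feedback, and
# temperature depends on power. All coefficients are illustrative.

P0 = 100.0       # kW, nominal power at the reference temperature (made up)
T_REF = 600.0    # K, reference coolant temperature (made up)
ALPHA = 2e-4     # 1/K, negative temperature feedback coefficient (made up)
C_THERMAL = 0.5  # K/kW, temperature rise per unit power (made up)


def neutronics(T):
    """Power level given temperature (linearized feedback model)."""
    return P0 * (1.0 - ALPHA * (T - T_REF))


def thermal(P):
    """Coolant temperature given power."""
    return T_REF + C_THERMAL * P


def picard_couple(tol=1e-10, max_iter=100):
    """Alternate the two solvers until the temperature stops changing."""
    T = T_REF
    for i in range(max_iter):
        P = neutronics(T)
        T_new = thermal(P)
        if abs(T_new - T) < tol:
            return P, T_new, i + 1
        T = T_new
    raise RuntimeError("coupling did not converge")
```

With these coefficients the coupled map is a contraction (factor ALPHA·P0·C_THERMAL = 0.01), so the iteration converges in a handful of passes; real multi-physics couplings often need relaxation or acceleration, as the radiative heat transfer article above discusses.
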

  20. Application of the artificial bee colony algorithm for solving the set covering problem.

    PubMed

    Crawford, Broderick; Soto, Ricardo; Cuesta, Rodrigo; Paredes, Fernando

    2014-01-01

    The set covering problem is a formal model for many practical optimization problems. In the set covering problem the goal is to choose a subset of the columns of minimal cost that covers every row. Here, we present a novel application of the artificial bee colony algorithm to solve the non-unicost set covering problem. The artificial bee colony algorithm is a recent swarm metaheuristic technique based on the intelligent foraging behavior of honey bees. Experimental results show that our artificial bee colony algorithm is competitive in terms of solution quality with other recent metaheuristic approaches for the set covering problem. PMID:24883356
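For contrast with the metaheuristic, the set covering problem itself is easy to state in code. A classical greedy baseline (not the authors' artificial bee colony algorithm) picks, at each step, the column with the lowest cost per newly covered row:

```python
def greedy_set_cover(rows, columns, costs):
    """Greedy baseline for the (non-unicost) set covering problem.

    rows    -- set of row indices that must all be covered
    columns -- list of sets; columns[j] is the set of rows column j covers
    costs   -- costs[j] is the cost of selecting column j

    Returns the indices of the chosen columns. Greedy is only a
    logarithmic-factor approximation; metaheuristics such as the
    artificial bee colony aim to beat it on hard instances.
    """
    uncovered = set(rows)
    chosen = []
    while uncovered:
        # Pick the column minimizing cost per newly covered row.
        j = min(
            (j for j in range(len(columns)) if columns[j] & uncovered),
            key=lambda j: costs[j] / len(columns[j] & uncovered),
        )
        chosen.append(j)
        uncovered -= columns[j]
    return chosen
```
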

  1. Topographic mapping of oral structures - problems and applications in prosthodontics

    NASA Astrophysics Data System (ADS)

    Young, John M.; Altschuler, Bruce R.

    1981-10-01

    The diagnosis and treatment of malocclusion, and the proper design of restorations and prostheses, requires the determination of surface topography of the teeth and related oral structures. Surface contour measurements involve not only affected teeth, but adjacent and opposing surface contours composing a complexly interacting occlusal system. No a priori knowledge is predictable as dental structures are largely asymmetrical, non-repetitive, and non-uniform curvatures in 3-D space. Present diagnosis, treatment planning, and fabrication relies entirely on the generation of physical replicas during each stage of treatment. Fabrication is limited to materials that lend themselves to casting or coating, and to hand fitting and finishing. Inspection is primarily by vision and patient perceptual feedback. Production methods are time-consuming. Prostheses are entirely custom designed by manual methods, require costly skilled technical labor, and do not lend themselves to centralization. The potential improvement in diagnostic techniques, improved patient care, increased productivity, and cost-savings in material and man-hours that could result, if rapid and accurate remote measurement and numerical (automated) fabrication methods were devised, would be significant. The unique problems of mapping oral structures, and specific limitations in materials and methods, are reviewed.

  2. Quantum iterative deepening with an application to the halting problem.

    PubMed

    Tarrataca, Luís; Wichert, Andreas

    2013-01-01

Classical models of computation traditionally resort to halting schemes in order to enquire about the state of a computation. In such schemes, a computational process is responsible for signaling the end of a calculation by setting a halt bit, which needs to be systematically checked by an observer. The capacity of quantum computational models to operate on a superposition of states requires an alternative approach. From a quantum perspective, any measurement of an equivalent halt qubit would have the potential to interfere with the computation by provoking a random collapse amongst the states. This issue is exacerbated by undecidable problems, such as the Entscheidungsproblem, which require universal computational models, e.g. the classical Turing machine, to be able to proceed indefinitely. In this work we present an alternative view of quantum computation, based on production system theory in conjunction with Grover's amplitude amplification scheme, that allows for (1) detection of halt states without interfering with the final result of a computation; (2) the possibility of non-terminating computation; and (3) an inherent speedup during computations susceptible to parallelization. We discuss how such a strategy can be employed in order to simulate classical Turing machines.
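The amplitude amplification step at the heart of this proposal can be illustrated with a tiny statevector simulation. A minimal sketch in plain numpy (not a production-system simulator) of one Grover iteration over four basis states, where a single iteration is known to boost one marked state to probability 1:

```python
import numpy as np


def grover_iteration(state, marked):
    """One Grover iteration: oracle phase flip + inversion about the mean."""
    state = state.copy()
    state[marked] *= -1.0      # oracle: flip the sign of the marked state
    mean = state.mean()
    return 2.0 * mean - state  # diffusion: reflect every amplitude about the mean


n_states = 4
marked = 2
state = np.full(n_states, 1.0 / np.sqrt(n_states))  # uniform superposition
state = grover_iteration(state, marked)
probabilities = state ** 2
```

For N = 4 the marked amplitude reaches exactly 1 after one iteration, which is the textbook result for Grover search on two qubits.
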

  3. COAMPS Application to Global and Homeland Security Threat Problems

    SciTech Connect

    Chin, H S; Glascoe, L G

    2004-09-14

Atmospheric dispersion problems have received more attention with regard to global and homeland security, beyond their conventional roles in air pollution and local hazard assessment, in the post-9/11 era. Consequently, there is growing interest in characterizing meteorological uncertainty at both low and high altitudes (below and above 30 km, respectively). The 3-D Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS, developed by the Naval Research Laboratory; Hodur, 1997) is used to address LLNL's task. This report focuses on improving the COAMPS forecast to address the uncertainty issue and on providing new capability for high-altitude forecasting. To assess atmospheric dispersion behavior in a wider range of meteorological conditions and to expand the model's vertical scope for potential threats at high altitudes, several modifications of COAMPS are needed to meet the project goal. These improvements include (1) a long-range forecast capability to show the variability of meteorological conditions on a much larger time scale (say, a year), and (2) model physics enhancements to provide new capability for high-altitude forecasts.

  4. Equinoctial orbit elements - Application to optimal transfer problems

    NASA Astrophysics Data System (ADS)

    Kechichian, Jean Albert

    The variation of parameters perturbation equations in terms of the nonsingular equinoctial orbit elements for the third body, oblateness, air drag, and thrust acceleration effects have been developed in the literature, to carry out orbit prediction and orbit determination, as well as optimal orbit transfer analyses for elliptic as well as near-circular orbits around earth. The partials of these elements with respect to the velocity vector and their partials with respect to the elements that define the state and Lagrange differential equations, were developed using the mean and eccentric longitudes as independent orbital elements, respectively. The full set of governing equations for optimal orbit transfer and rendezvous applications are presented in this paper in a consistent manner, for the case where mean longitude is the sixth element.
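One common convention maps the classical elements to the nonsingular equinoctial set with mean longitude as the sixth element. A sketch of that mapping, using a frequently cited direct-orbit definition that may differ in sign or retrograde conventions from the paper:

```python
import math


def classical_to_equinoctial(a, e, i, raan, argp, M):
    """Map classical orbital elements to equinoctial elements (a, h, k, p, q, lam).

    Angles in radians. Uses the common direct-orbit convention:
      h = e sin(argp + raan),  k = e cos(argp + raan)
      p = tan(i/2) sin(raan),  q = tan(i/2) cos(raan)
      lam = M + argp + raan  (mean longitude)
    The singularities of the classical set at e = 0 and i = 0 disappear:
    h, k, p, q simply go to zero there.
    """
    h = e * math.sin(argp + raan)
    k = e * math.cos(argp + raan)
    p = math.tan(i / 2.0) * math.sin(raan)
    q = math.tan(i / 2.0) * math.cos(raan)
    lam = M + argp + raan
    return a, h, k, p, q, lam
```
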

  5. Overset Grid Methods for Multidisciplinary Applications in Rotorcraft Problems

    NASA Technical Reports Server (NTRS)

    Ahmad, J. U.; VanDalsem, William R. (Technical Monitor)

    1996-01-01

    A methodology for the coupling of an advanced computational fluid dynamics method based on an overset grid flow-solver and an advanced computational structural dynamics method based on a finite element analysis is presented. Various procedures for the fluid-structure interactions modeling along with their limitations are also discussed. The flight test data for the four-bladed UH-60A Blackhawk helicopter rotor is chosen for the validation of the results. Convergence and accuracy are tested by numerical experiments with a single-bladed rotor. A comparison of airload predictions with flight test data as well as with a rigid blade case is presented. Grid and interpolation related issues for this aeroelastic application are described.

  6. Numerical Analysis of a Multi-Physics Model for Trace Gas Sensors

    NASA Astrophysics Data System (ADS)

    Brennan, Brian

Trace gas sensors are currently used in many applications from leak detection to national security and may some day help with disease diagnosis. These sensors are modelled by a coupled system of complex elliptic partial differential equations for pressure and temperature. Solutions are approximated using the finite element method, which we show admits a continuous and coercive variational problem with optimal H1 and L2 error estimates. Numerically, the finite element discretization yields a skew-Hermitian dominant matrix for which classical algebraic preconditioners quickly degrade. To handle this, we explore three preconditioners for the resulting linear system: we first analyze the classical block Jacobi and block Gauss-Seidel preconditioners before presenting a custom, physics-based block preconditioner that requires scalar Helmholtz solves to apply but gives a very low outer iteration count. We also present analysis showing that the eigenvalues of the custom preconditioned system are mesh-dependent but with a small coefficient. Numerical experiments confirm our theoretical discussion.
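The block Jacobi idea can be sketched on a generic 2×2 block system: invert each diagonal block, ignore the coupling, and use the result inside an outer iteration. A minimal numpy sketch with a made-up, weakly coupled test matrix (not the trace-gas sensor system), using a plain Richardson iteration as a stand-in for the outer Krylov solver:

```python
import numpy as np


def block_jacobi_apply(A_blocks, r):
    """Apply the block Jacobi preconditioner M^{-1} r for a 2x2 block system.

    A_blocks = ((A11, A12), (A21, A22)); only the diagonal blocks are used.
    """
    (A11, _), (_, A22) = A_blocks
    n1 = A11.shape[0]
    return np.concatenate([np.linalg.solve(A11, r[:n1]),
                           np.linalg.solve(A22, r[n1:])])


# Weakly coupled test system: strong diagonal blocks, small off-diagonal coupling.
rng = np.random.default_rng(0)
n = 5
A11 = 2.0 * np.eye(n)
A22 = 3.0 * np.eye(n)
C = 0.1 * rng.standard_normal((n, n))
A = np.block([[A11, C], [C.T, A22]])
b = rng.standard_normal(2 * n)

# Preconditioned Richardson iteration: x <- x + M^{-1}(b - A x).
x = np.zeros(2 * n)
for _ in range(200):
    x = x + block_jacobi_apply(((A11, C), (C.T, A22)), b - A @ x)
```

The iteration converges because the coupling is weak relative to the diagonal blocks; when coupling dominates, exactly the degradation the abstract describes sets in and a physics-based preconditioner becomes attractive.
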

7. On the range of applicability of Baker's approach to the frame problem

    SciTech Connect

    Kartha, G.N.

    1996-12-31

We investigate the range of applicability of Baker's approach to the frame problem using an action language. We show that for temporal projection and deterministic domains, Baker's approach gives the intuitively expected results.

  8. Research investigations in and demonstrations of remote sensing applications to urban environmental problems

    NASA Technical Reports Server (NTRS)

    Hidalgo, J. U.

    1975-01-01

    The applicability of remote sensing to transportation and traffic analysis, urban quality, and land use problems is discussed. Other topics discussed include preliminary user analysis, potential uses, traffic study by remote sensing, and urban condition analysis using ERTS.

  9. Application Problem of Biomass Combustion in Greenhouses for Crop Production

    NASA Astrophysics Data System (ADS)

    Kawamura, Atsuhiro; Akisawa, Atsushi; Kashiwagi, Takao

Much energy from fossil fuels is consumed to produce crops in greenhouses in Japan, and flue gas is used for CO2 fertilization of crops in modern greenhouses. If biomass, as a renewable energy source, could be used for vegetable production in greenhouses, more than 800,000 kl of energy a year (in crude oil equivalent) would be saved. In this study, we first built the biomass combustion equipment and performed fundamental examinations of various pellet fuels. We then performed examinations that considered application to a real greenhouse. We considered biomass as both a source of energy and of CO2 gas for greenhouses, and the following findings were obtained: 1) Based on the standard for CO2 gas fertilization of greenhouses, it is difficult to apply biomass as a CO2 fertilizer, so biomass should be applied to energy use only, at least for the time being. 2) Practical biomass energy machinery for greenhouses that is economical, highly reliable, and easy to maintain is necessary. 3) It is necessary to develop crop varieties and cultivation systems requiring less strict environmental control. 4) Effective practical use of the combustion ash, which occurs abundantly, is necessary.

  10. Periodic orbits in the Planar General Three-Body Problem. [with applications to solar system

    NASA Technical Reports Server (NTRS)

    Broucke, R.; Boggs, D.

    1975-01-01

    The article contains a numerical study of periodic solutions of the Planar General Three-Body Problem. Several new periodic solutions have been discovered and are described. In particular, there is a continuous family with variable masses, extending all the way from the elliptic restricted problem to the general problem with three equal masses. All our examples have special symmetry properties which are described in detail. Finally we also suggest some important applications to the natural satellites of the solar system.
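Periodic solutions of this kind are typically found and verified with a direct numerical integrator. A minimal RK4 sketch of the planar three-body equations with G = 1 and equal unit masses (a generic integrator with illustrative initial conditions, not one of the article's periodic orbits), checked by energy conservation:

```python
import numpy as np


def accelerations(pos):
    """Pairwise gravitational accelerations for three unit masses, G = 1."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += d / np.linalg.norm(d) ** 3
    return acc


def energy(pos, vel):
    """Total mechanical energy (kinetic + pairwise potential)."""
    kinetic = 0.5 * np.sum(vel ** 2)
    potential = sum(-1.0 / np.linalg.norm(pos[j] - pos[i])
                    for i in range(3) for j in range(i + 1, 3))
    return kinetic + potential


def rk4_step(pos, vel, dt):
    """One classical fourth-order Runge-Kutta step for the coupled system."""
    k1v = accelerations(pos);              k1x = vel
    k2v = accelerations(pos + 0.5 * dt * k1x);  k2x = vel + 0.5 * dt * k1v
    k3v = accelerations(pos + 0.5 * dt * k2x);  k3x = vel + 0.5 * dt * k2v
    k4v = accelerations(pos + dt * k3x);        k4x = vel + dt * k3v
    pos = pos + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
    vel = vel + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return pos, vel


# Symmetric, well-separated initial condition (illustrative only).
pos = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]])
vel = np.array([[0.0, 0.4], [-0.35, -0.2], [0.35, -0.2]])
E0 = energy(pos, vel)
for _ in range(1000):
    pos, vel = rk4_step(pos, vel, 0.001)
```

Searching for a periodic orbit then amounts to adjusting the initial conditions until the trajectory returns to its starting state, with energy and momentum conservation serving as basic accuracy checks.
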

  11. Application of fluorescent dyes for some problems of bioelectromagnetics

    NASA Astrophysics Data System (ADS)

    Babich, Danylo; Kylsky, Alexandr; Pobiedina, Valentina; Yakunov, Andrey

    2016-04-01

Fluorescent organic dye solutions are used for non-contact measurement of millimeter wave absorption in liquids simulating biological tissue. Despite the widespread use of microwave radiation in the food industry, biotechnology and medicine, there is still no settled idea of the physical mechanism describing this process. Creating an adequate physical model requires accurate knowledge of the interaction between millimeter waves and the irradiated object. Three H-bonded liquids were selected as samples with different absorption coefficients in the millimeter range: water (strong absorption), glycerol (medium absorption) and ethylene glycol (light absorption). The measurements showed that the greatest response to the action of microwaves occurs for glycerol solutions: R6G (building-up luminescence) and RC (fading luminescence). For aqueous solutions the signal is lower due to the lower quantum efficiency of luminescence, and for ethylene glycol due to the low absorption of microwaves. A local increase of temperature in the area of exposure was estimated. For aqueous solutions of both dyes the maximum temperature increase caused by millimeter wave absorption is about 7 °C, which coincides with direct radiophysical measurements and is confirmed by theoretical calculations. However, for the glycerol solution of R6G the temperature equivalent for building-up luminescence is around 9 °C, and for the ethylene glycol solution it is about 15 °C. The possibility of a non-thermal effect of microwaves on different processes and substances is assumed. This non-contact temperature sensing is a simple and novel method to detect temperature change in small biological objects.

  12. Application of CHAD hydrodynamics to shock-wave problems

    SciTech Connect

Trease, H.E.; O'Rourke, P.J.; Sahota, M.S.

    1997-12-31

CHAD is the latest in a sequence of continually evolving computer codes written to effectively utilize massively parallel computer architectures and the latest grid generators for unstructured meshes. Its applications range from automotive design issues such as in-cylinder and manifold flows of internal combustion engines, vehicle aerodynamics, underhood cooling and passenger compartment heating, ventilation, and air conditioning to shock hydrodynamics and materials modeling. CHAD solves the full unsteady Navier-Stokes equations with the k-epsilon turbulence model in three space dimensions. The code has four major features that distinguish it from the earlier KIVA code, also developed at Los Alamos. First, it is based on a node-centered, finite-volume method in which, like finite element methods, all fluid variables are located at computational nodes. The computational mesh efficiently and accurately handles all element shapes ranging from tetrahedra to hexahedra. Second, it is written in standard Fortran 90 and relies on automatic domain decomposition and a universal communication library, written in standard C and MPI for unstructured grids, to effectively exploit distributed-memory parallel architectures. Thus the code is fully portable to a variety of computing platforms such as uniprocessor workstations, symmetric multiprocessors, clusters of workstations, and massively parallel platforms. Third, CHAD utilizes a variable explicit/implicit upwind method for convection that improves computational efficiency in flows that have large velocity Courant number variations due to velocity or mesh size variations. Fourth, CHAD is designed to also simulate shock hydrodynamics involving multimaterial anisotropic behavior under high shear. The authors discuss CHAD capabilities and show several sample calculations illustrating the strengths and weaknesses of CHAD.
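The upwind convection idea behind CHAD's third feature can be illustrated in one dimension. A minimal explicit first-order upwind sketch for u_t + a·u_x = 0 on a periodic grid (purely illustrative, far simpler than CHAD's variable explicit/implicit scheme):

```python
import numpy as np


def upwind_advect(u, courant, steps):
    """Explicit first-order upwind for u_t + a u_x = 0, a > 0, periodic grid.

    courant = a*dt/dx must satisfy 0 <= courant <= 1 for stability. Each
    update differences against the upstream neighbor only, making the new
    value a convex combination of old values: stable but diffusive.
    """
    for _ in range(steps):
        u = u - courant * (u - np.roll(u, 1))
    return u


nx = 100
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)  # Gaussian pulse
u = upwind_advect(u0, courant=0.5, steps=100)
```

On a periodic grid the scheme conserves the discrete integral of u exactly and never creates new extrema, which is why upwinding is the standard robust choice for convection-dominated flow.
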

  13. Application of the INSTANT-HPS PN Transport Code to the C5G7 Benchmark Problem

    SciTech Connect

    Y. Wang; H. Zhang; R. H. Szilard; R. C. Martineau

    2011-06-01

INSTANT is INL's next-generation neutron transport solver supporting high-fidelity multi-physics reactor simulation. The code is under continuous development to extend its capability; it is designed to take full advantage of middle to large clusters (10-1000 processors) and to focus on method adaptation, while mesh adaptation will also be possible. It utilizes the most modern computing techniques to provide a neutronics tool for full-core transport calculations in reactor analysis and design. It can perform calculations on unstructured 2D/3D triangular, hexagonal and Cartesian geometries. Calculations can easily be extended to other geometries because of the independent mesh framework coded in modern Fortran. The code has a multigroup solver with thermal rebalance and Chebyshev acceleration. It employs a second-order PN and Hybrid Finite Element method (PNHFEM) discretization scheme. Three different in-group solvers - preconditioned Conjugate Gradient (CG), preconditioned Generalized Minimal Residual (GMRES) and Red-Black iteration - have been implemented and parallelized with spatial domain decomposition. The input is managed in extensible markup language (XML) format. 3D variables, including the flux distributions, are output to VTK files, which can be visualized by tools such as VisIt and ParaView. An extension of the code named INSTANT-HPS provides the capability to perform 3D heterogeneous transport calculations within fuel pins. C5G7 is an OECD/NEA benchmark problem created to test the ability of modern deterministic transport methods and codes to treat reactor core problems without spatial homogenization. This benchmark problem has been widely analyzed with various code packages. In this transaction, results of applying the INSTANT-HPS code to the C5G7 problem are summarized.
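The in-group solvers mentioned (preconditioned CG, GMRES) follow standard Krylov templates. A minimal Jacobi-preconditioned CG sketch for a symmetric positive definite system (the generic textbook algorithm on a 1-D diffusion-like test matrix, not INSTANT's implementation):

```python
import numpy as np


def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Jacobi-preconditioned conjugate gradient for SPD A.

    M_inv_diag holds 1/diag(A); applying the preconditioner is just an
    elementwise multiply.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x


# 1-D diffusion-like SPD test matrix (tridiagonal).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
```
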

  14. An examination of the potential applications of automatic classification techniques to Georgia management problems

    NASA Technical Reports Server (NTRS)

    Rado, B. Q.

    1975-01-01

Automatic classification techniques are described in relation to future information and natural resource planning systems, with emphasis on application to Georgia resource management problems. The concept, design, and purpose of Georgia's statewide Resource Assessment Program are reviewed, along with participation in a workshop at the Earth Resources Laboratory. Potential areas of application discussed include: agriculture, forestry, water resources, environmental planning, and geology.

  15. Application of adomian decomposition method for singularly perturbed fourth order boundary value problems

    NASA Astrophysics Data System (ADS)

    Deniz, Sinan; Bildik, Necdet

    2016-06-01

In this paper, we use the Adomian Decomposition Method (ADM) to solve a singularly perturbed fourth order boundary value problem. In order to make the calculation process easier, the given problem is first transformed into a system of two second order ODEs with suitable boundary conditions. Numerical illustrations are given to prove the effectiveness and applicability of this method in solving such problems. The results obtained show that this technique provides a sequence of functions which converges rapidly to the accurate solution of the problems.
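The flavor of the Adomian decomposition can be seen on a much simpler problem. A sketch applying ADM to the linear IVP y' = y, y(0) = 1 with sympy (an illustrative first-order example, not the fourth-order BVP treated in the paper): each component is the integral of the previous one, and the partial sums reproduce the Taylor series of e^x:

```python
import sympy as sp

x = sp.symbols("x")


def adomian_components(n_terms):
    """ADM components for y' = y, y(0) = 1: y0 = 1, y_{k+1} = integral of y_k.

    For this linear problem the components come out as x^k / k!, so the
    partial sums converge rapidly to exp(x).
    """
    comps = [sp.Integer(1)]
    for _ in range(n_terms - 1):
        comps.append(sp.integrate(comps[-1], (x, 0, x)))
    return comps


approx = sum(adomian_components(12))  # partial sum of the decomposition series
```

For nonlinear problems the nonlinearity is expanded in Adomian polynomials rather than integrated directly, but the recursive structure is the same.
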

  16. Application of symbolic and algebraic manipulation software in solving applied mechanics problems

    NASA Technical Reports Server (NTRS)

    Tsai, Wen-Lang; Kikuchi, Noboru

    1993-01-01

As its name implies, symbolic and algebraic manipulation is an operational tool which not only can retain symbols throughout computations but also can express results in terms of symbols. This report starts with a history of symbolic and algebraic manipulators and a review of the literature. With the help of selected examples, the capabilities of symbolic and algebraic manipulators are demonstrated. Applications to problems of applied mechanics are then presented: the application of automatic formulation to applied mechanics problems, application to a materially nonlinear problem (rigid-plastic ring compression) by the finite element method (FEM), and application to plate problems by FEM. The advantages and difficulties, contributions, education, and perspectives of symbolic and algebraic manipulation are discussed. It is well known that there exist some fundamental difficulties in symbolic and algebraic manipulation, such as internal swelling and mathematical limitation. A remedy for these difficulties is proposed, and the three applications mentioned are solved successfully. For example, the closed-form solution of the stiffness matrix of the four-node isoparametric quadrilateral element for the 2-D elasticity problem was not available before; due to the work presented, its automatic construction becomes feasible. In addition, a newly found advantage of applying symbolic and algebraic manipulation is believed to be crucial in improving the efficiency of program execution in the future. This will substantially shorten the response time of a system, which is very significant for certain systems, such as missile and high speed aircraft systems, in which time plays an important role.
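The kind of derivation described, e.g. for the four-node isoparametric quadrilateral, starts from the bilinear shape functions. A sympy sketch that builds them and verifies the partition-of-unity property symbolically (an illustrative fragment, not the report's full stiffness-matrix derivation):

```python
import sympy as sp

xi, eta = sp.symbols("xi eta")

# Bilinear shape functions of the 4-node isoparametric quadrilateral, with
# nodes at (-1,-1), (1,-1), (1,1), (-1,1) in the reference element.
N = [sp.Rational(1, 4) * (1 + s * xi) * (1 + t * eta)
     for s, t in [(-1, -1), (1, -1), (1, 1), (-1, 1)]]

# Symbolic check: the shape functions sum to 1 everywhere (partition of
# unity), one of the properties a manipulator can verify automatically
# before assembling a closed-form stiffness matrix.
partition_of_unity = sp.simplify(sum(N) - 1)  # should be 0
```
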

  17. Applications of numerical optimization methods to helicopter design problems: A survey

    NASA Technical Reports Server (NTRS)

    Miura, H.

    1984-01-01

    A survey of applications of mathematical programming methods is used to improve the design of helicopters and their components. Applications of multivariable search techniques in the finite dimensional space are considered. Five categories of helicopter design problems are considered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications are summarized.

  18. EDITORIAL: Introduction to the special issue on electromagnetic inverse problems: emerging methods and novel applications Introduction to the special issue on electromagnetic inverse problems: emerging methods and novel applications

    NASA Astrophysics Data System (ADS)

    Dorn, O.; Lesselier, D.

    2010-07-01

    practically relevant inverse problems. The contribution by M Li, A Abubakar and T Habashy, `Application of a two-and-a-half dimensional model-based algorithm to crosswell electromagnetic data inversion', deals with a model-based inversion technique for electromagnetic imaging which addresses novel challenges such as multi-physics inversion, and incorporation of prior knowledge, such as in hydrocarbon recovery. 10. Non-stationary inverse problems, considered as a special class of Bayesian inverse problems, are framed via an orthogonal decomposition representation in the contribution by A Lipponen, A Seppänen and J P Kaipio, `Reduced order estimation of nonstationary flows with electrical impedance tomography'. The goal is to simultaneously estimate, from electrical impedance tomography data, certain characteristics of the Navier--Stokes fluid flow model together with time-varying concentration distribution. 11. Non-iterative imaging methods of thin, penetrable cracks, based on asymptotic expansion of the scattering amplitude and analysis of the multi-static response matrix, are discussed in the contribution by W-K Park, `On the imaging of thin dielectric inclusions buried within a half-space', completing, for a shallow burial case at multiple frequencies, the direct imaging of small obstacles (here, along their transverse dimension), MUSIC and non-MUSIC type indicator functions being used for that purpose. 12. The contribution by R Potthast, `A study on orthogonality sampling' envisages quick localization and shaping of obstacles from (portions of) far-field scattering patterns collected at one or more time-harmonic frequencies, via the simple calculation (and summation) of scalar products between those patterns and a test function. This is numerically exemplified for Neumann/Dirichlet boundary conditions and homogeneous/heterogeneous embedding media. 13. 
The contribution by J D Shea, P Kosmas, B D Van Veen and S C Hagness, `Contrast-enhanced microwave imaging of breast

  19. Applications of space teleoperator technology to the problems of the handicapped

    NASA Technical Reports Server (NTRS)

    Malone, T. B.; Deutsch, S.; Rubin, G.; Shenk, S. W.

    1973-01-01

    The identification of feasible and practical applications of space teleoperator technology to the problems of the handicapped was studied. A teleoperator system is defined by NASA as a remotely controlled, cybernetic, man-machine system designed to extend and augment man's sensory, manipulative, and locomotive capabilities. Based on a consideration of teleoperator systems, the scope of the study was limited to an investigation of handicapped persons limited in sensory, manipulative, and locomotive capabilities. If the technology being developed for teleoperators has any direct application, it must be in these functional areas. Feasible and practical applications of teleoperator technology to the problems of the handicapped are described, and design criteria are presented with each application. A development plan is established to bring the applications to the point of use.

  20. The Application of an Etiological Model of Personality Disorders to Problem Gambling.

    PubMed

    Brown, Meredith; Allen, J Sabura; Dowling, Nicki A

    2015-12-01

    Problem gambling is a significant mental health problem that creates a multitude of intrapersonal, interpersonal, and social difficulties. Recent empirical evidence suggests that personality disorders, and in particular borderline personality disorder (BPD), are commonly co-morbid with problem gambling. Despite this finding there has been very little research examining overlapping factors between these two disorders. The aim of this review is to summarise the literature exploring the relationship between problem gambling and personality disorders. The co-morbidity of personality disorders, particularly BPD, is reviewed and the characteristics of problem gamblers with co-morbid personality disorders are explored. An etiological model from the more advanced BPD literature, the biosocial developmental model of BPD, is used to review the similarities between problem gambling and BPD across four domains: early parent-child interactions, emotion regulation, co-morbid psychopathology and negative outcomes. It was concluded that personality disorders, in particular BPD, are commonly co-morbid among problem gamblers and that the presence of a personality disorder complicates the clinical picture. Furthermore, BPD and problem gambling share similarities across the biosocial developmental model of BPD. Therefore clinicians working with problem gamblers should incorporate routine screening for personality disorders and pay careful attention to the therapeutic alliance, client motivations and therapeutic boundaries. Furthermore, adjustments to therapy structure, goals and outcomes may be required. Directions for future research include further research into the applicability of the biosocial developmental model of BPD to problem gambling.

  1. Application of evolution strategies for the solution of an inverse problem in near-field optics.

    PubMed

    Macías, Demetrio; Vial, Alexandre; Barchiesi, Dominique

    2004-08-01

    We introduce an inversion procedure for the characterization of a nanostructure from near-field intensity data. The method proposed is based on heuristic arguments and makes use of evolution strategies for the solution of the inverse problem as a nonlinear constrained-optimization problem. By means of some examples we illustrate the performance of our inversion method. We also discuss its possibilities and potential applications.
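
    The inversion strategy described in this record, posing the inverse problem as nonlinear constrained optimization and attacking it with an evolution strategy, can be sketched as follows. The forward model, parameter ranges, and ES settings below are illustrative stand-ins, not the paper's near-field configuration.

    ```python
    import numpy as np

    # Minimal (mu + lambda) evolution strategy for a toy inverse problem:
    # recover the width and height of a structure from synthetic intensity
    # data (forward model and parameters are illustrative only).
    rng = np.random.default_rng(0)

    def forward(p, xs):
        w, h = p
        return h * np.exp(-(xs / w) ** 2)   # stand-in forward model

    xs = np.linspace(-1.0, 1.0, 50)
    true_p = np.array([0.3, 2.0])
    data = forward(true_p, xs)

    def misfit(p):
        return np.sum((forward(p, xs) - data) ** 2)

    mu, lam, sigma = 5, 20, 0.2
    pop = rng.uniform(0.1, 3.0, size=(mu, 2))         # initial parents
    for gen in range(200):
        parents = pop[rng.integers(mu, size=lam)]
        children = np.abs(parents + sigma * rng.normal(size=(lam, 2)))
        all_ = np.vstack([pop, children])             # (mu + lambda) selection
        all_ = all_[np.argsort([misfit(p) for p in all_])]
        pop = all_[:mu]
        sigma *= 0.99                                 # simple step-size decay

    print(pop[0])  # best recovered (w, h), close to (0.3, 2.0)
    ```

    Because the (mu + lambda) scheme is elitist, the best misfit never increases, which is convenient for the constrained, possibly multimodal objectives that arise in such characterization problems.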

  2. Two-scale homogenization of electromechanically coupled boundary value problems. Consistent linearization and applications

    NASA Astrophysics Data System (ADS)

    Schröder, Jörg; Keip, Marc-André

    2012-08-01

    The contribution addresses a direct micro-macro transition procedure for electromechanically coupled boundary value problems. The two-scale homogenization approach is implemented into a so-called FE2-method which allows for the computation of macroscopic boundary value problems in consideration of microscopic representative volume elements. The resulting formulation is applicable to the computation of linear as well as nonlinear problems. In the present paper, linear piezoelectric as well as nonlinear electrostrictive material behavior are investigated, where the constitutive equations on the microscale are derived from suitable thermodynamic potentials. The proposed direct homogenization procedure can also be applied for the computation of effective elastic, piezoelectric, dielectric, and electrostrictive material properties.

  3. Inverse problems with Poisson data: statistical regularization theory, applications and algorithms

    NASA Astrophysics Data System (ADS)

    Hohage, Thorsten; Werner, Frank

    2016-09-01

    Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results we will establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As the most prominent applications we briefly introduce positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years.
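
    A concrete instance of the estimator family surveyed above is the unpenalized Poisson maximum-likelihood problem for y ~ Poisson(Af), solved by the classical multiplicative EM (Richardson-Lucy) update. The toy 1-D blur operator below is illustrative; the review covers far more general penalized settings.

    ```python
    import numpy as np

    # EM / Richardson-Lucy iteration for a Poisson inverse problem
    # y ~ Poisson(A f): the unpenalized maximum-likelihood special case of
    # the penalized estimators discussed in the review (toy 1-D Gaussian blur).
    rng = np.random.default_rng(1)

    n = 40
    x = np.linspace(0, 1, n)
    A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))
    A /= A.sum(axis=0)                      # columns sum to 1 (flux-preserving)

    f_true = np.zeros(n)
    f_true[10], f_true[28] = 50.0, 80.0     # two point sources
    y = rng.poisson(A @ f_true)

    f = np.ones(n)                          # positive initial guess
    for _ in range(500):
        # Multiplicative EM update; preserves positivity and monotonically
        # increases the Poisson log-likelihood.
        f *= A.T @ (y / np.maximum(A @ f, 1e-12))
    ```

    Penalized variants modify the same iteration (or replace it with a convex solver) to incorporate the regularization functionals the review analyzes.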

  4. A full-spectrum analysis of high-speed train interior noise under multi-physical-field coupling excitations

    NASA Astrophysics Data System (ADS)

    Zheng, Xu; Hao, Zhiyong; Wang, Xu; Mao, Jie

    2016-06-01

    High-speed-railway-train interior noise at low, medium, and high frequencies can be simulated by finite element analysis (FEA) or boundary element analysis (BEA), hybrid finite element analysis-statistical energy analysis (FEA-SEA), and statistical energy analysis (SEA), respectively. First, a new method named statistical acoustic energy flow (SAEF) is proposed, which can be applied to full-spectrum HST interior noise simulation (covering low, medium, and high frequencies) with a single model. In an SAEF model, the multi-physical-field coupling excitations are fully considered and coupled to excite the interior noise. The interior noise attenuated by the sound insulation panels of the carriage is simulated by modeling the inflow of acoustic energy from the exterior excitations into the interior acoustic cavities. Rigid multi-body dynamics, fast multipole BEA, and large-eddy simulation with indirect boundary element analysis are employed to extract the multi-physical-field excitations, which include the wheel-rail interaction forces/secondary suspension forces, the wheel-rail rolling noise, and the aerodynamic noise, respectively. All the peak values and their frequency bands of the simulated acoustic excitations are validated against those from a noise source identification test. In addition, the measured equipment noise inside the equipment compartment is used as one of the excitation sources contributing to the interior noise. Second, a fully trimmed FE carriage model is constructed, and the simulated modal shapes and frequencies agree well with the measured ones, which validates the global FE carriage model as well as the local FE models of the aluminum alloy-trim composite panel; thus, the sound transmission loss model of any composite panel has been indirectly validated. Finally, the SAEF model of the carriage is constructed based on the accurate FE model and excited by the multi-physical-field excitations. The results show

  5. Preface to foundations of information/decision fusion with applications to engineering problems

    SciTech Connect

    Madan, R.N.; Rao, N.S.V.

    1996-10-01

    In engineering design, it was shown by von Neumann that a reliable system can be built from unreliable components by employing simple majority-rule fusers. If the error densities of individual pattern recognizers are known, an optimal fuser can be implemented as a threshold function. Many applications have been developed for distributed sensor systems, sensor-based robotics, face recognition, decision fusion, recognition of handwritten characters, and automatic target recognition. Recently, information/decision fusion has been recognized as an independently growing field with its own principles and methods. While some of the fusion problems in engineering systems could be solved by applying existing results from other domains, many others require original approaches and solutions. In turn, these new approaches would lead to new applications in other areas. Two paradigms lie at the extremes of the spectrum of information/decision fusion methods: (i) Fusion as Problem: In certain applications, fusion is explicitly specified in the problem statement. Particularly in robotics applications, many researchers realized the fundamental limitations of single-sensor systems, thereby motivating the deployment of multiple sensors. In more general engineering applications, similar sensors are employed for fault tolerance, while in several others, different sensor modalities are required to achieve the given task. In these scenarios, fusion methods must first be designed to solve the problem at hand. (ii) Fusion as Solution: In many instances (e.g., DNA analysis), a number of different solutions to a particular problem already exist. Often these solutions can be combined to obtain solutions that outperform any individual one. The area of forecasting is a good example of this paradigm. Although fusion is not explicitly specified in these problems, it is used as an ingredient of the solution.
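
    Von Neumann's majority-rule observation cited above is easy to verify numerically: fusing several independent components, each correct more often than not, yields a system more reliable than any single component. The component accuracy and ensemble size below are arbitrary illustrative choices.

    ```python
    import random

    # Majority-rule fusion of unreliable binary components: the fused
    # decision is more reliable than any single component, provided each
    # component is right with probability > 0.5.
    random.seed(0)

    def component(truth, p_correct=0.7):
        """One unreliable component: correct with probability p_correct."""
        return truth if random.random() < p_correct else 1 - truth

    def majority_fuse(votes):
        return int(sum(votes) > len(votes) / 2)

    trials = 10_000
    single_ok = fused_ok = 0
    for _ in range(trials):
        truth = random.randint(0, 1)
        votes = [component(truth) for _ in range(9)]   # 9 components (odd)
        single_ok += votes[0] == truth
        fused_ok += majority_fuse(votes) == truth

    print(single_ok / trials, fused_ok / trials)  # fused accuracy exceeds single
    ```

    With nine components at 70% individual accuracy, the binomial calculation predicts roughly 90% fused accuracy, which the simulation reproduces.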

  6. FOREWORD: Tackling inverse problems in a Banach space environment: from theory to applications Tackling inverse problems in a Banach space environment: from theory to applications

    NASA Astrophysics Data System (ADS)

    Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara

    2012-10-01

    of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high-quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some

  7. Evaluation of a transient, simultaneous, arbitrary Lagrange-Euler based multi-physics method for simulating the mitral heart valve.

    PubMed

    Espino, Daniel M; Shepherd, Duncan E T; Hukins, David W L

    2014-01-01

    A transient multi-physics model of the mitral heart valve has been developed, which allows simultaneous calculation of fluid flow and structural deformation. A recently developed contact method has been applied to enable simulation of systole (the stage when blood pressure is elevated within the heart to pump blood to the body). The geometry was simplified to represent the mitral valve within the heart walls in two dimensions. Only the mitral valve undergoes deformation. A moving arbitrary Lagrange-Euler mesh is used to allow true fluid-structure interaction (FSI). The FSI model requires blood flow to induce valve closure by inducing strains in the region of 10-20%. Model predictions were found to be consistent with existing literature and will undergo further development.

  8. Investigation on Multi-Physics Simulation-Based Virtual Machining System for Vibratory Finishing of Integrally Bladed Rotors (IBRS)

    NASA Astrophysics Data System (ADS)

    Achiamah-Ampomah, N.; Cheng, Kai

    2016-02-01

    An investigation was carried out into improving the slow surface-finishing times of integrally bladed rotors (IBRs) in the aerospace industry. Traditionally they are finished by hand or, more recently, by abrasive flow machining. The use of a vibratory finishing technique to improve process times has been suggested; however, as the process remains largely empirical, very few studies have been done to improve and optimize the cycle times, showing that critical and ongoing research is still needed in this area. An extensive review of the literature was carried out, and the findings were used to identify the key parameters and model equations that govern the vibratory process. Recommendations were made towards a multi-physics-based simulation model, as well as projections for the future of vibratory finishing and the optimization of surface finishes and cycle times.

  9. Development and application of unified algorithms for problems in computational science

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing the various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm is one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency; development of codes based on selected algorithms, leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  10. Application of NASA management approach to solve complex problems on earth

    NASA Technical Reports Server (NTRS)

    Potate, J. S.

    1972-01-01

    The application of NASA management approach to solving complex problems on earth is discussed. The management of the Apollo program is presented as an example of effective management techniques. Four key elements of effective management are analyzed. Photographs of the Cape Kennedy launch sites and supporting equipment are included to support the discussions.

  11. Applications of dynamic scheduling technique to space related problems: Some case studies

    NASA Technical Reports Server (NTRS)

    Nakasuka, Shinichi; Ninomiya, Tetsujiro

    1994-01-01

    The paper discusses the applications of 'Dynamic Scheduling' technique, which has been invented for the scheduling of Flexible Manufacturing System, to two space related scheduling problems: operation scheduling of a future space transportation system, and resource allocation in a space system with limited resources such as space station or space shuttle.

  12. Math Teachers' Attitudes towards Photo Math Application in Solving Mathematical Problem Using Mobile Camera

    ERIC Educational Resources Information Center

    Hamadneh, Iyad M.; Al-Masaeed, Aslan

    2015-01-01

    This study aimed at finding out mathematics teachers' attitudes towards the Photo Math application for solving mathematical problems using a mobile camera; it also aimed to identify significant differences in their attitudes according to their teaching stage, educational qualifications, and teaching experience. The study used judgmental/purposive…

  13. The Views of Undergraduates about Problem-Based Learning Applications in a Biochemistry Course

    ERIC Educational Resources Information Center

    Tarhan, Leman; Ayyildiz, Yildizay

    2015-01-01

    The effect of problem-based learning (PBL) applications in an undergraduate biochemistry course on students' interest in this course was investigated through four modules during one semester. Students' views about active learning and improvement in social skills were also collected and evaluated. We conducted the study with 36 senior students from…

  14. Use of a Mobile Application to Help Students Develop Skills Needed in Solving Force Equilibrium Problems

    ERIC Educational Resources Information Center

    Yang, Eunice

    2016-01-01

    This paper discusses the use of a free mobile engineering application (app) called Autodesk® ForceEffect™ to provide students assistance with spatial visualization of forces and more practice in solving/visualizing statics problems compared to the traditional pencil-and-paper method. ForceEffect analyzes static rigid-body systems using free-body…

  15. Thinking about Applications: Effects on Mental Models and Creative Problem-Solving

    ERIC Educational Resources Information Center

    Barrett, Jamie D.; Peterson, David R.; Hester, Kimberly S.; Robledo, Issac C.; Day, Eric A.; Hougen, Dean P.; Mumford, Michael D.

    2013-01-01

    Many techniques have been used to train creative problem-solving skills. Although the available techniques have often proven to be effective, creative training often discounts the value of thinking about applications. In this study, 248 undergraduates were asked to develop advertising campaigns for a new high-energy soft drink. Solutions to this…

  16. Application of Model-Selection Criteria to Some Problems in Multivariate Analysis.

    ERIC Educational Resources Information Center

    Sclove, Stanley L.

    1987-01-01

    A review of model-selection criteria is presented, suggesting their similarities. Some problems treated by hypothesis tests may be more expeditiously treated by the application of model-selection criteria. Multivariate analysis, cluster analysis, and factor analysis are considered. (Author/GDC)

  17. The Integrated Problem-Solving Model of Crisis Intervention: Overview and Application

    ERIC Educational Resources Information Center

    Westefeld, John S.; Heckman-Stone, Carolyn

    2003-01-01

    Crisis intervention is a role that fits exceedingly well with counseling psychologists' interests and skills. This article provides an overview of a new crisis intervention model, the Integrated Problem-Solving Model (IPSM), and demonstrates its application to a specific crisis, sexual assault. It is hoped that this article will encourage…

  18. Approximate analysis for repeated eigenvalue problems with applications to controls-structure integrated design

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Hou, Gene J. W.

    1994-01-01

    A method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented. The approximate analysis method developed involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations to changes in the eigenvalues and the eigenvectors associated with the repeated eigenvalue problem. This work also presents a numerical technique that facilitates the definition of an eigenvector derivative for the case of repeated eigenvalues with repeated eigenvalue derivatives (of all orders). Examples are given which demonstrate the application of such equations for sensitivity and approximate analysis. Emphasis is placed on the application of sensitivity analysis to large-scale structural and controls-structures optimization problems.
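
    The first-order eigenvalue approximations the paper builds on can be illustrated for the simpler distinct-eigenvalue case: for a symmetric matrix, the first-order change in each eigenvalue under a perturbation dA is v_i^T (dA) v_i. The matrices below are illustrative; the paper's contribution is precisely the extension of such approximations to repeated eigenvalues.

    ```python
    import numpy as np

    # First-order eigenvalue sensitivity for a symmetric matrix:
    # d(lambda_i) ~ v_i^T (dA) v_i, valid when eigenvalues are distinct.
    A = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
    dA = np.array([[0.01, 0.0],
                   [0.0, -0.02]])

    w, V = np.linalg.eigh(A)                          # eigenpairs of A
    # Diagonal of V^T dA V gives the first-order eigenvalue corrections.
    pred = w + np.einsum('ij,jk,ki->i', V.T, dA, V)
    exact = np.linalg.eigh(A + dA)[0]
    print(pred, exact)   # agree to first order in the perturbation
    ```

    For repeated eigenvalues the eigenvectors are not unique and this formula breaks down, which motivates the single-parameter reparameterization developed in the paper.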

  19. Concurrent reinforcement schedules for problem behavior and appropriate behavior: experimental applications of the matching law.

    PubMed

    Borrero, Carrie S W; Vollmer, Timothy R; Borrero, John C; Bourret, Jason C; Sloman, Kimberly N; Samaha, Andrew L; Dallery, Jesse

    2010-05-01

    This study evaluated how children who exhibited functionally equivalent problem and appropriate behavior allocate responding to experimentally arranged reinforcer rates. Relative reinforcer rates were arranged on concurrent variable-interval schedules and effects on relative response rates were interpreted using the generalized matching equation. Results showed that relative rates of responding approximated relative rates of reinforcement. Finally, interventions for problem behavior were evaluated and differential reinforcement of alternative behavior and extinction procedures were implemented to increase appropriate behavior and decrease problem behavior. Practical considerations for the application of the generalized matching equation specific to severe problem behavior are discussed, including difficulties associated with defining a reinforced response, and obtaining steady state responding in clinical settings.
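
    The generalized matching equation used in the study has the log-linear form log(B1/B2) = a log(R1/R2) + log b, where B are response rates, R reinforcer rates, a is sensitivity and b is bias, so it can be fitted by linear regression on log ratios. The data values below are illustrative, not taken from the article.

    ```python
    import numpy as np

    # Fitting the generalized matching equation
    #   log(B1/B2) = a*log(R1/R2) + log(b)
    # by least squares on log ratios (illustrative data, not the study's).
    behavior_ratio = np.array([0.25, 0.6, 1.0, 1.8, 3.5])    # B1/B2
    reinforcer_ratio = np.array([0.2, 0.5, 1.0, 2.0, 4.0])   # R1/R2

    a, log_b = np.polyfit(np.log(reinforcer_ratio),
                          np.log(behavior_ratio), 1)
    print(f"sensitivity a = {a:.2f}, bias b = {np.exp(log_b):.2f}")
    ```

    Sensitivity a near 1 with bias b near 1 indicates close approximation to strict matching; undermatching (a < 1) is the typical empirical finding.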

  20. Development and Application of Differential Equation Numerical Techniques to Electromagnetic Scattering and Radiation Problems.

    NASA Astrophysics Data System (ADS)

    Simons, Neil Richard Samuel

    In this thesis the development and application of general-purpose computer simulation techniques for macroscopic electromagnetic phenomena are investigated. These techniques are applicable to a wide variety of practical problems pertaining to electromagnetic compatibility and interference, radar cross-section, and the analysis and design of antennas. The goal of this research is to examine methods that are applicable to a wide variety of problems rather than specialized approaches that are useful only for specific problems. A brief review of the computational electromagnetics literature indicates that two general types of methods are applicable: numerical approximation of integral-equation formulations and numerical approximation of differential-equation formulations. Because of their relative efficiency for inhomogeneous geometries, the thesis proceeds with numerical approximations to differential-equation based formulations. The differential-equation based numerical methods include various finite-difference, finite-element, finite-volume, and transmission line matrix methods. A literature review and overview of these numerical methods is provided; the goal of the overview is to provide a basis for classifying existing and future differential-equation based numerical methods and for identifying their relative advantages and disadvantages. Extensions to the two-dimensional transmission line matrix method are presented. The extensions are intended to provide some of the flexibility traditionally associated with finite-difference and finite-element methods. Three new two-dimensional models are presented. Two of the new models utilize triangular rather than the usual rectangular spatial discretization; the third introduces the capability of higher-order spatial accuracy. The efficiency and application of the new models are discussed. The development of two general-purpose electromagnetic simulation programs is presented. Both are

  1. The potential application of the blackboard model of problem solving to multidisciplinary design

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1989-01-01

    The potential application of the blackboard model of problem solving to multidisciplinary design is discussed. Multidisciplinary design problems are complex, poorly structured, and lack a predetermined decision path from the initial starting point to the final solution. The final solution is achieved using data from different engineering disciplines. Ideally, for the final solution to be the optimum, there must be significant communication among the different disciplines plus intradisciplinary and interdisciplinary optimization. In reality, this is not what happens in today's sequential approach to multidisciplinary design. Therefore it is highly unlikely that the final solution is the true optimum from an interdisciplinary optimization standpoint. A multilevel decomposition approach is suggested as a technique to overcome the problems associated with the sequential approach, but no tool currently exists with which to fully implement this technique. A system based on the blackboard model of problem solving appears to be an ideal tool for implementing this technique because it offers an incremental problem-solving approach that requires no a priori determined reasoning path. Thus it has the potential of finding a solution closer to the true optimum for the multidisciplinary design problems found in today's aerospace industries.

  2. Application of an interactive computer program to manage a problem-based dental curriculum.

    PubMed

    McGrath, Colman; Comfort, Margaret B; Luo, Yan; Samaranayake, Lakshman P; Clark, Christopher D

    2006-04-01

    Managing the change from traditional to problem-based learning (PBL) curricula is complex because PBL employs problem cases as the vehicle for learning. Each problem case covers a wide range of different learning issues across many disciplines and is coordinated by different facilitators drawn from the school's multidisciplinary pool. The objective of this project was to adapt an interactive computer program to manage a problem-based dental curriculum. Through application of commercial database software--CATs (Curriculum Analysis Tools)--an electronic database for all modules of a five-year problem-based program was developed. This involved inputting basic information on each problem case relating to competencies covered, key words (learning objectives), participating faculty, independent study, and homework assignments, as well as inputting information on contact hours. General reports were generated to provide an overview of the curriculum. In addition, competency, key word, manpower, and clock-hour reports at three levels (individual PBL course component, yearly, and the entire curriculum) were produced. Implications and uses of such reports are discussed. The adaptation of electronic technology for managing dental curricula in a PBL setting has implications for all those involved in managing new-style PBL dental curricula and those who have concerns about managing the PBL process. PMID:16595531

  3. Recent Results from Application of the Implicit Particle Filter to High-dimensional Problems

    NASA Astrophysics Data System (ADS)

    Miller, R.; Weir, B.; Spitz, Y. H.

    2012-12-01

    We present our most recent results on the application of the implicit particle filter to a stochastic shallow water model of nearshore circulation. This highly nonlinear model has approximately 30,000 state variables, and, in our twin experiments, we assimilate 32 observed quantities. Application of most particle methods to problems of this size is subject to sample impoverishment. In our implementation of the implicit particle filter, we have found that ensembles of manageable size can still retain a sufficient number of independent particles for reasonable accuracy.
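
    The sample impoverishment mentioned above is usually quantified by the effective sample size of the importance weights, and countered by resampling. The sketch below shows that generic bookkeeping only; the implicit particle filter itself avoids degeneracy by a different construction (choosing samples through the observations), and the weights here are synthetic.

    ```python
    import numpy as np

    # Effective sample size (ESS) diagnostic and systematic resampling --
    # the standard bookkeeping around the sample-impoverishment problem
    # (synthetic weights; not the implicit particle filter itself).
    rng = np.random.default_rng(2)

    log_w = rng.normal(size=1000) * 3          # toy log importance weights
    w = np.exp(log_w - log_w.max())            # stabilize before exponentiating
    w /= w.sum()

    ess = 1.0 / np.sum(w ** 2)                 # ESS: 1 (degenerate) .. N (uniform)

    def systematic_resample(weights, rng):
        n = len(weights)
        positions = (rng.random() + np.arange(n)) / n
        return np.searchsorted(np.cumsum(weights), positions)

    idx = systematic_resample(w, rng)
    print(f"ESS = {ess:.1f} of {len(w)}; "
          f"{np.unique(idx).size} distinct particles survive resampling")
    ```

    A low ESS relative to the ensemble size is the signal that most particles carry negligible weight, which is exactly the failure mode the implicit filter is designed to mitigate.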

  4. Application of chaos theory to solving the problems of social and environmental decline in Lesotho.

    PubMed

    Kakonge, John O

    2002-05-01

    This paper examines the definition of chaos theory and its use in different circumstances. The paper explains that environmental crisis is complex, chaotic and unstable and will remain so unless actions are taken to reverse the trend. It further suggests that chaos theory could be used to interpret the crisis and help identify solutions. By recommending the application of chaos theory to the environmental problems in Lesotho, the paper explores some of the key issues that contribute to and perpetuate the environmental situation, for example, the current land tenure system and the problem of overgrazing. In addition, it identifies appropriate and realistic government policies that could be implemented to address the environmental degradation in the country. The paper concludes that the application of chaos theory may be unable to help solve the environmental crisis in Lesotho unless there is political will and commitment and collective effort from all stakeholders, coupled with an attitudinal change. PMID:12173423

  5. Determination of Nonlinear Stiffness Coefficients for Finite Element Models with Application to the Random Vibration Problem

    NASA Technical Reports Server (NTRS)

    Muravyov, Alexander A.

    1999-01-01

    In this paper, a method for obtaining nonlinear stiffness coefficients in modal coordinates for geometrically nonlinear finite-element models is developed. The method requires application of a finite-element program with a geometrically nonlinear static capability. The MSC/NASTRAN code is employed for this purpose. The equations of motion of an MDOF system are formulated in modal coordinates. A set of linear eigenvectors is used to approximate the solution of the nonlinear problem. The random vibration problem of the MDOF nonlinear system is then considered. The solutions obtained by application of two different versions of a stochastic linearization technique are compared with linear and exact (analytical) solutions in terms of root-mean-square (RMS) displacements and strains for a beam structure.
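
    The stochastic (equivalent statistical) linearization idea can be sketched for a single-DOF Duffing-type system under white noise. This is a textbook-style fixed-point iteration assuming unit mass and a two-sided load spectral density S0, not the paper's MSC/NASTRAN-based multi-DOF procedure; the function name is invented.

```python
import math

def linearized_rms(k, c, mu, S0, tol=1e-12, max_iter=500):
    """Equivalent statistical linearization for
        x'' + c*x' + k*x + mu*x**3 = w(t)   (unit mass, white noise w).
    The cubic term is replaced by an equivalent stiffness 3*mu*E[x^2];
    a linear SDOF system with stiffness k_eq has the stationary variance
    E[x^2] = pi*S0/(c*k_eq) for two-sided load spectral density S0."""
    sigma2 = math.pi * S0 / (c * k)              # start from the linear result
    for _ in range(max_iter):
        k_eq = k + 3.0 * mu * sigma2             # equivalent linear stiffness
        new = math.pi * S0 / (c * k_eq)          # variance of linearized system
        if abs(new - sigma2) < tol:
            break
        sigma2 = new
    return math.sqrt(sigma2)
```

    For mu = 0 the iteration returns the exact linear RMS; a hardening cubic term (mu > 0) stiffens the equivalent system and lowers the predicted RMS response, which is the qualitative effect the paper's comparisons examine.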

  6. On cell problems for Hamilton-Jacobi equations with non-coercive Hamiltonians and their application to homogenization problems

    NASA Astrophysics Data System (ADS)

    Hamamuki, Nao; Nakayasu, Atsushi; Namba, Tokinaga

    2015-12-01

    We study a cell problem arising in homogenization for a Hamilton-Jacobi equation whose Hamiltonian is not coercive. We introduce a generalized notion of effective Hamiltonians by approximating the equation and characterize the solvability of the cell problem in terms of the generalized effective Hamiltonian. Under some sufficient conditions, the result is applied to the associated homogenization problem. We also show that homogenization for non-coercive equations fails in general.

  7. Application of Dynamic Logic Algorithm to Inverse Scattering Problems Related to Plasma Diagnostics

    NASA Astrophysics Data System (ADS)

    Perlovsky, L.; Deming, R. W.; Sotnikov, V.

    2010-11-01

    In plasma diagnostics scattering of electromagnetic waves is widely used for identification of density and wave field perturbations. In the present work we use a powerful mathematical approach, dynamic logic (DL), to identify the spectra of scattered electromagnetic (EM) waves produced by the interaction of the incident EM wave with a Langmuir soliton in the presence of noise. The problem is especially difficult since the spectral amplitudes of the noise pattern are comparable with the amplitudes of the scattered waves. In the past DL has been applied to a number of complex problems in artificial intelligence, pattern recognition, and signal processing, resulting in revolutionary improvements. Here we demonstrate its application to plasma diagnostic problems. Perlovsky, L.I., 2001. Neural Networks and Intellect: using model-based concepts. Oxford University Press, New York, NY.

  8. Application of the CIRSSE cooperating robot path planner to the NASA Langley truss assembly problem

    NASA Technical Reports Server (NTRS)

    Weaver, Jonathan M.; Derby, Stephen J.

    1993-01-01

    A method for autonomously planning collision-free paths for two cooperating robots in a static environment was developed at the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE). The method utilizes a divide-and-conquer type of heuristic and involves non-exhaustive mapping of configuration space. While there is no guarantee of finding a solution, the planner was successfully applied to a variety of problems, including two cooperating 9-degree-of-freedom (dof) robots. Although developed primarily for cooperating robots, the method is also applicable to single-robot path planning problems. A single 6-dof version of the planner was implemented for the truss assembly task at NASA Langley's Automated Structural Assembly Lab (ASAL). The results indicate that the planner could be very useful in addressing the ASAL path planning problem and that further work along these lines is warranted.

  9. On the application of pseudo-spectral FFT technique to non-periodic problems

    NASA Technical Reports Server (NTRS)

    Biringen, S.; Kao, K. H.

    1988-01-01

    The reduction-to-periodicity method using the pseudo-spectral Fast Fourier Transform (FFT) technique is applied to the solution of nonperiodic problems, including the two-dimensional Navier-Stokes equations. The accuracy of the method is demonstrated by calculating derivatives of given functions and one- and two-dimensional convective-diffusive problems, and by comparing the relative errors due to the FFT method with second-order Finite Difference Methods (FDM). Finally, the two-dimensional Navier-Stokes equations are solved by a fractional step procedure using both the FFT and the FDM methods for the driven cavity flow and the backward-facing step problems. Comparisons of these solutions provide a realistic assessment of the FFT method, indicating its range of applicability.
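
    The accuracy gap between the two techniques can be reproduced in a few lines for a periodic test function: a pseudo-spectral FFT derivative reaches machine precision where a second-order central difference does not. This is a generic sketch of the comparison, not the paper's reduction-to-periodicity code.

```python
import numpy as np

n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(x)

# Pseudo-spectral derivative: multiply Fourier coefficients by i*k.
k = np.fft.fftfreq(n, d=1.0 / n)            # integer wavenumbers
df_spec = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Second-order central difference on the same periodic grid.
dx = x[1] - x[0]
df_fd = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

err_spec = np.max(np.abs(df_spec - np.cos(x)))   # spectral: ~machine epsilon
err_fd = np.max(np.abs(df_fd - np.cos(x)))       # FD: O(dx**2)
```

    For nonperiodic data, the reduction-to-periodicity idea first subtracts a low-order polynomial so that the remainder is periodic and can be differentiated this way without Gibbs error.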

  10. Application of multidisciplinary design optimization formulation theory to a wing design problem

    SciTech Connect

    Frank, P.; Benton, J.R.; Borland, C.; Kao, T.J.; Barthelemy, J.

    1994-12-31

    Multidisciplinary Design Optimization, MDO, is optimal design with simultaneous consideration of several disciplines. MDO in conjunction with coupled high-fidelity analysis codes is in a formative stage of development. This talk describes the application of MDO formulation theory to the problem of aeroelastic wing design, that is, wing design with simultaneous consideration of the disciplines of structures and aerodynamics. In addition to MDO formulation theory, particular attention is paid to practical problems. These problems include validation of the individual discipline analysis codes, the need for distributed computing, and the need for inexpensive models to serve as optimization surrogates for compute-intensive aerodynamics codes. An MDO solution method and associated test results will be presented.

  11. Optimization-based additive decomposition of weakly coercive problems with applications

    DOE PAGES Beta

    Bochev, Pavel B.; Ridzal, Denis

    2016-01-27

    In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.

  12. The atmospheric component of the Mediterranean Sea water budget in a WRF multi-physics ensemble and observations

    NASA Astrophysics Data System (ADS)

    Di Luca, Alejandro; Flaounas, Emmanouil; Drobinski, Philippe; Brossier, Cindy Lebeaupin

    2014-11-01

    The use of high resolution atmosphere-ocean coupled regional climate models to study possible future climate changes in the Mediterranean Sea requires an accurate simulation of the atmospheric component of the water budget (i.e., evaporation, precipitation and runoff). A specific configuration of the version 3.1 of the weather research and forecasting (WRF) regional climate model was shown to systematically overestimate the Mediterranean Sea water budget mainly due to an excess of evaporation (~1,450 mm yr-1) compared with observed estimations (~1,150 mm yr-1). In this article, a 70-member multi-physics ensemble is used to try to understand the relative importance of various sub-grid scale processes in the Mediterranean Sea water budget and to evaluate its representation by comparing simulated results with observed-based estimates. The physics ensemble was constructed by performing 70 1-year long simulations using version 3.3 of the WRF model by combining six cumulus, four surface/planetary boundary layer and three radiation schemes. Results show that evaporation variability across the multi-physics ensemble (˜10 % of the mean evaporation) is dominated by the choice of the surface layer scheme that explains more than ˜70 % of the total variance and that the overestimation of evaporation in WRF simulations is generally related with an overestimation of surface exchange coefficients due to too large values of the surface roughness parameter and/or the simulation of too unstable surface conditions. Although the influence of radiation schemes on evaporation variability is small (˜13 % of the total variance), radiation schemes strongly influence exchange coefficients and vertical humidity gradients near the surface due to modifications of temperature lapse rates. The precipitation variability across the physics ensemble (˜35 % of the mean precipitation) is dominated by the choice of both cumulus (˜55 % of the total variance) and planetary boundary layer (˜32 % of

  13. NASTRAN thermal analyzer: Theory and application including a guide to modeling engineering problems, volume 2. [sample problem library guide

    NASA Technical Reports Server (NTRS)

    Jackson, C. E., Jr.

    1977-01-01

    A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.

  14. Algorithmic Perspectives of Network Transitive Reduction Problems and their Applications to Synthesis and Analysis of Biological Networks

    PubMed Central

    Aditya, Satabdi; DasGupta, Bhaskar; Karpinski, Marek

    2013-01-01

    In this survey paper, we will present a number of core algorithmic questions concerning several transitive reduction problems on networks that have applications in network synthesis and analysis involving cellular processes. Our starting point will be the so-called minimum equivalent digraph problem, a classic computational problem in combinatorial algorithms. We will subsequently consider a few non-trivial extensions or generalizations of this problem motivated by applications in systems biology. We will then discuss the applications of these algorithmic methodologies in the context of three major biological research questions: synthesizing and simplifying signal transduction networks, analyzing disease networks, and measuring redundancy of biological networks. PMID:24833332
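
    For acyclic networks the transitive reduction underlying these problems is unique and simple to compute: delete every edge whose endpoints remain connected by a longer path. A minimal sketch (adjacency-dict representation and helper names are illustrative, not from the survey):

```python
def transitive_reduction(adj):
    """Transitive reduction of a DAG given as {node: set(successors)}:
    remove each edge (u, v) for which v is still reachable from u
    through a path of length >= 2."""
    def reachable(src, dst):
        # DFS from src to dst, skipping the direct edge src -> dst.
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            for nxt in adj.get(node, set()):
                if node == src and nxt == dst:
                    continue                      # ignore the direct edge
                if nxt == dst:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    reduced = {u: set(vs) for u, vs in adj.items()}
    for u in adj:
        for v in list(adj[u]):
            if reachable(u, v):                   # edge is redundant
                reduced[u].discard(v)
    return reduced
```

    On the diamond a→b→c with shortcut a→c, the shortcut is dropped because c stays reachable via b; the survey's minimum equivalent digraph problem generalizes this to graphs with cycles, where it becomes computationally hard.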

  15. Numerical simulation and experimental validation of biofilm in a multi-physics framework using an SPH based method

    NASA Astrophysics Data System (ADS)

    Soleimani, Meisam; Wriggers, Peter; Rath, Henryke; Stiesch, Meike

    2016-10-01

    In this paper, a 3D computational model has been developed to investigate biofilms in a multi-physics framework using smoothed particle hydrodynamics (SPH) based on a continuum approach. Biofilm formation is a complex process in the sense that several physical phenomena are coupled and consequently different time-scales are involved. On one hand, biofilm growth is driven by biological reaction and nutrient diffusion and on the other hand, it is influenced by fluid flow causing biofilm deformation and interface erosion in the context of fluid and deformable solid interaction. The geometrical and numerical complexity arising from these phenomena poses serious complications and challenges in grid-based techniques such as finite element. Here the solution is based on SPH as one of the powerful meshless methods. SPH based computational modeling is quite new in the biological community and the method is uniquely robust in capturing the interface-related processes of biofilm formation such as erosion. The obtained results show a good agreement with experimental and published data which demonstrates that the model is capable of simulating and predicting overall spatial and temporal evolution of biofilm.

  16. Uncertainties propagation in the framework of a Rod Ejection Accident modeling based on a multi-physics approach

    SciTech Connect

    Le Pallec, J. C.; Crouzet, N.; Bergeaud, V.; Delavaud, C.

    2012-07-01

    The control of uncertainties in the field of reactor physics and their propagation in best-estimate modeling are a major issue in safety analysis. In this framework, the CEA develops a methodology to perform multi-physics simulations including uncertainty analysis. The present paper aims to present and apply this methodology to the analysis of an accidental situation such as a REA (Rod Ejection Accident). This accident is characterized by a strong interaction between the different areas of reactor physics (neutronics, fuel thermal behavior and thermal hydraulics). The modeling is performed with the CRONOS2 code. The uncertainty analysis has been conducted with the URANIE platform developed by the CEA: for each identified response of the modeling (output), and considering a set of key parameters with their uncertainties (input), a surrogate model in the form of a neural network has been produced. The set of neural networks is then used to carry out a sensitivity analysis, which consists of a global variance analysis with the determination of the Sobol indices for all responses. The sensitivity indices are obtained for the input parameters by an approach based on the use of polynomial chaos. The present exercise helped to develop a methodological flow scheme and to consolidate the use of the URANIE tool in the framework of parallel calculations. Finally, the use of polynomial chaos allowed computing high-order sensitivity indices, thus highlighting and classifying the influence of the identified uncertainties on each response of the analysis (single and interaction effects). (authors)

  17. Multi-Physics Modeling of Molten Salt Transport in Solid Oxide Membrane (SOM) Electrolysis and Recycling of Magnesium

    SciTech Connect

    Powell, Adam; Pati, Soobhankar

    2012-03-11

    Solid Oxide Membrane (SOM) Electrolysis is a new energy-efficient zero-emissions process for producing high-purity magnesium and high-purity oxygen directly from industrial-grade MgO. SOM Recycling combines SOM electrolysis with electrorefining, continuously and efficiently producing high-purity magnesium from low-purity partially oxidized scrap. In both processes, electrolysis and/or electrorefining take place in the crucible, where raw material is continuously fed into the molten salt electrolyte, producing magnesium vapor at the cathode and oxygen at the inert anode inside the SOM. This paper describes a three-dimensional multi-physics finite-element model of ionic current, fluid flow driven by argon bubbling and thermal buoyancy, and heat and mass transport in the crucible. The model predicts the effects of stirring on the anode boundary layer and its time scale of formation, and the effect of natural convection at the outer wall. MOxST has developed this model as a tool for scale-up design of these closely-related processes.

  18. Numerical Stability and Accuracy of Temporally Coupled Multi-Physics Modules in Wind-Turbine CAE Tools

    SciTech Connect

    Gasmi, A.; Sprague, M. A.; Jonkman, J. M.; Jones, W. B.

    2013-02-01

    In this paper we examine the stability and accuracy of numerical algorithms for coupling time-dependent multi-physics modules relevant to computer-aided engineering (CAE) of wind turbines. This work is motivated by an in-progress major revision of FAST, the National Renewable Energy Laboratory's (NREL's) premier aero-elastic CAE simulation tool. We employ two simple examples as test systems, while algorithm descriptions are kept general. Coupled-system governing equations are framed in monolithic and partitioned representations as differential-algebraic equations. Explicit and implicit loose partition coupling is examined. In explicit coupling, partitions are advanced in time from known information. In implicit coupling, there is dependence on other-partition data at the next time step; coupling is accomplished through a predictor-corrector (PC) approach. Numerical time integration of coupled ordinary-differential equations (ODEs) is accomplished with one of three, fourth-order fixed-time-increment methods: Runge-Kutta (RK), Adams-Bashforth (AB), and Adams-Bashforth-Moulton (ABM). Through numerical experiments it is shown that explicit coupling can be dramatically less stable and less accurate than simulations performed with the monolithic system. However, PC implicit coupling restored stability and fourth-order accuracy for ABM; only second-order accuracy was achieved with RK integration. For systems without constraints, explicit time integration with AB and explicit loose coupling exhibited desired accuracy and stability.
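
    The explicit-versus-predictor-corrector contrast described above can be reproduced on a toy two-partition system. The sketch below uses first-order time stepping for brevity (the paper's experiments use fourth-order RK/AB/ABM integrators), so only the relative behavior of the two coupling strategies is meaningful; all names are illustrative.

```python
import math

def coupled_step_demo(dt, t_end, use_corrector):
    """Partitioned integration of the monolithic system x' = -y, y' = x
    (exact solution x = cos t).  Each 'module' advances using the other
    module's data: the explicit variant uses stale data only, while the
    predictor-corrector variant redoes the step with a trapezoidal
    average of old and predicted other-partition data."""
    x, y = 1.0, 0.0
    for _ in range(round(t_end / dt)):
        xp = x - dt * y                  # predictor: stale other-partition data
        yp = y + dt * x
        if use_corrector:                # corrector: use predicted data
            x, y = x - 0.5 * dt * (y + yp), y + 0.5 * dt * (x + xp)
        else:                            # explicit loose coupling
            x, y = xp, yp
    return x

err_explicit = abs(coupled_step_demo(0.01, 1.0, False) - math.cos(1.0))
err_pc = abs(coupled_step_demo(0.01, 1.0, True) - math.cos(1.0))
```

    The corrector pass recovers the order of the underlying scheme (second order here), while explicit loose coupling degrades it to first order, mirroring the accuracy loss the paper reports for RK integration without correction.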

  19. Numerical simulation and experimental validation of biofilm in a multi-physics framework using an SPH based method

    NASA Astrophysics Data System (ADS)

    Soleimani, Meisam; Wriggers, Peter; Rath, Henryke; Stiesch, Meike

    2016-06-01

    In this paper, a 3D computational model has been developed to investigate biofilms in a multi-physics framework using smoothed particle hydrodynamics (SPH) based on a continuum approach. Biofilm formation is a complex process in the sense that several physical phenomena are coupled and consequently different time-scales are involved. On one hand, biofilm growth is driven by biological reaction and nutrient diffusion and on the other hand, it is influenced by fluid flow causing biofilm deformation and interface erosion in the context of fluid and deformable solid interaction. The geometrical and numerical complexity arising from these phenomena poses serious complications and challenges in grid-based techniques such as finite element. Here the solution is based on SPH as one of the powerful meshless methods. SPH based computational modeling is quite new in the biological community and the method is uniquely robust in capturing the interface-related processes of biofilm formation such as erosion. The obtained results show a good agreement with experimental and published data which demonstrates that the model is capable of simulating and predicting overall spatial and temporal evolution of biofilm.

  20. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    SciTech Connect

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  1. Problems of in vitro SPF measurements brought about by viscous fingering generated during sunscreen applications.

    PubMed

    Fujikake, K; Tago, S; Plasson, R; Nakazawa, R; Okano, K; Maezawa, D; Mukawa, T; Kuroda, A; Asakura, K

    2014-01-01

    To date, no worldwide standard in vitro method has been established for the determination of the sun protection factor (SPF), since there are many problems in terms of its repeatability and reliability. Here, we have studied the problems in in vitro SPF measurements brought about by the phenomenon called viscous fingering. A spatially periodic stripe pattern is usually formed spontaneously when a viscous fluid is applied onto a solid substrate. For in vitro SPF measurements, the recommended amount of sunscreen is applied onto a substrate, and the intensity of the UV light transmitted through the sunscreen layer is evaluated. Our theoretical analysis indicated that nonuniformity of the thickness of the sunscreen layer varied the net UV absorbance. Pseudo-sunscreen composites having no phase separation structures were prepared and applied on a quartz plate for measurements of the UV absorbance. Two types of applicators, a block applicator and a 4-sided applicator, were used. A flat surface was always obtained when the 4-sided applicator was used, while the spatially periodic stripe pattern was always generated spontaneously when the block applicator was used. The net UV absorbance of the layer on which the stripe pattern was formed was found to be lower than that of a flat layer having the same average thickness. Theoretical simulations quantitatively reproduced the variation of the net UV absorbance caused by the change in the geometry of the layer. The results of this study indicate the definite necessity of strict regulations on the coating method of sunscreens for the establishment of the in vitro SPF test method.

  2. Material derivatives of boundary integral operators in electromagnetism and application to inverse scattering problems

    NASA Astrophysics Data System (ADS)

    Ivanyshyn Yaman, Olha; Le Louër, Frédérique

    2016-09-01

    This paper deals with the material derivative analysis of the boundary integral operators arising from the scattering theory of time-harmonic electromagnetic waves and its application to inverse problems. We present new results using the Piola transform of the boundary parametrisation to transport the integral operators on a fixed reference boundary. The transported integral operators are infinitely differentiable with respect to the parametrisations and simplified expressions of the material derivatives are obtained. Using these results, we extend a nonlinear integral equations approach developed for solving acoustic inverse obstacle scattering problems to electromagnetism. The inverse problem is formulated as a pair of nonlinear and ill-posed integral equations for the unknown boundary representing the boundary condition and the measurements, for which the iteratively regularized Gauss-Newton method can be applied. The algorithm has the interesting feature that it avoids the numerous numerical solution of boundary value problems at each iteration step. Numerical experiments are presented in the special case of star-shaped obstacles.

  3. Application of different variants of the BEM in numerical modeling of bioheat transfer problems.

    PubMed

    Majchrzak, Ewa

    2013-09-01

    Heat transfer processes proceeding in living organisms are described by different mathematical models. In particular, the typical continuous model of bioheat transfer is based on the most popular Pennes equation, but the Cattaneo-Vernotte equation and the dual phase lag equation are also used. It should be pointed out that vascular models are examined in parallel; in these, the energy equations are formulated separately for the large blood vessels and the tissue domain. In the paper the different variants of the boundary element method are discussed as a tool for the numerical solution of bioheat transfer problems. For steady-state problems and the vascular models, the classical BEM algorithm and also the multiple reciprocity BEM are presented. For transient problems connected with the heating of tissue, various tissue models are considered, to which the 1st scheme of the BEM, the BEM using discretization in time, and the general BEM are applied. Examples of computations illustrate the possibilities of practical application of the boundary element method to bioheat transfer problems. PMID:24396977
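
    For reference, the Pennes equation that these BEM variants discretize can be illustrated with a simple explicit finite-difference sketch in 1-D, deliberately not BEM; the tissue parameters are typical textbook values, not taken from the paper.

```python
import numpy as np

def pennes_1d(n=51, dt=0.01, steps=2000):
    """Explicit FD solution of the 1-D Pennes bioheat equation
        rho*c*dT/dt = k*d2T/dx2 + w_b*c_b*(T_a - T) + q_m
    on a 0.05 m tissue slab: body-core temperature held at x = 0,
    a heated surface at x = L.  Illustrative parameter values."""
    L, k, rho, c = 0.05, 0.5, 1000.0, 4000.0     # slab, conductivity, tissue
    w_b, c_b, T_a, q_m = 0.5, 4000.0, 37.0, 420.0  # perfusion, blood, metabolism
    dx = L / (n - 1)
    T = np.full(n, 37.0)
    for _ in range(steps):
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += dt / (rho * c) * (
            k * lap + w_b * c_b * (T_a - T[1:-1]) + q_m)
        T[0], T[-1] = 37.0, 45.0                 # Dirichlet boundaries
    return T
```

    The blood-perfusion term acts as a volumetric sink pulling tissue toward the arterial temperature, which is what distinguishes Pennes from a plain heat equation and motivates the specialized fundamental solutions used in the BEM variants above.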

  4. Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems

    NASA Astrophysics Data System (ADS)

    Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong

    2016-07-01

    As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.

  5. Application of High Order Acoustic Finite Elements to Transmission Losses and Enclosure Problems

    NASA Technical Reports Server (NTRS)

    Craggs, A.; Stevenson, G.

    1985-01-01

    A family of acoustic finite elements was developed based on C0 continuity (acoustic pressure being the nodal variable) and the no-flow condition. The family includes triangular, quadrilateral and hexahedral isoparametric elements with linear, quadratic and cubic variation in modelling and distortion. Of greatest use in problems with irregular boundaries are the cubic isoparametric elements: the 32-node hexahedral element for three-dimensional systems, and the twelve-node quadrilateral and ten-node triangular elements for two-dimensional/axisymmetric applications. These elements were applied to problems involving cavity resonances, transmission loss in silencers and the study of end effects, using a Floating Point Systems 164 attached array processor accessed through an Amdahl 5860 mainframe. The elements are presently being used to study the end effects associated with duct terminations within finite enclosures. The transmission losses with various silencers and sidebranches in ducts are also being studied using the same elements.

  6. Fuzzy evolutionary algorithm to solve chromosomes conflict and its application to lecture schedule problems

    NASA Astrophysics Data System (ADS)

    Marwati, Rini; Yulianti, Kartika; Pangestu, Herny Wulandari

    2016-02-01

    A fuzzy evolutionary algorithm is an integration of an evolutionary algorithm and a fuzzy system. In this paper, we present an application of a genetic algorithm within a fuzzy evolutionary algorithm to detect and solve chromosome conflicts. A chromosome conflict is identified by the existence of any two genes in a chromosome that have the same values as two genes in another chromosome. Based on this approach, we construct an algorithm to solve a lecture scheduling problem. Time codes, lecture codes, lecturer codes, and room codes are defined as genes. They are collected to become chromosomes. As a result, a conflicted schedule turns into a chromosome conflict. Results of a Delphi implementation show that the conflicted lecture schedule problem is solvable by this algorithm.
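
    Under the conflict definition quoted above (two genes in one chromosome matching two genes in another), detection reduces to a set intersection. The schedule encoding below is a hypothetical toy in place of the paper's Delphi implementation; gene values are invented.

```python
def has_conflict(chrom_a, chrom_b):
    """Chromosome conflict per the definition above: at least two genes
    of one chromosome carry the same values as two genes of the other."""
    return len(set(chrom_a) & set(chrom_b)) >= 2

# Toy schedule chromosomes: each gene is a (time code, room code) pair.
mon = [("T1", "R101"), ("T2", "R102"), ("T3", "R101")]
tue = [("T1", "R101"), ("T3", "R101")]   # shares two genes with mon: conflict
wed = [("T1", "R205"), ("T2", "R102")]   # shares only one gene: no conflict
```

    In the genetic algorithm, such a predicate would feed the fitness or repair step, penalizing or rewriting chromosomes that clash with others in the population.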

  7. The Effects of Problem Solving Applications on the Development of Science Process Skills, Logical Thinking Skills and Perception on Problem Solving Ability in the Science Laboratory

    ERIC Educational Resources Information Center

    Seyhan, Hatice Güngör

    2015-01-01

    This study was conducted with 98 prospective science teachers, comprising 50 prospective teachers who had participated in problem-solving applications and 48 prospective teachers who were taught with a more researcher-oriented teaching method in science laboratories. The first aim of this study was to determine the levels of…

  8. Scattering by randomly oriented ellipsoids: Application to aerosol and cloud problems

    NASA Technical Reports Server (NTRS)

    Asano, S.; Sato, M.; Hansen, J. E.

    1979-01-01

    A program was developed for computing the scattering and absorption by arbitrarily oriented and randomly oriented prolate and oblate spheroids. This permits examination of the effect of particle shape for cases ranging from needles through spheres to platelets. Applications of this capability to aerosol and cloud problems are discussed. Initial results suggest that the effect of nonspherical particle shape on transfer of radiation through aerosol layers and cirrus clouds, as required for many climate studies, can be readily accounted for by defining an appropriate effective spherical particle radius.

  9. Application of a hybrid generation/utility assessment heuristic to a class of scheduling problems

    NASA Technical Reports Server (NTRS)

    Heyward, Ann O.

    1989-01-01

    A two-stage heuristic solution approach for a class of multiobjective, n-job, 1-machine scheduling problems is described. Minimization of job-to-job interference for n jobs is sought. The first stage generates alternative schedule sequences by interchanging pairs of schedule elements. The set of alternative sequences can represent nodes of a decision tree; each node is reached via decision to interchange job elements. The second stage selects the parent node for the next generation of alternative sequences through automated paired comparison of objective performance for all current nodes. An application of the heuristic approach to communications satellite systems planning is presented.
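
    The two-stage generate/select loop described above can be sketched as a pairwise-interchange local search. The interference objective below (summed penalties over adjacent schedule positions) is an invented stand-in for the satellite-systems objective, and both function names are illustrative.

```python
from itertools import combinations

def interference(seq, penalty):
    """Toy stand-in objective: summed interference penalty over adjacent
    positions of the schedule sequence."""
    return sum(penalty[a][b] for a, b in zip(seq, seq[1:]))

def interchange_heuristic(seq, penalty):
    """Stage 1: generate alternative sequences by interchanging pairs of
    schedule elements.  Stage 2: adopt an alternative as the new parent
    whenever it improves the objective; stop when no interchange helps."""
    best = list(seq)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(best)), 2):
            cand = list(best)
            cand[i], cand[j] = cand[j], cand[i]
            if interference(cand, penalty) < interference(best, penalty):
                best, improved = cand, True   # new parent node
    return best
```

    Each accepted swap corresponds to descending one level of the decision tree in the abstract; the objective strictly decreases at every acceptance, so the loop terminates at a swap-local optimum.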

  10. Application of remote sensing to state and regional problems. [for Mississippi

    NASA Technical Reports Server (NTRS)

    Miller, W. F.; Bouchillon, C. W.; Harris, J. C.; Carter, B.; Whisler, F. D.; Robinette, R.

    1974-01-01

    The primary purpose of the remote sensing applications program is for various members of the university community to participate in activities that improve the effective communication between the scientific community engaged in remote sensing research and development and the potential users of modern remote sensing technology. Activities of this program are assisting the State of Mississippi in recognizing and solving its environmental, resource and socio-economic problems through inventory, analysis, and monitoring by appropriate remote sensing systems. Objectives, accomplishments, and current status of the following individual projects are reported: (1) bark beetle project; (2) state park location planning; and (3) waste source location and stream channel geometry monitoring.

  11. Fixed point results for G-α-contractive maps with application to boundary value problems.

    PubMed

    Hussain, Nawab; Parvaneh, Vahid; Roshan, Jamal Rezaei

    2014-01-01

    We unify the concepts of G-metric, metric-like, and b-metric to define the new notion of generalized b-metric-like space and discuss its topological and structural properties. In addition, certain fixed point theorems for two classes of G-α-admissible contractive mappings in such spaces are obtained and some new fixed point results are derived in the corresponding partially ordered space. Moreover, some examples and an application to the existence of a solution for the first-order periodic boundary value problem are provided here to illustrate the usability of the obtained results.

  12. METLIN-PC: An applications-program package for problems of mathematical programming

    SciTech Connect

    Pshenichnyi, B.N.; Sobolenko, L.A.; Sosnovskii, A.A.; Aleksandrova, V.M.; Shul'zhenko, Yu.V.

    1994-05-01

    The METLIN-PC applications-program package (APP) was developed at the V.M. Glushkov Institute of Cybernetics of the Academy of Sciences of Ukraine on IBM PC XT and AT computers. The present version of the package was written in Turbo Pascal and Fortran-77. The METLIN-PC is chiefly designed for the solution of smooth problems of mathematical programming and is a further development of the METLIN prototype, which was created earlier on a BESM-6 computer. The principal property of the previous package is retained: the applications modules employ a single approach based on the linearization method of B.N. Pshenichnyi; hence the name "METLIN."

  13. The Application of Problem Solving Method on Science Teacher Trainees on the Solution of the Environmental Problems

    ERIC Educational Resources Information Center

    Dogru, Mustafa

    2008-01-01

    Helping students improve their problem-solving skills is the primary target of science teacher trainees. In modern science education, training should rely on methods that improve students' thinking skills, their ability to connect events and concepts, and their scientific process skills, rather than on giving information and definitions. One of…

  14. Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP

    SciTech Connect

    Downar, Thomas; Seker, Volkan

    2013-04-30

    Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local "hot" spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length that describes the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor and tens to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale, the level of a fuel kernel within a pebble in a PBR or a fuel compact in a PMR, which needs to be resolved in order to calculate the peak temperature in a fuel kernel.

  15. Gas dynamics/furnace implosion problems validation and application of the EPRI program DUCSYS

    SciTech Connect

    Forrest, T.J.; Green, C.H.; Rea, J.

    1995-06-01

    Considerable utility concern about power plant boiler implosion risks has recently resurfaced. This results largely from the current trend toward retrofitting environmental equipment to fossil-fuel-fired boilers, an action which is often accompanied by an increase in the risk faced, under fault conditions, from large negative pressure excursions in the furnace and its associated ductwork. Accompanying this trend has been a tightening of industry regulations with the publication of new, stricter guidelines on the prevention of furnace implosions and explosions by the National Fire Protection Association. The combined effect has been the need to assess boiler implosion risks as an integral part of fossil-fuel-fired boiler retrofit design studies. The DUCSYS gas systems dynamics modeling system, which is currently being developed under contract by PowerGen, is EPRI's response to this utility demand. This paper briefly describes the physical processes involved in the implosion phenomenon and discusses the main characteristics of the DUCSYS modeling system. Following this, the application of DUCSYS to three power plant problems is discussed. The main study concerns the conversion of an existing oil-fired boiler to burn Orimulsion, a technology in which PowerGen leads the world. This application involves the retrofitting of an electrostatic precipitator to the plant. DUCSYS is not, however, purely a system for investigating furnace implosion risks, but is being developed by PowerGen, on behalf of EPRI, as a general power plant gas systems dynamics modeling system. The final two application studies consider the application of DUCSYS to two more general gas dynamics problems.

  16. Resolving all-order method convergence problems for atomic physics applications

    SciTech Connect

    Gharibnejad, H.; Derevianko, A.; Eliav, E.; Safronova, M. S.

    2011-05-15

    The development of the relativistic all-order method, in which all single, double, and partial triple excitations of the Dirac-Hartree-Fock wave function are included to all orders of perturbation theory, has led to many important results for the study of fundamental symmetries, the development of atomic clocks, ultracold atom physics, and other areas, and has provided recommended values of many atomic properties, critically evaluated for their accuracy, for a large number of monovalent systems. This approach requires iterative solution of the linearized coupled-cluster equations, leading to convergence issues in cases where correlation corrections are particularly large or produce an oscillating pattern. Moreover, these issues lead to similar problems in the configuration-interaction (CI)+all-order method for many-particle systems. In this work, we have resolved most of the known convergence problems by applying two different convergence stabilizer methods, namely reduced linear equations and direct inversion in the iterative subspace. Examples are presented for B, Al, Zn+, and Yb+. Solving these convergence problems greatly expands the number of atomic species that can be treated with the all-order methods and is anticipated to facilitate many interesting future applications.
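The second stabilizer mentioned, direct inversion in the iterative subspace (DIIS), can be illustrated on a generic fixed-point iteration. This is a minimal sketch of the Pulay scheme, not the authors' coupled-cluster implementation:

```python
import numpy as np

def diis_solve(g, x0, m=6, tol=1e-10, maxit=100):
    # Pulay-style DIIS for a fixed-point iteration x <- g(x): keep the last
    # m (iterate, residual) pairs, find the affine combination with the
    # smallest combined residual, and take one step from it.
    xs, rs = [], []
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        r = g(x) - x                      # residual of the current iterate
        if np.linalg.norm(r) < tol:
            return x
        xs.append(x); rs.append(r)
        if len(xs) > m:
            xs.pop(0); rs.pop(0)
        n = len(rs)
        # DIIS system: minimise |sum c_i r_i| subject to sum c_i = 1.
        B = np.ones((n + 1, n + 1)); B[-1, -1] = 0.0
        B[:n, :n] = [[ri @ rj for rj in rs] for ri in rs]
        rhs = np.zeros(n + 1); rhs[-1] = 1.0
        c = np.linalg.lstsq(B, rhs, rcond=None)[0][:n]
        x = sum(ci * (xi + ri) for ci, xi, ri in zip(c, xs, rs))
    return x
```

The oscillating-convergence case mentioned above corresponds, in this toy setting, to a contraction with a large negative eigenvalue, which DIIS damps quickly.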

  17. A special application of absolute value techniques in authentic problem solving

    NASA Astrophysics Data System (ADS)

    Stupel, Moshe

    2013-06-01

    There are at least five different equivalent definitions of the absolute value concept. In instances where the task is an equation or inequality with only one or two absolute value expressions, it is a worthy educational experience for learners to solve the task using each one of the definitions. On the other hand, if more than two absolute value expressions are involved, the definition that is most helpful is the one involving solving by intervals and evaluating critical points. In point of fact, application of this technique is one reason that the topic of absolute value is important in mathematics in general and in mathematics teaching in particular. We present here an authentic practical problem that is solved using absolute values and the 'intervals' method, after which the solution is generalized with surprising results. This authentic problem also lends itself to investigation using educational technological tools such as GeoGebra dynamic geometry software: mathematics teachers can allow their students to initially cope with the problem by working in an inductive environment in which they conduct virtual experiments until a solid conjecture has been reached, after which they should prove the conjecture deductively, using classic theoretical mathematical tools.
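The 'intervals' technique described above is easy to mechanize: between consecutive critical points a sum of absolute values is linear, so each piece contributes at most one root. A hypothetical helper (names ours) for equations of the form |x − p1| + … + |x − pn| = target:

```python
def solve_abs_sum(points, target):
    # Solve sum_i |x - p_i| = target by the 'intervals' method: evaluate the
    # function at the critical points, then interpolate on each linear piece.
    pts = sorted(points)
    f = lambda x: sum(abs(x - p) for p in pts)
    lo, hi = pts[0] - target, pts[-1] + target   # pad so all roots lie inside
    knots = [lo] + pts + [hi]
    roots = []
    for a, b in zip(knots, knots[1:]):
        fa, fb = f(a), f(b)
        if fa == fb:
            continue          # flat piece: no isolated root on this interval
        t = (target - fa) / (fb - fa)   # linear interpolation on the piece
        if 0 <= t <= 1:
            roots.append(a + t * (b - a))
    return sorted(set(roots))
```

For example, |x − 1| + |x + 2| = 5 has the two roots x = −3 and x = 2, one on each unbounded linear piece.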

  18. Use of a Mobile Application to Help Students Develop Skills Needed in Solving Force Equilibrium Problems

    NASA Astrophysics Data System (ADS)

    Yang, Eunice

    2016-02-01

    This paper discusses the use of a free mobile engineering application (app) called Autodesk® ForceEffect™ to provide students assistance with spatial visualization of forces and more practice in solving/visualizing statics problems compared to the traditional pencil-and-paper method. ForceEffect analyzes static rigid-body systems using free-body diagrams (FBDs) and provides solutions in real time. It is cost-free software that is available for download on the Internet. It is easy to use, and the learning curve is approximately two hours using the tutorial provided within the app. ForceEffect can present students with different problem modalities (textbook, real-world, and design) to help them acquire and improve the skills needed to solve force equilibrium problems. Although this paper focuses on the engineering mechanics statics course, the technology discussed is also relevant to the introductory physics course.

  19. Reduced-Size Integer Linear Programming Models for String Selection Problems: Application to the Farthest String Problem.

    PubMed

    Zörnig, Peter

    2015-08-01

    We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially smaller than in the integer linear programming models known in the literature. Moreover, the solution of the linear-programming relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
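For orientation, the underlying optimization problem can be stated with a tiny exhaustive solver. This brute-force reference is our own illustration and is unrelated to the paper's compact integer programs, which are what make realistic instances tractable:

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def farthest_string(strings, alphabet):
    # Farthest string problem: find a string maximising the minimum Hamming
    # distance to every input string (exhaustive search, tiny instances only).
    L = len(strings[0])
    best, best_d = None, -1
    for cand in product(alphabet, repeat=L):
        t = "".join(cand)
        d = min(hamming(t, s) for s in strings)
        if d > best_d:
            best, best_d = t, d
    return best, best_d
```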

  20. Quick-hardening problems are eliminated with spray gun modification which mixes resin and accelerator liquids during application

    NASA Technical Reports Server (NTRS)

    Johnson, O. W.

    1964-01-01

    A modified spray gun, with separate containers for resin and additive components, solves the problems of quick hardening and nozzle clogging. At application, separate atomizers spray the liquids in front of the nozzle face where they blend.

  1. Recent advances, trends and new perspectives via enthalpy-based finite element formulations for applications to solidification problems

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Namburu, Raju R.

    1990-01-01

    The present paper describes recent advances and trends in finite element developments and applications for solidification problems. In particular, in comparison to traditional methods of approach, new enthalpy-based architectures based on a generalized trapezoidal family of representations are presented which provide different perspectives, physical interpretation and solution architectures for effective numerical simulation of phase change processes encountered in solidification problems. Various numerical test models are presented and the results support the proposition for employing such formulations for general phase change applications.

  2. Application of the FETI Method to ASCI Problems: Scalability Results on a Thousand-Processors and Discussion of Highly Heterogeneous Problems

    SciTech Connect

    Bhardwaj, M.; Day, D.; Farhat, C.; Lesoinne, M.; Pierson, K; Rixen, D.

    1999-04-01

    We report on the application of the one-level FETI method to the solution of a class of structural problems associated with the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). We focus on numerical and parallel scalability issues, and discuss the treatment by FETI of severe structural heterogeneities. We also report on preliminary performance results obtained on the ASCI Option Red supercomputer configured with as many as one thousand processors, for problems with as many as 5 million degrees of freedom.

  3. Science and Technology and Its Application to the Problems of Pollution, Transportation and Employment. Public Science Policy: Background Readings.

    ERIC Educational Resources Information Center

    Galvin, Donald W.; Jannakos, Nick

    The document covers what government leaders and the science and technology community must do to set up the mechanism and lines of communication required to bring technology to bear on current public problems. It identifies potential applications of new technology to social problems in the areas of pollution, transportation, employment and future…

  4. Final Report of Optimization Algorithms for Hierarchical Problems, with Applications to Nanoporous Materials

    SciTech Connect

    Nash, Stephen G.

    2013-11-11

    The research focused on the modeling and optimization of nanoporous materials. In the systems with hierarchical structure that we consider, the physics changes as the scale of the problem is reduced, and it can be important to account for physics at the fine level to obtain accurate approximations at coarser levels. For example, nanoporous materials hold promise for energy production and storage. A significant issue is the fabrication of channels within these materials to allow rapid diffusion through the material. One goal of our research was to apply optimization methods to the design of nanoporous materials. Such problems are large and challenging, with hierarchical structure that we believe can be exploited, and with a large range of important scales, down to atomistic. This requires research on large-scale optimization for systems that exhibit different physics at different scales, and the development of algorithms applicable to designing nanoporous materials for many important applications in energy production, storage, distribution, and use. Our research had two major thrusts. The first was hierarchical modeling: we developed and studied hierarchical optimization models for nanoporous materials. The models have hierarchical structure and attempt to balance the conflicting aims of model fidelity and computational tractability. In addition, we analyzed the general hierarchical model, as well as the specific application models, to determine their properties, particularly those relevant to the hierarchical optimization algorithms. The second thrust was to develop, analyze, and implement a class of hierarchical optimization algorithms and apply them to the hierarchical models we developed. We adapted and extended the optimization-based multigrid algorithms of Lewis and Nash to the optimization models exemplified by the hierarchical optimization model. This class of multigrid algorithms has been shown to be a powerful tool for

  5. A review of vector convergence acceleration methods, with applications to linear algebra problems

    NASA Astrophysics Data System (ADS)

    Brezinski, C.; Redivo-Zaglia, M.

    In this article, in a few pages, we will try to give an idea of convergence acceleration methods and extrapolation procedures for vector sequences, and to present some applications to linear algebra problems and to the treatment of the Gibbs phenomenon for Fourier series in order to show their effectiveness. The interested reader is referred to the literature for more details. In the bibliography, due to space limitations, we will only give the more recent items; for older ones, we refer to Brezinski and Redivo-Zaglia (Extrapolation Methods: Theory and Practice, North-Holland, 1991). This book also contains, on a magnetic support, a library (in Fortran 77) of convergence acceleration algorithms and extrapolation methods.
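The simplest member of the family surveyed here is Aitken's Delta-squared process; the sketch below applies it to a scalar sequence (our illustration, not code from the article or its library):

```python
def aitken(seq):
    # Aitken Delta-squared extrapolation: from each consecutive triple
    # (s0, s1, s2), form s2 - (s2 - s1)^2 / (s2 - 2*s1 + s0).
    out = []
    for s0, s1, s2 in zip(seq, seq[1:], seq[2:]):
        denom = s2 - 2 * s1 + s0
        out.append(s2 - (s2 - s1) ** 2 / denom if denom else s2)
    return out
```

On a linearly convergent sequence such as the partial sums of a geometric series, the transformed sequence reaches the limit essentially immediately.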

  6. Application of the spectral Lanczos decomposition method to large-scale problems arising in geophysics

    SciTech Connect

    Tamarchenko, T.

    1996-12-31

    This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution of this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large-scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).
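The core SLDM idea, approximating f(A)b through the spectral decomposition of the small Lanczos tridiagonal matrix, can be sketched for a symmetric matrix as follows (no breakdown handling or reorthogonalization; an illustration only, not the paper's solver):

```python
import numpy as np

def lanczos_fA_b(A, b, f, m=30):
    # Build m Lanczos vectors, project A onto the Krylov subspace as a
    # tridiagonal T, and evaluate f on T's spectral decomposition:
    #   f(A) b  ~=  |b| * Q * V f(evals) V^T e1
    n = len(b)
    m = min(m, n)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, V = np.linalg.eigh(T)          # spectral decomposition of T_m
    e1 = np.zeros(m); e1[0] = 1.0
    return np.linalg.norm(b) * (Q @ (V @ (f(evals) * (V.T @ e1))))
```

With m equal to the matrix dimension the result is exact (up to roundoff), which makes the sketch easy to check against a dense eigendecomposition.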

  7. The application of the statistical theory of extreme values to gust-load problems

    NASA Technical Reports Server (NTRS)

    Press, Harry

    1950-01-01

    An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problems of predicting the frequency of encountering the larger gust loads and gust velocities for both specific test conditions as well as commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data and more reliable than those obtained in previous analyses. (author)

  8. On determining important aspects of mathematical models: Application to problems in physics and chemistry

    NASA Technical Reports Server (NTRS)

    Rabitz, Herschel

    1987-01-01

    The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominantly strong coupled dependent variables will result in the overall system sensitivity behavior collapsing into a simple set of scaling and self similarity relations amongst elements of the entire matrix of sensitivity coefficients. These general tools are generic in nature, but herein their application to problems arising in selected areas of physics and chemistry is presented.

  9. Applications of Quantum Theory of Atomic and Molecular Scattering to Problems in Hypersonic Flow

    NASA Technical Reports Server (NTRS)

    Malik, F. Bary

    1995-01-01

    The general status of a grant to investigate the applications of quantum theory in atomic and molecular scattering problems in hypersonic flow is summarized. Abstracts of five articles and eleven full-length articles published or submitted for publication are included as attachments. The following topics are addressed in these articles: fragmentation of heavy ions (HZE particles); parameterization of absorption cross sections; light ion transport; emission of light fragments as an indicator of equilibrated populations; quantum mechanical, optical model methods for calculating cross sections for particle fragmentation by hydrogen; evaluation of NUCFRG2, the semi-empirical nuclear fragmentation database; investigation of the single- and double-ionization of He by proton and anti-proton collisions; Bose-Einstein condensation of nuclei; and a liquid drop model in HZE particle fragmentation by hydrogen.

  10. The lectin-gold technique: an overview of applications to pathological problems.

    PubMed

    Versura, P; Maltarello, M C; Bonvicini, F; Laschi, R

    1989-06-01

    Lectins are proteins, mainly of vegetal origin, which recognize glycosidic residues with high specificity; for this property they have been used in many studies of molecular biology. Colloidal gold is at present the most popular electron-dense marker employed in immunocytochemistry, since it offers intrinsic and unique characteristics superior to those displayed by other markers. The cytochemical method which utilizes gold-labelled lectins takes advantage of both systems in order to optimize the localization of glycoconjugates. The present paper reviews both the technical aspects of the preparation of the lectin-gold complex and its application to some selected pathological problems. In particular, papers concerning the eye and ear tissues, the urinary, reproductive, nervous and digestive systems, and the blood cells are reviewed.

  11. Fourier emission infrared microspectrophotometer for surface analysis. I - Application to lubrication problems

    NASA Technical Reports Server (NTRS)

    Lauer, J. L.; King, V. W.

    1979-01-01

    A far-infrared interferometer was converted into an emission microspectrophotometer for surface analysis. To cover the mid-infrared as well as the far-infrared the Mylar beamsplitter was made replaceable by a germanium-coated salt plate, and the Moire fringe counting system used to locate the moveable Michelson mirror was improved to read 0.5 micron of mirror displacement. Digital electronics and a dedicated minicomputer were installed for data collection and processing. The most critical element for the recording of weak emission spectra from small areas was, however, a reflecting microscope objective and phase-locked signal detection with simultaneous referencing to a blackbody source. An application of the technique to lubrication problems is shown.

  12. On multidisciplinary research on the application of remote sensing to water resources problems

    NASA Technical Reports Server (NTRS)

    1972-01-01

    This research is directed toward development of a practical, operational remote sensing water quality monitoring system. To accomplish this, five fundamental aspects of the problem have been under investigation during the past three years. These are: (1) development of practical and economical methods of obtaining, handling and analyzing remote sensing data; (2) determination of the correlation between remote sensed imagery and actual water quality parameters; (3) determination of the optimum technique for monitoring specific water pollution parameters and for evaluating the reliability with which this can be accomplished; (4) determination of the extent of masking due to depth of penetration, bottom effects, film development effects, and angle falloff, and development of techniques to eliminate or minimize them; and (5) development of operational procedures which might be employed by a municipal, state or federal agency for the application of remote sensing to water quality monitoring, including space-generated data.

  13. Wine authenticity verification as a forensic problem: an application of likelihood ratio test to label verification.

    PubMed

    Martyna, Agnieszka; Zadora, Grzegorz; Stanimirova, Ivana; Ramos, Daniel

    2014-05-01

    The aim of the study was to investigate the applicability of the likelihood ratio (LR) approach for verifying the authenticity of 178 samples of 3 Italian wine brands (Barolo, Barbera, and Grignolino), each described by 27 parameters characterizing their chemical composition. Since the problem of product authenticity may be of forensic interest, the likelihood ratio approach, expressing the role of the forensic expert, was proposed for determining the true origin of the wines. It allows the evidence to be analysed in the context of two hypotheses: that the object belongs to one wine brand or to another. Various LR models were the subject of the research, and their accuracy was evaluated by the empirical cross entropy (ECE) approach. The rates of correct classification for the proposed models were higher than 90%, and their performance evaluated by ECE was satisfactory.
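The two-hypothesis likelihood ratio idea can be illustrated with a one-dimensional Gaussian toy model (our simplification; the study's LR models are multivariate and evaluated with ECE):

```python
import math

def gaussian_lr(x, class_a, class_b):
    # LR = p(evidence | brand A) / p(evidence | brand B), with each brand
    # modelled by a normal density fitted to its reference measurements.
    def fit(xs):
        m = sum(xs) / len(xs)
        v = sum((t - m) ** 2 for t in xs) / (len(xs) - 1)
        return m, v

    def pdf(t, m, v):
        return math.exp(-(t - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    ma, va = fit(class_a)
    mb, vb = fit(class_b)
    return pdf(x, ma, va) / pdf(x, mb, vb)
```

An LR above 1 supports the first hypothesis, below 1 the second; in forensic practice the magnitude, not just the sign, carries the evidential weight.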

  15. An extended theory of thin airfoils and its application to the biplane problem

    NASA Technical Reports Server (NTRS)

    Millikan, Clark B

    1931-01-01

    The report presents a new treatment, due essentially to von Karman, of the problem of the thin airfoil. The standard formulae for the angle of zero lift and zero moment are first developed and the analysis is then extended to give the effect of disturbing or interference velocities, corresponding to an arbitrary potential flow, which are superimposed on a normal rectilinear flow over the airfoil. An approximate method is presented for obtaining the velocities induced by a 2-dimensional airfoil at a point some distance away. In certain cases this method has considerable advantage over the simple "lifting line" procedure usually adopted. The interference effects for a 2-dimensional biplane are considered in the light of the previous analysis. The results of the earlier sections are then applied to the general problem of the interference effects for a 3-dimensional biplane, and formulae and charts are given which permit the characteristics of the individual wings of an arbitrary biplane without sweepback or dihedral to be calculated. In the final section the conclusions drawn from the application of the theory to a considerable number of special cases are discussed, and curves are given illustrating certain of these conclusions and serving as examples to indicate the nature of the agreement between the theory and experiment.

  16. Application and flight test of linearizing transformations using measurement feedback to the nonlinear control problem

    NASA Technical Reports Server (NTRS)

    Antoniewicz, Robert F.; Duke, Eugene L.; Menon, P. K. A.

    1991-01-01

    The design of nonlinear controllers has relied on the use of detailed aerodynamic and engine models that must be associated with the control law in the flight system implementation. Many of these controllers were applied to vehicle flight path control problems and have attempted to combine both inner- and outer-loop control functions in a single controller. An approach to the nonlinear trajectory control problem is presented. This approach uses linearizing transformations with measurement feedback to eliminate the need for detailed aircraft models in outer-loop control applications. By applying this approach and separating the inner-loop and outer-loop functions two things were achieved: (1) the need for incorporating detailed aerodynamic models in the controller is obviated; and (2) the controller is more easily incorporated into existing aircraft flight control systems. An implementation of the controller is discussed, and this controller is tested on a six degree-of-freedom F-15 simulation and in flight on an F-15 aircraft. Simulation data are presented which validates this approach over a large portion of the F-15 flight envelope. Proof of this concept is provided by flight-test data that closely matches simulation results. Flight-test data are also presented.

  17. Warhead verification as inverse problem: Applications of neutron spectrum unfolding from organic-scintillator measurements

    NASA Astrophysics Data System (ADS)

    Lawrence, Chris C.; Febbraro, Michael; Flaska, Marek; Pozzi, Sara A.; Becchetti, F. D.

    2016-08-01

    Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy, commonly eschewed as an ill-posed inverse problem, may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have a better-conditioned response matrix than standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as in other treaty-verification challenges.
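Unfolding of the kind discussed is often stabilized by regularization. A minimal Tikhonov-style sketch (not the authors' method; the response matrix R and data y are generic placeholders) is:

```python
import numpy as np

def unfold(R, y, lam=1e-2):
    # Tikhonov-regularised unfolding: recover a spectrum phi from measured
    # data y = R @ phi by solving (R^T R + lam I) phi = R^T y, where the
    # penalty lam damps the ill-posedness of the inverse problem.
    n = R.shape[1]
    return np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ y)
```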

  18. Parallel satellite orbital situational problems solver for space missions design and control

    NASA Astrophysics Data System (ADS)

    Atanassov, Atanas Marinov

    2016-11-01

    Solving scientific problems for space applications demands observations, measurements, or active experiments during time intervals in which specific geometric and physical conditions are fulfilled. Solving situational problems to determine the time intervals in which the satellite instruments work optimally is therefore a very important part of every stage of the preparation and realization of space missions. The elaboration of a universal, flexible, and robust approach to situation analysis that is easily portable to new satellite missions is significant for reducing mission preparation times and costs. Every situational problem can be based on one or more situational conditions. Simultaneously solving different kinds of situational problems, each based on a different number and type of situational condition and each satisfied on different segments of the satellite orbit, requires irregular calculations. Three formal approaches are presented. The first concerns the description of situational problems, which allows flexibility in assembling situational problems and representing them in computer memory. The second concerns the development of a situational-problem solver organized as a processor that executes specific code for every particular situational condition. The third concerns parallelization of the solver using threads and dynamic scheduling based on a "pool of threads" abstraction, which ensures a good load balance. The developed situational-problems solver is intended for incorporation into multi-physics, multi-satellite space-mission design and simulation tools.
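The "pool of threads" idea can be sketched with Python's standard thread pool; all names are illustrative and unrelated to the actual solver. Each situational condition is scanned over an orbit time grid in parallel, returning the intervals on which it holds:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_situations(conditions, times, workers=4):
    # Map each condition (a predicate on time) over the time grid and
    # collect the (start, end) intervals where the condition is satisfied;
    # the pool distributes conditions across worker threads.
    def intervals(cond):
        out, start = [], None
        for t in times:
            if cond(t):
                start = t if start is None else start
            elif start is not None:
                out.append((start, t))
                start = None
        if start is not None:
            out.append((start, times[-1]))
        return out

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(intervals, conditions))
```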

  19. The problem-context dependence of students' application of Newton's second law

    NASA Astrophysics Data System (ADS)

    Allbaugh, Alicia Ranee

    Previous research has indicated that improved knowledge organization allows experts to solve problems in a larger variety of contextual settings. In addition, it has been suggested that contextual appreciation is a form of learning ignored by much instruction. To that end, this study investigated students' understanding and application of Newton's second law (F = ma) in scenarios differing from those used in instruction of the concept. Instruction in these other contextual arenas, for example electrostatics, does not necessarily include Newton's laws explicitly; instructors tacitly assume that students have already learned the concept fully from previous instruction on the topic. The study used a qualitative design in a constructivist framework. Students were asked questions regarding the concept in a series of six interviews spanning several topics in a two-semester, calculus-based introductory physics course. No student applied Newton's second law consistently throughout the entire course. However, student responses from these interviews fell into clear categories and themes emerged. These categories revealed new contextually dependent misconceptions about Newton's second law. Additionally, student responses were clearly affected by the contextual scenario of the question in the following areas: Rotational Motion, Changing Mass Propulsion, Electric Charges, Electric and Magnetic Fields, Charge with Velocity.

  20. Mathematical problems in the application of multilinear models to facial emotion processing experiments

    NASA Astrophysics Data System (ADS)

    Andersen, Anders H.; Rayens, William S.; Li, Ren-Cang; Blonder, Lee X.

    2000-10-01

    In this paper we describe the enormous potential that multilinear models hold for the analysis of data from neuroimaging experiments that rely on functional magnetic resonance imaging (fMRI) or other imaging modalities. A case is made for why one might fully expect that the successful introduction of these models to the neuroscience community could define the next generation of structure-seeking paradigms in the area. In spite of the potential for immediate application, there is much to do from the perspective of statistical science. That is, although multilinear models have already been particularly successful in chemistry and psychology, relatively little is known about their statistical properties. To that end, our research group at the University of Kentucky has made significant progress. In particular, we are in the process of developing formal influence measures for multilinear methods as well as associated classification models and effective implementations. We believe that these problems will be among the most important and useful to the scientific community. Details are presented herein and an application is given in the context of facial emotion processing experiments.

  1. Problems of applications of high power IR radiation in aquatic medium under high pressure

    NASA Astrophysics Data System (ADS)

    Sorokin, Yurii V.; Kuzyakov, Boris A.

    2004-06-01

    In this work, the effects that appear during optical breakdown in water are analyzed, and the time dependences of the velocities and pressures at the wave fronts are obtained. The application of acoustic waves generated by high-power laser pulses in an aqueous medium holds quite serious promise for sounding. It is shown that, at comparatively low radiation power density, heating of the surface layer gives rise to thermoelastic stresses that excite acoustic waves. The analysis showed that prognostic evaluation of the extent of the light-deflagration area is possible for a clear aqueous medium at pressures up to 400 kg/cm2. In the presence of microinhomogeneities, it is necessary to know their overall physical and chemical properties and to have detailed, trustworthy data on their spatial distribution. A fundamentally new approach was developed to the problem of transmitting video information from object surfaces over a fiber-optic channel. The use of a precision measuring TV camera with a color format in the range 0.3 - 0.98 μm makes it possible to increase the capacity of the transmitted information. The optimization of the vision-module choice is also considered.

  2. Multi-fluid problems in magnetohydrodynamics with applications to astrophysical processes

    NASA Astrophysics Data System (ADS)

    Greenfield, Eric John

    2016-01-01

    I begin this study by presenting an overview of the theory of magnetohydrodynamics and the necessary conditions to justify the fluid treatment of a plasma. Upon establishing the fluid description of a plasma we move on to a discussion of magnetohydrodynamics in both the ideal and Hall regimes. This framework is then extended to include multiple plasmas in order to consider two problems of interest in the field of theoretical space physics. The first is a study of the evolution of a partially ionized plasma, a topic with many applications in space physics. A multi-fluid approach is necessary in this case to account for the motions of an ion fluid, electron fluid and neutral atom fluid, all of which are coupled to one another by collisions and/or electromagnetic forces. The results of this study have direct application to an open question concerning the cascade of Kolmogorov-like turbulence in the interstellar plasma, which we will discuss below. The second application of multi-fluid magnetohydrodynamics that we consider in this thesis concerns the amplification of magnetic field upstream of a collisionless, parallel shock. The relevant fluids here are the ions and electrons comprising the interstellar plasma and the galactic cosmic ray ions. Previous works predict that the streaming of cosmic rays leads to an instability resulting in significant amplification of the interstellar magnetic field at supernova blastwaves. This prediction is routinely invoked to explain the acceleration of galactic cosmic rays up to energies of 10^15 eV. I will examine this phenomenon in detail using the multi-fluid framework outlined below. The purpose of this work is to first confirm the existence of an instability using a purely fluid approach with no additional approximations. If confirmed, I will determine the necessary conditions for it to operate.

  3. Application of Schema Theory to the Instruction of Arithmetic Word Problem Solving Skills.

    ERIC Educational Resources Information Center

    Tsai, Chia-jer; Derry, Sharon J.

    An understanding-based approach to teaching arithmetic word problems is used in the Training Arithmetic Problem Solving Skills (TAPS) research project, for which four semantic schemas or problem representations have been revised and adopted: Combine, Compare, Change, and Vary. It is hypothesized that a good problem solver identifies the schema of…

  4. Applications of a finite-volume algorithm for incompressible MHD problems

    NASA Astrophysics Data System (ADS)

    Vantieghem, S.; Sheyko, A.; Jackson, A.

    2016-02-01

    We present the theory, algorithms and implementation of a parallel finite-volume algorithm for the solution of the incompressible magnetohydrodynamic (MHD) equations using unstructured grids that are applicable for a wide variety of geometries. Our method implements a mixed Adams-Bashforth/Crank-Nicolson scheme for the nonlinear terms in the MHD equations and we prove that it is stable independent of the time step. To ensure that the solenoidal condition is met for the magnetic field, we use a method whereby a pseudo-pressure is introduced into the induction equation; since we are concerned with incompressible flows, the resulting Poisson equation for the pseudo-pressure is solved alongside the equivalent Poisson problem for the velocity field. We validate our code in a variety of geometries including periodic boxes, spheres, spherical shells, spheroids and ellipsoids; for the finite geometries we implement the so-called ferromagnetic or pseudo-vacuum boundary conditions appropriate for a surrounding medium with infinite magnetic permeability. This implies that the magnetic field must be purely perpendicular to the boundary. We present a number of comparisons against previous results and against analytical solutions, which verify the code's accuracy. This documents the code's reliability as a prelude to its use in more difficult problems. We finally present a new simple drifting solution for thermal convection in a spherical shell that successfully sustains a magnetic field of simple geometry. By dint of its rapid stabilization from the given initial conditions, we deem it suitable as a benchmark against which other self-consistent dynamo codes can be tested.
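As a loose illustration of the mixed Adams-Bashforth/Crank-Nicolson splitting mentioned above, the sketch below applies second-order Adams-Bashforth to the advective term and Crank-Nicolson to the diffusive term of a 1-D periodic advection-diffusion equation; the scalar model, grid, and parameters are stand-ins for the full finite-volume MHD system.

```python
import numpy as np

# 1-D periodic advection-diffusion: u_t + a u_x = nu u_xx, as a toy analogue
# of the induction equation. AB2 handles advection explicitly; CN handles
# diffusion implicitly, so the diffusive stability limit disappears.
n, L = 64, 2 * np.pi
dx = L / n
x = np.arange(n) * dx
a, nu, dt = 1.0, 0.1, 0.01

def ddx(u):   # centered first derivative, periodic
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

def lap(u):   # centered second derivative, periodic
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

# The same Laplacian as a matrix, so the CN system can be solved directly.
I = np.eye(n)
Lap = (np.roll(I, -1, axis=1) - 2 * I + np.roll(I, 1, axis=1)) / dx**2
A = I - 0.5 * dt * nu * Lap          # Crank-Nicolson left-hand side

u = np.sin(x)
f_old = -a * ddx(u)                  # advective term at the previous step (startup)
for _ in range(100):
    f_new = -a * ddx(u)
    rhs = u + dt * (1.5 * f_new - 0.5 * f_old) + 0.5 * dt * nu * lap(u)
    u = np.linalg.solve(A, rhs)      # implicit diffusion step
    f_old = f_new
```

After 100 steps (t = 1) the exact solution exp(-nu t) sin(x - a t) predicts an amplitude near 0.905, which the scheme reproduces closely.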

  5. Application of Effective Field Theories to Problems in Nuclear and Hadronic Physics

    NASA Astrophysics Data System (ADS)

    Mereghetti, Emanuele

    The Effective Field Theory formalism is applied to the study of problems in hadronic and nuclear physics. We develop a framework to study the exclusive two-body decays of bottomonium into two charmed mesons and apply it to study the decays of the C-even bottomonia. Using a sequence of effective field theories, we take advantage of the separation between the scales contributing to the decay processes, 2m_b >> m_c >> Λ_QCD. We prove that, at leading order in the EFT power counting, the decay rate factorizes into the convolution of two perturbative matching coefficients and three non-perturbative matrix elements, one for each hadron. We calculate the relations between the decay rate and non-perturbative bottomonium and D-meson matrix elements at leading order, with next-to-leading log resummation. The phenomenological implications of these relations are discussed. At lower energies, we use Chiral Perturbation Theory and nuclear EFTs to set up a framework for the study of time-reversal (T) symmetry in one- and few-nucleon problems. We consider T violation from the QCD theta term and from all the possible dimension-6 operators, expressed in terms of light quarks, gluons and photons, that can be added to the Standard Model Lagrangian. We construct the low-energy chiral Lagrangian stemming from the different T-violating sources, and derive the implications for the nucleon Electric Dipole Form Factor and the deuteron T-violating electromagnetic Form Factors. Finally, with an eye to applications to nuclei with A ≥ 2, we construct the T-violating nucleon-nucleon potential from the different sources of T violation.

  6. PREFACE: XVII International Youth Scientific School on Actual Problems of Magnetic Resonance and its Applications

    NASA Astrophysics Data System (ADS)

    2014-11-01

    Editors: M. S. Tagirov, V. V. Semashko, A. S. Nizamutdinov. Kazan is the motherland of Electron Paramagnetic Resonance (EPR), which was discovered at Kazan State University in 1944 by Prof. E. K. Zavojskii. After the Young Scientist School on Magnetic Resonance run by Professor G. V. Skrotskii of MIPT stopped its work, Kazan took up the activity on the initiative of Academician A. S. Borovik-Romanov. Nowadays the school has been rejuvenated and continues as the International Youth Scientific School on "Actual Problems of Magnetic Resonance and its Applications". Traditionally the main subjects of the School meetings are magnetic resonance in solids, chemistry, geology, biology and medicine. The permanent organizers of the School are Kazan Federal University and the Kazan E. K. Zavoisky Physical-Technical Institute. The rector of the School is Professor Murat Tagirov; the vice-rector is Professor Valentine Zhikharev. Since 1997 more than 100 renowned scientists from Germany, France, Switzerland, the USA, Japan, Russia, Ukraine, Moldavia and Georgia have given plenary lectures. Almost 700 young scientists have had the opportunity to participate in discussions of the latest scientific developments, to present oral reports and to improve their knowledge and skills. To encourage competition among the young scientists, report contests are held every year, and the Program Committee names the best reports, whose authors are invited to prepare full-scale scientific papers. Since 2013 the International Youth Scientific School "Actual Problems of Magnetic Resonance and its Applications", following the tendency toward comprehensive studies of the properties of matter and its interaction with electromagnetic fields, has expanded its field of interest and opened a new section: Coherent Optics and Optical Spectroscopy.
Many young people have submitted interesting reports on photonics, quantum electronics, laser physics, quantum optics, traditional optical and laser spectroscopy, non

  7. Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2011-12-01

    Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. 
Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock

  8. Finite-volume application of high-order ENO schemes to two-dimensional boundary-value problems

    NASA Technical Reports Server (NTRS)

    Casper, Jay

    1991-01-01

    Finite-volume applications of high-order accurate ENO schemes to two-dimensional boundary-value problems are studied. These schemes achieve high-order spatial accuracy, in smooth regions, by a piecewise polynomial approximation of the solution from cell averages. In addition, this spatial operation involves an adaptive stencil algorithm in order to avoid the oscillatory behavior that is associated with interpolation across steep gradients. High-order TVD Runge-Kutta methods are employed for time integration, thus making these schemes best suited for unsteady problems. Fifth- and sixth-order accurate applications are validated through a grid refinement study involving the solutions of scalar hyperbolic equations. A previously proposed extension for the Euler equations of gas dynamics is tested, including its application to solutions of boundary-value problems involving solid walls and curvilinear coordinates.
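The adaptive-stencil selection that ENO schemes use to avoid interpolating across steep gradients can be sketched in 1-D. The function name eno_stencil and the step-function data below are illustrative, not the paper's finite-volume implementation; the stencil is grown one cell at a time toward whichever side has the smaller undivided difference.

```python
import numpy as np

def eno_stencil(v, i, order):
    """Left index of the width-`order` ENO stencil for interior cell i
    on a uniform grid (no boundary handling)."""
    left = i
    for k in range(1, order):
        # k-th undivided differences of the two candidate one-cell extensions
        d_left = np.diff(v[left - 1: left + k], n=k)    # grow to the left
        d_right = np.diff(v[left: left + k + 1], n=k)   # grow to the right
        if abs(d_left[0]) < abs(d_right[0]):
            left -= 1
    return left

# A step function: near the jump, the selected stencil must not cross it,
# which is what suppresses the oscillatory behavior of fixed-stencil schemes.
x = np.linspace(0.0, 1.0, 21)
v = np.where(x < 0.5, 0.0, 1.0)
```

For cells adjacent to the jump (here between indices 9 and 10), the returned stencils stay entirely on the smooth side.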

  9. Interface deformation in low reynolds number multiphase flows: Applications to selected problems in geodynamics

    SciTech Connect

    Gable, C.; Travis, B.J.; O'Connell, R.J.; Stone, H.A.

    1995-06-01

    Flow in the mantle of terrestrial planets produces stresses and topography on the planet's surface which may allow us to infer the dynamics and evolution of the planet's interior. This project is directed towards understanding the relationship between dynamical processes related to buoyancy-driven flow and the observable expression (e.g. earthquakes, surface topography) of the flow. Problems considered include the ascent of mantle plumes and their interaction with compositional discontinuities, the deformation of subducted slabs, and effects of lateral viscosity variations on post-glacial rebound. We find that plumes rising from the lower mantle into a lower-viscosity upper mantle become extended vertically. As the plume spreads beneath the planet's surface, the dynamic topography changes from a bell shape to a plateau shape. The topography and surface stresses associated with surface features called arachnoids, novae and coronae on Venus are consistent with the surface expression of a rising and spreading buoyant volume of fluid. Short wavelength viscosity variations, or sharp variations of lithosphere thickness, have a large effect on surface stresses. This study also considers the interaction and deformation of buoyancy-driven drops and bubbles in low Reynolds number multiphase systems. Applications include bubbles in magmas, the coalescence of liquid iron drops during core formation, and a wide range of industrial applications. Our methodology involves a combination of numerical boundary integral calculations, experiments and analytical work. For example, we find that for deformable drops the effects of deformation result in the vertical alignment of initially horizontally offset drops, thus enhancing the rate of coalescence.

  10. Finite element implementation of Robinson's unified viscoplastic model and its application to some uniaxial and multiaxial problems

    NASA Technical Reports Server (NTRS)

    Arya, V. K.; Kaufman, A.

    1987-01-01

    A description of the finite element implementation of Robinson's unified viscoplastic model into the General Purpose Finite Element Program (MARC) is presented. To demonstrate its application, the implementation is applied to some uniaxial and multiaxial problems. A comparison of the results for the multiaxial problem of a thick internally pressurized cylinder, obtained using the finite element implementation and an analytical solution, is also presented. The excellent agreement obtained confirms the correct finite element implementation of Robinson's model.

  11. Families of periodic orbits in Hill's problem with solar radiation pressure: application to Hayabusa 2

    NASA Astrophysics Data System (ADS)

    Giancotti, Marco; Campagnola, Stefano; Tsuda, Yuichi; Kawaguchi, Jun'ichiro

    2014-11-01

    This work studies periodic solutions applicable, as an extended phase, to the JAXA asteroid rendezvous mission Hayabusa 2 when it is close to target asteroid 1999 JU3. The motion of a spacecraft close to a small asteroid can be approximated with the equations of Hill's problem modified to account for the strong solar radiation pressure. The identification of families of periodic solutions in such systems is just starting and the field is largely unexplored. We find several periodic orbits using a grid search, then apply numerical continuation and bifurcation theory to a subset of these to explore the changes in the orbit families when the orbital energy is varied. This analysis gives information on their stability and bifurcations. We then compare the various families on the basis of the restrictions and requirements of the specific mission considered, such as the pointing of the solar panels and instruments. We also use information about their resilience against parameter errors and their ground tracks to identify one particularly promising type of solution.

  12. Applications of advanced technology to ash-related problems in boilers. Proceedings

    SciTech Connect

    Baxter, L.; DeSollar, R.

    1996-12-31

    This book addresses the behavior of inorganic material in combustion systems. The past decade has seen unprecedented improvements in understanding the rates and mechanisms of inorganic transformations and in developing analytical tools to predict them. These tools range from improved fuel analysis procedures to predictive computer codes. While this progress has been met with great enthusiasm within the research community, the practices of the industrial community remain largely unchanged. The papers in this book were selected from those presented at an Engineering Foundation Conference of the same title. All have been peer reviewed. The intent of the conference was to illustrate the application of advanced technology to ash-related problems in boilers and, by so doing, engage the research and industrial communities in more productive dialog. The 42 papers contained in these proceedings relate primarily to coal boilers in industry and power plants, but also biomass, oil shales, and black liquor fuels. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  13. Applicability domains for classification problems: Benchmarking of distance to models for Ames mutagenicity set.

    PubMed

    Sushko, Iurii; Novotarskyi, Sergii; Körner, Robert; Pandey, Anil Kumar; Cherkasov, Artem; Li, Jiazhong; Gramatica, Paola; Hansen, Katja; Schroeter, Timon; Müller, Klaus-Robert; Xi, Lili; Liu, Huanxiang; Yao, Xiaojun; Öberg, Tomas; Hormozdiari, Farhad; Dao, Phuong; Sahinalp, Cenk; Todeschini, Roberto; Polishchuk, Pavel; Artemenko, Anatoliy; Kuz'min, Victor; Martin, Todd M; Young, Douglas M; Fourches, Denis; Muratov, Eugene; Tropsha, Alexander; Baskin, Igor; Horvath, Dragos; Marcou, Gilles; Muller, Christophe; Varnek, Alexander; Prokopenko, Volodymyr V; Tetko, Igor V

    2010-12-27

    The estimation of accuracy and applicability of QSAR and QSPR models for biological and physicochemical properties represents a critical problem. The developed parameter of "distance to model" (DM) is defined as a metric of similarity between the training and test set compounds that have been subjected to QSAR/QSPR modeling. In our previous work, we demonstrated the utility and optimal performance of DM metrics that have been based on the standard deviation within an ensemble of QSAR models. The current study applies such analysis to 30 QSAR models for the Ames mutagenicity data set that were previously reported within the 2009 QSAR challenge. We demonstrate that the DMs based on an ensemble (consensus) model provide systematically better performance than other DMs. The presented approach identifies 30-60% of compounds having an accuracy of prediction similar to the interlaboratory accuracy of the Ames test, which is estimated to be 90%. Thus, the in silico predictions can be used to halve the cost of experimental measurements by providing a similar prediction accuracy. The developed model has been made publicly available at http://ochem.eu/models/1 .
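A minimal sketch of the ensemble-based "distance to model" (DM) idea described above: train a bagged ensemble, take the standard deviation of the member predictions as the DM, and keep only compounds whose DM falls below a cutoff. The data, descriptors, and ridge models below are toy stand-ins, not the Ames set or the challenge models.

```python
import numpy as np

# Toy binary endpoint generated from a hidden linear rule plus noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w = rng.normal(size=5)
y = (X @ w + 0.3 * rng.normal(size=200) > 0).astype(float)
Xb = np.hstack([np.ones((200, 1)), X])              # add intercept column

def fit_ridge(A, t, lam=1e-2):
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ t)

# Bagging: each ensemble member sees a bootstrap resample of the training half.
members = [fit_ridge(Xb[idx], y[idx])
           for idx in (rng.integers(0, 150, size=150) for _ in range(25))]

Xb_test, y_test = Xb[150:], y[150:]
preds = np.stack([Xb_test @ m for m in members])    # shape (25, 50)
dm = preds.std(axis=0)                              # member disagreement = DM
keep = dm < np.median(dm)                           # inside applicability domain

acc_all = np.mean((preds.mean(axis=0) > 0.5) == y_test)
acc_in = np.mean((preds.mean(axis=0)[keep] > 0.5) == y_test[keep])
```

The cutoff here (the median DM) is arbitrary; the paper's point is that ensemble disagreement ranks prediction reliability well.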

  14. Application of stroboscopic and pulsed-laser electronic speckle pattern interferometry (ESPI) to modal analysis problems

    NASA Astrophysics Data System (ADS)

    Van der Auweraer, H.; Steinbichler, H.; Vanlanduit, S.; Haberstok, C.; Freymann, R.; Storer, D.; Linet, V.

    2002-04-01

    Accurate structural models are key to the optimization of the vibro-acoustic behaviour of panel-like structures. However, at the frequencies of relevance to the acoustic problem, the structural modes are very complex, requiring high-spatial-resolution measurements. The present paper discusses a vibration testing system based on pulsed-laser holographic electronic speckle pattern interferometry (ESPI) measurements. It is a characteristic of the method that time-triggered (and not time-averaged) vibration images are obtained. Its integration into a practicable modal testing and analysis procedure is reviewed. The accumulation of results at multiple excitation frequencies allows one to build up frequency response functions. A novel parameter extraction approach using spline-based data reduction and maximum-likelihood parameter estimation was developed. Specific extensions have been added in view of the industrial application of the approach. These include the integration of geometry and response information, the integration of multiple views into one single model, the integration with finite-element model data and the prior identification of the critical panels and critical modes. A global procedure was hence established. The approach has been applied to several industrial case studies, including car panels, the firewall of a monovolume car, a full vehicle, panels of a light truck and a household product. The research was conducted in the context of the EUREKA project HOLOMODAL and the Brite-Euram project SALOME.

  15. Problems in the application of a null lens for precise measurements of aspheric mirrors.

    PubMed

    Chkhalo, N I; Malyshev, I V; Pestov, A E; Polkovnikov, V N; Salashchenko, N N; Toropov, M N; Soloviev, A A

    2016-01-20

    Problems in the application of a null lens for surface shape measurements of aspherical mirrors are discussed using the example of manufacturing an aspherical concave mirror for the beyond extreme ultraviolet nanolithographer. A method for allowing measurement of the surface shape of a sample under study and the aberration of a null lens simultaneously, and for evaluating measurement accuracy, is described. Using this method, we made a mirror with an aspheric surface of the 6th order (i.e., the maximum deviation from the best-fit sphere is 6.6 μm) with the parameters of the deviations from the designed surface PV=5.3  nm and RMS=0.8  nm. An approximation of the surface shape was carried out using Zernike polynomials {Z_n^m(r,φ), m+n ≤ 36}. The physical limitations of this technique are analyzed. It is shown that for aspheric measurements to an Angstrom accuracy, one needs to have a null lens with errors of less than 1 nm. For accurate measurements, it is necessary to establish compliance with the coordinates on the sample and on the interferogram.
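A least-squares Zernike approximation of a surface-error map, of the kind used above, might look like this in outline. Only a handful of low-order terms of the Z_n^m basis appear (the paper uses m+n ≤ 36), and the surface data and coefficient values are synthetic.

```python
import numpy as np

def zernike_basis(r, phi):
    """A small, unnormalized subset of the Zernike basis on the unit disk:
    piston, x/y tilt, defocus, and one astigmatism term."""
    return np.stack([
        np.ones_like(r),             # Z_0^0
        r * np.cos(phi),             # Z_1^1
        r * np.sin(phi),             # Z_1^-1
        2 * r**2 - 1,                # Z_2^0
        r**2 * np.cos(2 * phi),      # Z_2^2
    ], axis=-1)

rng = np.random.default_rng(2)
r = np.sqrt(rng.uniform(0, 1, 500))          # uniform sampling over the disk
phi = rng.uniform(0, 2 * np.pi, 500)

coef_true = np.array([0.0, 3.0, -1.0, 5.0, 2.0])   # illustrative, in nm
surface = zernike_basis(r, phi) @ coef_true + 0.5 * rng.normal(size=500)

B = zernike_basis(r, phi)
coef_fit, *_ = np.linalg.lstsq(B, surface, rcond=None)
residual_rms = np.sqrt(np.mean((B @ coef_fit - surface) ** 2))
```

Because the Zernike terms are orthogonal over the disk, the fitted coefficients recover the true ones to within the noise floor, and the residual RMS approaches the measurement noise.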

  16. Space-Related Applications of Intelligent Control: Which Algorithm to Choose? (Theoretical Analysis of the Problem)

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik

    1996-01-01

    For a space mission to be successful it is vitally important to have a good control strategy. For example, with the Space Shuttle it is necessary to guarantee the success and smoothness of docking, the smoothness and fuel efficiency of trajectory control, etc. For an automated planetary mission it is important to control the spacecraft's trajectory, and after that, to control the planetary rover so that it would be operable for the longest possible period of time. In many complicated control situations, traditional methods of control theory are difficult or even impossible to apply. In general, in uncertain situations, where no routine methods are directly applicable, we must rely on the creativity and skill of the human operators. In order to simulate these experts, an intelligent control methodology must be developed. The research objectives of this project were: to analyze existing control techniques; to find out which of these techniques is the best with respect to the basic optimality criteria (stability, smoothness, robustness); and, if for some problems, none of the existing techniques is satisfactory, to design new, better intelligent control techniques.

  17. On some generalization of the area theorem with applications to the problem of rolling balls

    NASA Astrophysics Data System (ADS)

    Chaplygin, Sergey A.

    2012-04-01

    This publication contributes to the series of RCD translations of Sergey Alexeevich Chaplygin's scientific heritage. Earlier we published three of his papers on non-holonomic dynamics (vol. 7, no. 2; vol. 13, no. 4) and two papers on hydrodynamics (vol. 12, nos. 1, 2). The present paper deals with mechanical systems that consist of several spheres and discusses generalized conditions for the existence of integrals of motion (linear in velocities) in such systems. First published in 1897 and awarded the Gold Medal of the Russian Academy of Sciences, this work has not lost its scientific significance and relevance. (In particular, its principal ideas are further developed and extended in the recent article "Two Non-holonomic Integrable Problems Tracing Back to Chaplygin", published in this issue, see p. 191). Note that non-holonomic models for rolling motion of spherical shells, including the case where the shells contain intricate mechanisms inside, are currently of particular interest in the context of their application in the design of ball-shaped mobile robots. We hope that this classical work will be appreciated at its true worth by the English-speaking world.

  18. Hybrid modeling of spatial continuity for application to numerical inverse problems

    USGS Publications Warehouse

    Friedel, Michael J.; Iwashita, Fabio

    2013-01-01

    A novel two-step modeling approach is presented to obtain optimal starting values and geostatistical constraints for numerical inverse problems otherwise characterized by spatially-limited field data. First, a type of unsupervised neural network, called the self-organizing map (SOM), is trained to recognize nonlinear relations among environmental variables (covariates) occurring at various scales. The values of these variables are then estimated at random locations across the model domain by iterative minimization of SOM topographic error vectors. Cross-validation is used to ensure unbiasedness and compute prediction uncertainty for select subsets of the data. Second, analytical functions are fit to experimental variograms derived from original plus resampled SOM estimates producing model variograms. Sequential Gaussian simulation is used to evaluate spatial uncertainty associated with the analytical functions and probable range for constraining variables. The hybrid modeling of spatial continuity is demonstrated using spatially-limited hydrologic measurements at different scales in Brazil: (1) physical soil properties (sand, silt, clay, hydraulic conductivity) in the 42 km2 Vargem de Caldas basin; (2) well yield and electrical conductivity of groundwater in the 132 km2 fractured crystalline aquifer; and (3) specific capacity, hydraulic head, and major ions in a 100,000 km2 transboundary fractured-basalt aquifer. These results illustrate the benefits of exploiting nonlinear relations among sparse and disparate data sets for modeling spatial continuity, but the actual application of these spatial data to improve numerical inverse modeling requires testing.
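The second step above, fitting an analytical function to an experimental variogram, can be sketched with a spherical model; the lag and semivariance values below are synthetic, not the Brazilian field data, and the parameter names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Spherical variogram model: rises from the nugget and levels off at the
# sill once the lag h exceeds the range rng_.
def spherical(h, nugget, sill, rng_):
    h = np.asarray(h, float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, g, sill)

# Synthetic experimental variogram: lags in meters, semivariances generated
# from known parameters (nugget 0.1, sill 1.0, range 700 m) plus noise.
lags = np.array([50, 100, 200, 400, 600, 800, 1000.0])
gamma = spherical(lags, 0.1, 1.0, 700.0) \
        + 0.02 * np.random.default_rng(3).normal(size=lags.size)

p0 = [0.05, 0.8, 500.0]   # rough initial guess
(nugget, sill, range_), _ = curve_fit(spherical, lags, gamma, p0=p0)
```

The fitted model variogram would then feed sequential Gaussian simulation, as described in the abstract, to quantify spatial uncertainty.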

  19. Navier-Stokes flow in the weighted Hardy space with applications to time decay problem

    NASA Astrophysics Data System (ADS)

    Okabe, Takahiro; Tsutsui, Yohei

    2016-08-01

    The asymptotic expansions of the Navier-Stokes flow in R^n and the rates of decay are studied with the aid of weighted Hardy spaces. Fujigaki and Miyakawa [12] and Miyakawa [28] proved the nth-order asymptotic expansion of the Navier-Stokes flow if the initial data decays like (1+|x|)^{-n-1} and if the nth moment of the initial data is finite. In the present paper, it is clarified that the moment condition on the initial data is essential for obtaining a higher-order asymptotic expansion of the flow and for considering the rapid time-decay problem. The second author [39] established weighted estimates of the strong solutions in weighted Hardy spaces for small initial data belonging to L^n and a weighted Hardy space. Firstly, a refinement of the previous work [39] is achieved with an alternative proof. Then the existence time of the solution in the weighted Hardy spaces is characterized without any Hardy norm. As a result, in the two-dimensional case the smallness condition on the initial data is completely removed. As an application, the rapid time decay of the flow is investigated with the aid of asymptotic expansions and of the symmetry conditions introduced by Brandolese [3].

  20. Applicable Problems in the History of Mathematics: Practical Examples for the Classroom

    ERIC Educational Resources Information Center

    Savizi, Behnaz

    2007-01-01

This text centers on two main ideas: the specifications of a good problem to be introduced in a classroom; and, according to Freudenthal's view, the importance of teaching students how to apply mathematics to their own real-life problems. Putting these two ideas together, we may conclude that historical real-world problems fit the…

  1. New Developments of Computational Fluid Dynamics and Their Applications to Practical Engineering Problems

    NASA Astrophysics Data System (ADS)

    Chen, Hudong

    2001-06-01

There have been considerable advances in Lattice Boltzmann (LB) based methods in the last decade. By now, the fundamental concept of using the approach as an alternative tool for computational fluid dynamics (CFD) has been substantially appreciated and validated in mainstream scientific research and in industrial engineering communities. Lattice Boltzmann based methods possess several major advantages: a) less numerical dissipation due to the linear Lagrange-type advection operator in the Boltzmann equation; b) local dynamic interactions suitable for highly parallel processing; c) physical handling of boundary conditions for complicated geometries and accurate control of fluxes; d) microscopically consistent modeling of thermodynamics and of interface properties in complex multiphase flows. These advantages provide a great opportunity to apply the method to practical engineering problems encountered in a wide range of industries, from automotive and aerospace to chemical, biomedical, petroleum, nuclear, and others. One of the key challenges is to extend the applicability of this alternative approach to regimes of highly turbulent flows commonly encountered in practical engineering situations involving high Reynolds numbers. Over the past ten years, significant efforts have been made on this front at Exa Corporation in developing a lattice Boltzmann based commercial CFD software, PowerFLOW. It has become a useful computational tool for the simulation of turbulent aerodynamics in practical engineering problems involving extremely complex geometries and flow situations, such as in new automotive vehicle designs worldwide. In this talk, we present an overall LB based algorithm concept along with certain key extensions in order to accurately handle turbulent flows involving extremely complex geometries. To demonstrate the accuracy of turbulent flow simulations, we provide a set of validation results for some well known academic benchmarks. These include straight channels, backward
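The core lattice Boltzmann update the abstract alludes to (collision toward a local equilibrium, then streaming along lattice links) can be sketched for the standard D2Q9 lattice. This is a textbook single-relaxation-time BGK sketch on a periodic grid, not PowerFLOW's algorithm; the grid size, relaxation time, and initial density bump are illustrative assumptions.

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

nx = ny = 32
tau = 0.8                                 # BGK relaxation time
rho = np.ones((nx, ny))
rho[12:20, 12:20] = 1.1                   # small density bump
u = np.zeros((nx, ny, 2))

def feq(rho, u):
    """Second-order equilibrium distribution for each of the 9 directions."""
    eu = np.einsum('qd,xyd->xyq', e, u)
    usq = (u**2).sum(-1)
    return w * rho[..., None] * (1 + 3*eu + 4.5*eu**2 - 1.5*usq[..., None])

f = feq(rho, u)
mass0 = f.sum()                           # total mass, conserved by both steps

for _ in range(50):
    rho = f.sum(-1)                       # moments: density and momentum
    mom = np.einsum('xyq,qd->xyd', f, e)
    u = mom / rho[..., None]
    f += -(f - feq(rho, u)) / tau         # BGK collision
    for q in range(9):                    # streaming along each lattice link
        f[..., q] = np.roll(f[..., q], tuple(e[q]), axis=(0, 1))

print(abs(f.sum() - mass0))               # mass conserved to round-off
```

Collision and periodic streaming each conserve mass and momentum exactly, which the final check confirms to floating-point precision.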

  2. An application of a linear programing technique to nonlinear minimax problems

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1973-01-01

    A differential correction technique for solving nonlinear minimax problems is presented. The basis of the technique is a linear programing algorithm which solves the linear minimax problem. By linearizing the original nonlinear equations about a nominal solution, both nonlinear approximation and estimation problems using the minimax norm may be solved iteratively. Some consideration is also given to improving convergence and to the treatment of problems with more than one measured quantity. A sample problem is treated with this technique and with the least-squares differential correction method to illustrate the properties of the minimax solution. The results indicate that for the sample approximation problem, the minimax technique provides better estimates than the least-squares method if a sufficient amount of data is used. For the sample estimation problem, the minimax estimates are better if the mathematical model is incomplete.
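The linear minimax subproblem at the heart of the technique can also be solved without a linear programming code. As a NumPy-only stand-in for the report's LP algorithm, Lawson's iteratively reweighted least squares converges to the same Chebyshev (minimax) fit for linear models; the three-point line fit below is an invented example, not one from the report.

```python
import numpy as np

# Fit y ~ c0 + c1*x to three points in the minimax (Chebyshev) sense.
# Lawson's algorithm: weighted least squares with weights re-scaled by |residual|.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([0.0, 0.0, 1.0])

w = np.full(len(b), 1.0 / len(b))
for _ in range(100):
    Wd = np.diag(w)
    c = np.linalg.solve(A.T @ Wd @ A, A.T @ Wd @ b)   # weighted least squares
    r = A @ c - b
    w = w * np.abs(r)
    w /= w.sum()                                       # Lawson weight update

print(c, np.abs(r).max())  # residuals equioscillate at the minimax error 0.25
```

The converged residuals alternate in sign with equal magnitude, the classical equioscillation signature of a minimax solution; an LP solver applied to the equivalent epigraph formulation would return the same coefficients.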

  3. On an iterative ensemble smoother and its application to a reservoir facies estimation problem

    NASA Astrophysics Data System (ADS)

    Luo, Xiaodong; Chen, Yan; Valestrand, Randi; Stordal, Andreas; Lorentzen, Rolf; Nævdal, Geir

    2014-05-01

For data assimilation problems there are different ways of utilizing the available observations. While certain data assimilation algorithms, for instance the ensemble Kalman filter (EnKF; see, for example, Aanonsen et al., 2009; Evensen, 2006), assimilate the observations sequentially in time, other data assimilation algorithms may instead collect the observations at different time instants and assimilate them simultaneously. In general such algorithms can be classified as smoothers. In this respect, the ensemble smoother (ES; see, for example, Evensen and van Leeuwen, 2000) can be considered a smoother counterpart of the EnKF. The EnKF has been widely used for reservoir data assimilation (history matching) problems since its introduction to the community of petroleum engineering (Nævdal et al., 2002). The application of the ES to reservoir data assimilation problems has also been investigated recently (see, for example, Skjervheim and Evensen, 2011). Compared to the EnKF, the ES has certain technical advantages, including avoiding the restarts associated with each update step in the EnKF and having fewer variables to update, which may result in a significant reduction in simulation time while providing assimilation results similar to those obtained by the EnKF (Skjervheim and Evensen, 2011). To further improve the performance of the ES, some iterative ensemble smoothers have been suggested in the literature, in which the iterations are carried out in the form of certain iterative optimization algorithms, e.g., the Gauss-Newton (Chen and Oliver, 2012) or the Levenberg-Marquardt method (Chen and Oliver, 2013; Emerick and Reynolds, 2012), or in the context of the adaptive Gaussian mixture (AGM; see Stordal and Lorentzen, 2013). In Emerick and Reynolds (2012) the iteration formula is derived based on the idea that, for linear observations, the final results of the iterative ES should equal the estimate of the EnKF.
In Chen and Oliver (2013), the
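A single (non-iterative) ES analysis step of the kind being iterated here can be sketched with plain ensemble statistics. The toy two-component state, observation operator, and noise levels below are assumptions for illustration; the stochastic perturbed-observation variant of the update is used.

```python
import numpy as np

rng = np.random.default_rng(0)

Ne = 200                                    # ensemble size
Xf = rng.normal(0.0, 1.0, size=(2, Ne))     # forecast ensemble, 2-D state
H = np.array([[1.0, 0.0]])                  # only the first component is observed
d = np.array([2.0])                         # observation
R = np.array([[0.1]])                       # observation-error covariance

Y = H @ Xf                                  # predicted observations
D = d[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), size=(1, Ne))  # perturbed obs

Xp = Xf - Xf.mean(axis=1, keepdims=True)    # ensemble anomalies
Yp = Y - Y.mean(axis=1, keepdims=True)
Cxy = Xp @ Yp.T / (Ne - 1)                  # state-observation covariance
Cyy = Yp @ Yp.T / (Ne - 1)

K = Cxy @ np.linalg.inv(Cyy + R)            # Kalman-type gain
Xa = Xf + K @ (D - Y)                       # analysis ensemble, one global update
```

The analysis mean moves toward the observation and the spread in the observed component shrinks; an iterative ES repeats a damped version of this update, re-running the forward model between iterations.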

  4. Family Behavior Therapy for Substance Abuse and Other Associated Problems: A Review of Its Intervention Components and Applicability

    ERIC Educational Resources Information Center

    Donohue, Brad; Azrin, Nathan; Allen, Daniel N.; Romero, Valerie; Hill, Heather H.; Tracy, Kendra; Lapota, Holly; Gorney, Suzanne; Abdel-al, Ruweida; Caldas, Diana; Herdzik, Karen; Bradshaw, Kelsey; Valdez, Robby; Van Hasselt, Vincent B.

    2009-01-01

    A comprehensive evidence-based treatment for substance abuse and other associated problems (Family Behavior Therapy) is described, including its application to both adolescents and adults across a wide range of clinical contexts (i.e., criminal justice, child welfare). Relevant to practitioners and applied clinical researchers, topic areas include…

  5. PREFACE: XVI International Youth Scientific School 'Actual Problems of Magnetic Resonance and its Applications'

    NASA Astrophysics Data System (ADS)

    Salakhov, M. Kh; Tagirov, M. S.; Dooglav, A. V.

    2013-12-01

In 1997, A S Borovik-Romanov, the Academician of RAS, and A V Aganov, the head of the Physics Department of Kazan State University, suggested that the 'School of Magnetic Resonance', well known in the Soviet Union, should recommence and be regularly held in Kazan. This school was created in 1968 by G V Scrotskii, the prominent scientist in the field of magnetic resonance and the editor of many famous books on magnetic resonance (authored by A Abragam, B Bleaney, C Slichter, and many others) translated and edited in the Soviet Union. In 1991 the last, the 12th School, was held under the supervision of G V Scrotskii. Since 1997, more than 600 young scientists, 'schoolboys', have taken part in the School meetings, made their oral reports and participated in heated discussions. Every year a competition among the young scientists takes place and the Program Committee members name the best reports, the authors of which are invited to prepare full-scale scientific papers. The XVI International Youth Scientific School 'Actual problems of the magnetic resonance and its application' differs slightly in its themes from previous ones. A new section has been opened this year: Coherent Optics and Optical Spectroscopy. Many young people have submitted interesting reports on optical research, many of them devoted to the implementation of nanotechnology in optical studies. The XVI International Youth Scientific School has been supported by the Program of development of Kazan Federal University. It is a pleasure to thank the sponsors (BRUKER Ltd, Moscow, the Russian Academy of Science, the Dynasty foundation of Dmitrii Zimin, Russia, Russian Foundation for Basic Research) and all the participants and contributors for making the International School meeting possible and interesting. A V Dooglav, M Kh Salakhov and M S Tagirov The Editors

  6. A collection of homework problems about the application of electricity and magnetism to medicine and biology

    NASA Astrophysics Data System (ADS)

    Roth, Bradley J.; Hobbie, Russell K.

    2014-05-01

    This article contains a collection of homework problems to help students learn how concepts from electricity and magnetism can be applied to topics in medicine and biology. The problems are at a level typical of an undergraduate electricity and magnetism class, covering topics such as nerve electrophysiology, transcranial magnetic stimulation, and magnetic resonance imaging. The goal of these problems is to train biology and medical students to use quantitative methods, and also to introduce physics and engineering students to biological phenomena.

  7. Linear quadratic tracking problems in Hilbert space - Application to optimal active noise suppression

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.

    1989-01-01

    A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.

  8. Davidon-Broyden rank-one minimization methods in Hilbert space with application to optimal control problems

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1972-01-01

The Davidon-Broyden class of rank-one, quasi-Newton minimization methods is extended from Euclidean spaces to infinite-dimensional, real Hilbert spaces. For several techniques of choosing the step size, conditions are found which assure convergence of the associated iterates to the location of the minimum of a positive definite quadratic functional. For those techniques, convergence is achieved without the computation of a one-dimensional minimum at each iteration. The application of this class of minimization methods to the direct computation of the solution of an optimal control problem is outlined. The performance of various members of the class is compared by solving a sample optimal control problem. Finally, the sample problem is solved by other known gradient methods, and the results are compared with those obtained with the rank-one quasi-Newton methods.
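In finite dimensions, the symmetric rank-one (SR1) secant update underlying this class takes a single line. The sketch below also exercises the classical property that motivates the convergence theory on quadratics: with exact curvature pairs y = A s, n independent updates recover the exact inverse Hessian. The matrix and steps are invented for the demonstration, not taken from the report.

```python
import numpy as np

def sr1_update(H, s, y):
    """Symmetric rank-one update of an inverse-Hessian approximation H
    so that the secant condition H @ y = s holds after the update."""
    v = s - H @ y
    denom = v @ y
    if abs(denom) <= 1e-12 * np.linalg.norm(v) * np.linalg.norm(y):
        return H  # skip near-singular updates, as is standard practice
    return H + np.outer(v, v) / denom

# On a quadratic f(x) = 0.5 x^T A x - b^T x the gradient differences satisfy
# y = A s, and n independent SR1 updates reproduce inv(A) exactly.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
H = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    y = A @ s
    H = sr1_update(H, s, y)

print(H)  # equals inv(A)
```

The same update, interpreted with inner products in a Hilbert space, is what the paper extends to the infinite-dimensional setting.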

  9. A Perturbation Theory for Hamilton's Principal Function: Applications to Boundary Value Problems

    NASA Astrophysics Data System (ADS)

    Munoa, Oier Penagaricano

This thesis introduces an analytical perturbation theory for Hamilton's principal function and Hamilton's characteristic function. Based on Hamilton's principle and the research carried out by Sir William Rowan Hamilton, a perturbation theory is developed to analytically solve two-point boundary value problems. The principal function is shown to solve the two-point boundary value problem through simple differentiation and elimination. The characteristic function is related to the principal function through a Legendre transformation, and can also be used to solve two-point boundary value problems. In order to obtain the solution to the perturbed two-point boundary value problem, knowledge of the nominal solution is sufficient. The perturbation theory is applied to the two-body problem to study the perturbed dynamics in the vicinity of the Hohmann transfer. It is found that the perturbation can actually offer a lower-cost two-impulse transfer to the target orbit than the Hohmann transfer. The numerical error analysis of the perturbation theory is shown for different orders of calculation. Coupling Hamilton's principal and characteristic functions yields an analytical perturbation theory for the initial value problem, where the state of the perturbed system can be accurately obtained. The perturbation theory is applied to the restricted three-body problem, where the system is viewed as a two-body problem perturbed by the presence of a third body. It is shown that the first-order theory can be sufficient to solve the problem, which is expressed in terms of Delaunay elements. The solution to the initial value problem is applied to derive a Keplerian periapsis map that can be used for low-energy space mission design problems.

  10. Students' Understanding and Application of the Area under the Curve Concept in Physics Problems

    ERIC Educational Resources Information Center

    Nguyen, Dong-Hai; Rebello, N. Sanjay

    2011-01-01

    This study investigates how students understand and apply the area under the curve concept and the integral-area relation in solving introductory physics problems. We interviewed 20 students in the first semester and 15 students from the same cohort in the second semester of a calculus-based physics course sequence on several problems involving…

  11. "Cast Your Net Widely": Three Steps to Expanding and Refining Your Problem before Action Learning Application

    ERIC Educational Resources Information Center

    Reese, Simon R.

    2015-01-01

    This paper reflects upon a three-step process to expand the problem definition in the early stages of an action learning project. The process created a community-powered problem-solving approach within the action learning context. The simple three steps expanded upon in the paper create independence, dependence, and inter-dependence to aid the…

  12. Neural networks for nonlinear and mixed complementarity problems and their applications.

    PubMed

    Dang, Chuangyin; Leung, Yee; Gao, Xing-Bao; Chen, Kai-zhou

    2004-03-01

This paper presents two feedback neural networks for solving nonlinear and mixed complementarity problems. The first feedback neural network is designed to solve the strictly monotone problem; it has no parameters and possesses a very simple structure for implementation in hardware. Based on a new idea, the second feedback neural network, for solving the monotone problem, is constructed using the first one as a subnetwork. This feedback neural network has the least number of state variables. The stability of a solution of the problem is proved. When the problem is strictly monotone, the unique solution is uniformly asymptotically stable in the large. When the problem has many solutions, it is guaranteed that, for any initial point, the trajectory of the network converges to an exact solution of the problem. Feasibility and efficiency of the proposed neural networks are supported by simulation experiments. Moreover, the feedback neural network can also be applied to solve general nonlinear convex programming and nonlinear monotone variational inequality problems with convex constraints.
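In discrete time, the stabilizing dynamics of such networks resemble a simple projection iteration for a strictly monotone nonlinear complementarity problem (NCP): find x >= 0 with F(x) >= 0 and x.F(x) = 0. The affine map, step size, and starting point below are illustrative assumptions, not the network models proposed in the paper.

```python
import numpy as np

def F(x):
    # A strictly monotone affine map chosen for the sketch
    return x - np.array([1.0, -1.0])

x = np.array([2.0, 2.0])
alpha = 0.5
for _ in range(200):
    # Projected fixed-point iteration: a fixed point satisfies the NCP
    x = np.maximum(0.0, x - alpha * F(x))

print(x, F(x))  # x = [1, 0], F(x) = [0, 1]: complementarity holds
```

The first component converges to the interior solution x = 1 where F vanishes, while the second is pinned at the boundary x = 0 with F > 0, so x.F(x) = 0 in both cases.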

  13. Application of Graph Theory in an Intelligent Tutoring System for Solving Mathematical Word Problems

    ERIC Educational Resources Information Center

    Nabiyev, Vasif V.; Çakiroglu, Ünal; Karal, Hasan; Erümit, Ali K.; Çebi, Ayça

    2016-01-01

This study aims to construct a model to transform word "motion problems" into an algorithmic form so that they can be processed by an intelligent tutoring system (ITS). First, the characteristics of motion problems were categorized; second, a model for the categories was suggested. In order to solve all categories of the…

  14. Design and Application of Interactive Simulations in Problem-Solving in University-Level Physics Education

    ERIC Educational Resources Information Center

    Ceberio, Mikel; Almudí, José Manuel; Franco, Ángel

    2016-01-01

    In recent years, interactive computer simulations have been progressively integrated in the teaching of the sciences and have contributed significant improvements in the teaching-learning process. Practicing problem-solving is a key factor in science and engineering education. The aim of this study was to design simulation-based problem-solving…

  15. The Home Point System: Token Reinforcement Procedure for Application by Parents of Children with Behavior Problems.

    ERIC Educational Resources Information Center

    Christophersen, Edward R.; And Others

    Reported parent-child problems within the home are often composed of numerous instances in which the children refuse to help with household chores, bicker among themselves or engage in verbally inappropriate behavior toward the parents. Traditional family therapy, even when long-term, has not been notably successful in ameliorating these problems.…

  16. Nonlinear singularly perturbed optimal control problems with singular arcs. [flight mechanics application

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.

    1979-01-01

Singular perturbation techniques are studied for dealing with singular arc problems by analyzing a relatively low-order but otherwise general system. This system encompasses many flight mechanics problems, including Goddard's problem and a version of the minimum time-to-climb problem. Boundary layer solutions are constructed which are stable and reach the outer solution in a finite time. A uniformly valid composite solution is then formed from the reduced and boundary layer solutions. The value of the approximate solution is that it is relatively easy to obtain and does not involve singular arcs. To illustrate the utility of the results, the technique is used to obtain an approximate solution of a simplified version of the aircraft minimum time-to-climb problem.

  17. Design and Application of Interactive Simulations in Problem-Solving in University-Level Physics Education

    NASA Astrophysics Data System (ADS)

    Ceberio, Mikel; Almudí, José Manuel; Franco, Ángel

    2016-08-01

In recent years, interactive computer simulations have been progressively integrated into the teaching of the sciences and have contributed significant improvements to the teaching-learning process. Practicing problem-solving is a key factor in science and engineering education. The aim of this study was to design simulation-based problem-solving teaching materials and assess their effectiveness in improving students' ability to solve problems in university-level physics. Firstly, we analyzed the effect of using simulation-based materials on the development of students' skills in employing procedures typically used in the scientific method of problem-solving. We found that a significant percentage of the experimental students used expert-type scientific procedures such as qualitative analysis of the problem, making hypotheses, and analysis of results. At the end of the course, only a minority of the students persisted with habits based solely on mathematical equations. Secondly, we compared the effectiveness in terms of problem-solving of the experimental group students with that of students taught conventionally. We found that the implementation of the problem-solving strategy improved the experimental students' results in obtaining a correct solution, from the academic point of view, in standard textbook problems. Thirdly, we explored students' satisfaction with the simulation-based problem-solving teaching materials and found that the majority appeared satisfied with the proposed methodology and adopted a favorable attitude toward learning problem-solving. The research was carried out among first-year Engineering Degree students.

  18. [Impact of DSM-5: Application and Problems Based on Clinical and Research Viewpoints on Anxiety Disorders].

    PubMed

    Shioiri, Toshiki

    2015-01-01

    of fears from two or more agoraphobia-related situations is now required, because this is a robust means for distinguishing agoraphobia from specific phobias. Also, the criteria for agoraphobia are now extended to be consistent with criteria sets for other anxiety disorders (e.g., a clinician's judgment of the fears as being out of proportion to the actual danger in the situation, with a typical duration of 6 months or more). From the above, these changes from DSM-IV-TR to DSM-5 in anxiety disorders make our judgments faster and more efficient in clinical practice, and DSM-5 is more useful to elucidate the pathology. In this manuscript, we discuss the application and problems based on clinical and research viewpoints regarding anxiety disorders in DSM-5. PMID:26827411

  19. Consensus properties and their large-scale applications for the gene duplication problem.

    PubMed

    Moon, Jucheol; Lin, Harris T; Eulenstein, Oliver

    2016-06-01

    Solving the gene duplication problem is a classical approach for species tree inference from gene trees that are confounded by gene duplications. This problem takes a collection of gene trees and seeks a species tree that implies the minimum number of gene duplications. Wilkinson et al. posed the conjecture that the gene duplication problem satisfies the desirable Pareto property for clusters. That is, for every instance of the problem, all clusters that are commonly present in the input gene trees of this instance, called strict consensus, will also be found in every solution to this instance. We prove that this conjecture does not generally hold. Despite this negative result we show that the gene duplication problem satisfies a weaker version of the Pareto property where the strict consensus is found in at least one solution (rather than all solutions). This weaker property contributes to our design of an efficient scalable algorithm for the gene duplication problem. We demonstrate the performance of our algorithm in analyzing large-scale empirical datasets. Finally, we utilize the algorithm to evaluate the accuracy of standard heuristics for the gene duplication problem using simulated datasets. PMID:27122201
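The strict consensus itself is cheap to compute once each gene tree is reduced to its set of clusters (the leaf-sets of internal nodes). A minimal sketch, with trees encoded as nested tuples of leaf labels (an assumed toy encoding, not the paper's data structures):

```python
def clusters(tree):
    """Collect the leaf-set (cluster) of every internal node of a tree
    given as nested tuples of leaf labels."""
    found = set()

    def walk(node):
        if isinstance(node, tuple):
            leaves = frozenset().union(*(walk(child) for child in node))
            found.add(leaves)
            return leaves
        return frozenset([node])

    walk(tree)
    return found

# Two gene trees on the same leaves, resolved differently
g1 = (("a", "b"), ("c", "d"))
g2 = ((("a", "b"), "c"), "d")

strict = clusters(g1) & clusters(g2)   # clusters present in every input tree
print(strict)  # only {a, b} and the root cluster survive
```

The paper's negative result says that a solution to the gene duplication problem need not display every cluster in `strict`; its weaker Pareto property guarantees only that some optimal species tree does.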

  20. A New Differential Evolution Algorithm and Its Application to Real Life Problems

    NASA Astrophysics Data System (ADS)

    Pant, Millie; Ali, Musrrat; Singh, V. P.

    2009-07-01

Most of the real-life problems occurring in various disciplines of science and engineering can be modeled as optimization problems. Also, most of these problems are nonlinear in nature, which requires a suitable and efficient optimization algorithm to reach an optimum value. In the past few years various algorithms have been proposed to deal with nonlinear optimization problems. Differential Evolution (DE) is a stochastic, population-based search technique, which can be classified as an Evolutionary Algorithm (EA), using the concepts of selection, crossover, and reproduction to guide the search. It has emerged as a powerful tool for solving optimization problems in the past few years. However, the convergence rate of DE still does not meet all requirements, and attempts to speed up differential evolution are considered necessary. In order to improve the performance of DE, we propose a modified DE algorithm called DEPCX, which uses a parent-centric approach to manipulate the solution vectors. The performance of DEPCX is validated on a test bed of five benchmark functions and five real-life engineering design problems. Numerical results are compared with original differential evolution (DE) and with TDE, another recently modified version of DE. Empirical analysis of the results clearly indicates the competence and efficiency of the proposed DEPCX algorithm for solving benchmark as well as real-life problems with a good convergence rate.
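For contrast with the proposed DEPCX variant, the classic DE/rand/1/bin scheme that DEPCX modifies (mutation from three distinct random vectors, binomial crossover, greedy selection) can be sketched on the sphere benchmark. The population size and control parameters below are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x**2))

dim, NP, Fw, CR, gens = 5, 20, 0.5, 0.9, 300
pop = rng.uniform(-5.0, 5.0, size=(NP, dim))
fit = np.array([sphere(x) for x in pop])

for _ in range(gens):
    for i in range(NP):
        # DE/rand/1 mutation: three distinct vectors, none equal to the target
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                size=3, replace=False)
        mutant = pop[r1] + Fw * (pop[r2] - pop[r3])
        # binomial crossover with one guaranteed mutant coordinate
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        # greedy selection: keep the trial only if it is no worse
        f_trial = sphere(trial)
        if f_trial <= fit[i]:
            pop[i], fit[i] = trial, f_trial

print(fit.min())  # close to the global minimum 0
```

DEPCX replaces the rand/1 mutation with a parent-centric recombination of the solution vectors; the crossover and selection machinery stays the same.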

  1. Problem solving in nursing practice: application, process, skill acquisition and measurement.

    PubMed

    Roberts, J D; While, A E; Fitzpatrick, J M

    1993-06-01

    This paper analyses the role of problem solving in nursing practice including the process, acquisition and measurement of problem-solving skills. It is argued that while problem-solving ability is acknowledged as critical if today's nurse practitioner is to maintain effective clinical practice, to date it retains a marginal place in nurse education curricula. Further, it has attracted limited empirical study. Such an omission, it is argued, requires urgent redress if the nursing profession is to meet effectively the challenges of the next decade and beyond.

  2. Line Spring Model and Its Applications to Part-Through Crack Problems in Plates and Shells

    NASA Technical Reports Server (NTRS)

    Erdogan, F.; Aksel, B.

    1986-01-01

    The line spring model is described and extended to cover the problem of interaction of multiple internal and surface cracks in plates and shells. The shape functions for various related crack geometries obtained from the plane strain solution and the results of some multiple crack problems are presented. The problems considered include coplanar surface cracks on the same or opposite sides of a plate, nonsymmetrically located coplanar internal elliptic cracks, and in a very limited way the surface and corner cracks in a plate of finite width and a surface crack in a cylindrical shell with fixed end.

  3. Application of the perturbation iteration method to boundary layer type problems.

    PubMed

    Pakdemirli, Mehmet

    2016-01-01

    The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm of PIA(1,1) is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation-iteration algorithm can be effectively used for solving boundary layer type problems. PMID:27026904
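The paper's PIA(1,1) iteration is not reproduced here, but the matching of inner and outer solutions into a composite expansion valid across the whole domain can be illustrated on a standard linear boundary-layer model problem (the ODE below is a textbook example chosen for the sketch, not taken from the paper).

```python
import math

eps = 0.01  # small parameter; boundary layer of width O(eps) at x = 0

# Model problem: eps*u'' + u' + u = 0, u(0) = 0, u(1) = 1.
# Outer solution: u' + u = 0 with u(1) = 1  ->  exp(1 - x).
# Inner solution at x = 0, matched to the outer limit, gives the correction.
def composite(x):
    # outer + inner - common part; the layer term decays like exp(-x/eps)
    return math.exp(1.0 - x) - math.e * math.exp(-x / eps)

# The composite honors both boundary conditions...
print(composite(0.0), composite(1.0))

# ...and leaves only an O(eps) residual in the ODE away from the layer
def residual(x, h=1e-4):
    upp = (composite(x + h) - 2 * composite(x) + composite(x - h)) / h**2
    up = (composite(x + h) - composite(x - h)) / (2 * h)
    return eps * upp + up + composite(x)

print(residual(0.5))
```

This is the structure the perturbation-iteration algorithm reconstructs: inner and outer solutions computed iteratively, then matched into a single composite expansion.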

  4. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    SciTech Connect

    Haber, Eldad

    2014-03-17

The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.

  6. Error estimation and adaptivity in finite element analysis of convective heat transfer problems. Part 2: Validation and applications

    SciTech Connect

    Franca, A.S.; Haghighi, K.

    1996-06-01

    This is the second of two articles concerning error estimation and adaptive refinement techniques applied to convective heat transfer problems. In the first article (Part 1), the development of the proposed methodology was presented. This article (Part 2) concerns the validation of the formulation. Examples dealing with heat and momentum transfer were used to verify the efficiency and accuracy of this technique. Applications include sterilization of food products and pasteurization of liquids contained in bottles. The desired accuracy level was always attained. Refined meshes agreed with the physical aspects of the problems. Results show significant improvements when compared with the conventional finite element approach.

  7. Code verification for unsteady 3-D fluid-solid interaction problems

    NASA Astrophysics Data System (ADS)

    Yu, Kintak Raymond; Étienne, Stéphane; Hay, Alexander; Pelletier, Dominique

    2015-12-01

This paper describes a procedure to synthesize Manufactured Solutions for Code Verification of an important class of Fluid-Structure Interaction (FSI) problems whose behaviors can be modeled as rigid body vibrations in incompressible fluids. We refer to this class of FSI problems as Fluid-Solid Interaction problems, which can be found in many practical engineering applications. The methodology can be utilized to develop Manufactured Solutions for both 2-D and 3-D cases. We demonstrate the procedure with our numerical code, presenting details of the formulation and methodology along with the reasoning behind our proposed approach. Results from grid and time step refinement studies confirm the verification of our solver and demonstrate the versatility of the simple synthesis procedure. In addition, the results also demonstrate that the modified decoupled approach to verifying flow problems with high-order time-stepping schemes can be employed equally well to verify code for multi-physics problems (here, those of the Fluid-Solid Interaction) when the numerical discretization is based on the Method of Lines.
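The verification workflow the paper applies to FSI solvers (manufacture a solution, derive the matching forcing term, and confirm the observed order of accuracy under refinement) can be shown on a deliberately tiny 1-D problem. This is a generic illustration of the Method of Manufactured Solutions, not the paper's FSI manufactured solutions.

```python
import numpy as np

# Manufactured solution for -u'' = f on (0, 1) with u(0) = u(1) = 0:
# choose u_m(x) = sin(pi x), which forces f(x) = pi^2 sin(pi x).
def solve(N):
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)[1:-1]            # interior nodes
    A = (np.diag(np.full(N - 1, 2.0))
         - np.diag(np.ones(N - 2), 1)
         - np.diag(np.ones(N - 2), -1)) / h**2        # central-difference Laplacian
    f = np.pi**2 * np.sin(np.pi * x)                  # manufactured forcing
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))      # error vs. manufactured solution

e_coarse, e_fine = solve(16), solve(32)
rate = np.log2(e_coarse / e_fine)
print(rate)  # observed order ~ 2, matching the scheme's formal order
```

Matching the observed rate to the formal order of the discretization is the acceptance criterion in a code-verification study; the paper's contribution is synthesizing such solutions for the coupled, time-dependent fluid-solid case.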

  8. Moose: An Open-Source Framework to Enable Rapid Development of Collaborative, Multi-Scale, Multi-Physics Simulation Tools

    NASA Astrophysics Data System (ADS)

    Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.

    2014-12-01

The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org) is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
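The "Jacobian-free" part of JFNK rests on approximating Jacobian-vector products with finite differences of the residual, so the Krylov linear solver never needs the Jacobian matrix itself. Below is a toy sketch of that trick, with a hand-rolled two-vector Krylov least-squares solve on an invented 2-by-2 nonlinear system; MOOSE itself delegates this machinery to PETSc's Newton-Krylov solvers.

```python
import numpy as np

def F(u):
    # A small nonlinear system standing in for a coupled multiphysics residual
    x, y = u
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def jv(F, u, v, h=1e-7):
    """Jacobian-vector product J(u) @ v by finite differences --
    the 'Jacobian-free' trick: no Jacobian matrix is ever formed."""
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(v)
    return (F(u + (h / nv) * v) - F(u)) * (nv / h)

u = np.array([2.0, 0.5])                         # initial guess
for _ in range(30):
    r = -F(u)
    if np.linalg.norm(r) < 1e-12:
        break
    # Krylov subspace {r, Jr} built from matrix-free products only
    V = np.column_stack([r, jv(F, u, r)])
    JV = np.column_stack([jv(F, u, V[:, 0]), jv(F, u, V[:, 1])])
    y, *_ = np.linalg.lstsq(JV, r, rcond=None)   # least-squares Krylov solve
    u = u + V @ y                                # (inexact) Newton update

print(u, np.linalg.norm(F(u)))
```

In a production JFNK solver the least-squares step is replaced by preconditioned GMRES, but the residual-only evaluation pattern is the same, which is what lets MOOSE couple physics modules implicitly without assembling cross-physics Jacobian blocks.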

  9. Integration of the DRAGON5/DONJON5 codes in the SALOME platform for performing multi-physics calculations in nuclear engineering

    NASA Astrophysics Data System (ADS)

    Hébert, Alain

    2014-06-01

    We present the computer science techniques involved in the integration of the codes DRAGON5 and DONJON5 in the SALOME platform. This integration brings new capabilities for designing multi-physics computational schemes, with the possibility to couple our reactor physics codes with thermal-hydraulics or thermo-mechanics codes from other organizations. A demonstration is presented where two code components are coupled using the YACS module of SALOME, based on the CORBA protocol. The first component is a full-core 3D steady-state neutronic calculation for a PWR performed using DONJON5. The second component implements a set of 1D thermal-hydraulics calculations, each performed over a single assembly.

  10. Multi-physics simulation and fabrication of a compact 128 × 128 micro-electro-mechanical system Fabry-Perot cavity tunable filter array for infrared hyperspectral imager.

    PubMed

    Meng, Qinghua; Chen, Sihai; Lai, Jianjun; Huang, Ying; Sun, Zhenjun

    2015-08-01

    This paper demonstrates the design and fabrication of a 128×128 micro-electro-mechanical systems Fabry-Perot (F-P) cavity filter array, which can be applied in a hyperspectral imager. To obtain better mechanical performance of the filters, the F-P cavity supporting structures are analyzed by multi-physics finite element modeling. The simulation results indicate that the Z-arm is the key component of the structure. The F-P cavity array with Z-arm structures was also fabricated. The experimental results show excellent parallelism of the bridge deck, in agreement with the simulation results. Compared to straight-arm structures, Z-arm supporting structures achieve a large tuning range and high fill factor, making them important for hyperspectral imaging systems. The filter arrays have the potential to replace the traditional dispersive element.

  11. Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem

    PubMed Central

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed model minimizes the cost of the heating system over a given life cycle. To address the particularities of the HSP problem, the general PSO algorithm was improved. An actual case study was calculated to check the approach's feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results indicate the potential to provide useful information for decisions in the practical planning process. Therefore, if applied correctly and in combination with other elements, this approach can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
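
    For readers unfamiliar with the underlying algorithm, a minimal generic PSO for continuous minimization can be sketched as follows; this is the textbook algorithm with assumed inertia and acceleration coefficients, not the paper's improved IPSO variant or its HSP cost model.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (minimization).
    f: objective taking a list of floats; bounds: list of (lo, hi) per dimension.
    w, c1, c2 are assumed inertia/cognitive/social coefficients."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # personal best positions
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # move and clamp to the feasible box
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval

best, val = pso(lambda x: sum(xi * xi for xi in x), [(-5.0, 5.0)] * 2)
```

    A real HSP application would replace the sphere objective above with the LCC cost model and add the problem-specific improvements the paper describes.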

  12. An application of computer algebra system Cadabra to scientific problems of physics

    NASA Astrophysics Data System (ADS)

    Sevastianov, L. A.; Kulyabov, D. S.; Kokotchikova, M. G.

    2009-12-01

    In this article we present two examples solved in the new problem-oriented computer algebra system Cadabra. Solving the same examples in the widespread universal computer algebra system Maple turns out to be more difficult.

  13. Application of particle swarm optimization algorithm in the heating system planning problem.

    PubMed

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed model minimizes the cost of the heating system over a given life cycle. To address the particularities of the HSP problem, the general PSO algorithm was improved. An actual case study was calculated to check the approach's feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results indicate the potential to provide useful information for decisions in the practical planning process. Therefore, if applied correctly and in combination with other elements, this approach can become a powerful and effective optimization tool for the HSP problem.

  14. Problem gambling of Chinese college students: application of the theory of planned behavior.

    PubMed

    Wu, Anise M S; Tang, Catherine So-kum

    2012-06-01

    The present study, using the theory of planned behavior (TPB), investigated psychological correlates of intention to gamble and problem gambling among Chinese college students. Nine hundred and thirty-two Chinese college students (aged 18 to 25 years) in Hong Kong and Macao were surveyed. The findings generally support the efficacy of the TPB in explaining gambling intention and problems among Chinese college students. Specifically, the results of the path analysis indicate gambling intention and perceived control over gambling as the most proximal predictors of problem gambling, whereas attitudes, subjective norms, and perceived control, which are TPB components, influence gambling intention. Thus, these three TPB components should make up the core contents of prevention and intervention efforts against problem gambling for Chinese college students. PMID:21556791

  15. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The scalability and portability of the approach are demonstrated on several parallel computers.

  16. Applications of remote sensing to estuarine problems. [estuaries of Chesapeake Bay

    NASA Technical Reports Server (NTRS)

    Munday, J. C., Jr.

    1975-01-01

    A variety of siting problems for the estuaries of the lower Chesapeake Bay have been solved with cost beneficial remote sensing techniques. Principal techniques used were repetitive 1:30,000 color photography of dye emitting buoys to map circulation patterns, and investigation of water color boundaries via color and color infrared imagery to scales of 1:120,000. Problems solved included sewage outfall siting, shoreline preservation and enhancement, oil pollution risk assessment, and protection of shellfish beds from dredge operations.

  17. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The scalability and portability of the approach are demonstrated on several parallel computers.

  18. Application of the stroboscopic method to the stellar three-body problem

    NASA Astrophysics Data System (ADS)

    Ling, J. F.

    1991-11-01

    The present study examines the three-body problem by directly integrating the equations of motion in the orbital elements. The problem was set up in a barycentric chain with the orbits as perturbed ellipses, and the integration was performed using the semianalytic stroboscopic method. Perturbation theory is applied and the solution for a particular order is obtained by way of successive approximations on the fundamental period of the fast variable.

  19. Problems with numerical techniques: Application to mid-loop operation transients

    SciTech Connect

    Bryce, W.M.; Lillington, J.N.

    1997-07-01

    There has been an increasing need to consider accidents at shutdown, which have been shown in some PSAs to provide a significant contribution to overall risk. In the UK, experience has been gained at three levels: (1) Assessment of codes against experiments; (2) Plant studies specifically for Sizewell B; and (3) Detailed review of modelling to support the plant studies for Sizewell B. The work has largely been carried out using various versions of RELAP5 and SCDAP/RELAP5. The paper details some of the problems that have needed to be addressed. It is believed by the authors that these kinds of problems are probably generic to most of the present generation of system thermal-hydraulic codes for the conditions present in mid-loop transients. Thus, as far as possible, these problems and their solutions are presented in generic terms. The areas addressed include: condensables at low pressure, poor time step calculation detection, water packing, inadequate physical modelling, numerical heat transfer and mass errors. In general, single-code modifications have been proposed to solve the problems. These have been concerned with improving existing models rather than formulating a completely new approach. They have been produced after a particular problem has arisen. Thus, and this has been borne out in practice, the danger is that when new transients are attempted, new problems arise which then also require patching.

  20. Facilitating students' application of the integral and the area under the curve concepts in physics problems

    NASA Astrophysics Data System (ADS)

    Nguyen, Dong-Hai

    This research project investigates the difficulties students encounter when solving physics problems involving the integral and the area under the curve concepts and the strategies to facilitate students learning to solve those types of problems. The research contexts of this project are calculus-based physics courses covering mechanics and electromagnetism. In phase I of the project, individual teaching/learning interviews were conducted with 20 students in mechanics and 15 students from the same cohort in electromagnetism. The students were asked to solve problems on several topics of mechanics and electromagnetism. These problems involved calculating physical quantities (e.g. velocity, acceleration, work, electric field, electric resistance, electric current) by integrating or finding the area under the curve of functions of related quantities (e.g. position, velocity, force, charge density, resistivity, current density). Verbal hints were provided when students made an error or were unable to proceed. A total of 140 one-hour interviews were conducted in this phase, which provided insights into students' difficulties when solving the problems involving the integral and the area under the curve concepts and the hints to help students overcome those difficulties. In phase II of the project, tutorials were created to facilitate students' learning to solve physics problems involving the integral and the area under the curve concepts. Each tutorial consisted of a set of exercises and a protocol that incorporated the helpful hints to target the difficulties that students expressed in phase I of the project. Focus group learning interviews were conducted to test the effectiveness of the tutorials in comparison with standard learning materials (i.e. textbook problems and solutions). Overall results indicated that students learning with our tutorials outperformed students learning with standard materials in applying the integral and the area under the curve concepts.
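
    The core skill targeted here, computing a physical quantity as the area under a curve, can be sketched numerically; the spring-force example and constants below are illustrative assumptions, not taken from the tutorials.

```python
import numpy as np

# Work done by a spring-like force F(x) = k*x over [0, 1] m:
# W = integral of F dx, i.e. the area under the force-position curve.
k = 2.0                         # assumed spring constant, N/m
x = np.linspace(0.0, 1.0, 101)  # position samples, m
F = k * x                       # force at each position, N
# trapezoidal rule: sum of trapezoid areas between consecutive samples
work = np.sum((F[1:] + F[:-1]) / 2.0 * np.diff(x))
# analytically, W = k * x_max**2 / 2 = 1.0 J
```

    Because the sampled force is linear in position, the trapezoidal rule here reproduces the analytic answer essentially exactly.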

  1. Case studies: Application of SEA in provincial level expressway infrastructure network planning in China - Current existing problems

    SciTech Connect

    Zhou Kaiyi; Sheate, William R.

    2011-11-15

    Since the Law of the People's Republic of China on Environmental Impact Assessment was enacted in 2003 and Huanfa 2004 No. 98 was released in 2004, Strategic Environmental Assessment (SEA) has officially been implemented in the expressway infrastructure planning field in China. Through scrutinizing two SEA application cases of China's provincial level expressway infrastructure (PLEI) network plans, it is found that current SEA practice in the expressway infrastructure planning field has a number of problems: SEA practitioners do not fully understand the objective of SEA; its potential contributions to strategic planning and decision-making are extremely limited; the employed application procedure and prediction and assessment techniques are too simple to produce objective, unbiased and scientific results; and no alternative options are considered. All these problems directly lead to poor-quality SEA and consequently weaken SEA's effectiveness.

  2. Multi-product newsvendor problem with hybrid demand and its applications to ordering pharmaceutical reference standard materials

    NASA Astrophysics Data System (ADS)

    Wang, Dan; Qin, Zhongfeng

    2016-04-01

    Uncertainty is inherent in the newsvendor problem. Most of the existing literature is devoted to characterizing the uncertainty either by randomness or by fuzziness. However, in many cases, randomness and fuzziness simultaneously appear in the same problem. Motivated by this observation, we investigate the multi-product newsvendor problem by considering the demands as hybrid variables which are proposed to describe quantities with double uncertainties. According to the expected value criterion, we formulate an expected profit maximization model and convert it to a deterministic form when the chance distributions are given. We discuss two special cases of hybrid variable demands and give their chance distributions. Then we design hybrid simulation to estimate the chance distribution and use genetic algorithm to solve the proposed models. Finally, we proceed to present numerical examples of purchasing pharmaceutical reference standard materials to illustrate the applicability of our methodology and the effectiveness of genetic algorithm.
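
    Setting aside the fuzzy component, the expected-value criterion for the multi-product newsvendor can be sketched with plain Monte Carlo simulation; the function below treats demand as purely random, and is not the paper's hybrid-simulation or genetic-algorithm machinery.

```python
import random

def expected_profit(order, price, cost, salvage, demand_sampler, n=20000, seed=1):
    """Monte Carlo estimate of expected multi-product newsvendor profit.
    order, price, cost, salvage: per-product lists.
    demand_sampler(rng) -> list of sampled demands, one per product."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        demands = demand_sampler(rng)
        for q, p, c, s, d in zip(order, price, cost, salvage, demands):
            sold = min(q, d)
            # revenue from sales + salvage of leftovers - purchase cost
            total += p * sold + s * (q - sold) - c * q
    return total / n

# Single product, selling price 2, unit cost 1, demand uniform on [0, 100]:
# expected profit of ordering q is q - q**2/100, maximized at q = 50 (profit 25).
est = expected_profit([50.0], [2.0], [1.0], [0.0],
                      lambda rng: [rng.uniform(0.0, 100.0)])
```

    In the paper's setting, the sampler would draw hybrid (random plus fuzzy) demands and a genetic algorithm would search over the order vector.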

  3. An efficient computational method for solving nonlinear stochastic Itô integral equations: Application for stochastic problems in physics

    SciTech Connect

    Heydari, M.H.; Hooshmandasl, M.R.; Cattani, C.; Maalek Ghaini, F.M.

    2015-02-15

    Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and also the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of the stochastic population growth models and stochastic pendulum problem.
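
    As a point of comparison for the stochastic population growth application, a direct Euler-Maruyama simulation of a multiplicative-noise growth model can be sketched as follows; the parameters are illustrative assumptions, and this is not the paper's hat-function operational-matrix method.

```python
import math
import random

def simulate_gbm(x0, mu, sigma, T, steps, n_paths, seed=0):
    """Euler-Maruyama sample paths for dX = mu*X dt + sigma*X dW,
    a simple stochastic population-growth (geometric Brownian) model."""
    rng = random.Random(seed)
    dt = T / steps
    finals = []
    for _ in range(n_paths):
        x = x0
        for _ in range(steps):
            dW = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
            x += mu * x * dt + sigma * x * dW
        finals.append(x)
    return finals

finals = simulate_gbm(1.0, 0.05, 0.2, 1.0, 100, 5000)
mean_T = sum(finals) / len(finals)   # close to the exact mean exp(mu*T)
```

    For this model the exact mean at time T is x0*exp(mu*T), which gives a quick sanity check on the discretization and sampling.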

  4. High Order Discontinuous Galerkin Methods for Convection Dominated Problems with Application to Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    2000-01-01

    This project investigates the development of discontinuous Galerkin finite element methods, for general geometry and triangulations, for solving convection-dominated problems, with applications to aeroacoustics. On the analysis side, we have studied the efficient and stable discontinuous Galerkin framework for small second-derivative terms, for example in the Navier-Stokes equations, and also for related equations such as the Hamilton-Jacobi equations. This is a truly local discontinuous formulation where derivatives are considered as new variables. On the applied side, we have implemented and tested the efficiency of different approaches numerically. Related issues in high order ENO and WENO finite difference methods and spectral methods have also been investigated. Jointly with Hu, we have presented a discontinuous Galerkin finite element method for solving the nonlinear Hamilton-Jacobi equations. This method is based on the Runge-Kutta discontinuous Galerkin finite element method for solving conservation laws. The method has the flexibility of treating complicated geometry by using arbitrary triangulation, can achieve high order accuracy with a local, compact stencil, and is suited for efficient parallel implementation. One and two dimensional numerical examples are given to illustrate the capability of the method. Jointly with Hu, we have constructed third and fourth order WENO schemes on two dimensional unstructured meshes (triangles) in the finite volume formulation. The third order schemes are based on a combination of linear polynomials with nonlinear weights, and the fourth order schemes are based on a combination of quadratic polynomials with nonlinear weights. We have addressed several difficult issues associated with high order WENO schemes on unstructured mesh, including the choice of linear and nonlinear weights, what to do with negative weights, etc. Numerical examples are shown to demonstrate the accuracy and robustness of the schemes.

  5. Assessing trail conditions in protected areas: Application of a problem-assessment method in Great Smoky Mountains National Park, USA

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.

    1999-01-01

    The degradation of trail resources associated with expanding recreation and tourism visitation is a growing management problem in protected areas worldwide. In order to make judicious trail and visitor management decisions, protected area managers need objective and timely information on trail resource conditions. This paper introduces a trail survey method that efficiently characterizes the lineal extent of common trail problems. The method was applied to a large sample of trails within Great Smoky Mountains National Park, a high-use protected area in the USA. The Trail Problem-Assessment Method (TPAM) employs a continuous search for multiple indicators of predefined tread problems, yielding census data documenting the location, occurrence and extent of each problem. The present application employed 23 different indicators in three categories to gather inventory, resource condition, and design and maintenance data of each surveyed trail. Seventy-two backcountry hiking trails (528 km), or 35% of the Park's total trail length, were surveyed. Soil erosion and wet soil were found to be the two most common impacts on a lineal extent basis. Trails with serious tread problems were well distributed throughout the Park, although wet muddy treads tended to be concentrated in areas where horse use was high. The effectiveness of maintenance features installed to divert water from trail treads was also evaluated. Water bars were found to be more effective than drainage dips. The TPAM was able to provide Park managers with objective and quantitative information for use in trail planning, management and maintenance decisions, and is applicable to other protected areas elsewhere with different environmental and impact characteristics.

  6. Application of the Biot model to ultrasound in bone: inverse problem.

    PubMed

    Sebaa, N; Fellah, Z A; Fellah, M; Ogam, E; Mitri, F G; Depollier, C; Lauriks, W

    2008-07-01

    This paper concerns the ultrasonic characterization of human cancellous bone samples by solving the inverse problem using experimentally measured signals. The inverse problem is solved numerically by the least squares method. Five parameters are inverted: porosity, tortuosity, viscous characteristic length, Young modulus, and Poisson ratio of the skeletal frame. The minimization of the discrepancy between experiment and theory is made in the time domain. The ultrasonic propagation in cancellous bone is modelled using the Biot theory modified by the Johnson-Koplik-Dashen model for viscous exchange between fluid and structure. The sensitivity of the Young modulus and the Poisson ratio of the skeletal frame is studied showing their effect on the fast and slow waveforms. The inverse problem is shown to be well posed, and its solution to be unique. Experimental results for slow and fast waves transmitted through human cancellous bone samples are given and compared with theoretical predictions.
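
    The least-squares inversion step can be sketched on a synthetic example; the decaying-sinusoid "signal" and its three parameters below are illustrative stand-ins for the five bone parameters, not the paper's Biot model.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic time-domain inversion in the same spirit: recover the parameters
# of a decaying sinusoidal "transmitted signal" by minimizing the discrepancy
# between model and observation in the time domain.
t = np.linspace(0.0, 1.0, 200)
p_true = np.array([1.0, 3.0, 12.0])      # amplitude, decay rate, angular frequency

def model(p, t):
    a, d, w = p
    return a * np.exp(-d * t) * np.sin(w * t)

rng = np.random.default_rng(0)
observed = model(p_true, t) + 0.01 * rng.standard_normal(t.size)

fit = least_squares(lambda p: model(p, t) - observed, x0=[0.8, 2.0, 11.0])
```

    As in the paper, a good initial guess matters: starting too far from the true parameters (here especially the frequency) can leave the solver in a local minimum.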

  7. The Backus-Gilbert method and its application to the electrical conductivity problem

    NASA Technical Reports Server (NTRS)

    Parker, R. L.

    1972-01-01

    The theory of Backus and Gilbert gives a technique for solving the general linear inverse problem. Observational error and lack of data are shown to reduce the reliability of the solution in different ways: the former introduces statistical uncertainties in the model, while the latter smooths out the detail. Precision can be improved by sacrificing resolving power, and vice versa, so that some compromise may be made between the two in choosing the best model. Nonlinear inverse problems can be brought into the domain of the theory by linearizing about a typical solution. The inverse problem of electrical conductivity in the mantle is used to illustrate the Backus-Gilbert technique; an example of the tradeoff diagram is given.

  8. Predictive models based on sensitivity theory and their application to practical shielding problems

    SciTech Connect

    Bhuiyan, S.I.; Roussin, R.W.; Lucius, J.L.; Bartine, D.E.

    1983-01-01

    Two new calculational models based on the use of cross-section sensitivity coefficients have been devised for calculating radiation transport in relatively simple shields. The two models, one an exponential model and the other a power model, have been applied, together with the traditional linear model, to 1- and 2-m-thick concrete-slab problems in which the water content, reinforcing-steel content, or composition of the concrete was varied. Comparing the results obtained with the three models with those obtained from exact one-dimensional discrete-ordinates transport calculations indicates that the exponential model, named the BEST model (for basic exponential shielding trend), is a particularly promising predictive tool for shielding problems dominated by exponential attenuation. When applied to a deep-penetration sodium problem, the BEST model also yields better results than do calculations based on second-order sensitivity theory.

  9. The linearized characteristics method and its application to practical nonlinear supersonic problems

    NASA Technical Reports Server (NTRS)

    Ferri, Antonio

    1952-01-01

    The method of characteristics has been linearized by assuming that the flow field can be represented as a basic flow field determined by nonlinearized methods and a linearized superposed flow field that accounts for small changes of boundary conditions. The method has been applied to two-dimensional rotational flow where the basic flow is potential flow and to axially symmetric problems where conical flows have been used as the basic flows. In both cases the method allows the determination of the flow field to be simplified and the numerical work to be reduced to a few calculations. The calculations of axially symmetric flow can be simplified if tabulated values of some coefficients of the conical flow are obtained. The method has also been applied to slender bodies without symmetry and to some three-dimensional wing problems where two-dimensional flow can be used as the basic flow. Both problems were unsolved before in the approximation of nonlinear flow.

  10. Inverse problems for abstract evolution equations with applications in electrodynamics and elasticity

    NASA Astrophysics Data System (ADS)

    Kirsch, Andreas; Rieder, Andreas

    2016-08-01

    It is common knowledge, mainly based on experience, that parameter identification problems in partial differential equations are ill-posed. Yet a mathematically sound argument is missing, except for some special cases. We present a general theory for inverse problems related to abstract evolution equations which explains not only their local ill-posedness but also provides the Fréchet derivative and its adjoint of the corresponding parameter-to-solution map, which are needed, e.g., in Newton-like solvers. Our abstract results are applied to inverse problems related to the following first order hyperbolic systems: Maxwell's equation (electromagnetic scattering in conducting media) and the elastic wave equation (seismic imaging).

  11. Boundary value problem for the solution of magnetic cutoff rigidities and some special applications

    NASA Technical Reports Server (NTRS)

    Edmonds, Larry

    1987-01-01

    Since a planet's magnetic field can sometimes provide a spacecraft with some protection against cosmic ray and solar flare particles, it is important to be able to quantify this protection. This is done by calculating cutoff rigidities. An alternative to the conventional method (particle trajectory tracing) is introduced, which is to treat the problem as a boundary value problem. In this approach, trajectory tracing is only needed to supply boundary conditions. In some special cases, trajectory tracing is not needed at all because the problem can be solved analytically. A differential equation governing cutoff rigidities is derived for static magnetic fields. The presence of solid objects (which can block a trajectory) and of other force fields is not included. A few qualitative comments on the existence and uniqueness of solutions are made, which may be useful when deciding how the boundary conditions should be set up. Topics on axially symmetric fields are also included.

  12. Applications of elliptic operator theory to the isotropic interior transmission eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Lakshtanov, E.; Vainberg, B.

    2013-10-01

    The paper concerns the isotropic interior transmission eigenvalue (ITE) problem. This problem is not elliptic, but we show that, using the Dirichlet-to-Neumann map, it can be reduced to an elliptic one. This leads to the discreteness of the spectrum as well as to certain results on a possible location of the transmission eigenvalues. If the index of refraction √n(x) is real, then we obtain a result on the existence of infinitely many positive ITEs and the Weyl-type lower bound on its counting function. All the results are obtained under the assumption that n(x) - 1 does not vanish at the boundary of the obstacle, or it vanishes identically but its normal derivative does not vanish at the boundary. We consider the classical transmission problem as well as the case when the inhomogeneous medium contains an obstacle. Some results on the discreteness and localization of the spectrum are obtained for complex-valued n(x).

  13. Chebyshev polynomials in the spectral Tau method and applications to Eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Johnson, Duane

    1996-01-01

    Chebyshev spectral methods have received much attention recently as a technique for the rapid solution of ordinary differential equations. This technique also works well for solving linear eigenvalue problems. Specific detail is given to the properties and algebra of Chebyshev polynomials, the use of Chebyshev polynomials in spectral methods, and the recurrence relationships that are developed. These formulas and equations are then applied to several examples which are worked out in detail. The appendix contains an example FORTRAN program used in solving an eigenvalue problem.
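
    A small example of the kind of eigenvalue computation described, here via Chebyshev collocation (a differentiation-matrix approach) rather than the Tau method, can be sketched as follows; the grid size is an illustrative choice.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points
    x_j = cos(pi*j/N), following the classic spectral-collocation construction."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # fix the diagonal so row sums vanish
    return D, x

# Eigenvalue problem u'' = lambda*u on [-1, 1] with u(-1) = u(1) = 0;
# the exact eigenvalues are -(k*pi/2)**2 for k = 1, 2, ...
D, x = cheb(24)
D2 = (D @ D)[1:-1, 1:-1]                 # drop boundary rows/cols (Dirichlet)
evals = np.sort(np.linalg.eigvals(D2).real)[::-1]
```

    As is typical of spectral methods, the smallest-magnitude eigenvalues converge extremely fast with N, while the largest-magnitude ones are spurious discretization artifacts.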

  14. The potential application of the blackboard model of problem solving to multidisciplinary design

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.

    1989-01-01

    Problems associated with the sequential approach to multidisciplinary design are discussed. A blackboard model is suggested as a potential tool for implementing the multilevel decomposition approach to overcome these problems. The blackboard model serves as a global database for the solution with each discipline acting as a knowledge source for updating the solution. With this approach, it is possible for engineers to improve the coordination, communication, and cooperation in the conceptual design process, allowing them to achieve a more optimal design from an interdisciplinary standpoint.
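
    A minimal sketch of the blackboard idea, with two hypothetical disciplinary knowledge sources iterating to a consistent design, might look like this; the wing-loading and weight models are invented for illustration.

```python
class Blackboard:
    """Toy blackboard: a shared solution store plus registered knowledge
    sources (one per discipline) that update it until nothing changes."""

    def __init__(self, **initial):
        self.data = dict(initial)
        self.sources = []

    def register(self, source):
        self.sources.append(source)

    def run(self, max_cycles=100):
        for _ in range(max_cycles):
            changed = False
            for source in self.sources:
                changed |= bool(source(self.data))
            if not changed:          # quiescence: no discipline has more to add
                break
        return self.data

def aero(bb):
    """Knowledge source: size the wing from the current weight estimate."""
    new = bb['weight'] / 100.0              # assumed wing loading of 100
    if abs(bb.get('wing_area', 0.0) - new) > 1e-9:
        bb['wing_area'] = new
        return True
    return False

def structures(bb):
    """Knowledge source: re-estimate weight from the current wing area."""
    new = 1000.0 + 20.0 * bb['wing_area']   # assumed structural weight model
    if abs(bb['weight'] - new) > 1e-9:
        bb['weight'] = new
        return True
    return False

board = Blackboard(weight=1200.0)
board.register(aero)
board.register(structures)
result = board.run()
```

    The fixed point here (weight 1250, wing area 12.5) is reached by the disciplines reading and writing the shared store, with no direct calls between them, which is the coordination benefit the blackboard model offers.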

  15. Neural network for solving Nash equilibrium problem in application of multiuser power control.

    PubMed

    He, Xing; Yu, Junzhi; Huang, Tingwen; Li, Chuandong; Li, Chaojie

    2014-09-01

    In this paper, based on an equivalent mixed linear complementarity problem, we propose a neural network to solve multiuser power control optimization problems (MPCOP), which are modeled as a noncooperative Nash game in modern digital subscriber line (DSL). If the channel crosstalk coefficients matrix is positive semidefinite, it is shown that the proposed neural network is stable in the sense of Lyapunov and globally convergent to a Nash equilibrium, and that the Nash equilibrium is unique if the channel crosstalk coefficients matrix is positive definite. Finally, simulation results on two numerical examples show the effectiveness and performance of the proposed neural network.

  16. Application of the Thurston bifurcation solution strategy to problems with modal interaction

    NASA Technical Reports Server (NTRS)

    Rankin, C. C.; Brogan, F. A.

    1988-01-01

    The solution of bifurcation problems with closely-spaced critical points is achieved by first separating the singular part of the equation system encountered during a Newton iteration and carrying the Taylor expansion of the reduced system out to higher order. This separation is accomplished by transforming the equation system into an equivalent system in which some of the original unknowns are replaced with an equal number of modal amplitude coefficients. This method was used to continue the analysis of two significant example problems well past multiple bifurcation points, allowing a detailed examination of postbuckling behavior in the presence of modal interaction.

  17. Application of a novel finite difference method to dynamic crack problems

    NASA Technical Reports Server (NTRS)

    Chen, Y. M.; Wilkins, M. L.

    1976-01-01

    A versatile finite difference method (HEMP and HEMP 3D computer programs) was developed originally for solving dynamic problems in continuum mechanics. It was extended to analyze the stress field around cracks in a solid with finite geometry subjected to dynamic loads and to simulate numerically the dynamic fracture phenomena with success. This method is an explicit finite difference method applied to the Lagrangian formulation of the equations of continuum mechanics in two and three space dimensions and time. The calculational grid moves with the material and in this way it gives a more detailed description of the physics of the problem than the Eulerian formulation.

  18. Applications of Magnetic Suspension Technology to Large Scale Facilities: Progress, Problems and Promises

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.

    1997-01-01

    This paper will briefly review previous work in wind tunnel Magnetic Suspension and Balance Systems (MSBS) and will examine the handful of systems around the world currently known to be in operational condition or undergoing recommissioning. Technical developments emerging from research programs at NASA and elsewhere will be reviewed briefly, where there is potential impact on large-scale MSBSs. The likely aerodynamic applications for large MSBSs will be addressed, since these applications should properly drive system designs. A recently proposed application to ultra-high Reynolds number testing will then be addressed in some detail. Finally, some opinions on the technical feasibility and usefulness of a large MSBS will be given.

  19. Applications of Taylor-Galerkin finite element method to compressible internal flow problems

    NASA Technical Reports Server (NTRS)

    Sohn, Jeong L.; Kim, Yongmo; Chung, T. J.

    1989-01-01

    A two-step Taylor-Galerkin finite element method with Lapidus' artificial viscosity scheme is applied to several test cases for internal compressible inviscid flow problems. Investigations for the effect of supersonic/subsonic inlet and outlet boundary conditions on computational results are particularly emphasized.

  20. On one modification of traveling salesman problem oriented on application in atomic engineering

    SciTech Connect

    Chentsov, A. G.; Sesekin, A. N.; Shcheklein, S. E.; Tashlykov, O. L.

    2010-10-25

    A mathematical model is considered for minimizing the radiation dose received by personnel dismantling a decommissioned nuclear power plant unit. The elements of the unit are dismantled sequentially: having finished one element, the work brigade moves on to the next. Restrictions are imposed on the order of the works; for certain pairs of works, the second work cannot be executed before the first. The problem resembles the classical traveling salesman problem, with the difference that the cost function depends on the list of outstanding works, and the sequence of works and the corresponding movements are subject to precedence constraints. A variant of the dynamic programming method is developed for this problem, and the corresponding software has been created.
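
    The precedence-constrained, state-dependent-cost structure described above can be sketched with a small bitmask dynamic program. All task times, dose rates, and the single precedence constraint are invented toy data; for illustration, the cost of a job is modeled as its duration times the total dose rate of the components still standing.

```python
from functools import lru_cache

# Toy instance (all numbers invented): per-element dismantling time and
# the dose rate each element emits while it is still in place.
times = [2.0, 1.0, 3.0]
rates = [0.5, 1.0, 0.2]
prec = {2: {0}}            # element 2 may only be dismantled after element 0
N = len(times)
FULL = (1 << N) - 1

@lru_cache(maxsize=None)
def best(done):
    """Minimum residual dose to dismantle every element not in bitmask
    `done`; the cost of a job depends on what is still standing."""
    if done == FULL:
        return 0.0
    finished = {k for k in range(N) if done >> k & 1}
    standing_rate = sum(rates[k] for k in range(N) if k not in finished)
    res = float("inf")
    for j in range(N):
        if j in finished or not prec.get(j, set()) <= finished:
            continue   # already done, or precedence not yet satisfied
        res = min(res, times[j] * standing_rate + best(done | 1 << j))
    return res

print(best(0))
```

    Unlike the classical traveling salesman recursion, the per-step cost here is a function of the whole set of outstanding works, which is exactly why the state must be the subset rather than just the last visited element.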

  1. Silverton Conference on Applications of the Zero Gravity Space Shuttle Environment to Problems in Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Eisner, M. (Editor)

    1974-01-01

    The possible utilization of the zero gravity resource for studies in a variety of fluid dynamics and fluid-dynamic related problems was investigated. A group of experiments are discussed and described in detail; these include experiments in the areas of geophysical fluid models, fluid dynamics, mass transfer processes, electrokinetic separation of large particles, and biophysical and physiological areas.

  2. Allied Health Applications Integrated into Developmental Mathematics Using Problem Based Learning

    ERIC Educational Resources Information Center

    Shore, Mark; Shore, JoAnna; Boggs, Stacey

    2004-01-01

    For this FIPSE funded project, mathematics faculty attended allied health classes and allied health faculty attended developmental mathematics courses to incorporate health examples into the developmental mathematics curriculum. Through the course of this grant a 450-page developmental mathematics book was written with many problems from a variety…

  3. Exploring the Pareto frontier using multisexual evolutionary algorithms: an application to a flexible manufacturing problem

    NASA Astrophysics Data System (ADS)

    Bonissone, Stefano R.; Subbu, Raj

    2002-12-01

    In multi-objective optimization (MOO) problems, we need to optimize many possibly conflicting objectives. For instance, in manufacturing planning we might want to minimize the cost and production time while maximizing the product's quality. We propose the use of evolutionary algorithms (EAs) to solve these problems. Solutions are represented as individuals in a population and are assigned scores according to a fitness function that determines their relative quality. Strong solutions are selected for reproduction, and pass their genetic material to the next generation. Weak solutions are removed from the population. The fitness function evaluates each solution and returns a related score. In MOO problems, this fitness function is vector-valued, i.e., it returns a value for each objective. Therefore, instead of a global optimum, we try to find the Pareto-optimal or non-dominated frontier. We use multi-sexual EAs with as many genders as optimization criteria. We have created new crossover and gender assignment functions, and experimented with various parameters to determine the best setting (yielding the highest number of non-dominated solutions). These experiments are conducted using a variety of fitness functions, and the algorithms are later evaluated on a flexible manufacturing problem with total cost and time minimization objectives.
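
    The non-dominated (Pareto) frontier that such algorithms target can be computed for a small set of objective vectors with a direct dominance filter; the (cost, time) pairs below are illustrative.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (cost, time) pairs for candidate manufacturing plans -- invented data
plans = [(4, 2), (3, 5), (5, 1), (6, 6), (3, 4)]
print(pareto_front(plans))
```

    This quadratic-time filter is fine for small populations; EA implementations typically use faster non-dominated sorting, but the dominance relation itself is exactly the one shown.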

  4. Application of DOT-MORSE coupling to the analysis of three-dimensional SNAP shielding problems

    NASA Technical Reports Server (NTRS)

    Straker, E. A.; Childs, R. L.; Emmett, M. B.

    1972-01-01

    The use of discrete ordinates and Monte Carlo techniques to solve radiation transport problems is discussed. A general discussion of two possible coupling schemes is given for the two methods. The calculation of the reactor radiation scattered from a docked service and command module is used as an example of coupling discrete ordinates (DOT) and Monte Carlo (MORSE) calculations.

  5. An Application of the Patient-Oriented Problem-Solving (POPS) System.

    ERIC Educational Resources Information Center

    Chiodo, Gary T.; And Others

    1991-01-01

    The Patient-Oriented Problem-Solving System, a cooperative learning model, was implemented in a second year immunology course at the Oregon Health Sciences University School of Dentistry, to correlate basic and clinical sciences information about Acquired Immune Deficiency Syndrome. Student enthusiasm and learning were substantial. (MSE)

  6. Problem-Based Learning in Graduate Management Education: An Integrative Model and Interdisciplinary Application

    ERIC Educational Resources Information Center

    Brownell, Judi; Jameson, Daphne A.

    2004-01-01

    This article develops a model of problem-based learning (PBL) and shows how PBL has been used for a decade in one graduate management program. PBL capitalizes on synergies among cognitive, affective, and behavioral learning. Although management education usually privileges cognitive learning, affective learning is equally important. By focusing on…

  7. Application of fuzzy theories to formulation of multi-objective design problems. [for helicopters

    NASA Technical Reports Server (NTRS)

    Dhingra, A. K.; Rao, S. S.; Miura, H.

    1988-01-01

    Much of the decision making in the real world takes place in an environment in which the goals, the constraints, and the consequences of possible actions are not known precisely. In order to deal with imprecision quantitatively, the tools of fuzzy set theory can be used. This paper demonstrates the effectiveness of fuzzy theories in the formulation and solution of two types of helicopter design problems involving multiple objectives. The first problem deals with the determination of optimal flight parameters to accomplish a specified mission in the presence of three competing objectives. The second problem addresses the optimal design of the main rotor of a helicopter involving eight objective functions. A method of solving these multi-objective problems using nonlinear programming techniques is presented. Results obtained using the fuzzy formulation are compared with those obtained using crisp optimization techniques. The outlined procedures are expected to be useful in situations where doubt arises about the exactness of permissible values, degree of credibility, and correctness of statements and judgements.
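
    One common fuzzy formulation of such multi-objective problems assigns each objective a membership (satisfaction) function and selects the design maximizing the smallest membership, the Bellman-Zadeh max-min rule. A minimal sketch with invented rotor-design numbers; the designs, bounds, and objectives below are illustrative, not from the paper.

```python
def membership(value, worst, best):
    """Linear membership: 0 at the worst acceptable value, 1 at the best
    (works whether the objective is minimized or maximized)."""
    t = (value - worst) / (best - worst)
    return max(0.0, min(1.0, t))

# Two candidate designs scored on weight (minimize) and fatigue life
# (maximize) -- all numbers invented for illustration.
designs = {
    "A": {"weight": 120.0, "life": 9000.0},
    "B": {"weight": 100.0, "life": 7000.0},
}
bounds = {"weight": (150.0, 90.0),     # (worst, best): lighter is better
          "life":   (5000.0, 10000.0)}

def overall(d):
    # max-min aggregation: overall satisfaction = weakest objective
    return min(membership(d[k], *bounds[k]) for k in bounds)

best_design = max(designs, key=lambda name: overall(designs[name]))
print(best_design, overall(designs[best_design]))
```

    Design B is better on weight but its poor fatigue-life membership drags its minimum down, so the max-min rule prefers the more balanced design A.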

  8. Large Context Problems and Their Applications to Education: Some Contemporary Examples

    ERIC Educational Resources Information Center

    Winchester, Ian

    2006-01-01

    Some 35 years ago, Gerard K. O'Neill used the large context of space travel with his undergraduate physics students. A Canadian physics teacher, Art Stinner, independently arrived at a similar notion in a more limited but, therefore, more generally useful sense, which he referred to as the "large context problem" approach. At a slightly earlier…

  9. Understanding Problems Faced by Classroom Teachers: An Application of Q-Methodology.

    ERIC Educational Resources Information Center

    Rivera, Deborah B.

    This paper examines the effects of two types of data collection strategies on Q-technique factor analysis. The subjects (classroom teachers) were divided into two groups: Group 1 (n=21) and Group 2 (n=23). The subjects responded to an instrument designed to measure the degree to which certain aspects of teaching are viewed as problems; the…

  10. Concurrent Reinforcement Schedules for Problem Behavior and Appropriate Behavior: Experimental Applications of the Matching Law

    ERIC Educational Resources Information Center

    Borrero, Carrie S. W.; Vollmer, Timothy R.; Borrero, John C.; Bourret, Jason C.; Sloman, Kimberly N.; Samaha, Andrew L.; Dallery, Jesse

    2010-01-01

    This study evaluated how children who exhibited functionally equivalent problem and appropriate behavior allocate responding to experimentally arranged reinforcer rates. Relative reinforcer rates were arranged on concurrent variable-interval schedules and effects on relative response rates were interpreted using the generalized matching equation.…

  11. Applicability domains for classification problems: benchmarking of distance to models for AMES mutagenicity set

    EPA Science Inventory

    For QSAR and QSPR modeling of biological and physicochemical properties, estimating the accuracy of predictions is a critical problem. The “distance to model” (DM) can be defined as a metric that defines the similarity between the training set molecules and the test set compound ...

  12. Learner Perspectives of Online Problem-Based Learning and Applications from Cognitive Load Theory

    ERIC Educational Resources Information Center

    Chen, Ruth

    2016-01-01

    Problem-based learning (PBL) courses have historically been situated in physical classrooms involving in-person interactions. As online learning is embraced in higher education, programs that use PBL can integrate online platforms to support curriculum delivery and facilitate student engagement. This report describes student perspectives of the…

  13. Teacher-Designed Software for Interactive Linear Equations: Concepts, Interpretive Skills, Applications & Word-Problem Solving.

    ERIC Educational Resources Information Center

    Lawrence, Virginia

    No longer just a user of commercial software, the 21st century teacher is a designer of interactive software based on theories of learning. This software, a comprehensive study of straightline equations, enhances conceptual understanding, sketching, graphic interpretive and word problem solving skills as well as making connections to real-life and…

  14. Computer-Based Assessment of Complex Problem Solving: Concept, Implementation, and Application

    ERIC Educational Resources Information Center

    Greiff, Samuel; Wustenberg, Sascha; Holt, Daniel V.; Goldhammer, Frank; Funke, Joachim

    2013-01-01

    Complex Problem Solving (CPS) skills are essential to successfully deal with environments that change dynamically and involve a large number of interconnected and partially unknown causal influences. The increasing importance of such skills in the 21st century requires appropriate assessment and intervention methods, which in turn rely on adequate…

  15. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life.
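
    The tabu search component of such a hybrid can be sketched in miniature, assuming a bit-string encoding, a single-bit-flip neighbourhood, a fixed tabu tenure, and the standard aspiration criterion; the toy objective (Hamming distance to a hidden pattern) is purely illustrative and is not the paper's reliability model.

```python
import random

def tabu_search(f, n_bits, iters=200, tenure=5, seed=0):
    """Minimal tabu search over bit strings: the best single-bit-flip
    neighbour is taken each step, recently flipped bits are tabu unless
    the move improves on the best solution found (aspiration)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_val = x[:], f(x)
    tabu = {}                      # bit index -> iteration it stays tabu until
    for it in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            v = f(y)
            if tabu.get(i, -1) < it or v < best_val:   # aspiration criterion
                candidates.append((v, i, y))
        v, i, x = min(candidates)  # best admissible move, even if worsening
        tabu[i] = it + tenure
        if v < best_val:
            best, best_val = x[:], v
    return best, best_val

# toy objective: Hamming distance from a hidden target bit pattern
target = [1, 0, 1, 1, 0, 1, 0, 0]
f = lambda x: sum(a != b for a, b in zip(x, target))
print(tabu_search(f, len(target)))
```

    Accepting the best admissible move even when it worsens the objective, while the tabu list blocks immediate reversals, is what lets the method escape the local optima that trap plain hill climbing.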

  16. An Application of the Difference Potentials Method to Solving External Problems in CFD

    NASA Technical Reports Server (NTRS)

    Ryaben'kii, Victor S.; Tsynkov, Semyon V.

    1997-01-01

    Numerical solution of infinite-domain boundary-value problems requires some special techniques to make the problem amenable to treatment on the computer. Indeed, the problem must be discretized in such a way that the computer operates with only a finite amount of information. Therefore, the original infinite-domain formulation must be altered and/or augmented so that, on one hand, the solution is not changed (or changed only slightly) and, on the other hand, a finite discrete formulation becomes available. One widely used approach to constructing such discretizations consists of truncating the unbounded original domain and then setting artificial boundary conditions (ABC's) at the newly formed external boundary. The role of the ABC's is to close the truncated problem and at the same time to ensure that the solution found inside the finite computational domain is maximally close to (in the ideal case, exactly the same as) the corresponding fragment of the original infinite-domain solution. Let us emphasize that the proper treatment of artificial boundaries may have a profound impact on the overall quality and performance of numerical algorithms. The latter statement is corroborated by numerous computational experiments and especially concerns the area of CFD, in which external problems present a wide class of practically important formulations. In this paper, we review some work that has been done over recent years on constructing highly accurate nonlocal ABC's for calculation of compressible external flows. The approach is based on implementation of the generalized potentials and pseudodifferential boundary projection operators analogous to those first proposed by Calderon. The difference potentials method (DPM) by Ryaben'kii is used for the effective computation of the generalized potentials and projections. The resulting ABC's clearly outperform the existing methods from the standpoints of accuracy and robustness, and in many cases noticeably speed up the computations.

  17. Clinic on Library Applications of Data Processing, 1972. Proceedings: Applications of On-Line Computers to Library Problems.

    ERIC Educational Resources Information Center

    Lancaster, F. Wilfrid, Ed.

    In planning this ninth annual clinic an attempt was made to include papers on a wide range of library applications of on-line computers, as well as to include libraries of various types and various sizes. Two papers deal with on-line circulation control (the Ohio State University system, described by Hugh C. Atkinson, and the Northwestern…

  18. Factorizing monolithic applications

    SciTech Connect

    Hall, J.H.; Ankeny, L.A.; Clancy, S.P.

    1998-12-31

    The Blanca project is part of the US Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI), which focuses on Science-Based Stockpile Stewardship through the large-scale simulation of multi-physics, multi-dimensional problems. Blanca is the only Los Alamos National Laboratory (LANL)-based ASCI project that is written entirely in C++. Tecolote, a new framework used in developing Blanca physics codes, provides an infrastructure for gluing together any number of components; this framework is then used to create applications that encompass a wide variety of physics models, numerical solution options, and underlying data storage schemes. The advantage of this approach is that only the essential components for the given model need be activated at runtime. Tecolote has been designed for code re-use and to isolate the computer science mechanics from the physics aspects as much as possible, allowing physics model developers to write algorithms in a style quite similar to the underlying physics equations that govern the computational physics. This paper describes the advantages of component architectures and contrasts the Tecolote framework with Microsoft's OLE and Apple's OpenDoc. An actual factorization of a traditional monolithic application into its basic components is also described.

  19. Development, application and assessment of a taxonomy for characterizing international environmental problems. Master's thesis

    SciTech Connect

    Koehler, M.D.; Marrs, J.A.

    1990-01-01

    As national leaders become increasingly aware of the environmental risks that modern technology adds to existing natural environmental problems, they have begun to search for ways to prioritize the risks they face. Several experts in risk assessment, including Professor Gordon Goodman of the Stockholm Environmental Institute, researchers at Clark University's Center for Environment, Technology, and Development (CENTED), and the United States Environmental Protection Agency, have already developed hazard characterization taxonomies that attempt to fill this need. The Kennedy School of Government (KSG) taxonomy is the next iteration of taxonomies designed to characterize environmental problems. The purpose of this Policy Analysis Exercise (PAE) is to test and evaluate the KSG taxonomy. To accomplish these goals, the United States and India are presented as case studies. The final section of this PAE provides recommendations to policy makers who use the KSG taxonomy.

  20. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    PubMed

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior uses two position-updating strategies, and the selection and crossover operators are applied to define the reproductive ability of an artificial fish. The random behavior, which is essentially a mutation strategy, uses the basic cloud generator as its mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP. PMID:26474934

  1. A Vector Study of Linearized Supersonic Flow Applications to Nonplanar Problems

    NASA Technical Reports Server (NTRS)

    Martin, John C

    1953-01-01

    A vector study of the partial-differential equation of steady linearized supersonic flow is presented. General expressions which relate the velocity potential in the stream to the conditions on the disturbing surfaces, are derived. In connection with these general expressions the concept of the finite part of an integral is discussed. A discussion of problems dealing with planar bodies is given and the conditions for the solution to be unique are investigated. Problems concerning nonplanar systems are investigated, and methods are derived for the solution of some simple nonplanar bodies. The surface pressure distribution and the damping in roll are found for rolling tails consisting of four, six, and eight rectangular fins for the Mach number range where the region of interference between adjacent fins does not affect the fin tips.

  2. The application of cost averaging techniques to robust control of the benchmark problem

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.; Crawley, Edward F.

    1991-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant systems with parameterized uncertainty structures. The method involves minimizing the average quadratic (H2) cost over the parameterized system. Bounded average cost implies stability over the set of systems. The average cost functional is minimized to derive robust fixed-order dynamic compensators. The robustness properties of these controllers are demonstrated on the sample problem.

  3. Ongoing applications of soft computing technologies to real-world problems at Physical Optics Corporation

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Kim, Dai Hyun; Jannson, Tomasz P.; Savant, Gajendra D.; Kim, Jeongdal; Chen, Judy

    1998-10-01

    Soft computing is a set of promising computational tools for solving problems that are inherently well solved by humans but not by standard computing means. This paper presents an overview of R&D activities at Physical Optics Corporation in the area of soft computing. The company has been involved in soft computing for over ten years, and has pioneered several soft-computing methodologies, including fuzzified genetic algorithms and neuro-fuzzy networks. Several practical implementations of soft computing are discussed.

  4. The Riccati equation, imprimitive actions and symplectic forms. [with application to decentralized optimal control problem

    NASA Technical Reports Server (NTRS)

    Garzia, M. R.; Loparo, K. A.; Martin, C. F.

    1982-01-01

    This paper looks at the structure of the solution of a matrix Riccati differential equation under a predefined group of transformations. The group of transformations used is an expanded form of the feedback group. It is shown that this group of transformations is a subgroup of the symplectic group. The orbits of the Riccati differential equation under the action of this group are studied and it is seen how these techniques apply to a decentralized optimal control problem.

  5. Progress toward a circulation atlas for application to coastal water siting problems

    NASA Technical Reports Server (NTRS)

    Munday, J. C., Jr.; Gordon, H. H.

    1978-01-01

    Circulation data needed to resolve coastal siting problems are assembled from historical hydrographic and remote sensing studies in the form of a Circulation Atlas. Empirical data are used instead of numerical model simulations to achieve fine resolution and include fronts and convergence zones. Eulerian and Lagrangian data are collected, transformed, and combined into trajectory maps and current vector maps as a function of tidal phase and wind vector. Initial Atlas development is centered on the Elizabeth River, Hampton Roads, Virginia.

  6. Finite-element/progressive-lattice-sampling response surface methodology and application to benchmark probability quantification problems

    SciTech Connect

    Romero, V.J.; Bankston, S.D.

    1998-03-01

    Optimal response surface construction is being investigated as part of Sandia discretionary (LDRD) research into Analytic Nondeterministic Methods. The goal is to achieve an adequate representation of system behavior over the relevant parameter space of a problem with a minimum of computational and user effort. This is important in global optimization and in estimation of system probabilistic response, which are both made more viable by replacing large complex computer models with fast-running accurate and noiseless approximations. A Finite Element/Lattice Sampling (FE/LS) methodology for constructing progressively refined finite element response surfaces that reuse previous generations of samples is described here. Similar finite element implementations can be extended to N-dimensional problems and/or random fields and applied to other types of structured sampling paradigms, such as classical experimental design and Gauss, Lobatto, and Patterson sampling. Here the FE/LS model is applied in a "decoupled" Monte Carlo analysis of two sets of probability quantification test problems. The analytic test problems, spanning a large range of probabilities and very demanding failure region geometries, constitute a good testbed for comparing the performance of various nondeterministic analysis methods. In results here, FE/LS decoupled Monte Carlo analysis required orders of magnitude less computer time than direct Monte Carlo analysis, with no appreciable loss of accuracy. Thus, when arriving at probabilities or distributions by Monte Carlo, it appears to be more efficient to expend computer-model function evaluations on building a FE/LS response surface than to expend them in direct Monte Carlo sampling.
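
    The "decoupled" idea, spending a few expensive model runs to build a surrogate and then doing the Monte Carlo sampling on the surrogate, can be sketched in a few lines; the model, failure threshold, and input distribution below are invented, and a simple least-squares quadratic stands in for the finite element response surface.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    """Stand-in for a costly simulation (illustrative toy response)."""
    return x**2 + 0.5 * x

# 1) a few structured "design" samples of the expensive model
xs = np.linspace(-2.0, 2.0, 7)
ys = expensive_model(xs)

# 2) fit a cheap quadratic response surface to those samples
surrogate = np.poly1d(np.polyfit(xs, ys, deg=2))

# 3) decoupled Monte Carlo: sample the surrogate, not the model
samples = rng.normal(0.0, 1.0, 100_000)
p_fail = np.mean(surrogate(samples) > 3.0)   # P[response exceeds threshold]
print(p_fail)
```

    Only 7 expensive evaluations were spent here, versus the 100,000 the direct approach would require; the trade-off is that the failure probability is now only as accurate as the surrogate in the failure region.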

  7. The application of MINIQUASI to thermal program boundary and initial value problems

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The feasibility of applying the solution techniques of Miniquasi to the set of equations which govern a thermoregulatory model is investigated. For solving nonlinear equations and/or boundary conditions, a Taylor Series expansion is required for linearization of both equations and boundary conditions. The solutions are iterative and in each iteration, a problem like the linear case is solved. It is shown that Miniquasi cannot be applied to the thermoregulatory model as originally planned.

  8. GPU accelerated solver for nonlinear reaction-diffusion systems. Application to the electrophysiology problem

    NASA Astrophysics Data System (ADS)

    Mena, Andres; Ferrero, Jose M.; Rodriguez Matas, Jose F.

    2015-11-01

    Solving the electric activity of the heart poses a major challenge, not only because of the structural complexities inherent to the heart tissue, but also because of the complex electric behaviour of the cardiac cells. The multi-scale nature of the electrophysiology problem makes its numerical solution difficult, requiring temporal and spatial resolutions of 0.1 ms and 0.2 mm respectively for accurate simulations, leading to models with millions of degrees of freedom that need to be solved for thousands of time steps. Solution of this problem requires the use of algorithms with a high level of parallelism on multi-core platforms. In this regard, the newer programmable graphic processing units (GPUs) have become a valid alternative due to their tremendous computational horsepower. This paper presents results obtained with a novel electrophysiology simulation software entirely developed in Compute Unified Device Architecture (CUDA). The software implements fully explicit and semi-implicit solvers for the monodomain model, using operator splitting. Performance is compared with classical multi-core MPI based solvers operating on dedicated high-performance computer clusters. Results obtained with the GPU based solver show enormous potential for this technology, with accelerations over 50x for three-dimensional problems.
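
    The operator-splitting strategy mentioned above, taking separate explicit sub-steps for the reaction kinetics and for the diffusion term, can be sketched in one spatial dimension. This is a toy reaction-diffusion problem with invented parameters and cubic kinetics, not the monodomain model or the paper's CUDA implementation.

```python
import numpy as np

# 1-D reaction-diffusion toy: Godunov operator splitting with an
# explicit step for each sub-problem (all parameters illustrative).
L, nx = 1.0, 101
dx = L / (nx - 1)
D = 1e-3
dt = 0.4 * dx**2 / D           # within the explicit diffusion stability limit

def reaction(u):
    return u * (1.0 - u) * (u - 0.1)   # cubic, FitzHugh-Nagumo-style kinetics

u = np.zeros(nx)
u[:10] = 1.0                   # excited region on the left

for _ in range(200):
    u += dt * reaction(u)                        # 1) reaction sub-step (ODE)
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u += dt * D * lap                            # 2) diffusion sub-step
    u[0], u[-1] = u[1], u[-2]                    # zero-flux boundaries

print(u.max(), u.min())
```

    Splitting matters because the stiff local kinetics and the global diffusion operator can then be advanced by methods (and hardware mappings) suited to each: the pointwise reaction step is embarrassingly parallel, which is what a GPU exploits.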

  9. Application of general invariance relations reduction method to solution of radiation transfer problems

    NASA Astrophysics Data System (ADS)

    Rogovtsov, Nikolai N.; Borovik, Felix

    2016-11-01

    A brief analysis of different properties and principles of invariance used to solve a number of classical problems of radiation transport theory is presented. The main ideas, constructions, and assertions used in the general invariance relations reduction method are described in outline. The most important distinctive features of this general method for solving a wide range of problems of radiation transport theory and mathematical physics are listed. To illustrate the potential of this method, a number of problems of scalar radiative transfer theory have been solved rigorously in the article. The main stages of rigorous derivations of asymptotic formulas for the smallest-in-modulus elements of the discrete spectrum of the characteristic equation and the corresponding eigenfunctions, for the case of an arbitrary phase function and almost conservative scattering, are described. Formulas of the same type for the azimuthally averaged reflection function and the plane and spherical albedos have been obtained rigorously. New analytical representations for the reflection function and the plane and spherical albedos have been obtained, and effective algorithms for calculating these quantities are offered for the case of a practically arbitrary phase function satisfying the Hölder condition. A new analytical representation of the "surface" Green function of the scalar radiative transfer equation for a semi-infinite plane-parallel conservatively scattering medium has been found. The deep-regime asymptotics of the "volume" Green function has been obtained for the case of a turbid medium of cylindrical form.

  10. Sequential Monte Carlo samplers for semi-linear inverse problems and application to magnetoencephalography

    NASA Astrophysics Data System (ADS)

    Sommariva, Sara; Sorrentino, Alberto

    2014-11-01

    We discuss the use of a recent class of sequential Monte Carlo methods for solving inverse problems characterized by a semi-linear structure, i.e. where the data depend linearly on a subset of variables and nonlinearly on the remaining ones. In this type of problem, under proper Gaussian assumptions one can marginalize the linear variables. This means that the Monte Carlo procedure needs only to be applied to the nonlinear variables, while the linear ones can be treated analytically; as a result, the Monte Carlo variance and/or the computational cost decrease. We use this approach to solve the inverse problem of magnetoencephalography, with a multi-dipole model for the sources. Here, data depend nonlinearly on the number of sources and their locations, and linearly on their current vectors. The semi-analytic approach enables us to estimate the number of dipoles and their locations from a whole time-series, rather than a single time point, while keeping a low computational cost.
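
    The marginalization trick, integrating out the conditionally linear Gaussian amplitude analytically and sampling only the nonlinear variable, can be sketched for a one-source toy model; the Gaussian-shaped "lead field" a(theta), the noise levels, and the priors are all invented for illustration and are far simpler than a real MEG forward model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Semi-linear toy model: y = a(theta) * x + noise, with amplitude x
# linear-Gaussian and location theta nonlinear (all numbers invented).
t_grid = np.linspace(0.0, 1.0, 50)              # "sensor" positions

def a(theta):
    return np.exp(-((t_grid - theta) ** 2) / 0.02)   # bump-shaped lead field

sigma_x, sigma = 2.0, 0.1
theta_true, x_true = 0.3, 1.5
y = a(theta_true) * x_true + rng.normal(0.0, sigma, t_grid.size)

def log_marginal(theta):
    """log p(y | theta) with the linear amplitude x integrated out:
    y | theta ~ N(0, sigma_x^2 a a^T + sigma^2 I)."""
    av = a(theta)
    C = sigma_x**2 * np.outer(av, av) + sigma**2 * np.eye(t_grid.size)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y))

# Monte Carlo only over the nonlinear variable theta (uniform prior draws)
thetas = rng.uniform(0.0, 1.0, 1000)
logw = np.array([log_marginal(t) for t in thetas])
w = np.exp(logw - logw.max())
w /= w.sum()
print((w * thetas).sum())                       # posterior mean of theta
```

    Because x never has to be sampled, the weighted samples explore a one-dimensional space instead of a two-dimensional one, which is the variance and cost reduction the abstract refers to (a Rao-Blackwellization).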

  11. Analysis of a parallelized nonlinear elliptic boundary value problem solver with application to reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.; Smooke, Mitchell D.

    1987-01-01

    A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.

  12. GPU accelerated solver for nonlinear reaction-diffusion systems. Application to the electrophysiology problem

    NASA Astrophysics Data System (ADS)

    Mena, Andres; Ferrero, Jose M.; Rodriguez Matas, Jose F.

    2015-11-01

Solving the electric activity of the heart poses a big challenge, not only because of the structural complexity inherent to the heart tissue, but also because of the complex electric behaviour of the cardiac cells. The multi-scale nature of the electrophysiology problem makes its numerical solution difficult, requiring temporal and spatial resolutions of 0.1 ms and 0.2 mm respectively for accurate simulations, leading to models with millions of degrees of freedom that need to be solved for thousands of time steps. Solution of this problem requires the use of algorithms with a high level of parallelism on multi-core platforms. In this regard, the newer programmable graphics processing units (GPUs) have become a valid alternative due to their tremendous computational horsepower. This paper presents results obtained with a novel electrophysiology simulation software entirely developed in the Compute Unified Device Architecture (CUDA). The software implements fully explicit and semi-implicit solvers for the monodomain model, using operator splitting. Performance is compared with classical multi-core MPI-based solvers operating on dedicated high-performance computing clusters. Results obtained with the GPU-based solver show enormous potential for this technology, with accelerations over 50× for three-dimensional problems.
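The monodomain operator splitting mentioned above can be illustrated with a minimal 1-D sketch: one explicit reaction (ODE) step followed by one explicit diffusion step per time step. The paper's solvers use detailed cardiac ionic models and CUDA; here the much simpler FitzHugh-Nagumo kinetics and plain NumPy stand in, with illustrative (not physiological) parameters.

```python
import numpy as np

nx, dx, dt, D = 200, 0.02, 0.01, 0.001     # illustrative units, not the paper's
v = np.zeros(nx)                           # transmembrane potential
w = np.zeros(nx)                           # recovery variable
v[:10] = 1.0                               # stimulate the left end

def reaction(v, w, dt, a=0.1, eps=0.01, b=0.5):
    """Explicit Euler step of FitzHugh-Nagumo kinetics (ionic-model stand-in)."""
    dv = v * (v - a) * (1.0 - v) - w
    dw = eps * (v - b * w)
    return v + dt * dv, w + dt * dw

def diffuse(v, dt, dx, D):
    """Explicit step of the diffusion part with zero-flux (Neumann) boundaries."""
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = 2 * (v[1] - v[0]) / dx**2
    lap[-1] = 2 * (v[-2] - v[-1]) / dx**2
    return v + dt * D * lap

for _ in range(2000):                      # Godunov splitting: react, then diffuse
    v, w = reaction(v, w, dt)
    v = diffuse(v, dt, dx, D)
```

The split is what makes the problem GPU-friendly: the reaction step is embarrassingly parallel per node, and a semi-implicit variant (implicit diffusion) would relax the stability limit on dt, as in the paper's second solver.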

  13. The geometry of discombinations and its applications to semi-inverse problems in anelasticity.

    PubMed

    Yavari, Arash; Goriely, Alain

    2014-09-01

The geometrical formulation of continuum mechanics provides a powerful approach to understand and solve problems in anelasticity, where an elastic deformation is combined with a non-elastic component arising from defects, thermal stresses, growth, or other effects leading to residual stresses. The central idea is to assume that the material manifold, prescribing the reference configuration for a body, has an intrinsic, non-Euclidean, geometrical structure. Residual stresses then naturally arise when this configuration is mapped into Euclidean space. Here, we consider the problem of discombinations (a new term that we introduce in this paper), that is, a combined distribution of fields of dislocations, disclinations and point defects. Given a discombination, we compute the geometrical characteristics of the material manifold (curvature, torsion, non-metricity), its Cartan moving frames, and its structural equations. This identification provides a powerful algorithm for solving semi-inverse problems with non-elastic components. As an example, we calculate the residual stress field of a cylindrically symmetric distribution of discombinations in an infinite circular cylindrical bar made of an incompressible isotropic hyperelastic solid. PMID:25197257

  14. Application of CFD Analysis to Design Support and Problem Resolution for ASRM and RSRM

    NASA Technical Reports Server (NTRS)

    Dill, Richard A.; Whitesides, R. Harold

    1993-01-01

The use of Navier-Stokes CFD codes to predict the internal flow field environment in a solid rocket motor is a very important analysis element during the design phase of a motor development program. These computational flow field solutions uncover a variety of potential problems associated with motor performance as well as suggesting solutions to these problems. CFD codes have also proven to be of great benefit in explaining problems associated with operational motors, as in the case of the pressure spike problem with the STS-54B flight motor. This paper presents results from analyses involving both motor design support and problem resolution. The issues discussed include the fluid dynamic/mechanical stress coupling at field joints relative to significant propellant deformations, the prediction of axial and radial pressure gradients in the motor associated with motor performance and propellant mechanical loading, the prediction of transition of the internal flow in the motor associated with erosive burning, the accumulation of slag at the field joints and in the submerged nozzle region, impingement of flow on the nozzle nose, and pressure gradients in the nozzle region of the motor. The analyses presented in this paper have been performed using a two-dimensional axisymmetric model. Fluent/BFC, a three-dimensional Navier-Stokes flow field code, has been used to make the numerical calculations. This code utilizes a staggered grid formulation along with the SIMPLER numerical pressure-velocity coupling algorithm. Wall functions are used to represent the character of the viscous sub-layer flow, and an adjusted k-epsilon turbulence model, especially configured for mass-injection internal flows, is used to model the growth of turbulence in the motor port. Conclusions discussed in this paper consider flow field effects on the forward, center, and aft propellant grains except for the head end star grain region of the forward propellant segment.
The field joints and the

  15. Application of CFD analysis to design support and problem resolution for ASRM and RSRM

    NASA Astrophysics Data System (ADS)

    Dill, Richard A.; Whitesides, R. Harold

    1993-07-01

The use of Navier-Stokes CFD codes to predict the internal flow field environment in a solid rocket motor is a very important analysis element during the design phase of a motor development program. These computational flow field solutions uncover a variety of potential problems associated with motor performance as well as suggesting solutions to these problems. CFD codes have also proven to be of great benefit in explaining problems associated with operational motors, as in the case of the pressure spike problem with the STS-54B flight motor. This paper presents results from analyses involving both motor design support and problem resolution. The issues discussed include the fluid dynamic/mechanical stress coupling at field joints relative to significant propellant deformations, the prediction of axial and radial pressure gradients in the motor associated with motor performance and propellant mechanical loading, the prediction of transition of the internal flow in the motor associated with erosive burning, the accumulation of slag at the field joints and in the submerged nozzle region, impingement of flow on the nozzle nose, and pressure gradients in the nozzle region of the motor. The analyses presented in this paper have been performed using a two-dimensional axisymmetric model. Fluent/BFC, a three-dimensional Navier-Stokes flow field code, has been used to make the numerical calculations. This code utilizes a staggered grid formulation along with the SIMPLER numerical pressure-velocity coupling algorithm. Wall functions are used to represent the character of the viscous sub-layer flow, and an adjusted k-epsilon turbulence model, especially configured for mass-injection internal flows, is used to model the growth of turbulence in the motor port. Conclusions discussed in this paper consider flow field effects on the forward, center, and aft propellant grains except for the head end star grain region of the forward propellant segment.
The field joints and the

  16. Criterion for applicability of a linear approximation in problems on the development of small perturbations in gas dynamics

    SciTech Connect

    Meshkov, E.E.; Mokhov, V.N.

    1983-01-01

Stability problems and the development of small perturbations in gas dynamics are ordinarily investigated by using the solution of linearized equations. The applicability of the linear approximation is usually determined by the smallness of the perturbation. However, the linear approximation turns out to be false in a number of cases. The authors consider a plane problem in which a characteristic surface curved along a sinusoid moves over a substance at a constant velocity. In this case, the change in surface shape with time is determined by the Huygens principle. Also considered is the one-dimensional flow of an ideal gas with adiabatic index ν in which there is a small sinusoidal perturbation at the initial time. These examples are encountered locally in the majority of problems on flow stability in gas dynamics. It is shown that the shape of the reflected wave front deviates from the sinusoidal with time, and that the formation of singularities and the deviation from the sinusoidal shape slow down as the amplitude of the initial perturbation diminishes. The authors conclude that a linearized approximation to the gas-dynamics equations can be used only up to a certain time. Consequently, application of asymptotic formulas obtained on the basis of a linear approximation for a finite magnitude of the perturbation requires additional justification.

  17. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    PubMed

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
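The dual-domain construction the paper adapts, wavelets built from the spectrum of a graph Laplacian rather than from a uniform lattice, can be sketched as follows. The band-pass kernel g and the toy cycle graph below are illustrative choices, not the paper's; a mesh would supply the graph instead.

```python
import numpy as np

def graph_laplacian(edges, n):
    """Combinatorial Laplacian L = D - A of an undirected graph."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def spectral_wavelet(L_mat, vertex, scale):
    """Wavelet psi_{s,v} = U g(s*Lambda) U^T delta_v in the Laplacian eigenbasis."""
    lam, U = np.linalg.eigh(L_mat)
    g = scale * lam * np.exp(-scale * lam)        # a simple band-pass kernel
    delta = np.zeros(L_mat.shape[0])
    delta[vertex] = 1.0
    return U @ (g * (U.T @ delta))

# Toy "mesh" graph: a cycle of 12 vertices.
n = 12
edges = [(i, (i + 1) % n) for i in range(n)]
L_mat = graph_laplacian(edges, n)
psi_fine = spectral_wavelet(L_mat, 0, scale=0.5)    # small scale: localized
psi_coarse = spectral_wavelet(L_mat, 0, scale=5.0)  # large scale: spread out
```

Evaluating such wavelets at several scales around each vertex yields exactly the kind of native multi-resolution descriptor of local/global topology the abstract describes.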

  18. Digital image enhancement techniques used in some ERTS application problems. [geology, geomorphology, and oceanography

    NASA Technical Reports Server (NTRS)

    Goetz, A. F. H.; Billingsley, F. C.

    1974-01-01

    Enhancements discussed include contrast stretching, multiratio color displays, Fourier plane operations to remove striping and boosting MTF response to enhance high spatial frequency content. The use of each technique in a specific application in the fields of geology, geomorphology and oceanography is demonstrated.

  19. Lanczos transformation for quantum impurity problems in d-dimensional lattices: Application to graphene nanoribbons

    NASA Astrophysics Data System (ADS)

    Büsser, C. A.; Martins, G. B.; Feiguin, A. E.

    2013-12-01

We present a completely unbiased and controlled numerical method to solve quantum impurity problems in d-dimensional lattices. This approach is based on a canonical transformation, of the Lanczos form, in which the complete lattice Hamiltonian is exactly mapped onto an equivalent one-dimensional system, in the same spirit as Wilson's numerical renormalization group and Haydock's recursion method. We introduce many-body interactions in the form of a Kondo or Anderson impurity and solve the low-dimensional problem using the density matrix renormalization group. The technique is particularly suited to study systems that are inhomogeneous and/or have a boundary. The resulting dimensional reduction translates into a reduction of the scaling of the entanglement entropy by a factor L^(d-1), where L is the linear dimension of the original d-dimensional lattice. This allows one to calculate the ground state of a magnetic impurity attached to an L×L square lattice and an L×L×L cubic lattice with L up to 140 sites. We also study the localized edge states in graphene nanoribbons by attaching a magnetic impurity to the edge or the center of the system. For armchair metallic nanoribbons we find a slow decay of the spin correlations as a consequence of the delocalized metallic states. In the case of zigzag ribbons, the decay of the spin correlations depends on the position of the impurity. If the impurity is situated in the bulk of the ribbon, the decay is slow, as in the metallic case. On the other hand, if the adatom is attached to the edge, the decay is fast, within a few sites of the impurity, as a consequence of the localized edge states and the short correlation length. The mapping can be combined with ab initio band structure calculations to model the system, and to understand correlation effects in quantum impurity problems starting from first principles.
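The exact mapping of a lattice onto an equivalent one-dimensional chain can be sketched with a plain Lanczos recursion seeded at the impurity site; the non-interacting tight-binding square lattice below is a minimal stand-in for the systems studied in the paper, and the on-site energies a and hoppings b it produces define the chain Hamiltonian.

```python
import numpy as np

def square_lattice_H(L, t=1.0):
    """Tight-binding Hamiltonian of an L x L square lattice (open boundaries)."""
    idx = lambda x, y: x * L + y
    H = np.zeros((L * L, L * L))
    for x in range(L):
        for y in range(L):
            if x + 1 < L:
                H[idx(x, y), idx(x + 1, y)] = H[idx(x + 1, y), idx(x, y)] = -t
            if y + 1 < L:
                H[idx(x, y), idx(x, y + 1)] = H[idx(x, y + 1), idx(x, y)] = -t
    return H

def lanczos_chain(H, seed, n):
    """Map H onto a 1-D chain as seen from `seed`: returns on-site energies a
    and hoppings b of the equivalent tridiagonal (chain) Hamiltonian."""
    v = seed / np.linalg.norm(seed)
    a, b = [], []
    v_prev, beta = np.zeros_like(v), 0.0
    for _ in range(n):
        w = H @ v - beta * v_prev
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        a.append(alpha)
        b.append(beta)
        v_prev, v = v, w / beta
    return np.array(a), np.array(b[:-1])

L = 10
H = square_lattice_H(L)
seed = np.zeros(L * L)
seed[(L // 2) * L + L // 2] = 1.0            # impurity attached near the center
a, b = lanczos_chain(H, seed, 20)
```

In the paper's scheme, the many-body impurity is then attached to the first site of this chain and the whole problem is handed to DMRG, which is where the entanglement-entropy reduction pays off.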

  20. Application of subgroup decomposition in diffusion theory to gas cooled thermal reactor problem

    SciTech Connect

    Yasseri, S.; Rahnema, F.

    2013-07-01

In this paper, the accuracy and computational efficiency of the subgroup decomposition (SGD) method in diffusion theory is assessed in a 1D benchmark problem characteristic of gas-cooled thermal systems. This method can be viewed as a significant improvement in the accuracy of standard coarse-group calculations used for VHTR whole-core analysis, in which the core environmental effect and energy-angle coupling are pronounced. It is shown that a 2-group SGD calculation reproduces fine-group (47-group) results with 1.5 to 6 times faster computational speed, depending on the stabilizing scheme, while being as efficient as a single standard 6-group diffusion calculation. (authors)

  1. Problems of Development and Application of Metal Matrix Composite Powders for Additive Technologies

    NASA Astrophysics Data System (ADS)

    Korosteleva, Elena N.; Pribytkov, Gennadii A.; Krinitcyn, Maxim G.; Baranovskii, Anton V.; Korzhova, Victoria V.

    2016-07-01

The paper considers the problem of structure formation in composites with a carbide phase and a metal binder under self-propagating high-temperature synthesis (SHS) of powder mixtures. The relation between the metal binder content and the structure and wear resistance of the coatings was studied. It has been shown that the dispersion of the carbide phase and the volume content of metal binder in the composite powder structure can be regulated purposefully for all of the studied composites. It was found that the structure of the surfaced coating is fully inherited from the composite powders. Modification or coarsening of the structure through recrystallization or coagulation of the carbide phase during deposition and sputtering does not occur.

  2. Raman-based geobarometry of ultrahigh-pressure metamorphic rocks: applications, problems, and perspectives.

    PubMed

    Korsakov, Andrey V; Zhukov, Vladimir P; Vandenabeele, Peter

    2010-08-01

    Raman-based geobarometry has recently become increasingly popular because it is an elegant way to obtain information on peak metamorphic conditions or the entire pressure-temperature-time (P-T-t) path of metamorphic rocks, especially those formed under ultrahigh-pressure (UHP) conditions. However, several problems need to be solved to get reliable estimates of metamorphic conditions. In this paper we present some examples of difficulties which can arise during the Raman spectroscopy study of solid inclusions from ultrahigh-pressure metamorphic rocks.

  3. Efficient method for scattering problems in open billiards: Theory and applications

    NASA Astrophysics Data System (ADS)

    Akguc, Gursoy B.; Seligman, Thomas H.

    2006-12-01

    We present an efficient method to solve scattering problems in two-dimensional open billiards with two leads and a complicated scattering region. The basic idea is to transform the scattering region to a rectangle, which will lead to complicated dynamics in the interior, but simple boundary conditions. The method can be specialized to closed billiards, and it allows the treatment of interacting particles in the billiard. We apply this method to quantum echoes measured recently in a microwave cavity, and indicate how it can be used for interacting particles.

  4. [Application of problem-based learning in teaching practice of Science of Meridians and Acupoints].

    PubMed

    Wang, Xiaoyan; Tang, Jiqin; Ying, Zhenhao; Zhang, Yongchen

    2015-02-01

Science of Meridians and Acupoints is the bridge between basic medicine and the clinical medicine of acupuncture and moxibustion. This teaching practice was conducted with reference to the problem-based learning (PBL) teaching mode, organized around clinical design problems, with students in the leading role and teachers as guides. To stimulate students' enthusiasm for active learning, the authors implemented class teaching through typical clinical design questions, study-group presentations, an emphasis on drawing meridian running courses and acupoint locations, summarization and analysis, and comprehensive evaluation, so that the comprehensive innovative ability of students and the teaching quality could be improved.

  5. Pricing Theory of Derivatives in Financial Engineering and the Problems on the Application to Electricity Markets

    NASA Astrophysics Data System (ADS)

    Misawa, Tetsuya

    Recently, the wholesale electric power exchange has been founded in Japan. With the progress of the electricity market, some management schemes of electricity price risk will be necessary. In financial markets or the preceding electricity markets, various “derivatives" on assets in the markets are often used as management tools to hedge the price risk. This paper gives a short commentary on some fundamental concepts of the derivatives and the pricing theory in the financial engineering, and discusses the problems on the financial engineering approach to electricity derivatives.

  6. Surface characterization of commercial fibers for solid-phase microextraction and related problems in their application.

    PubMed

    Haberhauer-Troyer, C; Crnoja, M; Rosenberg, E; Grasserbauer, M

    2000-02-01

The surfaces of commercially available polydimethylsiloxane (PDMS) and Carboxen-PDMS fibers for solid-phase microextraction (SPME) were investigated by optical and electron microscopy. Damage to the coating, contamination of new fibers, and a highly variable number of pores in the Carboxen-PDMS coatings were observed. Together with contamination of the fibers during use by metallic particles originating from the SPME fiber holder, these observations offer possible explanations for the problems encountered in the analysis of organolead, organotin and organosulfur compounds, such as artifact formation and low repeatability. PMID:11220312

  7. Spherical cavity-expansion forcing function in PRONTO 3D for application to penetration problems

    SciTech Connect

    Warren, T.L.; Tabbara, M.R.

    1997-05-01

    In certain penetration events the primary mode of deformation of the target can be approximated by known analytical expressions. In the context of an analysis code, this approximation eliminates the need for modeling the target as well as the need for a contact algorithm. This technique substantially reduces execution time. In this spirit, a forcing function which is derived from a spherical-cavity expansion analysis has been implemented in PRONTO 3D. This implementation is capable of computing the structural and component responses of a projectile due to three dimensional penetration events. Sample problems demonstrate good agreement with experimental and analytical results.
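The idea of replacing the target by an analytical forcing function can be sketched for a rigid projectile with a hemispherical nose: a cavity-expansion normal stress σ_n = A + B·v_n² is integrated over the nose surface instead of meshing the target or running a contact algorithm. The constants below are hypothetical, not calibrated to any material or to PRONTO 3D.

```python
import math

A, B = 400e6, 1.2e3          # Pa, Pa/(m/s)^2   (illustrative, not calibrated)
radius, mass = 0.01, 0.1     # m, kg            (toy projectile)
v, depth, dt = 800.0, 0.0, 1e-7

def axial_force(v, radius):
    """Axial drag from sigma_n = A + B*vn^2 on a hemisphere, vn = v*cos(theta).
    Integrating sigma_n*cos(theta) over the surface gives the closed form
    F = 2*pi*r^2 * (A/2 + B*v^2/4)."""
    return 2.0 * math.pi * radius**2 * (A / 2.0 + B * v**2 / 4.0)

# Rigid-body deceleration: no target mesh, no contact search.
while v > 1.0:
    v -= dt * axial_force(v, radius) / mass
    depth += dt * v
```

This is exactly the computational saving the abstract refers to: the target's resistance enters only through a cheap per-step surface force on the projectile.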

  8. The p-Dirichlet-to-Neumann operator with applications to elliptic and parabolic problems

    NASA Astrophysics Data System (ADS)

    Hauer, Daniel

    2015-10-01

In this paper, we investigate the Dirichlet-to-Neumann operator associated with second order quasi-linear operators of p-Laplace type for 1 < p < ∞, which acts on the boundary of a bounded Lipschitz domain in R^d for d ≥ 2. We establish well-posedness and Hölder continuity with uniform estimates of weak solutions of some elliptic boundary-value problems involving the Dirichlet-to-Neumann operator. By employing these regularity results for weak solutions of elliptic problems, we show that the semigroup generated by the negative Dirichlet-to-Neumann operator on L^q enjoys an L^q-C^{0,α} smoothing effect, and that the negative Dirichlet-to-Neumann operator on the set of continuous functions on the boundary of the domain generates a strongly continuous and order-preserving semigroup. Moreover, we establish convergence in large time with decay rates of all trajectories of the semigroup, and in the singular case (1 + ε) ∨ 2d/(d + 2) ≤ p < 2 for some ε > 0, we give upper estimates of the finite time of extinction.

  9. Applications of shallow high-resolution seismic reflection to various environmental problems

    USGS Publications Warehouse

    Miller, R.D.; Steeples, D.W.

    1994-01-01

Shallow seismic reflection has been successfully applied to environmental problems in a variety of geologic settings. Increased dynamic range of recording equipment and decreased cost of processing hardware and software have made seismic reflection a cost-effective means of imaging shallow geologic targets. Seismic data possess sufficient resolution in many areas to detect faulting with displacement of less than 3 m and beds as thin as 1 m. We have detected reflections from depths as shallow as 2 m. Subsurface voids associated with abandoned coal mines at depths of less than 20 m can be detected and mapped. Seismic reflection has been successful in mapping disturbed subsurface associated with dissolution mining of salt. A graben detected and traced by seismic reflection was shown to be a preferential pathway for leachate leaking from a chemical storage pond. As shown by these case histories, shallow high-resolution seismic reflection has the potential to significantly enhance the economics and efficiency of preventing and/or solving many environmental problems. © 1994.

  10. Improved kernel gradient free-smoothed particle hydrodynamics and its applications to heat transfer problems

    NASA Astrophysics Data System (ADS)

    Juan-Mian, Lei; Xue-Ying, Peng

    2016-02-01

Kernel gradient free-smoothed particle hydrodynamics (KGF-SPH) is a modified smoothed particle hydrodynamics (SPH) method which has higher precision than conventional SPH. However, the Laplacian in KGF-SPH is approximated by a two-pass model, which increases the computational cost. A new discretization scheme for the Laplacian is proposed in this paper, and a method with higher precision and better stability, called Improved KGF-SPH, is developed by modifying KGF-SPH with this new Laplacian model. One-dimensional (1D) and two-dimensional (2D) heat conduction problems are used to test the precision and stability of the Improved KGF-SPH. The numerical results demonstrate that the Improved KGF-SPH is more accurate than SPH, and more stable than KGF-SPH. Natural convection in a closed square cavity at different Rayleigh numbers is modeled by the Improved KGF-SPH with shifting particle position, and the Improved KGF-SPH results are presented in comparison with those of SPH and the finite volume method (FVM). The numerical results demonstrate that the Improved KGF-SPH is a more accurate method to study and model heat transfer problems.
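For contrast with the KGF-SPH schemes discussed above, the conventional one-pass SPH Laplacian (a Brookshaw-type approximation) applied to 1-D heat conduction looks as follows. This is a baseline sketch of the scheme the paper improves on, not the paper's discretization; boundary kernel truncation is ignored and the Dirichlet ends are simply pinned.

```python
import numpy as np

n, h, alpha, dt = 101, 0.03, 1.0, 2e-5       # h = 3*dx, illustrative values
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
T = np.sin(np.pi * x)                        # decays as exp(-pi^2 * alpha * t)

def dW_dr(r, h):
    """Radial derivative of the cubic-spline kernel (1-D, support 2h)."""
    q = r / h
    sigma = 2.0 / (3.0 * h)                  # 1-D normalization
    g = np.where(q < 1.0, -3.0 * q + 2.25 * q**2,
        np.where(q < 2.0, -0.75 * (2.0 - q)**2, 0.0))
    return sigma * g / h

def sph_laplacian(T, x, h, dx):
    """Brookshaw: lap T_i ~ sum_j 2*(T_i - T_j)/r_ij * dW/dr * V_j."""
    r = np.abs(x[:, None] - x[None, :])
    r[np.diag_indices_from(r)] = 1e9         # exclude self-contribution
    K = 2.0 * dW_dr(r, h) / r * dx           # V_j = dx for uniform particles
    return (K * (T[:, None] - T[None, :])).sum(axis=1)

for _ in range(500):
    T += dt * alpha * sph_laplacian(T, x, h, dx)
    T[0] = T[-1] = 0.0                       # Dirichlet boundaries
```

A KGF-type scheme replaces this single weighted sum with a small per-particle moment system (the "two-pass" cost the abstract mentions), which is what the paper's new Laplacian model avoids.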

  11. Application of higher-order cepstral techniques in problems of fetal heart signal extraction

    NASA Astrophysics Data System (ADS)

    Sabry-Rizk, Madiha; Zgallai, Walid; Hardiman, P.; O'Riordan, J.

    1996-10-01

Recently, cepstral analysis based on second-order statistics and homomorphic filtering techniques has been used in the adaptive decomposition of overlapping (or otherwise) and noise-contaminated maternal and fetal ECG complexes, obtained by transabdominal surface electrodes connected to a monitoring instrument, an interface card, and a PC. Differential time delays of fetal heart beats, measured from a reference point located on the maternal complex after transformation to the cepstral domain, are first obtained, followed by fetal heart rate variability computations. Homomorphic filtering in the complex cepstral domain and the subsequent transformation to the time domain result in fetal complex recovery. However, three problems have been identified with second-order based cepstral techniques that are addressed in this paper: (1) errors resulting from the phase unwrapping algorithms, leading to fetal complex perturbation; (2) the unavoidable conversion of noise statistics from Gaussian to non-Gaussian, due to the highly nonlinear nature of the homomorphic transform, which warrants stringent noise cancellation routines; (3) as a consequence of problems (1) and (2), it is difficult to adaptively optimize windows to include all individual fetal complexes in the time domain based on amplitude thresholding routines in the complex cepstral domain (i.e. the task of 'zooming' in on weak fetal complexes requires more processing time). The use of a third-order based high-resolution differential cepstrum technique results in recovery of delays of the order of 120 milliseconds.
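The second-order cepstral machinery the paper starts from can be sketched directly: FFT, phase unwrapping (the error-prone step identified as problem (1)), logarithm, inverse FFT. A synthetic pulse with a scaled, delayed copy stands in for overlapping complexes; the delay reappears as a spike at the corresponding quefrency.

```python
import numpy as np

n, d, a = 256, 40, 0.5
pulse = np.exp(-0.5 * (np.arange(n) / 3.0) ** 2)   # stand-in for one ECG complex
sig = pulse.copy()
sig[d:] += a * pulse[:-d]                          # overlapping delayed copy

def complex_cepstrum(x):
    """Naive complex cepstrum; production code would also remove any
    linear-phase component before the inverse transform."""
    X = np.fft.fft(x)
    log_mag = np.log(np.abs(X))
    phase = np.unwrap(np.angle(X))   # the error-prone step (problem (1) above)
    return np.fft.ifft(log_mag + 1j * phase).real

c = complex_cepstrum(sig)
# The echo appears as a spike at quefrency d (and smaller ones at 2d, 3d, ...).
peak = int(np.argmax(np.abs(c[10 : n // 2]))) + 10
```

Liftering (zeroing) the spike region and inverting the transform is the homomorphic-filtering step that separates the delayed component, which is how the maternal complex is removed to recover the fetal one.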

  12. Response Surfaces of Neural Networks Learned Using Bayesian Framework and Its Application to Optimization Problem

    NASA Astrophysics Data System (ADS)

    Takeda, Norio

We verified the generalization ability of response surfaces built from artificial neural networks (NNs) and showed that such surfaces can be applied to an engineering-design problem. A Bayesian framework for regularizing NNs, proposed by Gull and Skilling, can be used to generate NN response surfaces with excellent generalization ability, i.e., to determine the regularizing constants in an objective function minimized during NN learning. Such a well-generalized NN is useful for finding an optimal solution in the process of response surface methodology (RSM). We therefore describe three rules based on the Bayesian framework to update the regularizing constants, and we utilize these rules to generate NN response surfaces from noisy teacher data drawn from a typical unimodal or multimodal function. Good generalization was achieved with regularized NN response surfaces, although an update rule based on trace evaluation failed to determine the regularizing constants regardless of the response function. Next, we selected the most appropriate update rule, which is based on eigenvalue evaluation, and applied the NN response surface regularized with this rule to an illustrative engineering-design problem. The NN response surface did not fit the noise in the teacher data and consequently could be used to achieve a satisfactory solution. This may increase the opportunities for using NNs in the process of RSM.
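The eigenvalue-based update rule singled out above follows the evidence framework of Gull and MacKay: the effective number of well-determined parameters γ is computed from the eigenvalues of the data Hessian, and the regularizing constants are re-estimated from γ. A minimal sketch on a linear-in-parameters model, where every quantity is exact (unlike for an NN, where the Hessian is approximated), might look like:

```python
import numpy as np

# alpha, beta are the regularizing constants of M(w) = beta*E_D + alpha*E_W.
rng = np.random.default_rng(2)
N, M = 50, 10
Phi = rng.standard_normal((N, M))                    # design matrix (basis outputs)
w_true = np.zeros(M)
w_true[:3] = [2.0, -1.0, 0.5]                        # only a few useful directions
t = Phi @ w_true + 0.1 * rng.standard_normal(N)      # noisy teacher data

alpha, beta = 1.0, 1.0
for _ in range(50):
    A = beta * Phi.T @ Phi + alpha * np.eye(M)       # Hessian of the objective
    w = beta * np.linalg.solve(A, Phi.T @ t)         # current MAP weights
    lam = beta * np.linalg.eigvalsh(Phi.T @ Phi)     # eigenvalues of the data term
    gamma = np.sum(lam / (lam + alpha))              # effective no. of parameters
    alpha = gamma / (w @ w)                          # re-estimate both constants
    beta = (N - gamma) / np.sum((t - Phi @ w) ** 2)
```

The "trace evaluation" variant the abstract reports as failing would replace the eigenvalue sum for γ with a cruder trace-based estimate; the fixed point above typically recovers the noise precision β well.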

  13. Application of SEAWAT to select variable-density and viscosity problems

    USGS Publications Warehouse

    Dausman, Alyssa M.; Langevin, Christian D.; Thorne, Danny T.; Sukop, Michael C.

    2010-01-01

    SEAWAT is a combined version of MODFLOW and MT3DMS, designed to simulate three-dimensional, variable-density, saturated groundwater flow. The most recent version of the SEAWAT program, SEAWAT Version 4 (or SEAWAT_V4), supports equations of state for fluid density and viscosity. In SEAWAT_V4, fluid density can be calculated as a function of one or more MT3DMS species, and optionally, fluid pressure. Fluid viscosity is calculated as a function of one or more MT3DMS species, and the program also includes additional functions for representing the dependence of fluid viscosity on temperature. This report documents testing of and experimentation with SEAWAT_V4 with six previously published problems that include various combinations of density-dependent flow due to temperature variations and/or concentration variations of one or more species. Some of the problems also include variations in viscosity that result from temperature differences in water and oil. Comparisons between the results of SEAWAT_V4 and other published results are generally consistent with one another, with minor differences considered acceptable.
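The equations of state described above can be sketched as small functions: density linear in concentration (and optionally temperature and pressure), plus an empirical temperature-viscosity law. The slopes and the viscosity formula below are common seawater/groundwater textbook values used for illustration, not a statement of SEAWAT_V4's internal defaults.

```python
def density(conc_salt, temp, pressure=0.0,
            rho_ref=1000.0, drho_dc=0.7143, drho_dt=-0.375, drho_dp=4.46e-4):
    """Linear equation of state: rho in kg/m^3, conc in kg/m^3, temp in degC,
    pressure in Pa; slope values are illustrative seawater defaults."""
    return (rho_ref + drho_dc * conc_salt
            + drho_dt * (temp - 25.0) + drho_dp * pressure)

def viscosity(temp):
    """Empirical temperature-viscosity law used in several variable-density
    codes (assumed form): mu(T) = 2.394e-5 * 10**(248.37/(T+133.15)) Pa*s."""
    return 2.394e-5 * 10.0 ** (248.37 / (temp + 133.15))

rho_seawater = density(conc_salt=35.0, temp=25.0)   # roughly 1025 kg/m^3
mu_20 = viscosity(20.0)                              # roughly 1.0e-3 Pa*s
```

In a variable-density, variable-viscosity simulation these two functions feed back into the flow equation every time step, which is exactly the coupling exercised by the six benchmark problems in the report.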

  14. Robust moving mesh algorithms for hybrid stretched meshes: Application to moving boundaries problems

    NASA Astrophysics Data System (ADS)

    Landry, Jonathan; Soulaïmani, Azzeddine; Luke, Edward; Ben Haj Ali, Amine

    2016-12-01

A robust Mesh-Mover Algorithm (MMA) approach is designed to adapt meshes in moving-boundary problems. A new methodology is developed from the best combination of well-known algorithms in order to preserve the quality of initial meshes. In most situations, MMAs distribute mesh deformation while preserving a good mesh quality. However, invalid meshes are generated when the motion is complex and/or involves multiple bodies. After studying a few MMA limitations, we propose the following approach: use the Inverse Distance Weighting (IDW) function to produce the displacement field, then apply the Geometric Element Transformation Method (GETMe) smoothing algorithms to improve the resulting mesh quality, and use an untangler to revert negative elements. The proposed approach has proven efficient in adapting meshes for various realistic aerodynamic motions: a symmetric wing that has suffered large tip bending and twisting, and the high-lift components of a swept wing that has moved through different flight stages. Finally, the fluid flow problem has been solved on the moved meshes, producing results close to experimental ones. However, for situations where moving boundaries are too close to each other, more improvements need to be made or other approaches should be taken, such as an overset grid method.
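The first stage of the pipeline above, producing the displacement field with Inverse Distance Weighting before GETMe smoothing and untangling, can be sketched as follows; the exponent and the toy geometry are illustrative choices, not the paper's settings.

```python
import numpy as np

def idw_displace(interior, boundary, boundary_disp, power=3.0, eps=1e-12):
    """Propagate known boundary-node displacements to interior nodes by IDW.
    interior: (n,2), boundary: (m,2), boundary_disp: (m,2)."""
    d = np.linalg.norm(interior[:, None, :] - boundary[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)           # closer boundary nodes dominate
    w /= w.sum(axis=1, keepdims=True)      # normalize weights per node
    return w @ boundary_disp

# Toy case: unit square whose right edge shifts up while the left edge is fixed.
boundary = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
boundary_disp = np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 0.2], [0.0, 0.2]])
interior = np.array([[0.5, 0.5], [0.9, 0.5], [0.1, 0.5]])
disp = idw_displace(interior, boundary, boundary_disp)
```

Because IDW alone only interpolates displacements, it cannot guarantee element validity under large or multi-body motion, which is why the paper chains it with GETMe smoothing and an untangler.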

  15. Practical problems relating to the hovercraft application of marine gas turbines

    NASA Astrophysics Data System (ADS)

    Jin-Zhang, Z.

Design specifications of the marine gas turbine in a hovercraft application are discussed, in addition to the requirements for load distribution of the turbine power in this application. The effective load of the gas turbine is found to be about 57 percent higher than that of the air-cooled diesel engine, and a comparison between the two engines indicates that the diesel-driven boat becomes advantageous only when the endurance exceeds 26 hours. A multistage filter for air-water separation could reduce the salt content to less than 0.01 ppm with a pressure loss of less than 100 mm water head, and a low profile-resistance ejector without a mixing section could be developed to keep the engine room temperature in the 45-50 °C range.

  16. Link Winds: A visual data analysis system and its application to the atmospheric ozone depletion problem

    NASA Technical Reports Server (NTRS)

    Jacobson, Allan S.; Berkin, Andrew L.

    1995-01-01

    The Linked Windows Interactive Data System (LinkWinds) is a prototype visual data exploration system resulting from a NASA Jet Propulsion Laboratory (JPL) program of research into the application of graphical methods for rapidly accessing, displaying, and analyzing large multivariate multidisciplinary data sets. Running under UNIX, it is an integrated multi-application executing environment using a data-linking paradigm to dynamically interconnect and control multiple windows containing a variety of displays and manipulators. This paradigm, resulting in a system similar to a graphical spreadsheet, is not only a powerful method for organizing large amounts of data for analysis, but leads to a highly intuitive, easy-to-learn user interface. It provides great flexibility in rapidly interacting with large masses of complex data to detect trends, correlations, and anomalies. The system, containing an expanding suite of non-domain-specific applications, provides for the ingestion of a variety of database formats and hard-copy output of all displays. Remote networked workstations running LinkWinds may be interconnected, providing a multiuser science environment (MUSE) for collaborative data exploration by a distributed science team. The system is being developed in close collaboration with investigators in a variety of science disciplines using both archived and real-time data. It is currently being used to support the Microwave Limb Sounder (MLS) in orbit aboard the Upper Atmosphere Research Satellite (UARS). This paper describes the application of LinkWinds to this data to rapidly detect features, such as the ozone hole configuration, and to analyze correlations between chemical constituents of the atmosphere.

  17. Application of wave mechanics theory to fluid dynamics problems: Boundary layer on a circular cylinder including turbulence

    NASA Technical Reports Server (NTRS)

    Krzywoblocki, M. Z. V.

    1974-01-01

    The application of the elements of quantum (wave) mechanics to some special problems in the field of macroscopic fluid dynamics is discussed. Emphasis is placed on the flow of a viscous, incompressible fluid around a circular cylinder. The following subjects are considered: (1) the flow of a nonviscous fluid around a circular cylinder, (2) the restrictions imposed on the stream function by the number of dimensions of space, and (3) the flow past three dimensional bodies in a viscous fluid, particularly past a circular cylinder in the symmetrical case.

  18. Applications of density functional theory calculations to selected problems in hydrocarbon processing

    NASA Astrophysics Data System (ADS)

    Nabar, Rahul

    Recent advances in theoretical techniques and computational hardware have made it possible to apply Density Functional Theory (DFT) methods to realistic problems in heterogeneous catalysis. Hydrocarbon processing is economically and strategically a very important industrial sector in today's world. In this thesis, we employ DFT methods to examine several important problems in hydrocarbon processing. Fischer-Tropsch Synthesis (FTS) is a mature technology to convert synthesis gas derived from coal, natural gas, or biomass into liquid fuels, specifically diesel. Iron is an active FTS catalyst, but the absence of detailed reaction mechanisms makes it difficult to maximize activity and optimize product distribution. We evaluate thermochemistry, kinetics, and Rate Determining Steps (RDS) for Fischer-Tropsch Synthesis on several models of Fe catalysts: Fe(110), Fe(211), and Pt-promoted Fe(110). Our studies indicated that CO dissociation is likely to be the RDS under most reaction conditions, but the DFT-calculated activation energy (Ea) for direct CO dissociation was too large to explain the observed catalyst activity. Consequently, we demonstrate that H-assisted CO-dissociation pathways are competitive with direct CO dissociation on both Co and Fe catalysts and could be responsible for a major fraction of the reaction flux (especially at high CO coverages). We then extend this alternative mechanistic model to close-packed facets of nine transition metal catalysts (Fe, Co, Ni, Ru, Rh, Pd, Os, Ir and Pt). H-assisted CO dissociation offers a kinetically easier route on each of the metals studied. DFT methods are also applied to another problem from the petroleum industry: discovery of poison-resistant, bimetallic alloy catalysts (poisons: C, S, Cl, P). Our systematic screening studies identify several Near Surface Alloys (NSAs) that are expected to be highly poison-resistant yet stable, avoiding adsorbate-induced reconstruction. Adsorption trends are also correlated with

  19. Applicability of the particle filter for high-dimensional problems using a massively parallel computer

    NASA Astrophysics Data System (ADS)

    Nakano, S.; Higuchi, T.

    2012-04-01

    The particle filter (PF) is one of the ensemble-based algorithms for data assimilation. The PF obtains an approximation of the posterior PDF of a state by resampling with replacement from a prior ensemble. The procedure of the PF does not assume linearity or Gaussianity; thus, it can be applied to general nonlinear problems. However, in order to obtain appropriate results for high-dimensional problems, the PF requires an enormous number of ensemble members. Since the PF must calculate the time integral for each particle at each time step, the large ensemble size results in prohibitive computational cost. Various methods exist for reducing the number of particles. In contrast, we employ a straightforward approach to overcome this problem; that is, we use a massively parallel computer to achieve a sufficiently large ensemble size. Since the time integral in the PF can readily be parallelized, we can notably improve the computational efficiency using a parallel computer. However, if we naively implement the PF on a distributed computing system, we encounter another difficulty: many data transfers occur randomly between different nodes of the distributed computing system. Such data transfers can be reduced by dividing the ensemble into small subsets (groups). If we limit the resampling to within each of the subsets, the data transfers can be done efficiently in parallel. If the ensemble is divided into small subsets, the risk of local sample impoverishment within each subset is enhanced. However, if we change the grouping at each time step, the information held by a node can be propagated to all of the nodes after a finite number of time steps and the local sample impoverishment can be avoided. In the present study, we compare the above method, based on the local resampling of each group, with the naive implementation of the PF based on the global resampling of the whole ensemble. The global resampling enables us to achieve a slightly better
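The group-local resampling idea above can be sketched as follows: resample with replacement only within each group (one group per compute node), and draw a fresh random regrouping each step so information still mixes across nodes. This is an illustrative serial sketch with multinomial resampling; the toy weights and sizes are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_resample(particles, weights, n_groups):
    """Resample within groups so no particle data crosses group
    boundaries; a fresh random regrouping each call lets information
    propagate across groups over successive steps."""
    n = len(particles)
    perm = rng.permutation(n)                  # random regrouping this step
    out = np.empty_like(particles)
    for g in np.array_split(perm, n_groups):
        w = weights[g] / weights[g].sum()      # normalise within the group
        idx = rng.choice(g, size=len(g), p=w)  # multinomial resampling, group-local
        out[g] = particles[idx]
    return out

# Toy step: weights favour particles near zero, so survivors cluster there.
particles = rng.normal(size=1000)
weights = np.exp(-0.5 * particles ** 2)
resampled = local_resample(particles, weights, n_groups=4)
print(np.abs(resampled).mean(), np.abs(particles).mean())
```

In a distributed setting each group's loop body would run on its own node, so only the (cheap) regrouping permutation needs to be communicated.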

  20. Efficient implementation and application of the artificial bee colony algorithm to low-dimensional optimization problems

    NASA Astrophysics Data System (ADS)

    von Rudorff, Guido Falk; Wehmeyer, Christoph; Sebastiani, Daniel

    2014-06-01

    We adapt a swarm-intelligence-based optimization method (the artificial bee colony algorithm, ABC) to enhance its parallel scaling properties and to improve the escaping behavior from deep local minima. Specifically, we apply the approach to the geometry optimization of Lennard-Jones clusters. We illustrate the performance and the scaling properties of the parallelization scheme for several system sizes (5-20 particles). Our main findings are specific recommendations for ranges of the parameters of the ABC algorithm which yield maximal performance for Lennard-Jones clusters and Morse clusters. The suggested parameter ranges for these different interaction potentials turn out to be very similar; thus, we believe that our reported values are fairly general for the ABC algorithm applied to chemical optimization problems.
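A minimal artificial bee colony in the spirit of the method described can be sketched as below. This is not the authors' parallel implementation; the colony size, trial limit, iteration count, and the toy sphere objective are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200):
    """Minimal artificial bee colony: employed and onlooker bees make
    one-dimension neighbour moves; scouts restart sources that have
    failed to improve `limit` times."""
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_food, dim))        # food sources (candidate solutions)
    fx = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)
    best_x, best_f = X[fx.argmin()].copy(), fx.min()

    def try_move(i):
        k = int(rng.integers(n_food - 1))
        k += k >= i                               # random partner source, k != i
        j = rng.integers(dim)
        cand = X[i].copy()
        cand[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        cand = np.clip(cand, lo, hi)
        fc = f(cand)
        if fc < fx[i]:
            X[i], fx[i], trials[i] = cand, fc, 0  # greedy selection
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                   # employed-bee phase
            try_move(i)
        fit = 1.0 / (1.0 + fx - fx.min())         # onlookers prefer better sources
        for i in rng.choice(n_food, n_food, p=fit / fit.sum()):
            try_move(i)
        worn = trials > limit                     # scout phase: abandon stale sources
        X[worn] = rng.uniform(lo, hi, (int(worn.sum()), dim))
        fx[worn] = [f(x) for x in X[worn]]
        trials[worn] = 0
        if fx.min() < best_f:                     # memorize the best-ever solution
            best_f = fx.min()
            best_x = X[fx.argmin()].copy()
    return best_x, best_f

# Toy objective: 3-D sphere function, global minimum 0 at the origin.
x_best, f_best = abc_minimize(lambda x: float(np.sum(x ** 2)), dim=3, bounds=(-5.0, 5.0))
print(f_best)  # approaches 0
```

For a Lennard-Jones cluster one would swap in the pairwise LJ energy as `f` and flatten the particle coordinates into the `dim`-vector; the escape behaviour discussed in the paper hinges on the scout phase and on how `limit` is tuned.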

  1. PROGRESS AND PROBLEMS IN THE APPLICATION OF FOCUSED ULTRASOUND FOR BLOOD-BRAIN BARRIER DISRUPTION

    PubMed Central

    Vykhodtseva, Natalia; McDannold, Nathan; Hynynen, Kullervo

    2008-01-01

    Advances in neuroscience have resulted in the development of new diagnostic and therapeutic agents for potential use in the central nervous system (CNS). However, the ability to deliver the majority of these agents to the brain is limited by the blood–brain barrier (BBB), a specialized structure of the blood vessel wall that hampers transport and diffusion from the blood to the brain. Many CNS disorders could be treated with drugs, enzymes, genes, or large-molecule biotechnological products such as recombinant proteins, if they could cross the BBB. This article reviews the problems of the BBB presence in treating the vast majority of CNS diseases and the efforts to circumvent the BBB through the design of new drugs and the development of more sophisticated delivery methods. Recent advances in the development of noninvasive, targeted drug delivery by MRI-guided ultrasound-induced BBB disruption are also summarized. PMID:18511095

  2. Fold prediction problem: the application of new physical and physicochemical-based features.

    PubMed

    Dehzangi, Abdollah; Phon-Amnuaisuk, Somnuk

    2011-02-01

    One of the most important goals in bioinformatics is the ability to predict the tertiary structure of a protein from its amino acid sequence. In this paper, new feature groups based on the physical and physicochemical properties of amino acids (size of the amino acids' side chains, predicted secondary structure based on normalized frequency of β-Strands, Turns, and Reverse Turns) are proposed to tackle this task. The proposed features are extracted using a modified feature extraction method adapted from Dubchak et al. To study the effectiveness of the proposed features and the modified feature extraction method, AdaBoost.M1, Multi Layer Perceptron (MLP), and Support Vector Machine (SVM), which have been commonly and successfully applied to the protein folding problem, are employed. Our experimental results show that the new feature groups, together with the modified feature extraction method, are capable of enhancing protein fold prediction accuracy beyond the previous works found in the literature.

  3. [Biological problems of origin and development of various physiological functions (theory and application)].

    PubMed

    Ivanov, K P

    2001-01-01

    The author presents ideas about the origin and development of several physiological functions: external respiration, the respiratory function of blood, blood circulation, thermoregulation, and energy supply. Conclusions are drawn about the main directions of evolution of these functions and the duration of their development in phylogeny. Examples are given of abrupt changes in the development of these functions in different groups of animals, and possible reasons for such changes are discussed. A general quantitative estimate of the results of the evolution of these functions, in terms of their overall efficiency, is made. Quantitative characteristics of the optimization and efficiency limits of physiological functions are suggested on the basis of new data in general biology and comparative physiology. The author puts forward a hypothesis about conventional "mistakes" of evolution and shows deep biological reasons for some serious illnesses. Examples are presented of applied problems in biology, physiology, and medicine that can be solved with data on the evolution of physiological functions. PMID:11548400

  4. Loophole to the universal photon spectrum in electromagnetic cascades and application to the cosmological lithium problem.

    PubMed

    Poulin, Vivian; Serpico, Pasquale Dario

    2015-03-01

    The standard theory of electromagnetic cascades onto a photon background predicts a quasiuniversal shape for the resulting nonthermal photon spectrum. This has been applied to very disparate fields, including nonthermal big bang nucleosynthesis (BBN). However, once the energy of the injected photons falls below the pair-production threshold the spectral shape is much harder, a fact that has been overlooked in past literature. This loophole may have important phenomenological consequences, since it generically alters the BBN bounds on nonthermal relics; for instance, it allows us to reopen the possibility of purely electromagnetic solutions to the so-called "cosmological lithium problem," which were thought to be excluded by other cosmological constraints. We show this with a proof-of-principle example and a simple particle physics model, compared with previous literature.

  5. Solving the self-interaction problem in Kohn-Sham density functional theory: Application to atoms

    NASA Astrophysics Data System (ADS)

    Däne, M.; Gonis, A.; Nicholson, D. M.; Stocks, G. M.

    2015-04-01

    In previous work, we proposed a computational methodology that addresses the elimination of the self-interaction error from the Kohn-Sham formulation of the density functional theory. We demonstrated how the exchange potential can be obtained, and presented results of calculations for atomic systems up to Kr carried out within a Cartesian coordinate system. In this paper, we provide complete details of this self-interaction free method formulated in spherical coordinates based on the explicit equidensity basis ansatz. We prove analytically that derivatives obtained using this method satisfy the Virial theorem for spherical orbitals, where the problem can be reduced to one dimension. We present the results of calculations of ground-state energies of atomic systems throughout the periodic table carried out within the exchange-only mode.

  6. Solving the Self-Interaction Problem in Kohn-Sham Density Functional Theory. Application to Atoms

    SciTech Connect

    Daene, M.; Gonis, A.; Nicholson, D. M.; Stocks, G. M.

    2014-10-14

    Previously, we proposed a computational methodology that addresses the elimination of the self-interaction error from the Kohn–Sham formulation of the density functional theory. We demonstrated how the exchange potential can be obtained, and presented results of calculations for atomic systems up to Kr carried out within a Cartesian coordinate system. In our paper, we provide complete details of this self-interaction free method formulated in spherical coordinates based on the explicit equidensity basis ansatz. We also prove analytically that derivatives obtained using this method satisfy the Virial theorem for spherical orbitals, where the problem can be reduced to one dimension. We present the results of calculations of ground-state energies of atomic systems throughout the periodic table carried out within the exchange-only mode.

  7. Solving the Self-Interaction Problem in Kohn-Sham Density Functional Theory. Application to Atoms

    DOE PAGESBeta

    Daene, M.; Gonis, A.; Nicholson, D. M.; Stocks, G. M.

    2014-10-14

    Previously, we proposed a computational methodology that addresses the elimination of the self-interaction error from the Kohn–Sham formulation of the density functional theory. We demonstrated how the exchange potential can be obtained, and presented results of calculations for atomic systems up to Kr carried out within a Cartesian coordinate system. In our paper, we provide complete details of this self-interaction free method formulated in spherical coordinates based on the explicit equidensity basis ansatz. We also prove analytically that derivatives obtained using this method satisfy the Virial theorem for spherical orbitals, where the problem can be reduced to one dimension. We present the results of calculations of ground-state energies of atomic systems throughout the periodic table carried out within the exchange-only mode.

  8. Applications of Transport/Reaction Codes to Problems in Cell Modeling

    SciTech Connect

    MEANS, SHAWN A.; RINTOUL, MARK DANIEL; SHADID, JOHN N.

    2001-11-01

    We demonstrate two specific examples that show how our exiting capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus Laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.

  9. Loophole to the universal photon spectrum in electromagnetic cascades and application to the cosmological lithium problem.

    PubMed

    Poulin, Vivian; Serpico, Pasquale Dario

    2015-03-01

    The standard theory of electromagnetic cascades onto a photon background predicts a quasiuniversal shape for the resulting nonthermal photon spectrum. This has been applied to very disparate fields, including nonthermal big bang nucleosynthesis (BBN). However, once the energy of the injected photons falls below the pair-production threshold the spectral shape is much harder, a fact that has been overlooked in past literature. This loophole may have important phenomenological consequences, since it generically alters the BBN bounds on nonthermal relics; for instance, it allows us to reopen the possibility of purely electromagnetic solutions to the so-called "cosmological lithium problem," which were thought to be excluded by other cosmological constraints. We show this with a proof-of-principle example and a simple particle physics model, compared with previous literature. PMID:25793793

  10. Application of local policy to prevent alcohol problems: experiences from a community trial.

    PubMed

    Holder, H D; Reynolds, R I

    1997-06-01

    Alcohol policy conventionally has been established at the national or regional, state and provincial levels. Alcohol policy at any level is not actually limited to the regulation and control of alcohol production, wholesale distribution, and retail sales. There are a number of alternatives for setting alcohol policies within a local community. Building upon existing national and state/provincial laws, policy makers at the community level can set priorities for allocating resources and enforcing laws related to drinking and driving, underage alcohol sales, alcohol serving practices of bars and restaurants and geographical density of alcohol outlets in the community. This paper concludes from the Community Trials Project that policies established at the local level can reduce alcohol problems. PMID:9231451

  11. Cladistic analysis of genotype data-application to GAW15 Problem 3

    PubMed Central

    Jung, Hsuan; Zhao, Keyan; Marjoram, Paul

    2007-01-01

    Given the increasing size of modern genetic data sets and, in particular, the move towards genome-wide studies, there is merit in considering analyses that gain computational efficiency by being more heuristic in nature. With this in mind, we present results of cladistic analysis methods applied to the Genetic Analysis Workshop 15 Problem 3 simulated data (answers known). Our analysis attempts to capture similarities between individuals using a series of trees, and then looks for regions in which mutations on those trees can successfully explain a phenotype of interest. Existing varieties of such algorithms assume haplotypes are known, or have been inferred, an assumption that is often unrealistic for genome-wide data. We therefore present an extension of these methods that can successfully analyze genotype, rather than haplotype, data. PMID:18466467

  12. Cladistic analysis of genotype data-application to GAW15 Problem 3.

    PubMed

    Jung, Hsuan; Zhao, Keyan; Marjoram, Paul

    2007-01-01

    Given the increasing size of modern genetic data sets and, in particular, the move towards genome-wide studies, there is merit in considering analyses that gain computational efficiency by being more heuristic in nature. With this in mind, we present results of cladistic analysis methods applied to the Genetic Analysis Workshop 15 Problem 3 simulated data (answers known). Our analysis attempts to capture similarities between individuals using a series of trees, and then looks for regions in which mutations on those trees can successfully explain a phenotype of interest. Existing varieties of such algorithms assume haplotypes are known, or have been inferred, an assumption that is often unrealistic for genome-wide data. We therefore present an extension of these methods that can successfully analyze genotype, rather than haplotype, data.

  13. Sensitivity Analysis of Boundary Value Problems: Application to Nonlinear Reaction-Diffusion Systems

    NASA Astrophysics Data System (ADS)

    Reuven, Yakir; Smooke, Mitchell D.; Rabitz, Herschel

    1986-05-01

    A direct and very efficient approach for obtaining sensitivities of two-point boundary value problems solved by Newton's method is studied. The link between the solution method and the sensitivity equations is investigated together with matters of numerical accuracy and efficiency. This approach is employed in the analysis of a model three species, unimolecular, steady-state, premixed laminar flame. The numerical accuracy of the sensitivities is verified and their values are utilized for interpretation of the model results. It is found that parameters associated directly with the temperature play a dominant role. The system's Green's functions relating dependent variables are also controlled strongly by the temperature. In addition, flame speed sensitivities are calculated and shown to be a special class of derived sensitivity coefficients. Finally, some suggestions for the physical interpretation of sensitivities in model analysis are given.
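The core idea linking the solution method to the sensitivity equations is that once Newton's method has converged on F(x; p) = 0, the sensitivities solve J (dx/dp) = -∂F/∂p with the same Jacobian J, so the existing factorization can be reused. A sketch on a hypothetical two-equation system (not the flame model):

```python
import numpy as np

# Toy system: F1 = x0^2 + x1 - p,  F2 = x0 - x1, with parameter p.
def F(x, p):
    return np.array([x[0] ** 2 + x[1] - p, x[0] - x[1]])

def J(x):                                  # Jacobian dF/dx
    return np.array([[2 * x[0], 1.0], [1.0, -1.0]])

def dFdp(x, p):                            # partial derivative dF/dp
    return np.array([-1.0, 0.0])

p = 2.0
x = np.array([1.5, 1.5])
for _ in range(20):                        # Newton's method on F(x; p) = 0
    x = x - np.linalg.solve(J(x), F(x, p))

dxdp = np.linalg.solve(J(x), -dFdp(x, p))  # sensitivity system, Jacobian reused
print(x, dxdp)                             # x -> (1, 1); dx/dp -> (1/3, 1/3)
```

The analytic check: x0 = x1 and x0² + x0 = p give x0 = 1 at p = 2, and differentiating gives (2·x0 + 1)·dx0/dp = 1, i.e. dx0/dp = 1/3, matching the linear solve.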

  14. Coarsening Strategies for Unstructured Multigrid Techniques with Application to Anisotropic Problems

    NASA Technical Reports Server (NTRS)

    Morano, E.; Mavriplis, D. J.; Venkatakrishnan, V.

    1996-01-01

    Over the years, multigrid has been demonstrated as an efficient technique for solving inviscid flow problems. However, for viscous flows, convergence rates often degrade. This is generally due to the required use of stretched meshes (i.e. the aspect ratio AR = Δy/Δx is much less than 1) in order to capture the boundary layer near the body. Usual techniques for generating a sequence of grids that produce proper convergence rates on isotropic meshes are not adequate for stretched meshes. This work focuses on the solution of Laplace's equation, discretized through a Galerkin finite-element formulation on unstructured stretched triangular meshes. A coarsening strategy is proposed and results are discussed.
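The multigrid mechanics being coarsened here can be illustrated with a two-grid cycle for the 1D analogue of the problem (isotropic and structured, so the stretched-mesh difficulty does not arise; the smoother, transfer operators, and grid sizes are illustrative choices, not the paper's):

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted-Jacobi smoother for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid_cycle(u, f, h):
    """Smooth, restrict the residual by injection, solve the coarse problem
    exactly, prolong the correction linearly, and smooth again."""
    u = jacobi(u, f, h, 3)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    rc = r[::2].copy()                        # injection onto the coarse grid
    nc = len(rc)
    H = 2 * h
    A = (2 * np.eye(nc - 2) - np.eye(nc - 2, k=1) - np.eye(nc - 2, k=-1)) / (H * H)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])   # exact coarse-grid solve (small, dense)
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # linear prolongation
    return jacobi(u + e, f, h, 3)

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)            # manufactured solution sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # error at the discretisation level
```

On a stretched mesh the smoother only damps errors along the strongly coupled direction, which is exactly why the choice of coarsening strategy (e.g. semi-coarsening) becomes the central question of the paper.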

  15. Coarsening strategies for unstructured multigrid techniques with application to anisotropic problems

    NASA Technical Reports Server (NTRS)

    Morano, E.; Mavriplis, D. J.; Venkatakrishnan, V.

    1995-01-01

    Over the years, multigrid has been demonstrated as an efficient technique for solving inviscid flow problems. However, for viscous flows, convergence rates often degrade. This is generally due to the required use of stretched meshes (i.e., the aspect ratio AR = Δy/Δx is much less than 1) in order to capture the boundary layer near the body. Usual techniques for generating a sequence of grids that produce proper convergence rates on isotropic meshes are not adequate for stretched meshes. This work focuses on the solution of Laplace's equation, discretized through a Galerkin finite-element formulation on unstructured stretched triangular meshes. A coarsening strategy is proposed and results are discussed.

  16. A multi-physics and multi-scale lumped parameter model of cardiac contraction of the left ventricle: a conceptual model from the protein to the organ scale.

    PubMed

    Bhattacharya-Ghosh, Benjamin; Schievano, Silvia; Díaz-Zuccarini, Vanessa

    2012-10-01

    In cardiovascular computational physiology, understanding cardiac contraction as a multi-scale process is of paramount importance for understanding causality across different scales. Within this study, a multi-scale and multi-physics model of the left ventricle that connects the process of cardiac excitation and contraction from the protein to the organ level is presented in a novel way. The model presented here includes the functional description of a cardiomyocyte (cellular scale), which explains the dynamic behaviour of the calcium concentration within the cell whilst an action potential develops. The cell domain is coupled to a domain that determines the kinetics of the sliding filament mechanism (protein level), which is at the basis of cardiac contraction. These processes are then linked to the generation of muscular force and from there to the generation of pressure inside the ventricle. This multi-scale model presents a coherent and unified way to describe cardiac contraction from the protein to the organ level.

  17. Remote sensing application for identifying wetland sites on Cyprus: problems and prospects

    NASA Astrophysics Data System (ADS)

    Markogianni, Vassilik; Tzirkalli, Elli; Gücel, Salih; Dimitriou, Elias; Zogaris, Stamatis

    2014-08-01

    Wetland features on seasonally semi-arid islands pose particular difficulties for identification, inventory, and conservation assessment. Our survey presents an application of imagery from the newly launched Landsat 8 sensor to rapidly identify inland water bodies and produce a screening-level, island-wide inventory of wetlands for the first time in Cyprus. The method treats all lentic water bodies (artificial and natural) and areas holding semi-aquatic vegetation as wetland sites. The results show that 179 sites are delineated by the remote sensing application; when this is supplemented by expert-guided identification and ground surveys during favourable wet-season conditions, the total number of inventoried wetland sites is 315. The number of wetland sites is surprisingly large since it does not include micro-wetlands (under 2000 m2 or 0.2 ha) or widespread narrow lotic and riparian stream reaches. In Cyprus, a number of different wetland types occur, often under temporary or ephemerally flooded conditions, and they are usually of very small areal extent. Many wetlands are artificial or semi-artificial water bodies, and numerous natural small wetland features are often degraded by anthropogenic changes or exist as remnant patches, and are therefore heavily modified compared to their original natural state. The study shows that there is an urgent need for integrated and multidisciplinary study and monitoring of wetland cover due to climate change effects and/or anthropogenic interventions. Small wetlands are particularly vulnerable, while many artificial wetlands are not managed for biodiversity values. The remote sensing and GIS applications are efficient tools for this initial screening-level inventory. The need for baseline inventory information in support of wetland conservation is multi-scalar and requires an adaptive protocol to guide effective conservation planning.

  18. Applications of digital image processing techniques to problems of data registration and correlation

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1978-01-01

    An overview is presented of the evolution of the computer configuration at JPL's Image Processing Laboratory (IPL). The development of techniques for the geometric transformation of digital imagery is discussed and consideration is given to automated and semiautomated image registration, and the registration of imaging and nonimaging data. The increasing complexity of image processing tasks at IPL is illustrated with examples of various applications from the planetary program and earth resources activities. It is noted that the registration of existing geocoded data bases with Landsat imagery will continue to be important if the Landsat data is to be of genuine use to the user community.

  19. [Problems and prospects of gene therapeutics and DNA vaccines development and application].

    PubMed

    Kibirev, Ia A; Drobkov, B I; Marakulin, I V

    2010-01-01

    This review summarizes foreign publications devoted to different aspects of the biological safety of DNA vaccines and gene therapeutics. Despite incomplete understanding of their mechanisms of action, numerous prototype DNA-based biopharmaceuticals are in advanced stages of human clinical trials. The review focuses on safety concerns of gene-based formulations related to toxic effects, the possibility of vertical transmission, complications of genome integration, immunologic and immunopathologic effects, and environmental spread. It is noted that the development of national regulatory documents for gene therapy medicinal products is a significant precondition for their application in medical practice.

  20. Applications of phase-locking loops to synchronization problems in space communications links

    NASA Astrophysics Data System (ADS)

    Maral, G.; Bousquet, M.

    1981-12-01

    Components and methods for assuring the synchronization of carriers and bits in space communications links with low signal-to-noise ratios are presented. Closed-loop systems are described which function by phase estimation through satisfaction of maximum-likelihood criteria. Applications are discussed for a decision-feedback carrier loop, a Costas loop, an x-squared nonlinear synchronizer, and early/late gate synchronizers. Additional consideration is given to data transition tracking loops and nonlinear synchronizers, in which a nonlinear algorithm filter processes the signal in the baseband. Future implementation of microprocessors for entirely numerical synchronization is indicated.
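The closed-loop phase-estimation principle common to these synchronizers can be illustrated with a first-order digital phase-locked loop: mix the input with a quadrature local reference to get a phase-error signal, then integrate it to steer the reference. All numbers below (sample rate, carrier frequency, phase offset, loop gain) are made-up values:

```python
import numpy as np

fs = 10000.0                      # sample rate, Hz
f0 = 100.0                        # carrier frequency, Hz (known to the receiver)
true_phase = 0.7                  # unknown carrier phase offset, rad
t = np.arange(2000) / fs
x = np.cos(2 * np.pi * f0 * t + true_phase)

phase = 0.0                       # loop's running phase estimate
gain = 0.02                       # loop gain: bandwidth vs. noise trade-off
for n, xn in enumerate(x):
    ref = 2 * np.pi * f0 * t[n] + phase
    err = -xn * np.sin(ref)       # phase detector: mix input with quadrature NCO
    phase += gain * err           # integrator drives the average error to zero
print(phase)                      # settles near true_phase
```

Averaged over the double-frequency ripple, the error term is proportional to sin(true_phase - phase), so the loop has a stable lock point at the true phase; a Costas loop replaces the phase detector with one that is insensitive to the data modulation.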

  1. Applications of Bayesian Statistics to Problems in Gamma-Ray Bursts

    NASA Technical Reports Server (NTRS)

    Meegan, Charles A.

    1997-01-01

    This presentation will describe two applications of Bayesian statistics to Gamma Ray Bursts (GRBs). The first attempts to quantify the evidence for a cosmological versus galactic origin of GRBs using only the observations of the dipole and quadrupole moments of the angular distribution of bursts. The cosmological hypothesis predicts isotropy, while the galactic hypothesis is assumed to produce a uniform probability distribution over positive values for these moments. The observed isotropic distribution indicates that the Bayes factor for the cosmological hypothesis over the galactic hypothesis is about 300. Another application of Bayesian statistics is in the estimation of chance associations of optical counterparts with galaxies. The Bayesian approach is preferred to frequentist techniques here because the Bayesian approach easily accounts for galaxy mass distributions and because one can incorporate three disjoint hypotheses: (1) bursts come from galactic centers, (2) bursts come from galaxies in proportion to luminosity, and (3) bursts do not come from external galaxies. This technique was used in the analysis of the optical counterpart to GRB970228.
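The logic of the first calculation, comparing the evidence of a sharp prediction (isotropy) against a diffuse uniform prior over positive moments, can be sketched for a single observed moment. The observed value, its uncertainty, and the prior range below are made-up illustrative numbers, not the paper's data:

```python
import numpy as np

m_obs, sigma = 0.005, 0.01        # observed moment and Gaussian uncertainty (made up)
m_max = 0.3                       # uniform prior range under the galactic hypothesis

def like(m):
    """Gaussian measurement likelihood of the observed moment given true m."""
    return np.exp(-0.5 * ((m_obs - m) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

Z_cosmo = like(0.0)               # evidence of the sharp prediction m = 0
m = np.linspace(0.0, m_max, 10001)
L = like(m)
Z_gal = np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(m)) / m_max   # prior-averaged evidence
print(Z_cosmo / Z_gal)            # Bayes factor favouring the cosmological hypothesis
```

Because the galactic hypothesis spreads its prior mass over values the data rule out, its averaged evidence is penalized (an Occam factor), and a near-zero observation yields a Bayes factor well above 1 in favor of isotropy.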

  2. Application of Micro-XRF for Nuclear Materials Characterization and Problem Solving

    SciTech Connect

    Worley, Christopher G.; Tandon, Lav; Martinez, Patrick T.; Decker, Diana L.; Schwartz, Daniel S.

    2012-08-02

    Micro-X-ray fluorescence (MXRF) has been in use for more than 20 years, but to date it has been underutilized for spatially resolved elemental characterization of nuclear materials (NM). Scanning electron microscopy (SEM) with EDX is far more common for NM characterization at the micro scale, but MXRF fills the gap at larger scales, from tens of microns up to cm². We present four interesting NM applications of MXRF that demonstrate its unique value for plutonium work. Although SEM has much higher resolution, MXRF is clearly better suited to these larger-scale samples, especially non-conducting ones. MXRF was useful for quickly identifying insoluble particles in Pu/Np oxide. It was vital for locating Pu particles on HEPA filters over cm² areas, which were then extracted for SEM morphology and particle-size-distribution analysis. It is also ideal for surface swipes, which are far too large for practical SEM imaging and whose loose residue would contaminate the SEM vacuum chamber. MXRF imaging of ER plutonium metal warrants further studies to explore elemental heterogeneity in the metal.

  3. Optimal Assignment Problem Applications of Finite Mathematics to Business and Economics. [and] Difference Equations with Applications. Applications of Difference Equations to Economics and Social Sciences. [and] Selected Applications of Mathematics to Finance and Investment. Applications of Elementary Algebra to Finance. [and] Force of Interest. Applications of Calculus to Finance. UMAP Units 317, 322, 381, 382.

    ERIC Educational Resources Information Center

    Gale, David; And Others

    Four units make up the contents of this document. The first examines applications of finite mathematics to business and economics. The user is expected to learn the method of optimization in optimal assignment problems. The second module presents applications of difference equations to economics and social sciences, and shows how to: 1) interpret…

  4. Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Dorrepaal, J. Mark

    1990-01-01

    The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equations of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.
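
    The adaptive-stencil idea at the heart of ENO can be illustrated in a far simpler setting than the high-order, two-dimensional scheme described above. A minimal second-order 1D ENO reconstruction chooses whichever one-sided slope is smoother, so the stencil never interpolates across a discontinuity (the function name is ours, for illustration only):

    ```python
    def eno2_left_interface(v, i):
        """Second-order ENO value at x_{i+1/2}, from the left, given cell averages v.

        The stencil is chosen adaptively: use the one-sided difference that is
        smaller in magnitude (smoother), which avoids interpolating across a jump.
        """
        dl = v[i] - v[i - 1]      # backward difference
        dr = v[i + 1] - v[i]      # forward difference
        slope = dl if abs(dl) < abs(dr) else dr
        return v[i] + 0.5 * slope
    ```

    On smooth data this recovers the second-order interface value; next to a jump the chosen stencil stays on the smooth side, so no spurious oscillation is introduced.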

  5. Application of the N-quantum approximation to the proton radius problem

    NASA Astrophysics Data System (ADS)

    Cowen, Steven

    This thesis is organized into three parts: 1. Introduction and bound state calculations of electronic and muonic hydrogen, 2. Bound states in motion, and 3. Treatment of soft photons. In the first part, we apply the N-Quantum Approximation (NQA) to electronic and muonic hydrogen and search for any new corrections to energy levels that could account for the 0.31 meV discrepancy of the proton radius problem. We derive a bound state equation and compare our numerical solutions and wave functions to those of the Dirac equation. We find NQA Lamb shift diagrams and calculate the associated energy shift contributions. We do not find any new corrections large enough to account for the discrepancy. In part 2, we discuss the effects of motion on bound states using the NQA. We find classical Lorentz contraction of the lowest order NQA wave function. Finally, in part 3, we develop a clothing transformation for interacting fields in order to produce the correct asymptotic limits. We find the clothing eliminates a trilinear interacting Hamiltonian term and produces a quadrilinear soft photon interaction term.

  6. Non-obvious Problems in Clark Electrode Application at Elevated Temperature and Ways of Their Elimination

    PubMed Central

    Miniaev, M. V.; Belyakova, M. B.; Kostiuk, N. V.; Leshchenko, D. V.; Fedotova, T. A.

    2013-01-01

    A well-known cause of frequent failures of closed oxygen sensors is the appearance of gas bubbles in the electrolyte. The problem is traditionally attributed to insufficient sealing of the sensor, which is not always true. Study of a typical temperature regime of a measurement system based on a Clark sensor showed that spontaneous release of the gas phase is a natural effect caused by periodic warming of the sensor to the temperature of the test liquid. Warming the sensor together with the incubation medium causes oversaturation of the electrolyte by dissolved gases and the release of gas bubbles. A lower rate of sensor heating relative to the medium reduces but does not eliminate this effect. It was established experimentally that with each cycle of heating the measuring system to 37°C followed by cooling, the volume of the gas phase in the electrolyte (KCl; 60 g/L; 400 μL) increased by approximately 0.6 μL. Thus, over just a few cycles it can dramatically degrade the characteristics of the sensor. A method was developed in which the oxygen sensor is heated in contact with a liquid depleted of dissolved gases, allowing complete elimination of the above-mentioned effect. PMID:23984188

  7. Low Dimensional Tools for Flow-Structure Interaction Problems: Application to Micro Air Vehicles

    NASA Technical Reports Server (NTRS)

    Schmit, Ryan F.; Glauser, Mark N.; Gorton, Susan A.

    2003-01-01

    A low dimensional tool for flow-structure interaction problems based on Proper Orthogonal Decomposition (POD) and modified Linear Stochastic Estimation (mLSE) has been proposed and was applied to a Micro Air Vehicle (MAV) wing. The method utilizes the dynamic strain measurements from the wing to estimate the POD expansion coefficients, from which an estimation of the velocity in the wake can be obtained. For this experiment the MAV wing was set at five different angles of attack, from 0 deg to 20 deg. The tunnel velocities varied from 44 to 58 ft/sec with corresponding Reynolds numbers of 46,000 to 70,000. A stereo Particle Image Velocimetry (PIV) system was used to measure the wake of the MAV wing simultaneously with the signals from the twelve dynamic strain gauges mounted on the wing. With 20 out of 2400 POD modes, a reasonable estimation of the flow field was observed; increasing the number of POD modes yields a better estimation of the flow field. Utilizing the simultaneously sampled strain gauges and flow field measurements in conjunction with mLSE, an estimation of the flow field with lower energy modes is reasonable. With these results, the methodology for estimating the wake flow field from dynamic strain gauges alone is validated.
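
    The POD half of this pipeline can be sketched with synthetic data: snapshots are stacked into a matrix, an SVD yields spatial modes and time coefficients, and a handful of energetic modes reconstructs most of the field. The snapshot data, sizes, and noise level below are invented for illustration; the strain-gauge mLSE estimation step is not included.

    ```python
    import numpy as np

    # Synthetic snapshot matrix: each column is one "velocity field" snapshot,
    # built from two dominant spatial modes plus weak noise (all illustrative).
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 200)          # 200 snapshots in time
    x = np.linspace(0, 1, 64)                   # 64 spatial points
    U = (np.outer(np.sin(2 * np.pi * x), np.sin(t))
         + 0.3 * np.outer(np.sin(4 * np.pi * x), np.cos(3 * t))
         + 0.01 * rng.standard_normal((64, 200)))

    # POD via SVD: columns of Phi are spatial modes, rows of A are time coefficients
    Phi, s, Vt = np.linalg.svd(U, full_matrices=False)
    A = np.diag(s) @ Vt

    # Truncated reconstruction keeping only the r most energetic modes
    r = 2
    U_r = Phi[:, :r] @ A[:r, :]
    energy_captured = float((s[:r] ** 2).sum() / (s ** 2).sum())
    rel_err = float(np.linalg.norm(U - U_r) / np.linalg.norm(U))
    ```

    As in the experiment, a few modes capture nearly all of the energy, which is what makes estimating only the leading POD coefficients from strain measurements viable.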

  8. Application of household production theory to selected natural-resource problems in less-developed countries

    SciTech Connect

    Mercer, D.E.

    1991-01-01

    The objectives are threefold: (1) to perform an analytical survey of household production theory as it relates to natural-resource problems in less-developed countries, (2) to develop a household production model of fuelwood decision making, (3) to derive a theoretical framework for travel-cost demand studies of international nature tourism. The model of household fuelwood decision making provides a rich array of implications and predictions for empirical analysis. For example, it is shown that fuelwood and modern fuels may be either substitutes or complements depending on the interaction of the gross-substitution and income-expansion effects. Therefore, empirical analysis should precede adoption of any inter-fuel substitution policies such as subsidizing kerosene. The fuelwood model also provides a framework for analyzing the conditions and factors determining entry and exit by households into the wood-burning subpopulation, a key for designing optimal household energy policies in the Third World. The international nature tourism travel cost model predicts that the demand for nature tourism is an aggregate of the demand for the individual activities undertaken during the trip.

  9. Numerical Simulation of Liquid-Structure Interaction Problems in a Tank for Aerospace Applications

    NASA Astrophysics Data System (ADS)

    Bucchignani, E.; Pezzella, G.; Matrone, A.

    2009-01-01

    The current perspectives in aerospace require particular care in the analysis of several phenomena involving coupling between the mechanical behaviour and other physical fields, such as the fluid-structure interaction problem. This issue is particularly felt within Reusable Launch Vehicle (RLV) design since, during reentry, such vehicles carry large quantities of Main Engine Cut Off (MECO) residual propellants. The management of the residual propellant remaining in the reusable stage after MECO during a nominal mission is a crucial point for the design with respect to dimensioning and weight, landing safety issues, and post-landing procedures. The goal of this paper is the unsteady numerical simulation of an RLV-like tank configuration, filled with propellant such as liquid oxygen (LO2) and/or liquid hydrogen (LH2), subject to a typical reentry loading environment. The flowfield pressure and the stress field in the tank structure have been evaluated considering the motion of an incompressible fluid with a mobile free surface in a tank whose walls deform under the action of the liquid pressure. An unsteady finite element formulation is used, instead, for modelling the tank. The coupling algorithm, based on a staggered method, belongs to the class of partitioned treatment techniques, which solve the fluid and structural fields by means of two distinct models.
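
    The staggered partitioned idea can be reduced to a toy fixed-point iteration between two single-unknown "solvers" with under-relaxation. The scalar maps and constants below are invented stand-ins for the actual fluid and structural solvers, purely to show the exchange-and-relax pattern.

    ```python
    # Toy partitioned (staggered) fluid-structure iteration.
    p0, k1, k2 = 1.0, 0.5, 2.0   # made-up constants of the coupled toy problem
    omega = 0.7                  # under-relaxation factor for stability

    def fluid_solve(d):          # "fluid" step: pressure for a given wall displacement
        return p0 - k1 * d

    def structure_solve(p):      # "structure" step: wall displacement under pressure p
        return p / k2

    d = 0.0
    for it in range(100):
        p = fluid_solve(d)           # step 1: fluid field with frozen structure
        d_new = structure_solve(p)   # step 2: structure field with updated pressure
        if abs(d_new - d) < 1e-12:
            break
        d = d + omega * (d_new - d)  # relaxed update of the interface state

    d_exact = p0 / (k1 + k2)         # fixed point of the coupled toy system
    ```

    The under-relaxation factor plays the same stabilizing role that acceleration schemes play in real partitioned coupling algorithms.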

  10. Lunar scout missions: Galileo encounter results and application to scientific problems and exploration requirements

    NASA Technical Reports Server (NTRS)

    Head, J. W.; Belton, M.; Greeley, R.; Pieters, C.; Mcewen, A.; Neukum, G.; Mccord, T.

    1993-01-01

    The Lunar Scout Missions (payload: x-ray fluorescence spectrometer, high-resolution stereocamera, neutron spectrometer, gamma-ray spectrometer, imaging spectrometer, gravity experiment) will provide a global data set for the chemistry, mineralogy, geology, topography, and gravity of the Moon. These data will in turn provide an important baseline for the further scientific exploration of the Moon by all-purpose landers and micro-rovers, and by sample return missions from sites shown to be of primary interest from the global orbital data. These data would clearly provide the basis for intelligent selection of sites for the establishment of lunar bases for long-term scientific and resource exploration and engineering studies. The two recent Galileo encounters with the Moon (December 1990 and December 1992) illustrate how modern technology can be applied to significant lunar problems. We emphasize the regional results of the Galileo SSI to show the promise of geologic unit definition and characterization as an example of what can be done with the global coverage to be obtained by the Lunar Scout Missions.

  11. A New Ghost Cell/Level Set Method for Moving Boundary Problems: Application to Tumor Growth

    PubMed Central

    Macklin, Paul

    2011-01-01

    In this paper, we present a ghost cell/level set method for the evolution of interfaces whose normal velocity depends upon the solutions of linear and nonlinear quasi-steady reaction-diffusion equations with curvature-dependent boundary conditions. Our technique includes a ghost cell method that accurately discretizes normal derivative jump boundary conditions without smearing jumps in the tangential derivative; a new iterative method for solving linear and nonlinear quasi-steady reaction-diffusion equations; an adaptive discretization to compute the curvature and normal vectors; and a new discrete approximation to the Heaviside function. We present numerical examples that demonstrate better than 1.5-order convergence for problems where traditional ghost cell methods either fail to converge or attain at best sub-linear accuracy. We apply our techniques to a model of tumor growth in complex, heterogeneous tissues that consists of a nonlinear nutrient equation and a pressure equation with geometry-dependent jump boundary conditions. We simulate the growth of glioblastoma (an aggressive brain tumor) into a large, 1 cm square of brain tissue that includes heterogeneous nutrient delivery and varied biomechanical characteristics (white matter, gray matter, cerebrospinal fluid, and bone), and we observe growth morphologies that are highly dependent upon the variations of the tissue characteristics, an effect observed in real tumor growth. PMID:21331304
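
    The paper's new discrete Heaviside approximation is not reproduced here, but the widely used smoothed Heaviside from the level-set literature illustrates the kind of regularization involved (this is a generic textbook form, not the authors' variant):

    ```python
    import math

    def smoothed_heaviside(phi, eps):
        """Standard smoothed Heaviside H_eps(phi) used in level-set methods:
        0 inside the interface band's negative side, 1 on the positive side,
        and a smooth sinusoidal ramp across a band of half-width eps."""
        if phi < -eps:
            return 0.0
        if phi > eps:
            return 1.0
        return 0.5 * (1.0 + phi / eps + math.sin(math.pi * phi / eps) / math.pi)
    ```

    Quantities like the delta function on the interface are obtained by differentiating this regularized step, which is why the quality of the Heaviside approximation directly affects the scheme's convergence order.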

  12. A Probabilistic Framework for Risk Analysis: Application to Groundwater Problems (Invited)

    NASA Astrophysics Data System (ADS)

    Sanchez-Vila, X.; Tartakovsky, D. M.; Bolster, D.; Fernandez-Garcia, D.

    2009-12-01

    Many groundwater related problems involve artificial/engineered systems built within a complex natural geological medium. Examples include water supply, tunnels, and remediation efforts. Potential failure of a groundwater system can be defined as insufficient quality or quantity of water available for a given use at a given time. Such failures can have negative economic, social, and political consequences, including, in extreme cases, loss of life. Thus, proper risk analysis is essential for groundwater systems. System failure might be a consequence of a malfunction of the engineered part (e.g. a pump breaks, a valve leaks), but more often it is due to improper design caused by a lack of characterization of the subsurface, given the inherent heterogeneity of natural systems. The resulting uncertainty, which is both structural and parametric, suggests the use of probabilistic risk assessment (PRA) techniques. PRA facilitates comprehensive uncertainty quantification for complex interdisciplinary subsurface phenomena. To illustrate this we present a common example of pollution caused by a NAPL source, where failure arises from a combination of dissolution, transport, and bioremediation processes leading to undetected pollution escaping a monitored site and reaching potential receptors through different pathways. The probability of failure is computed by combining independent and conditional probabilities of failure of each process. Individual probabilities can be evaluated analytically or numerically or, barring both, can be inferred from expert opinion.
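
    The multiplication of independent and conditional failure probabilities described above looks like this in miniature. All probabilities are invented placeholders, not values from the study:

    ```python
    # One failure pathway for the NAPL example: the source must dissolve,
    # the plume must escape monitoring, and bioremediation must fail.
    p_dissolution = 0.9            # P(source dissolves and releases solute)
    p_escape_given_diss = 0.2      # P(plume escapes monitoring | dissolution)
    p_bio_fail_given_escape = 0.5  # P(bioremediation fails | escape)

    # Chain rule for one pathway; several pathways would be combined further.
    p_failure = p_dissolution * p_escape_given_diss * p_bio_fail_given_escape
    ```

    Each factor could itself come from an analytical model, a numerical simulation, or expert elicitation, which is exactly the flexibility PRA exploits.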

  13. Determination of the zincate diffusion coefficient and its application to alkaline battery problems

    NASA Technical Reports Server (NTRS)

    May, C. E.; Kautz, Harold E.

    1978-01-01

    The diffusion coefficient for the zincate ion at 24 C was found to be 9.9 × 10⁻⁷ cm²/s ± 30 percent in 45 percent potassium hydroxide and 1.4 × 10⁻⁷ cm²/s ± 25 percent in 40 percent sodium hydroxide. Comparison of these values with literature values at different potassium hydroxide concentrations shows that the Stokes-Einstein equation is obeyed. The diffusion coefficient is characteristic of the zincate ion (not the cation) and independent of its concentration. Calculations with the measured value of the diffusion coefficient show that the zinc concentration in an alkaline zincate half-cell becomes uniform throughout in tens of hours by diffusion alone. Diffusion equations are derived which are applicable to finite-size chambers. Details and discussion of the experimental method are also given.
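
    Two quick order-of-magnitude checks follow from the measured coefficient. The electrolyte viscosity below is a rough placeholder we assume for illustration (not a value from the paper): the Stokes-Einstein relation D = k_B T / (6 π η r) gives the implied hydrodynamic radius, and the L²/D time scale shows why homogenization takes tens of hours.

    ```python
    import math

    k_B = 1.380649e-23          # Boltzmann constant, J/K
    T = 297.15                  # 24 C in kelvin
    D = 9.9e-11                 # measured zincate diffusion coefficient, m^2/s
    eta = 2.0e-3                # assumed viscosity of 45% KOH, Pa*s (illustrative)

    # Stokes-Einstein: D = k_B T / (6 pi eta r)  =>  hydrodynamic radius
    r = k_B * T / (6 * math.pi * eta * D)

    # Characteristic time to homogenize a chamber of size L by diffusion alone
    L = 0.005                   # 0.5 cm chamber, m (illustrative)
    t_hours = (L ** 2 / D) / 3600.0
    ```

    A nanometre-scale radius and a time scale of tens of hours are both consistent with the abstract's statements.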

  14. Application of remote sensing to selected problems within the state of California

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator); Benson, A. S.; Estes, J. E.; Johnson, C.

    1981-01-01

    Specific case studies undertaken to demonstrate the usefulness of remote sensing technology to resource managers in California are highlighted. Applications discussed include the mapping and quantification of wildland fire fuels in Mendocino and Shasta Counties as well as in the Central Valley; the development of a digital spectral/terrain data set for Colusa County; the Forsythe Planning Experiment to maximize the usefulness of inputs from LANDSAT and geographic information systems to county planning in Mendocino County; the development of a digital data bank for Big Basin State Park in Santa Cruz County; the detection of salinity-related cotton canopy reflectance differences in the Central Valley; and the surveying of avocado acreage and that of other fruit and nut crops in Southern California. Special studies include the interpretability of high-altitude, large-format photography of forested areas for coordinated resource planning, using U-2 photographs of the NASA Bucks Lake Forestry test site in the Plumas National Forest in the Sierra Nevada Mountains.

  15. Vectorization of the time-dependent Boltzmann transport equation: Application to deep penetration problems

    NASA Astrophysics Data System (ADS)

    Cobos, Agustín C.; Poma, Ana L.; Alvarez, Guillermo D.; Sanz, Darío E.

    2016-10-01

    We introduce an alternative method to calculate the steady-state solution of the angular photon flux after a numerical evolution of the time-dependent Boltzmann transport equation (BTE). After a proper discretization, the transport equation was converted into an ordinary system of differential equations that can be iterated as a weighted Richardson algorithm. As a different approach, in this work the time variable regulates the iteration process and the convergence criterion is based on physical parameters. Positivity and convergence were assessed from first principles, and a modified Courant-Friedrichs-Lewy condition was devised to guarantee convergence. The PENELOPE Monte Carlo code was used to test the convergence and accuracy of our approach for different phase-space discretizations. Benchmarking was performed by calculation of total fluence and photon spectra in different one-dimensional geometries irradiated with 60Co and 6 MV photon beams, and radiological applications were devised.
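
    The algebraic core of such a scheme is the weighted Richardson update x_{k+1} = x_k + w (b − A x_k), whose weight must satisfy a step-size restriction much as the time step in the BTE evolution must satisfy a CFL-type condition. A toy 2×2 sketch (matrix, right-hand side, and weight are all illustrative):

    ```python
    # Weighted Richardson iteration for A x = b on a toy symmetric system.
    A = [[4.0, 1.0],
         [1.0, 3.0]]
    b = [1.0, 2.0]
    w = 0.2   # must keep |1 - w*lambda| < 1 for every eigenvalue lambda of A

    def matvec(A, x):
        return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

    x = [0.0, 0.0]
    for _ in range(500):
        res = [b[i] - Ax_i for i, Ax_i in enumerate(matvec(A, x))]
        if max(abs(ri) for ri in res) < 1e-12:
            break
        x = [x[i] + w * res[i] for i in range(len(x))]
    ```

    Too large a weight makes the iteration diverge, which is the discrete analogue of violating the modified CFL condition mentioned in the abstract.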

  16. Determination of the zincate diffusion coefficient and its application to alkaline battery problems

    NASA Technical Reports Server (NTRS)

    May, C. E.; Kautz, H. E.

    1978-01-01

    The diffusion coefficient for the zincate ion at 24 C was found to be 9.9 × 10⁻⁷ cm²/s ± 30% in 45% potassium hydroxide and 1.4 × 10⁻⁷ cm²/s ± 25% in 40% sodium hydroxide. Comparison of these values with literature values at different potassium hydroxide concentrations shows that the Stokes-Einstein equation is obeyed. The diffusion coefficient is characteristic of the zincate ion (not the cation) and independent of its concentration. Calculations with the measured value of the diffusion coefficient show that the zinc concentration in an alkaline zincate half-cell becomes uniform throughout in tens of hours by diffusion alone. Diffusion equations are derived which are applicable to finite-size chambers. Details and discussion of the experimental method are also given.

  17. Progress in Application of Generalized Wigner Distribution to Growth and Other Problems

    NASA Astrophysics Data System (ADS)

    Einstein, T. L.; Morales-Cifuentes, Josue; Pimpinelli, Alberto; Gonzalez, Diego Luis

    We recap the use of the (single-parameter) Generalized Wigner Distribution (GWD) to analyze capture-zone distributions associated with submonolayer epitaxial growth. We discuss recent applications to physical systems, as well as key simulations. We pay particular attention to how this method compares with other methods to assess the critical nucleus size characterizing growth. The following talk discusses a particular case when special insight is needed to reconcile the various methods. We discuss improvements that can be achieved by going to a 2-parameter fragmentation approach. At a much larger scale we have applied this approach to various distributions in socio-political phenomena (areas of secondary administrative units [e.g., counties] and distributions of subway stations). Work at UMD supported by NSF CHE 13-05892.
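
    The single-parameter GWD itself is compact enough to state in code: P_β(s) = a_β s^β exp(−b_β s²), with a_β and b_β fixed by requiring unit area and unit mean (s is the capture-zone size scaled by its average). This is the standard form from the capture-zone literature; the two-parameter fragmentation variant mentioned above is not shown.

    ```python
    import math

    def gwd(s, beta):
        """Generalized Wigner distribution P_beta(s) = a s^beta exp(-b s^2),
        normalized to unit area and unit mean."""
        b = (math.gamma((beta + 2) / 2) / math.gamma((beta + 1) / 2)) ** 2
        a = 2 * b ** ((beta + 1) / 2) / math.gamma((beta + 1) / 2)
        return a * s ** beta * math.exp(-b * s * s)
    ```

    Fitting β to an observed capture-zone distribution is what ties the GWD to the critical nucleus size in the growth analysis.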

  18. Submicron X-Ray Diffraction and Its Applications to Problems in Materials and Environmental Science

    NASA Astrophysics Data System (ADS)

    Tamura, N.; Spolenak, R.; Valek, B. C.; Manceau, A.; Chang, N. M.

    2002-08-01

    The availability of high-brilliance third-generation synchrotron sources, together with progress in achromatic focusing optics, makes it possible to add submicron spatial resolution to the conventional, century-old X-ray diffraction technique. The new capabilities include in-situ mapping of grain orientations, crystalline phase distribution, and full strain/stress tensors at a very local level, by combining white and monochromatic X-ray microbeam diffraction. This is particularly relevant for the high-technology industry, where understanding material properties at the microstructural level becomes increasingly important. After describing the latest advances in submicron X-ray diffraction techniques at the ALS, we give some examples of its application in materials science for the measurement of strain/stress in metallic thin films and interconnects. Its use in the field of environmental science is also discussed.

  19. Some Unsolved Problems, Questions, and Applications of the Brightsen Nucleon Cluster Model

    NASA Astrophysics Data System (ADS)

    Smarandache, Florentin

    2010-10-01

    The Brightsen Model is opposite to the Standard Model; it was built on John Wheeler's Resonating Group Structure Model and on Linus Pauling's Close-Packed Spheron Model. Among the Brightsen Model's predictions and applications, we cite the fact that it derives the average number of prompt neutrons per fission event; it provides a theoretical way of understanding low-temperature/low-energy reactions and of approaching artificially induced fission; it predicts that forces within nucleon clusters are stronger than forces between such clusters within isotopes; and it predicts unmatter entities inside nuclei that result from stable and neutral unions of matter and antimatter. But these predictions have to be tested in the future at the new CERN laboratory.

  20. Applications of fractured continuum model to enhanced geothermal system heat extraction problems.

    PubMed

    Kalinina, Elena A; Klise, Katherine A; McKenna, Sean A; Hadgu, Teklu; Lowry, Thomas S

    2014-01-01

    This paper describes applications of the fractured continuum model to different enhanced geothermal system reservoir conditions. The capability of the fractured continuum model to generate the fracture characteristics expected in enhanced geothermal system reservoir environments is demonstrated for single and multiple sets of fractures. Fracture characteristics are defined by fracture strike, dip, spacing, and aperture. The paper demonstrates how the fractured continuum model can be extended to represent continuous fractured features, such as long fractures, and conditions in which the fracture density varies between depth intervals. Simulations of heat transport using different fracture settings were compared with regard to their heat extraction effectiveness. The best heat extraction was obtained when fractures were horizontal. A conventional heat extraction scheme with vertical wells was compared to an alternative scheme with horizontal wells; heat extraction with the horizontal wells was significantly better than with the vertical wells when the injector was at the bottom.

  1. The application of the research work of James Clerk Maxwell in electromagnetics to industrial frequency problems.

    PubMed

    Lowther, D A; Freeman, E M

    2008-05-28

    Faraday's work inspired the development of electrical motors and generators. Until Maxwell pointed out the significance of Ampere's Law, there was no rigorous design method for magnetic devices. His interpretation strongly influenced the creation, by others, of the 'magnetic circuit' approach, which became the seminal design technique. This, utilizing the concept of reluctance, led to the design method for magnetic machines that is still widely in use today. The direct solution of the Maxwell equations (less the displacement current term) had to await the development of modern continuum methods to yield the field everywhere in, and around, the devices of interest, and this then permitted the application of the Maxwell stress tensor. This final refinement yielded forces and torques, and this resulted in the accurate prediction of electrical machine performance.
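
    The magnetic-circuit method the abstract credits to Maxwell's reading of Ampere's law reduces a device to flux = MMF / reluctance. A sketch for a gapped iron core follows; all dimensions and the core permeability are illustrative values we assume, not data from the paper.

    ```python
    import math

    mu0 = 4 * math.pi * 1e-7        # permeability of free space, H/m
    mu_r = 2000.0                   # assumed relative permeability of the core
    A = 1e-4                        # core cross-section, m^2
    l_core = 0.2                    # mean iron path length, m
    l_gap = 1e-3                    # air-gap length, m
    N, I = 500, 2.0                 # winding turns and current

    R_core = l_core / (mu0 * mu_r * A)   # reluctance of the iron path, 1/H
    R_gap = l_gap / (mu0 * A)            # reluctance of the air gap, 1/H
    flux = N * I / (R_core + R_gap)      # flux in Wb from MMF / total reluctance
    B = flux / A                         # flux density in the gap, T
    ```

    Even a 1 mm gap dominates the total reluctance, which is the classic design insight the magnetic-circuit approach delivers without solving the field equations.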

  2. Nonlinear Dynamics and Control of Large Arrays of Coupled Oscillators: Application to Fluid-Elastic Problems

    SciTech Connect

    Moon, Francis C.

    2002-04-01

    Large numbers of fluid-elastic structures are part of many power plant systems, and vibration of these systems is sometimes responsible for plant shutdowns. Earlier research at Cornell in this area had centered on the nonlinear dynamics of fluid-elastic systems with low degrees of freedom. The focus of current research is the study of the dynamics of thousands of closely arrayed structures in a cross flow under both fluid and impact forces. This research is relevant to two areas: (1) First, fluid-structural problems continue to be important in the power industry, especially in heat exchange systems where up to thousands of pipe-like structures interact with a fluid medium. [Three years ago in Japan, for example, there was a shutdown of the Monju nuclear power plant due to a failure attributed to flow-induced vibrations.] (2) The second area of relevance is to nonlinear systems and complexity phenomena: issues such as spatial-temporal dynamics, localization and coherent patterns, entropy measures, and other complexity issues. Early research on flow-induced vibrations in tube row and array structures in cross flow goes back to Roberts in 1966 and Connors in 1970. These studies used linear models, as did much of the later work in the 1980s. Nonlinear studies of cross-flow-induced vibrations have been undertaken in the last decade. The research at Cornell sponsored by DOE has explored nonlinear phenomena in fluid-structure problems. In the work at Cornell we have documented a subcritical Hopf bifurcation for flow around a single row of flexible tubes and have developed an analytical model based on nonlinear system identification techniques (Thothadri, 1998; Thothadri and Moon, 1998, 1999). These techniques have been applied to a wind tunnel experiment with a row of seven cylinders in a cross flow. These system identification methods have been used to calculate fluid force models that have replicated certain quantitative vibration limit cycle behavior of the

  3. Local and global approaches to the problem of Poincaré recurrences. Applications in nonlinear dynamics

    NASA Astrophysics Data System (ADS)

    Anishchenko, V. S.; Boev, Ya. I.; Semenova, N. I.; Strelkova, G. I.

    2015-07-01

    We review rigorous and numerical results on the statistics of Poincaré recurrences which are related to the modern development of the Poincaré recurrence problem. We analyze and describe the rigorous results which are achieved both in the classical (local) approach and in the recently developed global approach. These results are illustrated by numerical simulation data for simple chaotic and ergodic systems. It is shown that the basic theoretical laws can be applied to noisy systems if the probability measure is ergodic and stationary. Poincaré recurrences are studied numerically in nonautonomous systems. Statistical characteristics of recurrences are analyzed in the framework of the global approach for the cases of positive and zero topological entropy. We show that for positive entropy there is a relationship between the Afraimovich-Pesin dimension, Lyapunov exponents, and the Kolmogorov-Sinai entropy, both in the absence and in the presence of external noise. The case of zero topological entropy is exemplified by numerical results for the Poincaré recurrence statistics in the circle map. We show and prove that the dependence of minimal recurrence times on the return region size demonstrates universal properties for the golden and the silver ratio. The behavior of Poincaré recurrences is analyzed at the critical point of Feigenbaum attractor birth. We explore Poincaré recurrences for an ergodic set which is generated in the stroboscopic section of a nonautonomous oscillator and is similar to a circle shift. Based on the obtained results we show how the Poincaré recurrence statistics can be applied to solving a number of nonlinear dynamics issues. We propose and illustrate alternative methods for diagnosing effects of external and mutual synchronization of chaotic systems in the context of the local and global approaches. The properties of the recurrence time probability density can be used to detect the stochastic resonance phenomenon. We also discuss how
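
    The circle-shift recurrence experiment is easy to reproduce: for the golden rotation number, the minimal return times to shrinking return regions land on Fibonacci numbers, the universal behavior the abstract refers to. The helper name and region sizes below are ours, for illustration.

    ```python
    import math

    def first_return_time(rho, eps, x0=0.0, n_max=10**6):
        """Iterate the circle shift x -> (x + rho) mod 1 and return the first
        time the orbit re-enters the eps-neighbourhood of its start point."""
        x = x0
        for n in range(1, n_max):
            x = (x + rho) % 1.0
            d = abs(x - x0)
            if min(d, 1.0 - d) < eps:   # distance on the circle
                return n
        return None

    rho = (math.sqrt(5.0) - 1.0) / 2.0          # golden rotation number
    times = [first_return_time(rho, eps) for eps in (0.2, 0.05, 0.01)]
    ```

    For these region sizes the minimal return times are 3, 13, and 55, all Fibonacci numbers, reflecting the continued-fraction structure of the golden ratio.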

  4. Application of Semi Active Control Techniques to the Damping Suppression Problem of Solar Sail Booms

    NASA Technical Reports Server (NTRS)

    Adetona, O.; Keel, L. H.; Whorton, M. S.

    2007-01-01

    Solar sails provide a propellant-free form of space propulsion. These are large flat surfaces that generate thrust when they are impacted by light. When attached to a space vehicle, the thrust generated can propel the space vehicle to great distances at significant speeds. For optimal performance the sail must be kept from excessive vibration. Active control techniques can provide the best performance, but they require an external power source that may add significant parasitic mass to the solar sail, whereas solar sails require low mass for optimal performance. Furthermore, active control techniques typically require a good system model to ensure stability and performance, and the accuracy of solar sail models validated on Earth for a space environment is questionable. An alternative approach is passive vibration control; it requires no external power supply and does not destabilize the system. A third alternative is referred to as semi-active control. This approach tries to get the best of both active and passive control while avoiding their pitfalls. In semi-active control, an active control law is designed for the system, and passive control techniques are used to implement it. As a result, no external power supply is needed and the system cannot be destabilized. Though it typically underperforms active control techniques, it has been shown to outperform passive control approaches and can be unobtrusively installed on a solar sail boom. Motivated by this, the objective of this research is to study the suitability of a piezoelectric (PZT) patch actuator/sensor based semi-active control system for the vibration suppression problem of solar sail booms. Accordingly, we develop a suitable mathematical and computer model for such studies and demonstrate the capabilities of the proposed approach with computer simulations.
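
    Semi-active control is often introduced via the classic skyhook switching law for an adjustable damper, shown here as a generic illustration (this is not the paper's PZT-based implementation): the device can only dissipate energy, so the controller switches its damping coefficient rather than injecting power, which is why the closed loop cannot be destabilized.

    ```python
    def skyhook_damping(c_min, c_max, v_abs, v_rel):
        """Classic skyhook semi-active law: damp strongly only when the
        achievable (purely dissipative) damper force opposes the absolute
        velocity of the structure; otherwise back off to minimal damping."""
        return c_max if v_abs * v_rel > 0 else c_min

    def damper_force(c, v_rel):
        # The only force the device can exert: passive, opposing relative motion.
        return -c * v_rel
    ```

    The controller adjusts a passive parameter (here a damping coefficient; in the PZT case, a shunt impedance), which is the defining trait of semi-active schemes.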

  5. Covariant Image Representation with Applications to Classification Problems in Medical Imaging

    PubMed Central

    Seo, Dohyung; Ho, Jeffrey; Vemuri, Baba C.

    2016-01-01

    Images are often considered as functions defined on the image domains, and as functions, their (intensity) values are usually considered to be invariant under image domain transforms. This functional viewpoint is both influential and prevalent, and it provides the justification for comparing images using functional Lp-norms. However, with the advent of more advanced sensing technologies and data processing methods, the definition and variety of images have broadened considerably, and the long-cherished functional paradigm for images is becoming inadequate. In this paper, we introduce the formal notion of covariant images and study two types of covariant images that are important in medical image analysis: symmetric positive-definite tensor fields and Gaussian mixture fields, images whose sample values covary (i.e., jointly vary) with image domain transforms rather than being invariant to them. We propose a novel similarity measure between a pair of covariant images considered as embedded shapes (manifolds) in the ambient space, a Cartesian product of the image and its sample-value domains. The similarity measure is based on matching the two embedded low-dimensional shapes, and both the extrinsic geometry of the ambient space and the intrinsic geometry of the shapes are incorporated in computing the similarity measure. Using this similarity as an affinity measure in a supervised learning framework, we demonstrate its effectiveness on two challenging classification problems: classification of brain MR images based on patients’ age and (Alzheimer’s) disease status, and seizure detection from high angular resolution diffusion magnetic resonance scans of rat brains. PMID:27182122

  6. Development And Application Of Non-Hydrostatic Model To The Coastal Engineering Problems

    NASA Astrophysics Data System (ADS)

    Maderych, V.; Brovchenko, I.; Fenical, S.; Nikishov, V.; Terletska, K.

    2007-12-01

    The 3D non-hydrostatic free surface model developed by Kanarska and Maderich (2003) for stratified flows was further improved and has been used to simulate coastal processes. In the model the surface elevation, the hydrostatic and non-hydrostatic components of pressure, and the velocity are calculated at sequential stages. Unlike most non-hydrostatic models, the 2-D depth-averaged momentum and continuity equations were integrated explicitly, whereas the 3-D equations were solved semi-implicitly at subsequent stages. RANS and subgrid-scale eddy viscosity and diffusivity parameterizations were implemented in the model to represent small-scale mixing. The model was applied to three coastal engineering problems. First, we used the model coupled with a 3D Lagrangian sediment transport model to predict scour caused by propeller jets of slowly maneuvering ships. The results of the simulations show good agreement with laboratory experiments and field ADCP measurements with tug boats. Second, the model was applied, nested within its hydrostatic far-field counterpart, to near-field simulation of cooling water discharge through submerged outfalls. Third, laboratory experiments and simulations were performed to estimate the effects of large-amplitude internal solitary waves (ISW) on submerged structures and coastal bottom sediments. In the first series of experiments and simulations, the interaction of ISW depressions with a rectangular bottom obstacle was investigated. In the second series, an ISW depression passing through a smooth local lateral constriction was studied. The third series of laboratory experiments and simulations investigated the dynamics of ISW depressions reflecting from a steep slope. The contribution of V. Maderych in this work was supported by the Hankuk University of Foreign Studies Research Fund of 2007.

  7. The Application of a Technique for Vector Correlation to Problems in Meteorology and Oceanography.

    NASA Astrophysics Data System (ADS)

    Breaker, L. C.; Gemmill, W. H.; Crosby, D. S.

    1994-11-01

    In a recent study, Crosby et al. proposed a definition for vector correlation that has not been commonly used in meteorology or oceanography. This definition has both a firm theoretical basis and a rather complete set of desirable statistical properties. In this study, the authors apply the definition to practical problems arising in meteorology and oceanography. In the first of two case studies, vector correlations were calculated between subsurface currents for five locations along the southeastern shore of Lake Erie. Vector correlations for one sample size were calculated for all current meter combinations, first including the seiche frequency and then with the seiche frequency removed. Removal of the seiche frequency, which was easily detected in the current spectra, had only a small effect on the vector correlations. Under reasonable assumptions, the vector correlations were in most cases statistically significant and revealed considerable fine structure in the vector correlation sequences. In some cases, major variations in vector correlation coincided with changes in surface wind. The vector correlations for the various current meter combinations decreased rapidly with increasing spatial separation. For one current meter combination, canonical correlations were also calculated; the first canonical correlation tended to retain the underlying trend, whereas the second canonical correlation retained the peaks in the vector correlations. In the second case study, vector correlations were calculated between marine surface winds derived from the National Meteorological Center's Global Data Assimilation System and observed winds acquired from the network of National Data Buoy Center buoys that are located off the continental United States and in the Gulf of Alaska. Results of this comparison indicated that 1) there was a significant decrease in correlation between the predicted and observed winds with increasing forecast interval out to 72 h, 2) the technique
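
    The generalized vector correlation the abstract refers to can be sketched numerically. The function below computes the sum of the squared canonical correlations between two 2-D vector series, tr(S11^-1 S12 S22^-1 S21), which ranges from 0 (linearly unrelated) to 2 (one series is a nonsingular linear transform of the other). Treat the exact normalization as an assumption about the Crosby et al. definition rather than a quotation of it.

```python
import numpy as np

def vector_correlation(u1, v1, u2, v2):
    """Squared vector correlation between two 2-D vector time series,
    computed as tr(S11^-1 S12 S22^-1 S21) -- the sum of the squared
    canonical correlations between the series."""
    X = np.column_stack([u1, v1]).astype(float)   # first series (n x 2)
    Y = np.column_stack([u2, v2]).astype(float)   # second series (n x 2)
    X -= X.mean(axis=0)                           # remove the means
    Y -= Y.mean(axis=0)
    S11, S22, S12 = X.T @ X, Y.T @ Y, X.T @ Y     # cross-covariance blocks
    M = np.linalg.solve(S11, S12) @ np.linalg.solve(S22, S12.T)
    return float(np.trace(M))

rng = np.random.default_rng(0)
u, v = rng.standard_normal(2000), rng.standard_normal(2000)
rho2_self = vector_correlation(u, v, u, v)    # a series against itself
rho2_rot = vector_correlation(u, v, -v, u)    # against a 90-degree rotation
```

    A series compared with any rotated copy of itself scores the maximum value 2, which is why this measure suits wind and current comparisons where a constant directional bias should not destroy the correlation.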

  8. Application of a COTS Resource Optimization Framework to the SSN Sensor Tasking Domain - Part I: Problem Definition

    NASA Astrophysics Data System (ADS)

    Tran, T.

    With the onset of the SmallSat era, the RSO catalog is expected to see continuing growth in the near future. This presents a significant challenge to the current sensor tasking of the SSN. The Air Force is in need of a sensor tasking system that is robust, efficient, scalable, and able to respond in real-time to interruptive events that can change the tracking requirements of the RSOs. Furthermore, the system must be capable of using processed data from heterogeneous sensors to improve tasking efficiency. The SSN sensor tasking can be regarded as an economic problem of supply and demand: the amount of tracking data needed by each RSO represents the demand side while the SSN sensor tasking represents the supply side. As the number of RSOs to be tracked grows, demand exceeds supply. The decision-maker is faced with the problem of how to allocate resources in the most efficient manner. Braxton recently developed a framework called Multi-Objective Resource Optimization using Genetic Algorithm (MOROUGA) as one of its modern COTS software products. This optimization framework took advantage of the maturing technology of evolutionary computation in the last 15 years. This framework was applied successfully to address the resource allocation of an AFSCN-like problem. In any resource allocation problem, there are five key elements: (1) the resource pool, (2) the tasks using the resources, (3) a set of constraints on the tasks and the resources, (4) the objective functions to be optimized, and (5) the demand levied on the resources. In this paper we explain in detail how the design features of this optimization framework are directly applicable to address the SSN sensor tasking domain. We also discuss our validation effort as well as present the result of the AFSCN resource allocation domain using a prototype based on this optimization framework.
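
    The five-element resource allocation structure described above can be sketched with a minimal genetic algorithm. The toy instance below (task priorities, sensor capacities, and all GA settings are hypothetical, not MOROUGA's) assigns tracking tasks to capacity-limited sensors so as to maximize the total priority served.

```python
import random

random.seed(0)

# Hypothetical toy instance: 12 tracking tasks with priorities (demand),
# 3 sensors each able to serve at most 3 tasks per period (supply).
PRIORITIES = [5, 9, 3, 7, 1, 8, 2, 6, 4, 9, 2, 5]
N_SENSORS, CAPACITY = 3, 3

def fitness(chrom):
    """Total priority actually served; chrom[i] is the sensor assigned to
    task i, or -1 for 'not scheduled'. Each sensor serves its
    highest-priority assignments first, up to capacity (the constraint)."""
    total = 0
    for s in range(N_SENSORS):
        assigned = sorted((PRIORITIES[i] for i, g in enumerate(chrom) if g == s),
                          reverse=True)
        total += sum(assigned[:CAPACITY])
    return total

def evolve(pop_size=40, gens=60, p_mut=0.1):
    genes = list(range(-1, N_SENSORS))            # -1 means 'not scheduled'
    pop = [[random.choice(genes) for _ in PRIORITIES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)       # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(PRIORITIES))   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.choice(genes) if random.random() < p_mut else g
                     for g in child]                      # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

    The five key elements map directly onto the code: the sensors are the resource pool, the chromosome encodes the task assignments, capacity enforcement is the constraint, `fitness` is the objective, and `PRIORITIES` is the demand.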

  9. Airborne multispectral and hyperspectral remote sensing: Examples of applications to the study of environmental and engineering problems

    SciTech Connect

    Bianchi, R.; Marino, C.M.

    1997-10-01

    The availability of a new aerial survey capability operated by CNR/LARA (National Research Council - Airborne Laboratory for Environmental Research), based on the AA5000 MIVIS (Multispectral Infrared and Visible Imaging Spectrometer) spectroradiometer on board a CASA 212/200 aircraft, enables scientists to obtain innovative data sets for new approaches to defining and understanding a variety of environmental and engineering problems. The spectral bandwidths of the 102 MIVIS channels are chosen to meet the needs of scientific research for advanced applications of remote sensing data. In this configuration MIVIS can contribute significantly to problem solving in sectors as wide-ranging as geologic exploration, agricultural crop studies, forestry, land use mapping, hydrogeology, and oceanography. In 1994-96 LARA was active over different test sites in joint ventures with JPL (Pasadena), various European institutions, and Italian universities and research institutes. These aerial surveys allow the national and international scientific community to apply hyperspectral remote sensing to environmental problems of broad interest. The sites surveyed in Italy, France, and Germany include a variety of targets such as quarries, landfills, karst cavity areas, landslides, coastlines, and geothermal areas. The deployments have so far gathered more than 300 GBytes of MIVIS data in more than 30 hours of VLDS data recording. The purpose of this work is to present and comment on the procedures and results, at both research and operational levels, of the past campaigns, with special reference to the study of environmental and engineering problems.

  10. State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications

    NASA Astrophysics Data System (ADS)

    Phanomchoeng, Gridsada

    A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely tire-road friction coefficient, slip angle, roll angle, and rollover index, are known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, unknown and changing plant parameters, and the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time-varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz-assumption-based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs.
An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is
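
    The observer idea can be illustrated on a toy system whose nonlinearity has a globally bounded Jacobian. In the sketch below the gain L is simply picked by hand so that A - LC is stable; the dissertation instead derives such gains from LMIs, so treat the system, gain, and all parameter values as illustrative assumptions.

```python
import numpy as np

def f(x):
    # nonlinearity with a globally bounded Jacobian (|df/dx1| <= 0.5)
    return np.array([0.0, -0.5 * np.sin(x[0])])

A = np.array([[0.0, 1.0],
              [-1.0, -0.2]])       # illustrative plant matrix
C = np.array([[1.0, 0.0]])         # only the first state is measured
L = np.array([[4.0], [6.0]])       # gain picked by hand so A - L C is stable;
                                   # the dissertation obtains such gains from LMIs

def run(t_end=20.0, dt=1e-3):
    x = np.array([1.0, -0.5])      # true (unmeasured) state
    xh = np.zeros(2)               # observer estimate, started wrong
    e0 = float(np.linalg.norm(x - xh))
    for _ in range(int(t_end / dt)):
        y = C @ x                  # measurement before the step
        x = x + dt * (A @ x + f(x))
        # observer: copy of the plant model plus output-injection term
        xh = xh + dt * (A @ xh + f(xh) + L @ (y - C @ xh))
    return e0, float(np.linalg.norm(x - xh))

e_init, e_final = run()
```

    Because the output-injection term dominates the 0.5-Lipschitz nonlinearity mismatch, the estimation error contracts to essentially zero even though the observer never sees the second state directly.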

  11. Multi-lead ECG electrode array for clinical application of electrocardiographic inverse problem.

    PubMed

    Hintermuller, Christoph; Fischer, Gerald; Seger, Michael; Pfeifer, Bernhard; Hanser, Friedrich; Modre, Robert; Tilg, Bernhard

    2004-01-01

    Methods for noninvasive imaging of the electric function of the heart might become a clinical standard procedure within the next few years. Thus, the overall procedure has to meet clinical requirements such as easy and fast application. In this study we propose a new electrode array which improves the information content of the ECG map while respecting clinical constraints such as ease of application and compatibility with routine leads. A major challenge is the development of an electrode array which yields a high information content even for a large interindividual variation in torso shape. For identifying regions of high information content we introduce the concept of a locally applied virtual electrode array. As a result of our analysis we constructed a new electrode array consisting of two L-shaped, regularly spaced parts and compared it to the electrode array we use for clinical studies of activation time imaging. We expect that one side effect of the regular shape and spacing of the new array is that reconstructing the positions of electrodes placed on the patient's back is simplified: it may be sufficient to record a few characteristic electrode positions and merge them with a model of the posterior array.

  12. Application of quantitative HTGC and HTGC-MS to paraffin-based production problems

    SciTech Connect

    Wavrek, D.A.; Dahdah, N.F.

    1996-12-31

    Crude oils with high pour points and undesired flow properties have been documented in a variety of geologic provinces, a condition frequently attributed to paraffin or wax. Previous attempts to characterize this high molecular weight (HMW; nC40+) material were limited to bulk methods, although recent advances in analytical technologies allow this fraction to be separated into individual components by high temperature gas chromatography (HTGC) and mass spectrometry (HTGC-MS). The latter technique is particularly powerful for compound identification, whereas the quantitative aspects provide a predictive parameter for mass balance applications. HTGC data from a global database indicate that a diverse assemblage of organic compounds contributes to the C40+ fraction. Furthermore, this fraction can exhibit quantitative variation over several orders of magnitude within genetically related oils. Individual case studies demonstrate that this variation can be due to natural in-situ reservoir processes (phase separation, gravity segregation) or be attributed to anthropogenic petroleum recovery activities. This technology can be used to develop more efficient production strategies and allow more accurate forecasting of recovery volumes and costs.

  13. Application of quantitative HTGC and HTGC-MS to paraffin-based production problems

    SciTech Connect

    Wavrek, D.A.; Dahdah, N.F.

    1996-01-01

    Crude oils with high pour points and undesired flow properties have been documented in a variety of geologic provinces, a condition frequently attributed to paraffin or wax. Previous attempts to characterize this high molecular weight (HMW; nC40+) material were limited to bulk methods, although recent advances in analytical technologies allow this fraction to be separated into individual components by high temperature gas chromatography (HTGC) and mass spectrometry (HTGC-MS). The latter technique is particularly powerful for compound identification, whereas the quantitative aspects provide a predictive parameter for mass balance applications. HTGC data from a global database indicate that a diverse assemblage of organic compounds contributes to the C40+ fraction. Furthermore, this fraction can exhibit quantitative variation over several orders of magnitude within genetically related oils. Individual case studies demonstrate that this variation can be due to natural in-situ reservoir processes (phase separation, gravity segregation) or be attributed to anthropogenic petroleum recovery activities. This technology can be used to develop more efficient production strategies and allow more accurate forecasting of recovery volumes and costs.

  14. Image Problems Deplete the Number of Women in Academic Applicant Pools

    NASA Astrophysics Data System (ADS)

    Sears, Anna L. W.

    Despite near numeric parity in graduate schools, women and men in science and mathematics may not perceive the same opportunities for career success. Instead, female doctoral students' career ambitions may often be influenced by perceptions of irreconcilable conflicts between personal and academic goals. This article reports the results of a career goals survey of math and science doctoral students at the University of California, Davis. Fewer women than men began their doctoral programs seeking academic research careers. Of those who initially favored academic research, twice as many women as men downgraded these ambitions during graduate school. Women were more likely to feel geographically constrained by family ties and to express concern about balancing work and family, long work hours, and tenure clock inflexibility. These results partially explain why the percentage of women in academic applicant pools is often well below the number of Ph.D. recipients. The current barriers to gender equity thus cannot be completely ameliorated by increasing the number of women in the pipeline or by altered hiring practices, but changes must be undertaken to make academic research careers more flexible, family friendly, and attractive to women.

  15. Implantable electrochemical sensors for biomedical and clinical applications: progress, problems, and future possibilities.

    PubMed

    Li, Chang Ming; Dong, Hua; Cao, Xiaodong; Luong, John H T; Zhang, Xueji

    2007-01-01

    Biosensors are of great interest for their ability to monitor clinically important analytes such as blood gases, electrolytes, and metabolites. A classic example is monitoring the dynamics of blood-glucose levels for treating diabetes. However, the current practice, based on a three-decade-old technology, requires a drop of blood on a test strip, and is in dire need of replacement. The increasing demand and interest in developing implantable glucose sensors for treating diabetes have led to notable progress in this area, and various electrochemical sensors have been developed for intravascular and subcutaneous applications. However, implantations are plagued by biofouling, tissue destruction and infection around the implanted sensors, and the response signals must be interpreted in terms of blood or plasma concentrations for clinical utility, rather than tissue fluid levels. This review focuses on the potentials and pitfalls of implantable electrochemical sensors and presents our opinions about future possibilities of such implantable devices with respect to biocompatibility issues, long-term calibration, and other aging effects on the sensors.

  16. Activated carbons derived from oil palm empty-fruit bunches: application to environmental problems.

    PubMed

    Alam, Md Zahangir; Muyibi, Suleyman A; Mansor, Mariatul F; Wahid, Radziah

    2007-01-01

    Activated carbons derived from oil palm empty fruit bunches (EFB) were investigated to assess their suitability for removing phenol from aqueous solution by adsorption. Two types of activation were used to produce the activated carbons: thermal activation at 300, 500 and 800 degrees C, and physical activation at 150 degrees C (boiling treatment). A control (untreated EFB) was used to compare the adsorption capacity of the activated carbons produced by these processes. The results indicated that the activated carbon produced at 800 degrees C showed the maximum adsorption capacity in aqueous phenol solution. Batch adsorption studies showed an equilibrium time of 6 h for this activated carbon. The adsorption capacity was higher at lower pH (2-3) and higher initial phenol concentration (200-300 mg/L). The equilibrium data fitted the Freundlich adsorption isotherm better than the Langmuir. Kinetic studies of phenol adsorption onto the activated carbons were also carried out to evaluate the adsorption rate. The estimated production cost of activated carbon from EFB (USD 0.50/kg) was lower than that of activated carbons from other sources and processes.
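
    The Freundlich fit mentioned above is usually done by linearizing q = K C^(1/n) in log-log space. The sketch below recovers K and n from synthetic equilibrium data; the parameter values are illustrative, not those measured for the EFB carbons.

```python
import numpy as np

# Synthetic equilibrium data generated from a Freundlich isotherm
# q = K * C**(1/n); the K and n values are illustrative, not the paper's.
K_true, n_true = 8.0, 2.5
C = np.array([5.0, 20.0, 50.0, 100.0, 200.0, 300.0])   # equilibrium conc., mg/L
q = K_true * C ** (1.0 / n_true)                       # uptake, mg per g carbon

# Linearization: log q = log K + (1/n) log C, so a straight-line fit
# in log-log space recovers both Freundlich parameters.
slope, intercept = np.polyfit(np.log10(C), np.log10(q), 1)
K_fit, n_fit = 10.0 ** intercept, 1.0 / slope
```

    With real batch data, the quality of this straight-line fit (and the analogous one for the Langmuir form, C/q vs. C) is what decides which isotherm describes the carbon better.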

  17. Models, solution, methods and their applicability of dynamic location problems (DLPs) (a gap analysis for further research)

    NASA Astrophysics Data System (ADS)

    Seyedhosseini, Seyed Mohammad; Makui, Ahmad; Shahanaghi, Kamran; Torkestani, Sara Sadat

    2016-05-01

    Determining the best location to remain profitable over a facility's lifetime is an important decision for public and private firms, which is why dynamic location problems (DLPs) are of critical significance. This paper presents a comprehensive review of research on DLPs published from 1968 to the present, classified into two parts. First, mathematical models are reviewed based on their characteristics: type of parameters (deterministic, probabilistic, or stochastic), number and type of objective functions, numbers of commodities and modes, relocation time, number of relocations and relocating facilities, time horizon, budget and capacity constraints, and their applicability. The second part presents solution algorithms, main specifications, applications, and some real-world case studies of DLPs. We conclude that in the current DLP literature, distribution systems and production-distribution systems under simplifying assumptions have been studied more than any other field to tame the complexity of these models, while concepts such as variety of services (hierarchical networks), reliability, sustainability, relief management, waiting time for services (queuing theory), and risk of facility disruption need further investigation. The categories, solution methods, applicability assessments, and gap analysis presented in this paper suggest directions for future research.

  18. Application of the TEMPEST computer code to canister-filling heat transfer problems

    SciTech Connect

    Farnsworth, R.K.; Faletti, D.W.; Budden, M.J.

    1988-03-01

    Pacific Northwest Laboratory (PNL) researchers used the TEMPEST computer code to simulate the thermal cooldown behavior of nuclear waste glass after it was poured into steel canisters for long-term storage. The objective of this work was to determine the accuracy and applicability of the TEMPEST code when used to compute canister thermal histories. First, experimental data were obtained to provide the basis for comparing TEMPEST-generated predictions. Five canisters were instrumented with appropriately located radial and axial thermocouples. The canisters were filled using the pilot-scale ceramic melter (PSCM) at PNL, each in either a continuous or a batch filling mode. One of the canisters was also filled within a turntable simulant (a group of cylindrical shells with heat transfer resistances similar to those in an actual melter turntable), to provide a basis for assessing the ability of the TEMPEST code to also model the transient cooling of canisters in a melter turntable. The continuous-fill model, Version M, was found to predict temperatures with more accuracy. The turntable simulant experiment demonstrated that TEMPEST can adequately model the asymmetric temperature field caused by the turntable geometry. Further, TEMPEST can acceptably predict the canister cooling history within a turntable, despite code limitations in computing simultaneous radiation and convection heat transfer between shells, along with uncertainty in stainless-steel surface emissivities. Based on the successful performance of TEMPEST Version M, development was initiated to incorporate 1) full viscous glass convection, 2) a dynamically adaptive grid that automatically follows the glass/air interface throughout the transient, and 3) a full enclosure radiation model to allow radiation heat transfer to non-nearest-neighbor cells. 5 refs., 47 figs., 17 tabs.

  19. Application of partially-coupled hydro-mechanical schemes to multiphase flow problems

    NASA Astrophysics Data System (ADS)

    Tillner, Elena; Kempka, Thomas

    2016-04-01

    Utilization of subsurface reservoirs for fluid storage or production generally triggers pore pressure changes and volumetric strains in reservoirs and cap rocks. The assessment of hydro-mechanical effects can be undertaken using different process coupling strategies. The fully-coupled geomechanics and flow simulation, constituting a monolithic system of equations, is rarely applied to simulations involving multiphase fluid flow due to the high computational effort required. Pseudo-coupled simulations are driven by static tabular data on porosity and permeability changes as functions of pore pressure or mean stress, resulting in rather limited flexibility when encountering complex subsurface utilization schedules and realistic geological settings. Partially-coupled hydro-mechanical simulations can be distinguished into one-way and iterative two-way coupled schemes. The two-way scheme is based on separate calculations of flow and geomechanics, with iterative exchange of coupling parameters between the two numerical simulators until convergence is achieved. In contrast, in the one-way scheme the flow simulator provides pore pressure changes to the geomechanical simulator without any feedback. In the present study, partially-coupled two-way schemes are discussed in view of fully-coupled single-phase flow and geomechanics, and their applicability to multiphase flow simulations. For that purpose, we introduce a comparison study between the different coupling schemes, using selected benchmarks to identify the main requirements for the partially-coupled approach to converge to the numerical solution of the fully-coupled one.
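
    The iterative two-way scheme can be sketched as a fixed-point loop between two stand-in solvers. The scalar 'flow' and 'geomechanics' maps below are purely illustrative; real simulators would exchange whole pressure and porosity fields, but the convergence logic is the same.

```python
# Minimal sketch of an iterative (two-way) partitioned coupling loop.
# The two functions are stand-ins for a flow simulator and a
# geomechanical simulator; all numerical values are illustrative.

def flow_step(phi, p_injection=10.0):
    """Pore pressure rises with injection and drops as pore space opens."""
    return p_injection / (1.0 + 5.0 * phi)

def geomech_step(p, phi0=0.1, compressibility=0.004):
    """Porosity responds to the updated pore pressure."""
    return phi0 + compressibility * p

def two_way_coupled(tol=1e-10, max_iter=100):
    phi, p = 0.1, 0.0
    for it in range(max_iter):
        p_new = flow_step(phi)          # flow solve with frozen porosity
        phi_new = geomech_step(p_new)   # geomechanics with updated pressure
        # stop when neither exchanged quantity changes between iterations
        if abs(p_new - p) < tol and abs(phi_new - phi) < tol:
            return p_new, phi_new, it + 1
        p, phi = p_new, phi_new
    raise RuntimeError("coupling iterations did not converge")

p_star, phi_star, iters = two_way_coupled()
```

    The one-way scheme corresponds to stopping after the first pass (no feedback of porosity into the flow solve); iterating until the exchanged quantities stop changing is what drives the partitioned solution toward the monolithic fully-coupled one.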

  20. Multi-initial-conditions and Multi-physics Ensembles in the Weather Research and Forecasting Model to Improve Coastal Stratocumulus Forecasts for Solar Power Integration

    NASA Astrophysics Data System (ADS)

    Yang, H.

    2015-12-01

    used to create a multi-parameter and multi-physics ensemble. The ensemble forecast system is implemented operationally for San Diego Gas & Electric Company to improve system operations.

  1. The conformal transformation of an airfoil into a straight line and its application to the inverse problem of airfoil theory

    NASA Technical Reports Server (NTRS)

    Mutterperl, William

    1944-01-01

    A method of conformal transformation is developed that maps an airfoil into a straight line, the line being chosen as the extended chord line of the airfoil. The mapping is accomplished by operating directly with the airfoil ordinates. The absence of any preliminary transformation is found to shorten the work substantially compared with previous methods. Use is made of the superposition of solutions to obtain a rigorous counterpart of the approximate methods of thin-airfoil theory. The method is applied to the solution of the direct and inverse problems for arbitrary airfoils and pressure distributions. Numerical examples are given. Applications to more general types of regions, in particular to biplanes and to cascades of airfoils, are indicated. (author)
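
    The record maps an airfoil onto a straight line; the reverse flavor of such conformal maps is easy to sketch with the classical Joukowski transform z = zeta + 1/zeta, which sends an off-center circle to an airfoil-like contour. This is a standard textbook illustration, not Mutterperl's method, and the circle parameters below are illustrative.

```python
import numpy as np

# Off-center circle in the zeta-plane, passing through zeta = 1 so the
# Joukowski map produces a sharp trailing edge; offset gives camber/thickness.
center = -0.1 + 0.1j
radius = abs(1.0 - center)            # force the circle through zeta = 1

theta = np.linspace(0.0, 2.0 * np.pi, 400)
zeta = center + radius * np.exp(1j * theta)   # circle in the zeta-plane
z = zeta + 1.0 / zeta                          # airfoil contour in the z-plane

# The critical point zeta = 1 maps to the sharp trailing edge at z = 2.
trailing_edge = 1.0 + 1.0 / 1.0
```

    Solving the inverse problem amounts to running such a construction backwards: prescribing the pressure distribution constrains the mapping, from which the airfoil ordinates are recovered.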

  2. Development of a multiple-parameter nonlinear perturbation procedure for transonic turbomachinery flows: Preliminary application to design/optimization problems

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Elliott, J. P.; Spreiter, J. R.

    1983-01-01

    An investigation was conducted to continue the development of perturbation procedures and associated computational codes for rapidly determining approximations to nonlinear flow solutions, with the purpose of establishing a method for minimizing the computational requirements associated with parametric design studies of transonic flows in turbomachines. The results reported here concern the extension of the previously developed method for single-parameter perturbations to simultaneous multiple-parameter perturbations, and the preliminary application of the multiple-parameter procedure, in combination with an optimization method, to a blade design/optimization problem. In order to provide as severe a test as possible of the method, attention is focused in particular on transonic flows which are highly supercritical. Flows past both isolated blades and compressor cascades, involving simultaneous changes in both flow and geometric parameters, are considered. Comparisons with the corresponding exact nonlinear solutions display remarkable accuracy and range of validity, in direct correspondence with previous results for single-parameter perturbations.

  3. A Conceptual Framework Mapping the Application of Information Search Strategies to Well and Ill-Structured Problem Solving

    ERIC Educational Resources Information Center

    Laxman, Kumar

    2010-01-01

    Problem-based learning (PBL) is an instructional approach that is organized around the investigation and resolution of problems. Problems are neither uniform nor similar. Jonassen (1997, 2000) in his design theory of problem solving has categorized problems into two broad types--well-structured and ill-structured. He has also described a host of…

  4. Choices, choices: the application of multi-criteria decision analysis to a food safety decision-making problem.

    PubMed

    Fazil, A; Rajic, A; Sanchez, J; McEwen, S

    2008-11-01

    In the food safety arena, the decision-making process can be especially difficult. Decision makers are often faced with social and fiscal pressures when attempting to identify an appropriate balance among several choices. Concurrently, policy and decision makers in microbial food safety are under increasing pressure to demonstrate that their policies and decisions are made using transparent and accountable processes. In this article, we present a multi-criteria decision analysis approach that can be used to address the problem of trying to select a food safety intervention while balancing various criteria. Criteria that are important when selecting an intervention were determined, as a result of an expert consultation, to include effectiveness, cost, weight of evidence, and practicality associated with the interventions. The multi-criteria decision analysis approach we present is able to consider these criteria and arrive at a ranking of interventions. It can also provide a clear justification for the ranking as well as demonstrate to stakeholders, through a scenario analysis approach, how to potentially converge toward common ground. While this article focuses on the problem of selecting food safety interventions, the range of applications in the food safety arena is truly diverse and can be a significant tool in assisting decisions that need to be coherent, transparent, and justifiable. Most importantly, it is a significant contributor when there is a need to strike a fine balance between various potentially competing alternatives and/or stakeholder groups.
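
    A minimal weighted-sum version of such a multi-criteria ranking can be sketched as follows. The intervention scores, weights, and normalization scheme below are illustrative assumptions (the article's method is not specified at this level of detail); cost is treated as a 'less is better' criterion.

```python
import numpy as np

# Hypothetical scores for three candidate interventions against the four
# criteria named in the abstract (rows: interventions A, B, C).
criteria = ["effectiveness", "cost", "weight of evidence", "practicality"]
scores = np.array([
    [0.80, 120.0, 0.70, 0.60],   # intervention A
    [0.60,  40.0, 0.90, 0.80],   # intervention B
    [0.90, 200.0, 0.50, 0.40],   # intervention C
])
weights = np.array([0.4, 0.3, 0.2, 0.1])   # elicited from experts (assumed)

norm = scores / scores.max(axis=0)               # scale each criterion to [0, 1]
norm[:, 1] = scores[:, 1].min() / scores[:, 1]   # invert cost: cheaper is better
overall = norm @ weights                         # weighted-sum aggregate score
ranking = np.argsort(overall)[::-1]              # best intervention first
```

    Scenario analysis, as described in the abstract, corresponds to re-running this aggregation under different weight vectors and observing whether the ranking is stable, which is what makes the recommendation transparent and justifiable to stakeholders.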

  5. PSO: A powerful algorithm to solve geophysical inverse problems: Application to a 1D-DC resistivity case

    NASA Astrophysics Data System (ADS)

    Fernández Martínez, Juan L.; García Gonzalo, Esperanza; Fernández Álvarez, José P.; Kuzma, Heidi A.; Menéndez Pérez, César O.

    2010-05-01

    PSO is an optimization technique inspired by the social behavior of individuals in nature (swarms) that has been successfully used in many different engineering fields. In addition, the PSO algorithm can be physically interpreted as a stochastic damped mass-spring system. This analogy has served to introduce the PSO continuous model and to deduce a whole family of PSO algorithms using different finite-difference schemes. These algorithms are characterized in terms of convergence by their respective first- and second-order stability regions. The performance of these new algorithms is first checked using synthetic functions showing a degree of ill-posedness similar to that found in many geophysical inverse problems, with the global minimum located in a very narrow flat valley or surrounded by multiple local minima. Finally we present the application of these PSO algorithms to the analysis and solution of a VES inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. PSO family members are successfully compared to other well-known global optimization algorithms (binary genetic algorithms and simulated annealing) in terms of their respective convergence curves and the seawater intrusion depth posterior histograms.
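
    A minimal sketch of standard PSO (the textbook inertia-weight update, not the authors' mass-spring finite-difference family) minimizing a smooth 1-D test function; all parameter values below are illustrative defaults, not the ones used in the paper.

```python
import random

# Minimal particle swarm optimization sketch on a 1-D function.
def pso(f, lo, hi, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]  # positions
    v = [0.0] * n_particles                                # velocities
    pbest = x[:]                                           # personal bests
    pbest_f = [f(xi) for xi in x]
    gbest = min(pbest, key=f)                              # global best
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # inertia + cognitive (personal) + social (global) pulls
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] += v[i]
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i], fx
                if fx < f(gbest):
                    gbest = x[i]
    return gbest

best = pso(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

    The inertia weight w and the acceleration coefficients c1, c2 are exactly the parameters whose stability regions the paper characterizes.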

  6. Plane Poiseuille-Couette problem in micro-electro-mechanical systems applications with gas-rarefaction effects

    NASA Astrophysics Data System (ADS)

    Cercignani, Carlo; Lampis, Maria; Lorenzani, Silvia

    2006-08-01

    Rarefied gas flows in micro-electro-mechanical systems (MEMS) devices, calculated from the linearized Bhatnagar-Gross-Krook model equation [P. L. Bhatnagar, E. P. Gross, and M. Krook, Phys. Rev. 94, 511 (1954)], are studied in a wide range of Knudsen numbers. Both plane Poiseuille and Couette flows are investigated numerically by extending a finite difference technique first introduced by Cercignani and Daneri [J. Appl. Phys. 34, 3509 (1963)]. Moreover, a variational approach, applied to the integrodifferential form of the linearized Boltzmann equation [C. Cercignani, J. Stat. Phys. 1, 297 (1969)], is used to solve in a unified manner the plane Poiseuille-Couette problem by means of the computation of only one functional. General boundary conditions of Maxwell's type have been considered, assuming both symmetric and nonsymmetric molecular interaction between gas-solid interfaces, in order to take into account possible differences in the accommodation coefficients on the walls of MEMS devices. Based on the analysis presented in this paper, an accurate database valid in the entire Knudsen regime can be created for the Poiseuille-Couette problem, to be used in micromechanical applications.

  7. Application of artificial intelligence in the marine industry: problem definition and analysis. Final report. Volume 2. Technical report. Report for October 1985-February 1987

    SciTech Connect

    Dillingham, J.T.; Perakis, A.N.

    1987-02-25

    The problem of how best to apply state-of-the-art computer technology, especially the tools of Artificial Intelligence and Expert Systems (AI/ES), to assist in the solution of several important marine operations problems is addressed. An introduction to AI and ES technology is first presented, including an overview and history, a review of recommended readings, a discussion of when a problem is an appropriate candidate for AI/ES application, available strategies, architectures and ES development tools, and estimates of their associated costs. A cost/benefit analysis of several potential applications in marine operations is conducted. Two of these applications, namely optimal container stowage and ship monitoring, are examined in detail. Descriptions and formulations of these problems are presented, and estimates of expected monetary benefits are given. Some existing hardware and software tools which are presently in use, or which are now available, are described.

  8. Application of a Java-based, univel geometry, neutral particle Monte Carlo code to the searchlight problem

    SciTech Connect

    Charles A. Wemple; Joshua J. Cogliati

    2005-04-01

    A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN.
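
    The abstract mentions a four-tap, shift-register-sequence random number generator without giving its taps or register length. As an illustration of the idea only, here is the classic four-tap 16-bit Fibonacci linear-feedback shift register (taps at bits 16, 14, 13, 11), which has the maximal period 2^16 − 1; the Monte Carlo code's actual generator is presumably a much longer-period variant.

```python
def lfsr16(state=0xACE1):
    """Four-tap Fibonacci LFSR, taps at bits 16, 14, 13, 11 (a classic
    maximal-length configuration; not necessarily the paper's taps)."""
    while True:
        # XOR the four tap bits to form the feedback bit.
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state

# A maximal-length LFSR visits every nonzero 16-bit state before repeating.
gen = lfsr16()
start = next(gen)
period = 1
while next(gen) != start:
    period += 1
print(period)  # 65535
```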

  9. On the development of the quantitative texture analysis and its application in solving problems of the Earth sciences

    NASA Astrophysics Data System (ADS)

    Ivankina, T. I.; Matthies, S.

    2015-05-01

    A history of the evolution of texture analysis (TA) is presented, beginning from the first experimental and theoretical attempts to find and characterize preferred orientations of the crystal lattices of grains in real polycrystalline samples. Stages of formation of the TA theoretical apparatus and its basic elements are discussed, as well as the application of its capabilities to quantitatively describing the anisotropic properties of textured samples. Attention is also paid to the limitations and difficulties associated with the analysis. The application of the quantitative TA apparatus is demonstrated by examples describing the elastic properties of textured materials, up to multiphase samples containing pores and cracks. The wide scope of TA includes analysis based on neutron scattering, which has been effectively developed at the Frank Laboratory of Neutron Physics. A practical opportunity to determine the bulk crystallographic textures of single-phase and multiphase materials is offered by the use of modern neutron diffractometers, including the SKAT diffractometer at the IBR-2 pulsed reactor. This is especially important for studying samples of natural rocks. The examples given show how neutron scattering data for quantitative TA are used in combination with other physical and petrological methods for solving fundamental problems of geology and geophysics based on the analysis of the structure and properties of the matter of the Earth's lithosphere. The review includes a detailed list of references to original works concerning the elaboration of TA, overview publications and monographs, and also information on the most popular TA-related software.

  10. On Index Structures in Hybrid Metaheuristics for Routing Problems with Hard Feasibility Checks: An Application to the 2-Dimensional Loading Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Strodl, Johannes; Doerner, Karl F.; Tricoire, Fabien; Hartl, Richard F.

    In this paper we study the impact of different index structures used within hybrid solution approaches for vehicle routing problems with hard feasibility checks. We examine the case of the vehicle routing problem with two-dimensional loading constraints, which combines the loading of freight into the vehicles and the routing of the vehicles to satisfy the demands of the customers. The problem is solved by a variable neighborhood search for the routing part, in which we embed an exact procedure for the loading subproblem. The contribution of the paper is threefold: i) four different index mechanisms for managing the subproblems are implemented and tested, and it is shown that simple index structures tend to lead to better solutions than more powerful, albeit more complex, ones under the same runtime limits; ii) the problem of balancing the CPU budget between exploration of different solutions and exact solution of the loading subproblem is investigated, and experiments show that solving hard subproblems exactly can lead to better solution quality over the whole solution process; iii) new best results are presented on existing benchmark instances.

  11. The application of tomographic reconstruction techniques to ill-conditioned inverse problems in atmospheric science and biomedical imaging

    NASA Astrophysics Data System (ADS)

    Hart, Vern Philip, II

    A methodology is presented for creating tomographic reconstructions from various projection data, and the relevance of the results to applications in atmospheric science and biomedical imaging is analyzed. The fundamental differences between transform and iterative methods are described and the properties of the imaging configurations are addressed. The presented results are particularly suited for highly ill-conditioned inverse problems in which the imaging data are restricted as a result of poor angular coverage, limited detector arrays, or insufficient access to an imaging region. The class of reconstruction algorithms commonly used in sparse tomography, the algebraic reconstruction techniques, is presented, analyzed, and compared. These algorithms are iterative in nature and their accuracy depends significantly on the initialization of the algorithm, the so-called initial guess. A considerable amount of research was conducted into novel initialization techniques as a means of improving the accuracy. The main body of this paper comprises three smaller papers, which describe the application of the presented methods to atmospheric and medical imaging modalities. The first paper details the measurement of mesospheric airglow emissions at two camera sites operated by Utah State University. Reconstructions of vertical airglow emission profiles are presented, including three-dimensional models of the layer formed using a novel fanning technique. The second paper describes the application of the method to the imaging of polar mesospheric clouds (PMCs) by NASA's Aeronomy of Ice in the Mesosphere (AIM) satellite. The contrasting elements of straight-line and diffusive tomography are also discussed in the context of ill-conditioned imaging problems. A number of developing modalities in medical tomography use near infrared light, which interacts strongly with biological tissue and results in significant optical scattering. In order to perform tomography on the

  12. Developing the Fundamental Theorem of Calculus. Applications of Calculus to Work, Area, and Distance Problems. [and] Atmospheric Pressure in Relation to Height and Temperature. Applications of Calculus to Atmospheric Pressure. [and] The Gradient and Some of Its Applications. Applications of Multivariate Calculus to Physics. [and] Kepler's Laws and the Inverse Square Law. Applications of Calculus to Physics. UMAP Units 323, 426, 431, 473.

    ERIC Educational Resources Information Center

    Lindstrom, Peter A.; And Others

    This document consists of four units. The first of these views calculus applications to work, area, and distance problems. It is designed to help students gain experience in: 1) computing limits of Riemann sums; 2) computing definite integrals; and 3) solving elementary area, distance, and work problems by integration. The second module views…

  13. A new recurrent neural network for solving convex quadratic programming problems with an application to the k-winners-take-all problem.

    PubMed

    Hu, Xiaolin; Zhang, Bo

    2009-04-01

    In this paper, a new recurrent neural network is proposed for solving convex quadratic programming (QP) problems. Compared with existing neural networks, the proposed one features global convergence property under weak conditions, low structural complexity, and no calculation of matrix inverse. It serves as a competitive alternative in the neural network family for solving linear or quadratic programming problems. In addition, it is found that by some variable substitution, the proposed network turns out to be an existing model for solving minimax problems. In this sense, it can be also viewed as a special case of the minimax neural network. Based on this scheme, a k-winners-take-all (k-WTA) network with O(n) complexity is designed, which is characterized by simple structure, global convergence, and capability to deal with some ill cases. Numerical simulations are provided to validate the theoretical results obtained. More importantly, the network design method proposed in this paper has great potential to inspire other competitive inventions along the same line. PMID:19228555
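
    The paper realizes k-WTA as the equilibrium of a recurrent network solving a QP. As a plain functional reference for the input-output mapping only (not the network dynamics), k-WTA marks the k largest of n inputs:

```python
def kwta(inputs, k):
    """Reference k-winners-take-all: 1 for the k largest inputs, 0 otherwise.
    This is the mapping the paper's recurrent network computes at equilibrium,
    implemented here by plain sorting rather than by neural dynamics."""
    order = sorted(range(len(inputs)), key=lambda i: inputs[i], reverse=True)
    winners = set(order[:k])
    return [1 if i in winners else 0 for i in range(len(inputs))]

print(kwta([0.3, 0.9, 0.1, 0.7], 2))  # → [0, 1, 0, 1]
```

    A sort-based implementation costs O(n log n) per evaluation, whereas the network's appeal is its O(n) structural complexity and its ability to handle ill cases such as near-ties.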

  15. Electromagnetic Extended Finite Elements for High-Fidelity Multimaterial Problems LDRD Final Report

    SciTech Connect

    Siefert, Christopher; Bochev, Pavel Blagoveston; Kramer, Richard Michael Jack; Voth, Thomas Eugene; Cox, James

    2014-09-01

    Surface effects are critical to the accurate simulation of electromagnetics (EM) as current tends to concentrate near material surfaces. Sandia EM applications, which include exploding bridge wires for detonator design, electromagnetic launch of flyer plates for material testing and gun design, lightning blast-through for weapon safety, electromagnetic armor, and magnetic flux compression generators, all require accurate resolution of surface effects. These applications operate in a large deformation regime, where body-fitted meshes are impractical and multimaterial elements are the only feasible option. State-of-the-art methods use various mixture models to approximate the multi-physics of these elements. The empirical nature of these models can significantly compromise the accuracy of the simulation in this very important surface region. We propose to substantially improve the predictive capability of electromagnetic simulations by removing the need for empirical mixture models at material surfaces. We do this by developing an eXtended Finite Element Method (XFEM) and an associated Conformal Decomposition Finite Element Method (CDFEM) which satisfy the physically required compatibility conditions at material interfaces. We demonstrate the effectiveness of these methods for diffusion and diffusion-like problems on node, edge and face elements in 2D and 3D. We also present preliminary work on h-hierarchical elements and remap algorithms.

  16. Microwave assisted preparation of magnesium phosphate cement (MPC) for orthopedic applications: a novel solution to the exothermicity problem.

    PubMed

    Zhou, Huan; Agarwal, Anand K; Goel, Vijay K; Bhaduri, Sarit B

    2013-10-01

    There are two interesting features of this paper. First, we report herein a novel microwave assisted technique to prepare phosphate based orthopedic cements, which do not generate any exothermicity during setting. The exothermic reactions during the setting of phosphate cements can cause tissue damage during the administration of injectable compositions, and hence a solution to the problem is sought via microwave processing. This solution through microwave exposure is based on the phenomenon that microwave irradiation can remove all water molecules from the alkaline earth phosphate cement paste to temporarily stop the setting reaction while preserving the active precursor phase in the formulation. The setting reaction can be initiated a second time by adding aqueous medium, but without any exothermicity. Second, a special emphasis is placed on using this technique to synthesize magnesium phosphate cements for orthopedic applications, with their enhanced mechanical properties and possible uses as drug and protein delivery vehicles. The as-synthesized cements were evaluated for the occurrence of exothermic reactions, setting times, presence of Mg-phosphate phases, compressive strength levels, microstructural features before and after soaking in simulated body fluid (SBF), and in vitro cytocompatibility responses. The major results show that exposure to microwaves solves the exothermicity problem, while simultaneously improving the mechanical performance of hardened cements and reducing the setting times. As expected, the cements are also found to be cytocompatible. Finally, it is observed that this process can be applied to calcium phosphate cement (CPC) systems as well. Based on the results, this microwave exposure provides a novel technique for the processing of injectable phosphate bone cement compositions.

  18. A New Coarsening Operator for the Optimal Preconditioning of the Dual and Primal Domain Decomposition Methods: Application to Problems with Severe Coefficient Jumps

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Rixen, Daniel

    1996-01-01

    We present an optimal preconditioning algorithm that is equally applicable to the dual (FETI) and primal (Balancing) Schur complement domain decomposition methods, and which successfully addresses the problems of subdomain heterogeneities including the effects of large jumps of coefficients. The proposed preconditioner is derived from energy principles and embeds a new coarsening operator that propagates the error globally and accelerates convergence. The resulting iterative solver is illustrated with the solution of highly heterogeneous elasticity problems.

  19. Application of ice cube prior to subcutaneous injection of heparin in pain perception and ecchymosis of patients with cardiovascular problems.

    PubMed

    Batra, Gaytri

    2014-01-01

    In this experimental study of patients with cardiovascular problems, conducted at Safdarjung Hospital, New Delhi, purposive sampling from the cardiology ward and CCU was used to obtain adequate samples. The sample comprised 30 experimental group patients and 30 control group patients. The conceptual framework was based on the system model proposed by Ludwig von Bertalanffy in 1957. A quasi-experimental research approach was adopted for the study with a post-test-only control group design. The independent variable for the study was the ice cube application for 3 min and the dependent variables were pain perception and ecchymosis. The tools used for data collection were a structured interview schedule for sample characteristics, a numerical rating scale for subjective assessment of pain, a transparent ruler scale to measure the total surface area of ecchymosis, and, for treatment, ice cubes in a latex glove for applying a cold compress. Subjects were asked to rate pain by showing the flash chart of a standard pain rating scale immediately after the needle was withdrawn, and ecchymosis was observed 48 hrs after the day of injection. The obtained difference between experimental and control group ecchymosis scores and pain perception scores was statistically significant, as evident from the t-value at the 0.05 level of significance. PMID:25799797

  20. A new fractional Chebyshev FDM: an application for solving the fractional differential equations generated by optimisation problem

    NASA Astrophysics Data System (ADS)

    Khader, M. M.

    2015-10-01

    In this paper, we introduce a new numerical technique which we call the fractional Chebyshev finite difference method. The algorithm is based on a combination of the useful properties of Chebyshev polynomial approximation and the finite difference method. We implement this technique to solve numerically the non-linear programming problems which are governed by fractional differential equations (FDEs). The proposed technique is based on using matrix operator expressions which apply to the differential terms. The operational matrix method is derived in our approach in order to approximate the Caputo fractional derivatives. This operational matrix method can be regarded as a non-uniform finite difference scheme. The error bound for the fractional derivatives is introduced. The application of the method to the generated FDEs leads to algebraic systems which can be solved by an appropriate method. Two numerical examples are provided to confirm the accuracy and the effectiveness of the proposed method. A comparison with the fourth-order Runge-Kutta method is given.
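
    The paper's operational-matrix construction is Chebyshev-based; as a simpler, independently checkable stand-in (not the authors' method), the standard L1 finite-difference scheme approximates the Caputo derivative of order 0 < α < 1 and can be validated against the closed form D^α t² = 2t^(2−α)/Γ(3−α).

```python
import math

def caputo_l1(f, t, alpha, n=2000):
    """L1 finite-difference approximation of the Caputo derivative of order
    0 < alpha < 1 at time t: piecewise-linear f on a uniform grid, with the
    kernel (t - s)**(-alpha) integrated exactly on each subinterval."""
    dt = t / n
    fs = [f(j * dt) for j in range(n + 1)]
    total = sum((fs[j + 1] - fs[j]) *
                ((n - j) ** (1 - alpha) - (n - j - 1) ** (1 - alpha))
                for j in range(n))
    return total / (dt ** alpha * math.gamma(2 - alpha))

# Cross-check against the exact Caputo derivative of f(t) = t**2.
alpha, t = 0.5, 1.0
approx = caputo_l1(lambda s: s * s, t, alpha)
exact = 2 * t ** (2 - alpha) / math.gamma(3 - alpha)
```

    The L1 scheme converges at rate O(Δt^(2−α)), so with n = 2000 the two values agree to well below 1e-3 here.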

  2. Progress on The GEMS (Gravity Electro-Magnetism-Strong) Theory of Field Unification and Its Application to Space Problems

    SciTech Connect

    Brandenburg, J. E.

    2008-01-21

    Progress on the GEMS (Gravity Electro-Magnetism-Strong) theory is presented, as well as its application to space problems. The GEMS theory is now validated through the Standard Model of physics. Derivation of the value of the gravitational constant, based on the observed variation of α with energy, results in the formula G ≅ (ℏc/M_ηc²) exp(−1/(1.61α)), where α is the fine-structure constant, ℏ is the reduced Planck constant, c is the speed of light, and M_ηc is the mass of the η_c charmonium meson; this value is shown to be identical to that derived from the GEM postulates. Covariant formulation of the GEM theory is now possible through definition of the spacetime metric tensor as a portion of the EM stress tensor normalized by its own trace, g_ab = 4(F^c_a F_cb)/(F^ab F_ab); it is found that this results in a massless ground-state vacuum and a Newtonian gravitational potential φ = (1/2)E²/B². It is also found that a Lorentz (flat-space) metric is recovered in the limit of a full-spectrum ZPF.

  3. The Multi-index Mittag-Leffler Functions and Their Applications for Solving Fractional Order Problems in Applied Analysis

    NASA Astrophysics Data System (ADS)

    Kiryakova, V. S.; Luchko, Yu. F.

    2010-11-01

    During the last few decades, differential equations and systems of fractional order (that is, of arbitrary, not necessarily integer, order) have begun to play an important role in modeling various phenomena of a physical, engineering, automation, biological and biomedical, chemical, earth-science, economic, social, etc. nature. The so-called Special Functions of Fractional Calculus (SF of FC) provide an important tool of Fractional Calculus (FC) and Applied Analysis (AA). In particular, they are often used to represent the solutions of fractional differential equations in explicit form. Among the most popular representatives of the SF of FC are: the Mittag-Leffler (ML) function, the Wright generalized hypergeometric function pΨq, the more general Fox H-function, and the Inayat-Hussain H-function. The classical Special Functions (also called SF of Mathematical Physics), including the orthogonal polynomials and the pFq-hypergeometric functions, fall into this scheme as examples of the simpler Meijer G-function. In this survey talk, we overview the properties and some applications of an important class of SF of FC, introduced for the first time in our works. For integer m > 1 and arbitrary real (or complex, under suitable restrictions) indices ρ1, …, ρm > 0 and μ1, …, μm, we define the multi-index (vector-index) Mittag-Leffler functions by: E_(1/ρi),(μi)(z) = E^(m)_(1/ρi),(μi)(z) = Σ_{k=0}^∞ z^k / [Γ(μ1 + k/ρ1) ⋯ Γ(μm + k/ρm)] = 1Ψm[(1,1); (μi, 1/ρi)_1^m; z] = H^{1,1}_{1,m+1}[−z | (0,1); (0,1), (1−μi, 1/ρi)_1^m]. We also propose a list of examples of SF of FC that are E_(1/ρi),(μi)-functions and play an important role in pure mathematics and in solving problems from the natural, applied and social sciences, and state…
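
    The defining series of the multi-index Mittag-Leffler function can be summed directly by truncation. For m = 1, ρ1 = 1, μ1 = 1 the function reduces to Σ z^k/k! = e^z, which gives a quick sanity check; the 60-term cutoff below is an arbitrary illustrative choice.

```python
import math

def multi_index_ml(z, rhos, mus, terms=60):
    """Truncated series for the multi-index Mittag-Leffler function:
    sum over k of z**k / (Gamma(mu_1 + k/rho_1) * ... * Gamma(mu_m + k/rho_m))."""
    total = 0.0
    for k in range(terms):
        denom = 1.0
        for rho, mu in zip(rhos, mus):
            denom *= math.gamma(mu + k / rho)
        total += z ** k / denom
    return total

print(multi_index_ml(1.0, [1.0], [1.0]))  # ≈ e ≈ 2.71828
```

    The gamma factors in the denominator grow factorially, so for moderate |z| the truncated sum converges rapidly; for large |z| or small ρi, specialized evaluation schemes are needed instead.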

  4. Applications of the Advanced Light Source to problems in the earth, soil, and environmental sciences report of the workshop

    SciTech Connect

    Not Available

    1992-10-01

    This report discusses the following topics: ALS status and research opportunities; advanced light source applications to geological materials; applications in the soil and environmental sciences; x-ray microprobe analysis; potential applications of the ALS in soil and environmental sciences; and x-ray spectroscopy using soft x-rays: applications to earth materials.

  5. One-Way Coupling of an Advanced CFD Multi-Physics Model to FEA for Predicting Stress-Strain in the Solidifying Shell during Continuous Casting of Steel

    NASA Astrophysics Data System (ADS)

    Svensson, Johan; Ramírez López, Pavel E.; Jalali, Pooria N.; Cervantes, Michel

    2015-06-01

    One of the main targets for Continuous Casting (CC) modelling is the actual prediction of defects during transient events. However, the majority of CC models are based on a statistical approach towards flow and powder performance, which is unable to capture the subtleties of small variations in casting conditions during real industrial operation, or the combined effects of such changes eventually leading to defects. An advanced Computational Fluid Dynamics (CFD) model, which accounts for transient changes in lubrication during casting due to turbulent flow dynamics and mould oscillation, was presented at MCWASP XIV (Austria) to address these issues. The model has been successfully applied in the industrial environment to tackle typical problems such as lack of lubrication or unstable flows. However, a direct application to cracking had proven elusive. The present paper describes how results from this advanced CFD-CC model have been successfully coupled to structural Finite Element Analysis (FEA) for prediction of stress-strain as a function of irregular lubrication conditions in the mould. The main challenge for coupling was the extraction of the solidified shell from the CFD calculations (carried out with a hybrid structured mesh) and the creation of a geometry by using iso-surfaces, re-meshing, and mapping loads (e.g. temperature, pressure and external body forces), which served as input to mechanical stress-strain calculations. Preliminary results for CC of slabs show that the temperature distribution within the shell causes shrinkage and thermal deformation, which are in turn the main source of stress. Results also show reasonable stress levels of 10-20 MPa in regions where the shell is thin and exposed to large temperature gradients. Finally, predictions are in good agreement with prior works, where stresses indicate compression at the slab surface while tension is observed at the interior, generating a characteristic stress-strain state during solidification in CC.

  6. An analytical approach to the problem of inverse optimization with additive objective functions: an application to human prehension

    PubMed Central

    Pesin, Yakov B.; Niu, Xun; Latash, Mark L.

    2010-01-01

    We consider the problem of what is being optimized in human actions with respect to various aspects of human movements and different motor tasks. From the mathematical point of view this problem consists of finding an unknown objective function given the values at which it reaches its minimum. This problem is called the inverse optimization problem. Until now, the main approach to this problem has been the cut-and-try method, which consists of introducing an objective function and checking how it reflects the experimental data. Using this approach, different objective functions have been proposed for the same motor action. In the current paper we focus on inverse optimization problems with additive objective functions and linear constraints. Such problems are typical in human movement science. The problem of muscle (or finger) force sharing is an example. For such problems we obtain sufficient conditions for uniqueness and propose a method for determining the objective functions. To illustrate our method we analyze the problem of force sharing among the fingers in a grasping task. We estimate the objective function from the experimental data and show that it can predict the force-sharing pattern for a vast range of external forces and torques applied to the grasped object. The resulting objective function is quadratic with essentially non-zero linear terms. PMID:19902213
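The forward counterpart of the problem the authors invert can be written down directly: for an additive quadratic objective with non-zero linear terms and a single linear constraint, Lagrange stationarity gives the force-sharing pattern in closed form. A minimal sketch follows; the per-finger coefficients are hypothetical, not the paper's estimates.

```python
import numpy as np

def share_forces(k, w, f_total):
    """Minimise the additive quadratic objective sum_i (k_i F_i^2 + w_i F_i)
    subject to the linear constraint sum_i F_i = f_total.
    Stationarity gives 2 k_i F_i + w_i = lam for every finger, so
    F_i = (lam - w_i) / (2 k_i), with lam fixed by the constraint."""
    inv = 1.0 / (2.0 * np.asarray(k, dtype=float))
    w = np.asarray(w, dtype=float)
    lam = (f_total + np.sum(w * inv)) / np.sum(inv)
    return (lam - w) * inv

# Hypothetical quadratic/linear coefficients for four fingers
k = np.array([1.0, 1.5, 2.0, 3.0])
w = np.array([0.5, 0.2, 0.1, 0.4])
F = share_forces(k, w, f_total=20.0)
```

Because the optimal forces are affine in the total force, observed sharing patterns over a range of loads constrain the coefficients, which is what makes the inverse problem tractable.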

  7. For multidisciplinary research on the application of remote sensing to water resources problems. [including crop yield, watershed soils, and vegetation mapping in Wisconsin

    NASA Technical Reports Server (NTRS)

    Kiefer, R. W. (Principal Investigator)

    1979-01-01

    Research on the application of remote sensing to problems of water resources was concentrated on sediments and associated nonpoint source pollutants in lakes. Further transfer of the technology of remote sensing and the refinement of equipment and programs for thermal scanning and the digital analysis of images were also addressed.

  8. Applications of a Time Sequence Mechanism in the Simulation Cases of a Web-Based Medical Problem-Based Learning System

    ERIC Educational Resources Information Center

    Chen, Lih-Shyang; Cheng, Yuh-Ming; Weng, Sheng-Feng; Chen, Yong-Guo; Lin, Chyi-Her

    2009-01-01

    The prevalence of Internet applications nowadays has led many medical schools and centers to incorporate computerized Problem-Based Learning (PBL) methods into their training curricula. However, many of these PBL systems do not truly reflect the situations which practitioners may actually encounter in a real medical environment, and hence their…

  9. Problem-Based Learning across the Curriculum: Exploring the Efficacy of a Cross-Curricular Application of Preparation for Future Learning

    ERIC Educational Resources Information Center

    Swan, Karen; Vahey, Philip; van 't Hooft, Mark; Kratcoski, Annette; Rafanan, Ken; Stanford, Tina; Yarnall, Louise; Cook, Dale

    2013-01-01

    The research reported in this paper explores the applicability and efficacy of a variant of problem-based learning, the Preparation for Future Learning (PFL) approach, to teaching and learning within the context of a cross-curricular, middle school data literacy unit called "Thinking with Data" (TWD). A quasi-experimental design was used…

  10. Application of artificial intelligence in the marine industry: problem definition and analysis. Final report. Volume 1. Executive summary. Report for October 1985-February 1987

    SciTech Connect

    Dillingham, J.T.; Perakis, A.N.

    1987-02-25

    The problem of how best to apply state-of-the-art computer technology, especially using the tools of Artificial Intelligence and Expert Systems (AI/ES), to assist in the solution of several important marine operations problems is addressed. An introduction to AI and ES technology is first presented, including an overview and history, a review of recommended readings, a discussion of when a problem is an appropriate candidate for AI/ES application, available strategies, architectures and ES development tools, and estimates of their associated costs. A cost/benefit analysis of several potential applications in marine operations is conducted. Two of these applications, namely optimal container stowage and ship monitoring, are examined in detail. Descriptions and formulations of these problems are presented, and estimates of expected monetary benefits are given. Some existing hardware and software tools which are presently in use, or which are now available, are described. Use of these tools for the above applications may improve the overall efficiency and the economic benefits of fleet operations.

  11. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes due to systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new computer software mpiWrapper has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper . PMID:27122320
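The master/worker pattern the abstract describes can be sketched, under assumptions, with plain Python threads and a shared queue. The real mpiWrapper distributes subtasks across supercomputer nodes with MPI and launches external Linux programs; neither is modeled here, and the subtasks below are hypothetical Python callables.

```python
import queue
import threading

def run_subtasks(subtasks, n_workers=4, max_retries=1):
    """Master-worker sketch: a shared queue feeds independent subtasks to
    worker threads, and a failed subtask is put back for another attempt
    (the paper's resubmission-on-node-failure idea, minus MPI)."""
    tasks = queue.Queue()
    results = {}
    lock = threading.Lock()
    for item in subtasks:
        tasks.put((item, 0))

    def worker():
        while True:
            try:
                (name, func), attempt = tasks.get_nowait()
            except queue.Empty:
                return
            try:
                out = func()
                with lock:
                    results[name] = out
            except Exception:
                if attempt < max_retries:
                    tasks.put(((name, func), attempt + 1))  # resubmit
            finally:
                tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# A flaky subtask that fails once, then succeeds, to exercise resubmission
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("simulated node failure")
    return "ok"

jobs = [(f"job{i}", lambda i=i: i * i) for i in range(5)] + [("flaky", flaky)]
results = run_subtasks(jobs, n_workers=2)
```

Replacing the callables with subprocess launches of unmodified Linux executables would recover the wrapper idea on a single node.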

  12. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes due to systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new computer software mpiWrapper has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .

  13. Image reconstruction and subsurface detection by the application of Tikhonov regularization to inverse problems in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Jiminez-Rodriguez, Luis O.; Rodriguez-Diaz, Eladio; Velez-Reyes, Miguel; DiMarzio, Charles A.

    2003-05-01

    Hyperspectral Remote Sensing has the potential to be used as an effective coral monitoring system from space. The problems to be addressed in hyperspectral imagery of coastal waters are related to the medium, clutter, and the object to be detected. In coastal waters the variability due to the interaction between the coast and the sea can bring significant disparity in the optical properties of the water column and the sea bottom. In terms of the medium, there is high scattering and absorption. Related to clutter we have the ocean floor, dissolved salt and gases, and dissolved organic matter. The object to be detected, in this case the coral reefs, has a weak signal, with temporal and spatial variation. In real scenarios the absorption and backscattering coefficients have spatial variation due to different sources of variability (river discharge, different depths of shallow waters, water currents) and temporal fluctuations. The retrieval of information about an object beneath some medium with high scattering and absorption properties requires the development of mathematical models and processing tools in the area of inversion, image reconstruction and detection. This paper presents the development of algorithms for retrieving information and its application to the recognition and classification of coral reefs under water with particles that provide high absorption and scattering. The data was gathered using a high resolution imaging spectrometer (hyperspectral) sensor. A mathematical model that simplifies the radiative transfer equation was used to quantify the interaction between the object of interest, the medium and the sensor. Tikhonov method of regularization was used in the inversion process to estimate the bottom albedo, ρ, of the ocean floor using a priori information. The a priori information is in the form of measured spectral signatures of objects of interest, such as sand, corals, and sea grass.
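The core of Tikhonov regularization is independent of the remote-sensing setting: the damped normal equations trade data fit against solution norm. A generic sketch on a hypothetical ill-conditioned forward model follows; it is a stand-in for, not a model of, the paper's radiative transfer operator relating bottom albedo to the measured signal.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularised least squares: minimise
    ||A x - b||^2 + lam ||x||^2, i.e. solve the damped normal equations
    (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Hypothetical ill-conditioned forward model (a Vandermonde matrix) with
# a noisy observation; values are illustrative only
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 20), 8, increasing=True)
x_true = rng.normal(size=8)
b = A @ x_true + 1e-3 * rng.normal(size=20)

x_hat = tikhonov_solve(A, b, lam=1e-6)
```

In the paper's setting the a priori information (measured spectra of sand, coral, sea grass) shapes the regularization; here a plain norm penalty stands in for that prior.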

  14. Hill Problem Analytical Theory to the Order Four. Application to the Computation of Frozen Orbits around Planetary Satellites

    NASA Technical Reports Server (NTRS)

    Lara, Martin; Palacian, Jesus F.

    2007-01-01

    Frozen orbits of the Hill problem are determined in the double-averaged problem, where short- and long-period terms are removed by means of Lie transforms. The computation of initial conditions of corresponding quasi-periodic solutions in the non-averaged problem is straightforward, since the perturbation method used provides the explicit equations of the transformation that connects the averaged and non-averaged models. A fourth-order analytical theory proves necessary for the accurate computation of quasi-periodic, frozen orbits.

  15. Applications of the Space-Time Conservation Element and Solution Element (CE/SE) Method to Computational Aeroacoustic Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Himansu, Ananda; Chang, Sin-Chung; Jorgenson, Philip C. E.

    2000-01-01

    The Internal Propagation problems, Fan Noise problem, and Turbomachinery Noise problems are solved using the space-time conservation element and solution element (CE/SE) method. The internal propagation problems address the propagation of sound waves through a nozzle. Both the nonlinear and linear quasi-1D Euler equations are solved. Numerical solutions are presented and compared with the analytical solution. The fan noise problem concerns the effect of the sweep angle on the acoustic field generated by the interaction of a convected gust with a cascade of 3D flat plates. A parallel version of the 3D CE/SE Euler solver is developed and employed to obtain numerical solutions for a family of swept flat plates. Numerical solutions for sweep angles of 0, 5, 10, and 15 deg are presented. The turbomachinery problems describe the interaction of a 2D vortical gust with a cascade of flat-plate airfoils with/without a downstream moving grid. The 2D nonlinear Euler equations are solved and the converged numerical solutions are presented and compared with the corresponding analytical solution. All the comparisons demonstrate that the CE/SE method is capable of solving aeroacoustic problems with/without shock waves in a simple and efficient manner. Furthermore, the simple non-reflecting boundary condition used in the CE/SE method, which is not based on the characteristic theory, works very well in 1D, 2D and 3D problems.

  16. A parallel multi-domain solution methodology applied to nonlinear thermal transport problems in nuclear fuel pins

    SciTech Connect

    Philip, Bobby; Berrill, Mark A.; Allu, Srikanth; Hamilton, Steven P.; Sampath, Rahul S.; Clarno, Kevin T.; Dilts, Gary A.

    2015-01-26

    We describe an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian Free Newton Krylov method. Details of the computational infrastructure that enabled this work, namely the open source Advanced Multi-Physics (AMP) package developed by the authors are described. The details of verification and validation experiments, and parallel performance analysis in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm are presented. Moreover, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and coupling of nonlinear thermal transport in fuel rods to an external radiation transport code.
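The Jacobian Free Newton Krylov idea mentioned above fits in a few lines: Krylov solvers need only Jacobian-vector products, and a finite difference of the residual supplies them, so the Jacobian is never assembled. The toy nonlinear system below is a hypothetical stand-in for the coupled thermal transport residual, and it omits the physics-based preconditioning that the paper credits for its efficiency.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, u0, tol=1e-10, max_newton=20):
    """Minimal Jacobian-free Newton-Krylov iteration: each Newton step
    solves J(u) du = -F(u) with GMRES, where the Jacobian-vector product
    is replaced by the finite difference J v ~ (F(u + eps v) - F(u)) / eps."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(u))
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, _ = gmres(J, -r)  # unpreconditioned; the paper's solver is not
        u = u + du
    return u

# Toy residual: 1D diffusion with a quartic (radiation-like) sink and
# fixed end values, standing in for a nonlinear thermal transport problem
def residual(T):
    R = np.empty_like(T)
    R[0] = T[0] - 1.0
    R[1:-1] = T[:-2] - 2.0 * T[1:-1] + T[2:] - 0.1 * T[1:-1] ** 4
    R[-1] = T[-1] - 0.5
    return R

T = jfnk(residual, np.full(9, 0.8))
```

A domain-aware preconditioner, as in the AMP package, would be applied inside the GMRES solve; this sketch leaves that slot empty.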

  17. A parallel multi-domain solution methodology applied to nonlinear thermal transport problems in nuclear fuel pins

    DOE PAGESBeta

    Philip, Bobby; Berrill, Mark A.; Allu, Srikanth; Hamilton, Steven P.; Sampath, Rahul S.; Clarno, Kevin T.; Dilts, Gary A.

    2015-01-26

    We describe an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian Free Newton Krylov method. Details of the computational infrastructure that enabled this work, namely the open source Advanced Multi-Physics (AMP) package developed by the authors are described. The details of verification and validation experiments, and parallel performance analysis in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm are presented. Moreover, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and coupling of nonlinear thermal transport in fuel rods to an external radiation transport code.

  18. A Parallel Multi-Domain Solution Methodology Applied to Nonlinear Thermal Transport Problems in Nuclear Fuel Pins

    SciTech Connect

    Philip, Bobby; Berrill, Mark A; Allu, Srikanth; Hamilton, Steven P; Clarno, Kevin T; Dilts, Gary

    2014-08-01

    This paper describes an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian Free Newton Krylov method. Details of the computational infrastructure that enabled this work, namely the open source Advanced Multi-Physics (AMP) package developed by the authors is described. Details of verification and validation experiments, and parallel performance analysis in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm are presented. Furthermore, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and coupling of nonlinear thermal transport in fuel rods to an external radiation transport code.

  19. A parallel multi-domain solution methodology applied to nonlinear thermal transport problems in nuclear fuel pins

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Berrill, Mark A.; Allu, Srikanth; Hamilton, Steven P.; Sampath, Rahul S.; Clarno, Kevin T.; Dilts, Gary A.

    2015-04-01

    This paper describes an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian Free Newton Krylov method. Details of the computational infrastructure that enabled this work, namely the open source Advanced Multi-Physics (AMP) package developed by the authors is described. Details of verification and validation experiments, and parallel performance analysis in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm are presented. Furthermore, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and coupling of nonlinear thermal transport in fuel rods to an external radiation transport code.

  20. Evaluation of Internet-Based Technology for Supporting Self-Care: Problems Encountered by Patients and Caregivers When Using Self-Care Applications

    PubMed Central

    van Gemert-Pijnen, Julia; Boer, Henk; Steehouder, Michaël F; Seydel, Erwin R

    2008-01-01

    Background Prior studies have shown that many patients are interested in Internet-based technology that enables them to control their own care. As a result, innovative eHealth services are evolving rapidly, including self-assessment tools and secure patient-caregiver email communication. It is interesting to explore how these technologies can be used for supporting self-care. Objective The aim of this study was to determine user-centered criteria for successful application of Internet-based technology used in primary care for supporting self-care. Methods We conducted scenario-based tests combined with in-depth interviews among 14 caregivers and 14 patients/consumers to describe the use of various self-care applications and the accompanying user problems. We focused on the user-friendliness of the applications, the quality of care provided by the applications, and the implementation of the applications in practice. Results Problems with the user-friendliness of the self-care applications concerned inadequate navigation structures and search options and lack of feedback features. Patients want to retrieve health information with as little effort as possible; however, the navigation and search functionalities of the applications appeared incapable of handling patients’ health complaints efficiently. Among caregivers, the lack of feedback and documentation possibilities caused inconvenience. Caregivers wanted to know how patients acted on their advice, but the applications did not offer an adequate feedback feature. Quality of care problems were mainly related to insufficient tailoring of information to patients’ needs and to efficiency problems. Patients expected personalized advice to control their state of health, but the applications failed to deliver this. Language (semantics) also appeared as an obstacle to providing appropriate and useful self-care advice. Caregivers doubted the reliability of the computer-generated information and the efficiency and

  1. Seeing around a Ball: Complex, Technology-Based Problems in Calculus with Applications in Science and Engineering-Redux

    ERIC Educational Resources Information Center

    Winkel, Brian

    2008-01-01

    A complex technology-based problem in visualization and computation for students in calculus is presented. Strategies are shown for its solution and the opportunities for students to put together sequences of concepts and skills to build for success are highlighted. The problem itself involves placing an object under water in order to actually see…

  2. Measuring health-related problem solving among African Americans with multiple chronic conditions: application of Rasch analysis.

    PubMed

    Fitzpatrick, Stephanie L; Hill-Briggs, Felicia

    2015-10-01

    Identification of patients with poor chronic disease self-management skills can facilitate treatment planning, determine effectiveness of interventions, and reduce disease complications. This paper describes the use of a Rasch model, the Rating Scale Model, to examine psychometric properties of the 50-item Health Problem-Solving Scale (HPSS) among 320 African American patients at high risk for cardiovascular disease. Items on the positive/effective HPSS subscales targeted patients at low, moderate, and high levels of positive/effective problem solving, whereas items on the negative/ineffective problem-solving subscales mostly targeted those at moderate or high levels of ineffective problem solving. Validity was examined by correlating factor scores on the measure with clinical and behavioral measures. Items on the HPSS show promise in the ability to assess health-related problem solving among high-risk patients. However, further revisions of the scale are needed to increase its usability and validity with large, diverse patient populations in the future.
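For readers unfamiliar with the model family: the Rating Scale Model used here assigns category probabilities from a person ability, an item location, and a set of category thresholds shared across items. A minimal sketch with hypothetical parameter values (not the fitted HPSS estimates):

```python
import numpy as np

def rating_scale_probs(theta, delta, thresholds):
    """Andrich Rating Scale Model: probability of each of m+1 response
    categories for a person of ability theta on an item of location delta,
    with m category thresholds tau shared by all items.
    P(X = k) is proportional to exp( sum_{j<=k} (theta - (delta + tau_j)) )."""
    steps = theta - (delta + np.asarray(thresholds, dtype=float))
    logits = np.concatenate(([0.0], np.cumsum(steps)))
    p = np.exp(logits - logits.max())  # shift for numerical stability
    return p / p.sum()

# Hypothetical 4-category item of average difficulty
p = rating_scale_probs(theta=0.5, delta=0.0, thresholds=[-1.0, 0.0, 1.0])
```

Item targeting, as discussed in the abstract, amounts to comparing the item locations (delta plus thresholds) against the distribution of person abilities theta.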

  3. Application of threshold concepts to ecological management problems: occupancy of Golden Eagles in Denali National Park, Alaska: Chapter 5

    USGS Publications Warehouse

    Eaton, Mitchell J.; Martin, Julien; Nichols, James D.; McIntyre, Carol; McCluskie, Maggie C.; Schmutz, Joel A.; Lubow, Bruce L.; Runge, Michael C.; Edited by Guntenspergen, Glenn R.

    2014-01-01

    In this chapter, we demonstrate the application of the various classes of thresholds, detailed in earlier chapters and elsewhere, via an actual but simplified natural resource management case study. We intend our example to provide the reader with the ability to recognize and apply the theoretical concepts of utility, ecological and decision thresholds to management problems through a formalized decision-analytic process. Our case study concerns the management of human recreational activities in Alaska’s Denali National Park, USA, and the possible impacts of such activities on nesting Golden Eagles, Aquila chrysaetos. Managers desire to allow visitors the greatest amount of access to park lands, provided that eagle nesting-site occupancy is maintained at a level determined to be acceptable by the managers themselves. As these two management objectives are potentially at odds, we treat minimum desired occupancy level as a utility threshold which, then, serves to guide the selection of annual management alternatives in the decision process. As human disturbance is not the only factor influencing eagle occupancy, we model nesting-site dynamics as a function of both disturbance and prey availability. We incorporate uncertainty in these dynamics by considering several hypotheses, including a hypothesis that site occupancy is affected only at a threshold level of prey abundance (i.e., an ecological threshold effect). By considering competing management objectives and accounting for two forms of thresholds in the decision process, we are able to determine the optimal number of annual nesting-site restrictions that will produce the greatest long-term benefits for both eagles and humans. Setting a utility threshold of 75 occupied sites, out of a total of 90 potential nesting sites, the optimization specified a decision threshold at approximately 80 occupied sites. At the point that current occupancy falls below 80 sites, the recommended decision is to begin restricting

  4. Applications of high-resolution spatial discretization scheme and Jacobian-free Newton–Krylov method in two-phase flow problems

    SciTech Connect

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2015-09-01

    The majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many nuclear thermal–hydraulics applications, it is desirable to use higher-order numerical schemes to reduce numerical errors. High-resolution spatial discretization schemes provide high order spatial accuracy in smooth regions and capture sharp spatial discontinuity without nonphysical spatial oscillations. In this work, we adapted an existing high-resolution spatial discretization scheme on staggered grids in two-phase flow applications. Fully implicit time integration schemes were also implemented to reduce numerical errors from operator-splitting types of time integration schemes. The resulting nonlinear system has been successfully solved using the Jacobian-free Newton–Krylov (JFNK) method. The high-resolution spatial discretization and high-order fully implicit time integration numerical schemes were tested and numerically verified for several two-phase test problems, including a two-phase advection problem, a two-phase advection with phase appearance/disappearance problem, and the water faucet problem. Numerical results clearly demonstrated the advantages of using such high-resolution spatial and high-order temporal numerical schemes to significantly reduce numerical diffusion and therefore improve accuracy. Our study also demonstrated that the JFNK method is stable and robust in solving two-phase flow problems, even when phase appearance/disappearance exists.
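The high-resolution idea the abstract relies on is: second-order accuracy in smooth regions with no spurious oscillations at discontinuities, achieved by limiting reconstructed slopes. A single-equation 1D sketch with the minmod limiter follows; the paper works with staggered-grid two-phase equations and fully implicit time integration, neither of which this toy attempts.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: take the smaller of the two one-sided slopes where
    they agree in sign, and zero at an extremum, which is what suppresses
    nonphysical oscillations near discontinuities."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect(u, c, steps):
    """Second-order TVD update for 1D linear advection u_t + a u_x = 0
    (a > 0) with CFL number c in (0, 1] and periodic boundaries."""
    u = u.copy()
    for _ in range(steps):
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        # limited Lax-Wendroff flux at each cell's right face
        f = u + 0.5 * (1.0 - c) * slope
        u = u - c * (f - np.roll(f, 1))
    return u

# Square wave carried once around a 100-cell periodic domain
u0 = np.zeros(100)
u0[40:60] = 1.0
u1 = advect(u0, c=0.5, steps=200)  # 200 steps x 0.5 cells/step = one lap
```

The TVD property keeps the advected profile inside its initial bounds, the behavior the abstract contrasts with low-order schemes' diffusion and unlimited schemes' oscillations.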

  5. Joint application of AI techniques, PRA and disturbance analysis methodology to problems in the maintenance and design of nuclear power plants

    SciTech Connect

    Okrent, D.

    1989-03-01

    This final report summarizes the accomplishments of a two year research project entitled Joint Application of Artificial Intelligence Techniques, Probabilistic Risk Analysis, and Disturbance Analysis Methodology to Problems in the Maintenance and Design of Nuclear Power Plants. The objective of this project is to develop and apply appropriate combinations of techniques from artificial intelligence (AI), reliability and risk analysis, and disturbance analysis to well-defined programmatic problems of nuclear power plants. Reactor operations issues were added to those of design and maintenance as the project progressed.

  6. Joint application of AI techniques, PRA and disturbance analysis methodology to problems in the maintenance and design of nuclear power plants. Final report

    SciTech Connect

    Okrent, D.

    1989-03-01

    This final report summarizes the accomplishments of a two year research project entitled ``Joint Application of Artificial Intelligence Techniques, Probabilistic Risk Analysis, and Disturbance Analysis Methodology to Problems in the Maintenance and Design of Nuclear Power Plants.'' The objective of this project is to develop and apply appropriate combinations of techniques from artificial intelligence (AI), reliability and risk analysis, and disturbance analysis to well-defined programmatic problems of nuclear power plants. Reactor operations issues were added to those of design and maintenance as the project progressed.

  7. Problematic Internet use and other risky behaviors in college students: an application of problem-behavior theory.

    PubMed

    De Leo, Joseph Anthony; Wulfert, Edelgard

    2013-03-01

    Given the widespread use of the Internet, researchers have begun to examine the personal and social consequences associated with excessive online involvement. The present study examined college students' problematic Internet use (PIU) behaviors within the framework of Jessor and Jessor's (1977) problem-behavior theory. Its specific aim was to investigate the links between PIU and both internalizing (depression, social anxiety) and externalizing (substance use and other risky behaviors) problems. Relevant variables from the perceived environmental system, the personality system, and the behavioral system were entered in a canonical correlation analysis. The analysis yielded two distinct functions: the first function, titled traditional problem-behavior syndrome, characterized students who are impulsive, hold socially deviant attitudes, and show a propensity to use tobacco and illicit drugs. The second function, titled problematic Internet-behavior syndrome, characterized students who are socially anxious, depressed, report conflictive family relations, and show a propensity toward PIU. Thus, PIU did not share the characteristics typically associated with the traditional problem-behavior syndrome consistent with problem-behavior theory, but showed correlates more consistent with internalizing rather than externalizing problems. PMID:23276311

  8. The results of the investigations of Russian Research Center—``Kurchatov Institute'' on molten salt applications to problems of nuclear energy systems

    NASA Astrophysics Data System (ADS)

    Novikov, Vladimir M.

    1995-09-01

    The results of investigations on molten salt (MS) applications to problems of nuclear energy systems that have been conducted in the Russian Research Center ``Kurchatov Institute'' are presented and discussed. The spectrum of these investigations is rather broad and covers the following items: physical characteristics of molten salt nuclear energy systems (MSNES); nuclear and radiation safety of MSNES; construction materials compatible with MS of different compositions; technological aspects of MS loops; in-reactor loop testing. It is shown that the main findings of the completed program support the conclusion that there are neither physical nor technological obstacles in the way of MS application to different nuclear energy systems.

  9. Application of a four-step HMX kinetic model to an impact-induced friction ignition problem

    SciTech Connect

    Perry, William L; Gunderson, Jake A; Dickson, Peter M

    2010-01-01

    There has been a long history of interest in the decomposition kinetics of HMX and HMX-based formulations due to the widespread use of this explosive in high-performance systems. The kinetics allow us to predict, or attempt to predict, the behavior of the explosive when subjected to thermal hazard scenarios that lead to ignition via impact, spark, friction or external heat. The latter, commonly referred to as 'cook off', has been widely studied, and contemporary kinetic and transport models accurately predict the time and location of ignition for simple geometries. However, relatively little attention has been given to the problem of localized ignition that results from the first three ignition sources of impact, spark and friction. The use of a zero-order, single-rate expression describing the exothermic decomposition of explosives dates to the early work of Frank-Kamenetskii in the late 1930s and continued through the 1960s and 1970s. This expression provides very general qualitative insight, but cannot provide accurate spatial or timing details of slow cook-off ignition. In the 1970s, Catalano et al. noted that single-step kinetics would not accurately predict time to ignition in the one-dimensional time to explosion apparatus (ODTX). In the early 1980s, Tarver and McGuire published their well-known three-step kinetic expression, which included an endothermic decomposition step. This scheme significantly improved the accuracy of ignition-time prediction for the ODTX. However, the Tarver/McGuire model could not reproduce the internal temperature profiles observed in the small-scale radial experiments, nor could it accurately predict the location of ignition. Those factors are suspected to significantly affect the post-ignition behavior, and better models were needed. Brill et al. noted that the enthalpy change due to the beta-delta crystal phase transition was similar to the assumed endothermic decomposition step in the Tarver/McGuire model. Henson, et al., deduced the

  10. Application of optimization to the inverse problem of finding the worst-case heating configuration in a fire

    SciTech Connect

    Romero, V.J.; Eldred, M.S.; Bohnhoff, W.J.; Outka, D.E.

    1995-07-01

    Thermal optimization procedures have been applied to determine the worst-case heating boundary conditions that a safety device can be credibly subjected to. There are many interesting aspects of this work in the areas of thermal transport, optimization, discrete modeling, and computing. The forward problem involves transient simulations with a nonlinear 3-D finite element model solving a coupled conduction/radiation problem. Coupling to the optimizer requires that boundary conditions in the thermal model be parameterized in terms of the optimization variables. The optimization is carried out over a diverse multi-dimensional parameter space where the forward evaluations are computationally expensive and of unknown duration a priori. The optimization problem is complicated by numerical artifacts resulting from discrete approximation and finite computer precision, as well as theoretical difficulties associated with navigating to a global minimum on a nonconvex objective function having a fold and several local minima. In this paper we report on the solution of the optimization problem, discuss implications of some of the features of this problem on selection of a suitable and efficient optimization algorithm, and share lessons learned, fixes implemented, and research issues identified along the way.
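
    The difficulty described above — a nonconvex objective with several local minima behind an expensive forward simulation — is the classic setting for multistart local optimization. The sketch below is not the authors' coupled conduction/radiation setup: a cheap analytic surface stands in for the forward model, and SciPy's L-BFGS-B serves as the local solver; all names and the objective are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical stand-in for the expensive conduction/radiation forward
    # simulation: a cheap nonconvex surface with several local minima.
    def objective(x):
        return np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)

    def multistart_minimize(fun, bounds, n_starts=20, seed=0):
        """Run gradient-based local searches from random starts; keep the best."""
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_starts):
            x0 = rng.uniform(bounds[:, 0], bounds[:, 1])
            res = minimize(fun, x0, method="L-BFGS-B", bounds=bounds)
            if best is None or res.fun < best.fun:
                best = res
        return best

    bounds = np.array([[-2.0, 2.0], [-2.0, 2.0]])
    best = multistart_minimize(objective, bounds)
    print("minimizer:", best.x, "value:", best.fun)
    ```

    With a real thermal model each `objective` call is a transient finite element run, so the number of starts (here 20) is the main cost knob; surrogate models or gradient-free global methods are common alternatives when even a handful of forward runs is expensive.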

  11. Application of optimization to the inverse problem of finding the worst-case heating configuration in a fire

    NASA Astrophysics Data System (ADS)

    Romero, V. J.; Eldred, M. S.; Bohnhoff, W. J.; Outka, D. E.

    1995-05-01

    Thermal optimization procedures have been applied to determine the worst-case heating boundary conditions that a safety device can be credibly subjected to. There are many interesting aspects of this work in the areas of thermal transport, optimization, discrete modeling, and computing. The forward problem involves transient simulations with a nonlinear 3-D finite element model solving a coupled conduction/radiation problem. Coupling to the optimizer requires that boundary conditions in the thermal model be parameterized in terms of the optimization variables. The optimization is carried out over a diverse multi-dimensional parameter space where the forward evaluations are computationally expensive and of unknown duration a priori. The optimization problem is complicated by numerical artifacts resulting from discrete approximation and finite computer precision, as well as theoretical difficulties associated with navigating to a global minimum on a nonconvex objective function having a fold and several local minima. In this paper we report on the solution of the optimization problem, discuss implications of some of the features of this problem on selection of a suitable and efficient optimization algorithm, and share lessons learned, fixes implemented, and research issues identified along the way.

  12. Application of the logarithmic Hamiltonian algorithm to the circular restricted three-body problem with some post-Newtonian terms

    NASA Astrophysics Data System (ADS)

    Su, Xiang-Ning; Wu, Xin; Liu, Fu-Yao

    2016-01-01

    An implementation of a fourth-order symplectic algorithm for the logarithmic Hamiltonian of the Newtonian circular restricted three-body problem in an inertial frame is detailed. The logarithmic Hamiltonian algorithm produces highly accurate results, comparable to those of the non-logarithmic one, and its numerical performance is independent of the orbital eccentricity. This independence no longer holds when some post-Newtonian terms are included in the problem: the numerical accuracy becomes somewhat poorer as the orbital eccentricity grows, but it is still much higher than that of the non-logarithmic Hamiltonian algorithm. As a result, the present code can drastically eliminate the overestimation of Lyapunov exponents and the spurious rapid growth of fast Lyapunov indicators for high-eccentricity orbits in the Newtonian or post-Newtonian circular restricted three-body problem.
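
    The eccentricity insensitivity mentioned above stems from the time transformation built into logarithmic-Hamiltonian leapfrogs. A minimal sketch of the standard construction follows (after Mikkola & Tanikawa 1999; the symbols $T$, $U$, $E$ and this specific form are background, not taken from the abstract):

    ```latex
    % Write the Hamiltonian as H(\mathbf q,\mathbf p) = T(\mathbf p) - U(\mathbf q),
    % with kinetic energy T and force function U > 0, and fix the energy H = E.
    % The logarithmic Hamiltonian
    \[
      \Lambda(\mathbf q,\mathbf p) \;=\; \ln\!\bigl(T(\mathbf p)-E\bigr) \;-\; \ln U(\mathbf q)
    \]
    % vanishes on the energy surface yet still separates into a momentum part and a
    % coordinate part, so an explicit leapfrog applies in the new independent
    % variable s, related to physical time by
    \[
      \mathrm{d}t \;=\; \frac{\mathrm{d}s}{U(\mathbf q)}
      \qquad\text{(equivalently } \mathrm{d}t = \mathrm{d}s/(T-E) \text{ on the surface)}.
    \]
    % A fixed step in s therefore shrinks the effective timestep in t near close
    % (high-U) pericenter passages, which is why the Newtonian scheme's accuracy
    % is insensitive to eccentricity.
    ```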

  13. Exact triple integrals of beam functions. [in application of Galerkin method to heat and mass transfer problems

    NASA Technical Reports Server (NTRS)

    Jhaveri, B. S.; Rosenberger, F.

    1982-01-01

    Definite triple integrals encountered in applying the Galerkin method to the problem of heat and mass transfer across rectangular enclosures are discussed. Rather than evaluate them numerically, the authors extended the technique described by Reid and Harris (1958) to obtain the exact solution of the integrals. In the process, four linear simultaneous equations with the triple integrals as unknowns were obtained; these equations were then solved exactly to yield the closed-form solution. Since closed-form representations of this type have been shown to be useful in solving nonlinear hydrodynamic problems by series expansion, the integrals are presented here in general form.

  14. Applications of the direct Trefftz boundary element method to the free-vibration problem of a membrane

    NASA Astrophysics Data System (ADS)

    Chang, Jiang Ren; Liu, Ru Feng; Yeih, Weichung; Kuo, Shyh Rong

    2002-08-01

    In this paper, the direct Trefftz method is applied to solve the free-vibration problem of a membrane. The direct Trefftz method produces no spurious eigenvalues; however, it is numerically unstable because the underlying formulation is ill-posed, and this requires special treatment. Tikhonov's regularization method and the generalized singular-value decomposition are used to deal with the ill-posedness. Numerical results show the validity of the current approach. Copyright 2002 Acoustical Society of America.
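
    Tikhonov regularization of the kind used here can be illustrated on any ill-conditioned linear system. The snippet below is a generic sketch, not the paper's formulation: an 8x8 Hilbert matrix stands in for the ill-conditioned Trefftz matrix, and the damping parameter `lam` is chosen arbitrarily for illustration.

    ```python
    import numpy as np

    def tikhonov_solve(A, b, lam):
        """Solve min ||Ax - b||^2 + lam*||x||^2 via the regularized normal equations."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # Hilbert matrix: a standard example of severe ill-conditioning.
    n = 8
    A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    x_true = np.ones(n)
    b = A @ x_true + 1e-6 * np.random.default_rng(0).standard_normal(n)  # tiny noise

    x_naive = np.linalg.solve(A, b)          # direct solve: the noise blows up
    x_reg = tikhonov_solve(A, b, lam=1e-6)   # damped solve: stays near x_true
    print("naive error:      ", np.linalg.norm(x_naive - x_true))
    print("regularized error:", np.linalg.norm(x_reg - x_true))
    ```

    The choice of `lam` trades bias against noise amplification; in practice it is picked by an L-curve or generalized cross-validation rather than by hand, and the GSVD mentioned in the abstract plays the analogous filtering role for generalized eigenproblems.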

  15. Balance Problems

    MedlinePlus

    ... often, it could be a sign of a balance problem. Balance problems can make you feel unsteady or as ... fall-related injuries, such as hip fracture. Some balance problems are due to problems in the inner ...

  16. The Application of High-Resolution Electron Microscopy to Problems in Solid State Chemistry: The Exploits of a Peeping TEM.

    ERIC Educational Resources Information Center

    Eyring, LeRoy

    1980-01-01

    Describes methods for using the high-resolution electron microscope in conjunction with other tools to reveal the identity and environment of atoms. Problems discussed include the ultimate structure of real crystalline solids, including defect structure, and the mechanisms of chemical reactions. (CS)

  17. Goal Priming and the Emotional Experience of Students with and without Attention Problems: An Application of the Emotional Stroop Task

    ERIC Educational Resources Information Center

    Sideridis, Georgios; Vansteenkiste, Maarten; Shiakalli, Maria; Georgiou, Maria; Irakleous, Ioanna; Tsigourla, Ioanna; Fragioudaki, Eirini

    2009-01-01

    The primary purpose of the present study was to evaluate the emotional experience of students with (n = 52) and without (n = 272) attention problems during an achievement task. A secondary purpose was to compare students' emotional responses to various stimuli when motivated by various achievement goals. Participants were…

  18. Algorithm for finding partitionings of hard variants of boolean satisfiability problem with application to inversion of some cryptographic functions.

    PubMed

    Semenov, Alexander; Zaikin, Oleg

    2016-01-01

    In this paper we propose an approach for constructing partitionings of hard variants of the Boolean satisfiability problem (SAT). Such partitionings can be used to solve the corresponding SAT instances in parallel. For the same SAT instance one can construct different partitionings, each of which is a set of simplified versions of the original SAT instance. The effectiveness of an arbitrary partitioning is determined by the total time needed to solve all SAT instances in it. We suggest an approach, based on the Monte Carlo method, for estimating the processing time of an arbitrary partitioning. With each partitioning we associate a point in a special finite search space; the effectiveness estimate of a particular partitioning is the value of a predictive function at the corresponding point of this space. The problem of searching for an effective partitioning can thus be formulated as a problem of optimizing the predictive function. We use metaheuristic algorithms (simulated annealing and tabu search) to move from point to point in the search space. In our computational experiments we found partitionings for SAT instances encoding problems of inversion of some cryptographic functions. Several of these SAT instances with realistic predicted solving time were successfully solved on a computing cluster and in the volunteer computing project SAT@home. The solving time agrees well with the estimations obtained by the proposed method. PMID:27190753
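
    The search loop described above — a metaheuristic guided by a noisy, Monte Carlo-style cost estimate — can be sketched generically. The toy below is not the authors' SAT encoding: the cost model, the "ideal" variable set, and the cooling schedule are all illustrative assumptions; it only shows simulated annealing over a discrete space with a noisy objective.

    ```python
    import math
    import random

    # Toy stand-in for the predictive function: the noisy estimated total solving
    # time of a partitioning defined by a subset of variables. The "ideal" set
    # and the cost model are invented for illustration.
    def estimated_cost(subset, rng):
        ideal = {2, 5, 7}
        base = 10.0 + 5.0 * len(set(subset) ^ ideal)
        return base + rng.gauss(0.0, 0.5)  # Monte Carlo sampling noise

    def neighbor(subset, n_vars, rng):
        """Flip one variable in/out of the partitioning set."""
        s = set(subset)
        s ^= {rng.randrange(n_vars)}
        return frozenset(s)

    def simulated_annealing(n_vars=10, steps=2000, t0=5.0, seed=1):
        rng = random.Random(seed)
        cur = frozenset(rng.sample(range(n_vars), 3))
        cur_cost = estimated_cost(cur, rng)
        best, best_cost = cur, cur_cost
        for k in range(steps):
            t = t0 * (1.0 - k / steps) + 1e-3  # linear cooling schedule
            cand = neighbor(cur, n_vars, rng)
            cost = estimated_cost(cand, rng)
            # Accept improvements always; accept worsenings with Boltzmann probability.
            if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / t):
                cur, cur_cost = cand, cost
                if cost < best_cost:
                    best, best_cost = cand, cost
        return best, best_cost

    best, best_cost = simulated_annealing()
    print(sorted(best), round(best_cost, 2))
    ```

    In the paper's setting each cost evaluation is itself expensive (Monte Carlo runs of a SAT solver on sampled subproblems), so the annealer's budget of evaluations, not the move logic, dominates the design.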

  19. An Exploration of Developing Active Exploring and Problem Solving Skill Lego Robot Course by the Application of Anchored Instruction Theory

    ERIC Educational Resources Information Center

    Chen, Chen-Yuan

    2013-01-01

    In recent years, research has shown that the development of problem-solving skills has become important in education, and that educational robots can help students not only understand physical and mathematical concepts but also engage in active and constructive learning. Meanwhile, the importance of situation in education is rising,…

  20. Environmental Correlates of Gambling Behavior among College Students: A Partial Application of Problem Behavior Theory to Gambling

    ERIC Educational Resources Information Center

    Wickwire, Emerson M., Jr.; McCausland, Claudia; Whelan, James P.; Luellen, Jason; Meyers, Andrew W.; Studaway, Adrienne

    2008-01-01

    This study explored the relation between gambling behavior among college students and the perceived environment, the component of problem behavior theory (Jessor & Jessor, 1977) that assesses the ways that youth perceive their parents and peers. Two hundred and thirty-three ethnically diverse undergraduates at a large urban public university…