A Global Sensitivity Analysis Methodology for Multi-physics Applications
Tong, C H; Graziani, F R
2007-02-02
Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to physical experiments as well as computer experiments, the latter of which are performed by running simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variability of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on the reduced set of parameters. Once the most sensitive parameters are identified, research effort should be directed to them to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be applied recursively to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details of each step are given using simple examples. Numerical results on large-scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
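The parameter screening step (2) is commonly implemented with a one-at-a-time elementary-effects (Morris) design. The abstract does not specify which screening method PSUADE uses, so the following is a generic, minimal sketch of elementary-effects screening on the unit hypercube; the toy `model` and all parameter values are illustrative assumptions, not taken from the report.

```python
import random
import statistics

def morris_screening(f, n_params, n_paths=20, delta=0.1, seed=0):
    """One-at-a-time elementary-effects (Morris) screening.

    Returns (mu_star, sigma): the mean absolute elementary effect and its
    standard deviation for each parameter on the unit hypercube. Large
    mu_star flags a sensitive parameter; large sigma flags interactions
    or nonlinearity.
    """
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_paths):
        # Random base point, leaving room for the +delta perturbation.
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(n_params)]
        y = f(x)
        for i in rng.sample(range(n_params), n_params):  # random order
            x_new = list(x)
            x_new[i] += delta
            y_new = f(x_new)
            effects[i].append((y_new - y) / delta)
            x, y = x_new, y_new  # chain the trajectory
    mu_star = [statistics.mean(abs(e) for e in es) for es in effects]
    sigma = [statistics.stdev(es) if len(es) > 1 else 0.0 for es in effects]
    return mu_star, sigma

# Hypothetical toy model: x0 dominates, x1 is moderate, x2 is inert.
model = lambda x: 10.0 * x[0] + 2.0 * x[1] + x[0] * x[1]
mu_star, sigma = morris_screening(model, n_params=3)
```

A screening run like this ranks the three inputs with only `n_paths * (n_params + 1)` model evaluations, after which the quantitative analysis of step (3) can be restricted to the parameters with large `mu_star`.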
Partitioned coupling strategies for multi-physically coupled radiative heat transfer problems
Wendt, Gunnar; Erbts, Patrick; Düster, Alexander
2015-11-01
This article aims to propose new aspects concerning a partitioned solution strategy for multi-physically coupled fields including the physics of thermal radiation. Particularly, we focus on the partitioned treatment of electro–thermo-mechanical problems with an additional fourth thermal radiation field. One of the main goals is to take advantage of the flexibility of the partitioned approach to enable combinations of different simulation software and solvers. Within the frame of this article, we limit ourselves to the case of nonlinear thermoelasticity at finite strains, using temperature-dependent material parameters. For the thermal radiation field, diffuse radiating surfaces and gray participating media are assumed. Moreover, we present a robust and fast partitioned coupling strategy for the fourth field problem. Stability and efficiency of the implicit coupling algorithm are improved drawing on several methods to stabilize and to accelerate the convergence. To conclude and to review the effectiveness and the advantages of the additional thermal radiation field several numerical examples are considered to study the proposed algorithm. In particular we focus on an industrial application, namely the electro–thermo-mechanical modeling of the field-assisted sintering technology.
Specification of the Advanced Burner Test Reactor Multi-Physics Coupling Demonstration Problem
Shemon, E. R.; Grudzinski, J. J.; Lee, C. H.; Thomas, J. W.; Yu, Y. Q.
2015-12-21
This document specifies the multi-physics nuclear reactor demonstration problem using the SHARP software package developed by NEAMS. The SHARP toolset simulates the key coupled physics phenomena inside a nuclear reactor. The PROTEUS neutronics code models the neutron transport within the system, the Nek5000 computational fluid dynamics code models the fluid flow and heat transfer, and the DIABLO structural mechanics code models structural and mechanical deformation. The three codes are coupled to the MOAB mesh framework which allows feedback from neutronics, fluid mechanics, and mechanical deformation in a compatible format.
Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit
Merzari, E.; Shemon, E. R.; Yu, Y. Q.; Thomas, J. W.; Obabko, A.; Jain, Rajeev; Mahadevan, Vijay; Tautges, Timothy; Solberg, Jerome; Ferencz, Robert Mark; Whitesides, R.
2015-12-21
This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor (ABTR) have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which provides a demonstration of the feasibility of the fully integrated simulation.
DAG Software Architectures for Multi-Scale Multi-Physics Problems at Petascale and Beyond
NASA Astrophysics Data System (ADS)
Berzins, Martin
2015-03-01
The challenge of computations at petascale and beyond is to make possible efficient calculations on hundreds of thousands of cores or on large numbers of GPUs or Intel Xeon Phis. An important methodology for achieving this is at present thought to be asynchronous task-based parallelism. The success of this approach will be demonstrated using the Uintah software framework for the solution of coupled fluid-structure interaction problems with chemical reactions. The layered approach of this software makes it possible for the user to specify the physical problem without parallel code, and for that specification to be translated into a parallel set of tasks. These tasks are executed using a runtime system that runs them asynchronously and sometimes out of order. The scalability and portability of this approach will be demonstrated using examples from large-scale combustion problems, industrial detonations and multi-scale, multi-physics models. The challenges of scaling such calculations to the next generations of leadership-class computers (with more than a hundred petaflops) will be discussed. Thanks to NSF, XSEDE, DOE NNSA, DOE NETL, DOE ALCC and DOE INCITE.
NASA Astrophysics Data System (ADS)
Spiegelman, M. W.; Wilson, C. R.; Van Keken, P. E.
2013-12-01
We announce the release of a new software infrastructure, TerraFERMA, the Transparent Finite Element Rapid Model Assembler, for the exploration and solution of coupled multi-physics problems. The design of TerraFERMA is driven by two overarching computational needs in the Earth sciences. The first is the need for increased flexibility in both problem description and solution strategies for coupled problems, where small changes in model assumptions can often lead to dramatic changes in physical behavior. The second is the need for software and models that are more transparent, so that results can be verified, reproduced and modified in a manner such that the best ideas in computation and Earth science can be more easily shared and reused. TerraFERMA leverages three advanced open-source libraries for scientific computation that provide high-level problem description (FEniCS), composable solvers for coupled multi-physics problems (PETSc) and a science-neutral options-handling system (SPuD) that allows the hierarchical management of all model options. TerraFERMA integrates these libraries into an easier-to-use interface that organizes the scientific and computational choices required in a model into a single options file, from which a custom compiled application is generated and run. Because all models share the same infrastructure, models become more reusable and reproducible. TerraFERMA inherits much of its functionality from the underlying libraries. It currently solves partial differential equations (PDE) using finite element methods on simplicial meshes of triangles (2D) and tetrahedra (3D). The software is particularly well suited for non-linear problems with complex coupling between components. We demonstrate the design and utility of TerraFERMA through examples of thermal convection and magma dynamics. TerraFERMA has been tested successfully against over 45 benchmark problems from 7 publications in incompressible and compressible convection, magmatic solitary waves
Final report on LDRD project : coupling strategies for multi-physics applications.
Hopkins, Matthew Morgan; Moffat, Harry K.; Carnes, Brian; Hooper, Russell Warren; Pawlowski, Roger P.
2007-11-01
Many current and future modeling applications at Sandia, including ASC milestones, will critically depend on the simultaneous solution of vastly different physical phenomena. Issues due to code coupling are often not addressed, understood, or even recognized. The objectives of this LDRD have been both theoretical analysis and code development. We show that we have provided a fundamental analysis of coupling, i.e., when strong coupling versus a successive-substitution strategy is needed. We have enabled the implementation of tighter coupling strategies through additions to the NOX and Sierra code suites to make coupling strategies available now, leveraging existing functionality to do so. Specifically, we have built into NOX the capability to handle fully coupled simulations from multiple codes, as well as the capability to handle Jacobian-free Newton-Krylov simulations that link multiple applications. We show how this capability may be accessed from within the Sierra Framework as well as from outside of Sierra. The critical impact of this LDRD is that we have shown how to enable, and have delivered strategies for, strong Newton-based coupling while respecting the modularity of existing codes. This will facilitate the use of these codes in a coupled manner to solve multi-physics applications.
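The distinction the report draws, successive substitution (each code repeatedly solves its own field with the other field frozen) versus a fully coupled Newton solve on the combined residual, can be illustrated on a toy two-field fixed-point problem. This sketch is purely illustrative: the equations, function names, and the finite-difference Jacobian stand in for the NOX machinery and are not taken from the Sandia codes.

```python
import math

def coupled_residual(x):
    """Toy two-'physics' system: R = 0 exactly when both relations
    u = 0.5*cos(v) ('code A') and v = 0.5*sin(u) ('code B') hold."""
    u, v = x
    return [u - 0.5 * math.cos(v), v - 0.5 * math.sin(u)]

def successive_substitution(x, iters):
    """Loose coupling: alternate single-physics solves, each using the
    other field's latest value."""
    u, v = x
    for _ in range(iters):
        u = 0.5 * math.cos(v)   # "code A" solves for u with v frozen
        v = 0.5 * math.sin(u)   # "code B" solves for v with new u
    return [u, v]

def newton_fd(x, iters, h=1e-7):
    """Strong coupling: Newton on the combined residual, with a 2x2
    finite-difference Jacobian (a stand-in for Jacobian-free products)."""
    for _ in range(iters):
        r = coupled_residual(x)
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            xp = list(x)
            xp[j] += h
            rp = coupled_residual(xp)
            for i in range(2):
                J[i][j] = (rp[i] - r[i]) / h
        # Solve J * dx = -r by Cramer's rule (2x2 system).
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx0 = (-r[0] * J[1][1] + r[1] * J[0][1]) / det
        dx1 = (-r[1] * J[0][0] + r[0] * J[1][0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x
```

On this mildly coupled problem both strategies converge; the practical point of the LDRD is that when the inter-field coupling is strong, the fixed-point iteration stalls or diverges while the Newton iteration on the combined residual does not.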
Development of High-Order Method for Multi-Physics Problems Governed by Hyperbolic Equations
2012-08-01
Topics include implicit time marching with large time steps and, as background, the one-equation Spalart-Allmaras (SA) turbulence model in conservative form, citing Spalart, Jou, Strelets and Allmaras, "Comments on the feasibility of LES for wings, and on a hybrid RANS/LES approach." High-order methods offer significant advantages for the simulation of complex flows and turbulence in non-trivial geometries of interest to practical applications.
Module-based Hybrid Uncertainty Quantification for Multi-physics Applications: Theory and Software
Tong, Charles; Chen, Xiao; Iaccarino, Gianluca; Mittal, Akshay
2013-10-08
In this project we proposed to develop an innovative uncertainty quantification (UQ) methodology that captures the best of the two competing approaches in UQ, namely, the intrusive and non-intrusive approaches. The idea is to develop the mathematics and the associated computational framework and algorithms to facilitate the use of intrusive or non-intrusive UQ methods in different modules of a multi-physics, multi-module simulation model, in a way that physics code developers for different modules are shielded (as much as possible) from the chores of accounting for the uncertainties introduced by the other modules. As a result of our research and development, we have produced a number of publications, conference presentations, and a software product.
Computational Methods for Multi-physics Applications with Fluid-structure Interaction
2010-10-01
Fluid-structure interaction problems require studying the complex nonlinear interactions between structural deformation and the flow field that arise in applications such as blood-flow interaction with an arterial wall or the computational aeroelasticity of flexible micro-air vehicles. Background cited includes the computational multilevel approach of Aulisa, Manservisi and Seshaiyer for solving the 2D Navier-Stokes equations over non-matching grids, developed over the last two decades within the domain-decomposition setting.
A theory manual for multi-physics code coupling in LIME.
Belcourt, Noel; Bartlett, Roscoe Ainsworth; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren
2011-03-01
The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is when computer codes are already available to solve different parts of a multi-physics problem and now need to be coupled with one another. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with the multi-physics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time-dependent coupled systems. Example couplings are also demonstrated.
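For time-dependent coupled systems, the basic choice the LIME theory addresses is between a monolithic implicit step on all fields at once and a staggered (loosely coupled) step in which each physics advances with the other field frozen. A minimal sketch of that contrast, using a hypothetical linear two-field system x' = A x with A = [[-2, 1], [1, -2]] (chosen here for illustration, not drawn from the report), looks like this:

```python
import math

def fully_coupled_be(x0, dt, steps):
    """Monolithic backward Euler on x' = A x, A = [[-2, 1], [1, -2]]:
    each step solves the 2x2 system (I - dt*A) x_new = x_old exactly."""
    u, v = x0
    a, b = 1.0 + 2.0 * dt, -dt           # (I - dt*A) = [[a, b], [b, a]]
    det = a * a - b * b
    for _ in range(steps):
        u, v = (a * u + dt * v) / det, (a * v + dt * u) / det
    return u, v

def staggered_be(x0, dt, steps):
    """Loose (staggered) coupling: each 'physics' takes its own backward
    Euler step with the other field frozen at the latest available value."""
    u, v = x0
    for _ in range(steps):
        u = (u + dt * v) / (1.0 + 2.0 * dt)   # u' = -2u + v, v frozen
        v = (v + dt * u) / (1.0 + 2.0 * dt)   # v' = u - 2v, uses new u
    return u, v

def exact(t):
    """Closed-form solution for x0 = (1, 0) via the eigenmodes of A."""
    return (0.5 * (math.exp(-t) + math.exp(-3.0 * t)),
            0.5 * (math.exp(-t) - math.exp(-3.0 * t)))
```

For this weakly coupled, stable system both schemes converge at first order; the staggered step simply adds an O(dt) splitting error. The coupling-theory question is when that splitting error (or a stability limit) forces the monolithic, tightly coupled formulation.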
NASA Astrophysics Data System (ADS)
Thoma, M.; Grosfeld, K.; Barbi, D.; Determann, J.; Göller, S.; Mayer, C.; Pattyn, F.
2013-06-01
Glaciers and ice caps currently make the largest cryospheric contributions to sea-level rise. Modelling the dynamics and mass balance of the major ice sheets is therefore an important issue for investigating the current state and the future response of the cryosphere to changing environmental conditions, namely global warming. This requires a powerful, easy-to-use, scalable multi-physics ice dynamics model. Based on the well-known and established ice sheet model of Pattyn (2003), we develop the modular multi-physics thermomechanical ice model RIMBAY, in which we improve the original version in several aspects, such as a shallow-ice-shallow-shelf coupler and a full 3-D grounding-line migration scheme based on Schoof's (2007) heuristic analytical approach. We summarise the Full-Stokes equations and the several approximations implemented within this model, and we describe the different numerical discretisations. The results are cross-validated against previous publications dealing with ice modelling, and some additional artificial set-ups demonstrate the robustness of the different solvers and their internal coupling. RIMBAY is designed for easy adaptation to new scientific issues. Hence, we demonstrate in very different set-ups the applicability and functionality of RIMBAY in Earth system science in general and ice modelling in particular.
NASA Astrophysics Data System (ADS)
Poulet, Thomas; Paesold, Martin; Veveakis, Manolis
2017-03-01
Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.
Multi-physics design of microvascular materials for active cooling applications
NASA Astrophysics Data System (ADS)
Aragón, Alejandro M.; Smith, Kyle J.; Geubelle, Philippe H.; White, Scott R.
2011-06-01
This paper describes a framework for the design of microvascular polymeric components for active cooling applications. The design of the embedded networks involves complex and competing objectives that are associated with various physical processes. The optimization tool includes a PDE solver based on advanced finite element techniques coupled to a multi-objective constrained genetic algorithm. The resulting Pareto-optimal fronts are investigated in the optimization of these materials for void volume fraction, flow efficiency, maximum temperature, and surface convection objective functions.
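The multi-objective genetic algorithm mentioned above returns Pareto-optimal fronts: designs that no other design improves in every objective at once. The dominance filter at the heart of that idea can be sketched in a few lines; the two-objective design tuples below are hypothetical stand-ins for the paper's four objectives (void volume fraction, flow efficiency, maximum temperature, surface convection).

```python
def pareto_front(points):
    """Return the non-dominated subset of `points` (all objectives
    minimized). A point q dominates p if q <= p in every objective and
    q != p; dominated points are discarded."""
    front = []
    for p in points:
        dominated = any(
            all(qo <= po for qo, po in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical designs scored on (maximum temperature, pumping cost).
designs = [(80.0, 3.0), (70.0, 5.0), (90.0, 2.0), (85.0, 4.0), (75.0, 4.0)]
front = pareto_front(designs)
```

Here (85.0, 4.0) is dropped because (80.0, 3.0) is better in both objectives; the surviving front exposes the trade-off curve from which a designer picks an operating point.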
Modeling and simulation of multi-physics multi-scale transport phenomenain bio-medical applications
NASA Astrophysics Data System (ADS)
Kenjereš, Saša
2014-08-01
We present a short overview of some of our most recent work that combines mathematical modeling, advanced computer simulations and state-of-the-art experimental techniques for physical transport phenomena in various bio-medical applications. In the first example, we tackle predictions of complex blood flow patterns in a patient-specific vascular system (carotid artery bifurcation) and the transfer of so-called "bad" cholesterol (low-density lipoprotein, LDL) within the multi-layered artery wall. This two-way coupling between the blood flow and the corresponding mass transfer of LDL within the artery wall is essential for predicting regions where atherosclerosis can develop. It is demonstrated that a recently developed mathematical model, which takes into account the complex multi-layer arterial-wall structure, produced LDL profiles within the artery wall in good agreement with in-vivo experiments in rabbits, and it can be used to predict locations where the initial stage of atherosclerosis development may take place. The second example includes a combination of pulsating blood flow and medical drug delivery and deposition controlled by external magnetic field gradients in a patient-specific carotid artery bifurcation. The results of numerical simulations are compared with our own PIV (Particle Image Velocimetry) and MRI (Magnetic Resonance Imaging) measurements in a PDMS (silicon-based organic polymer) phantom. A very good agreement between simulations and experiments is obtained for different stages of the pulsating cycle. Application of magnetic drug targeting resulted in an increase of up to tenfold in the efficiency of local deposition of the medical drug at desired locations. Finally, an LES (Large Eddy Simulation) of the aerosol distribution within the human respiratory system that includes up to eight bronchial generations is performed. A very good agreement between simulations and MRV (Magnetic Resonance Velocimetry) measurements is obtained
Multi-physics CFD simulations in engineering
NASA Astrophysics Data System (ADS)
Yamamoto, Makoto
2013-08-01
Nowadays, Computational Fluid Dynamics (CFD) software is adopted as a design and analysis tool in a great number of engineering fields, and single-physics CFD can be said to have sufficiently matured from a practical point of view. The main target of existing CFD software is single-phase flows such as water and air. However, many multi-physics problems exist in engineering. Most of them consist of flow coupled with other physics, and the interactions between the different physics are very important. Obviously, multi-physics phenomena are critical in developing machines and processes. A multi-physics phenomenon tends to be very complex, and it is difficult to predict simply by adding other physics to the flow phenomenon. Consequently, multi-physics CFD techniques are still under research and development. This stems from the facts that the processing speed of current computers is not fast enough for conducting multi-physics simulations, and that physical models other than flow physics have not been suitably established. Therefore, in the near future, we have to develop various physical models and efficient CFD techniques in order to make multi-physics simulations in engineering successful. In the present paper, I describe the present state of multi-physics CFD simulations and then show some numerical results, such as ice accretion and the electro-chemical machining process of a three-dimensional compressor blade, which were obtained in my laboratory. Multi-physics CFD simulation is likely to be a key technology in the near future.
Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.
2014-11-23
This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department Of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis.
NASA Astrophysics Data System (ADS)
Cocheteau, N.; Maurel-Pantel, A.; Lebon, F.; Rosu, I.; Ait-Zaid, S.; Savin de Larclause, I.; Salaun, Y.
2014-06-01
Direct bonding is a well-known process. However, in order to use this process in spatial instrument fabrication, the mechanical resistance needs to be quantified precisely. To improve bonding strength, optimal parameters of the process are found by studying the influence of annealing time, temperature and roughness, which are investigated using three experimental methods: double-shear, cleavage and wedge tests. These parameters are chosen owing to the appearance of a time/temperature equivalence. All results support the implementation of a multi-physics model to predict the mechanical behavior of the direct-bonding interface.
NASA Astrophysics Data System (ADS)
Jayanthi, Aditya; Coker, Christopher
2016-11-01
In the last decade, CFD simulations have transitioned from the stage where they are used to validate final designs to the mainstream development of products driven by simulation. However, there are still niche areas of application, such as oiling simulations, where traditional CFD simulation times are prohibitive for product development, forcing reliance on experimental methods, which are expensive. In this paper a unique example of a sprocket-chain simulation will be presented using nanoFluidX, a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The gridless nature of the SPH method has inherent advantages in areas of application with complex geometry, which pose severe challenges to classical finite-volume CFD methods due to complex moving geometries, moving meshes and high resolution requirements leading to long simulation times. Simulation times using nanoFluidX can be reduced from weeks to days, allowing the flexibility to run more simulations, so the method can be used in mainstream product development. The example problem under consideration is a classical multi-physics problem, and a sequentially coupled solution of MotionSolve and nanoFluidX will be presented. This abstract is replacing DFD16-2016-000045.
Salko, Robert K; Schmidt, Rodney; Avramova, Maria N
2014-01-01
This paper describes major improvements to the computational infrastructure of the CTF sub-channel code so that full-core, sub-channel-resolved simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy (DOE) Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations--including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices--is first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a Single Program Multiple Data (SPMD) parallelization strategy targeting distributed-memory Multiple Instruction Multiple Data (MIMD) platforms and utilizing domain decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard MPI calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pre-processor utility that takes a greatly reduced set of user input compared to the traditional CTF input file. To run CTF in parallel, two additional libraries are currently needed: MPI, for inter-processor message passing, and the Parallel Extensible Toolkit for Scientific Computation (PETSc), which is leveraged to solve the global pressure matrix in parallel. Results presented include a set of testing and verification calculations and performance tests assessing parallel scaling characteristics up to a full-core, sub-channel-resolved model of Watts Bar Unit 1 under hot full-power conditions (193 17x17
Mechanics: Ideas, problems, applications
NASA Astrophysics Data System (ADS)
Ishlinskii, A. Iu.
The book contains the published articles and reports by academician Ishlinskii which deal with the concepts and ideas of modern mechanics, its role in providing a general understanding of the natural phenomena, and its applications to various problems in science and engineering. Attention is given to the methodological aspects of mechanics, to the history of the theories of plasticity, friction, gyroscopic and inertial systems, and inertial navigation, and to mathematical methods in mechanics. The book also contains essays on some famous scientists and engineers.
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High-fidelity simulation of nuclear reactors entails large-scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high-fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high-fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large-scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts in adaptive core simulation and reduced-order modeling algorithms and extends them towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced-order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower-dimensional subspace construction have been developed for single-physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large-scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large-scale, high-dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL
Petascale computation of multi-physics seismic simulations
NASA Astrophysics Data System (ADS)
Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie; Duru, Kenneth C.
2017-04-01
Capturing the observed complexity of earthquake sources in concurrence with seismic wave propagation simulations is an inherently multi-scale, multi-physics problem. In this presentation, we present simulations of earthquake scenarios resolving high-detail dynamic rupture evolution and high-frequency ground motion. The simulations combine a multitude of representations of model complexity: non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and fault strength initial conditions, fault curvature and roughness, and on- and off-fault non-elastic failure to capture dynamic rupture behavior at the source; and seismic wave attenuation, 3D subsurface structure and bathymetry impacting seismic wave propagation. Performing such scenarios at the necessary spatio-temporal resolution requires highly optimized and massively parallel simulation tools which can efficiently exploit HPC facilities. Our up to multi-PetaFLOP simulations are performed with SeisSol (www.seissol.org), an open-source software package based on an ADER-Discontinuous Galerkin (DG) scheme solving the seismic wave equations in velocity-stress formulation in elastic, viscoelastic, and viscoplastic media with high-order accuracy in time and space. Our flux-based implementation of frictional failure remains free of spurious oscillations. Tetrahedral unstructured meshes allow for complicated model geometry. SeisSol has been optimized on all software levels, including: assembler-level DG kernels which obtain 50% peak performance on some of the largest supercomputers worldwide; an overlapping MPI-OpenMP parallelization shadowing the multi-physics computations; usage of local time stepping; parallel input and output schemes; and direct interfaces to community-standard data formats. All these factors help to minimise the time-to-solution. The results presented highlight the fact that modern numerical methods and hardware-aware optimization for modern supercomputers are essential
Fracture Characterization through Multi-Physics Joint Inversion
NASA Astrophysics Data System (ADS)
Finsterle, S.; Edmiston, J. K.; Zhang, Y.
2014-12-01
Natural and man-made fractures tend to significantly impact the behavior of a subsurface system - with both desirable and undesirable consequences. Thus, the description, characterization, and prediction of fractured systems require careful conceptualization and a defensible modeling approach that is tailored to the objectives of a specific application. We review some of these approaches and the related data needs, and discuss the use of multi-physics joint inversion techniques to identify and characterize the relevant features of the fracture system. In particular, we demonstrate the potential use of a non-isothermal, multiphase flow simulator coupled to a thermo-poro-elastic model for the calculation of observable deformations during injection-production operations. This model is integrated into a joint inversion framework for the estimation of geometrical, hydrogeological, rock-mechanical, thermal, and statistical parameters representing the fractured porous medium.
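The joint-inversion step can be illustrated with a toy Gauss-Newton loop over residuals stacked from two physics. The scalar forward models, parameter names, and weights below are illustrative assumptions, not the simulators or parameters used by the authors.

```python
import numpy as np

# Toy joint inversion sketch (hypothetical forward models, not the codes used
# in the paper): estimate a permeability-like and a stiffness-like parameter
# m = (m1, m2) from two data types at once.
def forward_flow(m, t):      # pressure-like response, depends only on m1
    return m[0] * np.exp(-m[0] * t)

def forward_mech(m, t):      # deformation-like response, couples m1 and m2
    return m[1] * t + 0.1 * m[0]

def joint_residual(m, t, d_flow, d_mech, w_flow=1.0, w_mech=1.0):
    # Stack weighted residuals from both physics into one vector.
    return np.concatenate([w_flow * (forward_flow(m, t) - d_flow),
                           w_mech * (forward_mech(m, t) - d_mech)])

def gauss_newton(m0, t, d_flow, d_mech, iters=20, eps=1e-6):
    m = np.array(m0, float)
    for _ in range(iters):
        r = joint_residual(m, t, d_flow, d_mech)
        # Finite-difference Jacobian of the stacked residual.
        J = np.empty((r.size, m.size))
        for j in range(m.size):
            dm = np.zeros_like(m); dm[j] = eps
            J[:, j] = (joint_residual(m + dm, t, d_flow, d_mech) - r) / eps
        m -= np.linalg.lstsq(J, r, rcond=None)[0]
    return m

t = np.linspace(0.0, 2.0, 25)
m_true = np.array([1.5, 0.8])
m_est = gauss_newton([1.0, 1.0], t, forward_flow(m_true, t), forward_mech(m_true, t))
print(m_est)  # converges to ~[1.5, 0.8] on this noise-free toy problem
```

The same structure carries over to real joint inversions: only the forward models, the residual weighting, and the parameter vector grow.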
Multi-physics/scale simulations using particles
NASA Astrophysics Data System (ADS)
Koumoutsakos, Petros
2006-03-01
Particle simulations of continuum and discrete phenomena can be formulated by following the motion of interacting particles that carry the physical properties of the system being approximated (continuum) or modeled (discrete) by the particles. We identify the common computational characteristics of particle methods and emphasize the key properties that enable the formulation of a novel, systematic framework for multiscale simulations applicable to diverse physical problems. We present novel multiresolution particle methods for continuum (fluid/solid) simulations, using adaptive mesh refinement and wavelets, by relaxing the grid-free character of particle methods, and we discuss the coupling of scales in continuum-atomistic flow simulations.
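A minimal, textbook-level example of the particle paradigm described above (particles carrying a physical quantity and moving with the field they collectively induce) is a 2D point-vortex simulation; this is our illustrative sketch, not the authors' multiresolution method.

```python
import numpy as np

# Particles carry circulation and move with the velocity field they induce
# (2D point vortices, Biot-Savart interaction, forward-Euler time stepping).
def vortex_velocity(pos, gamma):
    n = len(pos)
    vel = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            r2 = d @ d
            # 2D Biot-Savart kernel: u = Gamma/(2 pi) * (-dy, dx) / r^2
            vel[i] += gamma[j] / (2 * np.pi * r2) * np.array([-d[1], d[0]])
    return vel

pos = np.array([[-0.5, 0.0], [0.5, 0.0]])   # two co-rotating vortices
gamma = np.array([1.0, 1.0])
dt = 1e-3
for _ in range(1000):
    pos = pos + dt * vortex_velocity(pos, gamma)

# The pair rotates about its centroid; the separation is (nearly) conserved.
print(np.linalg.norm(pos[0] - pos[1]))  # stays close to 1.0
```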
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
Cetiner, Mustafa Sacit; none,; Flanagan, George F.; Poore III, Willis P.; Muhlheim, Michael David
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at integrating these two types of analyses for a control system used for operations on a faster-than-real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (DLL) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the DLL. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
A multi-physical model of actuation response in dielectric gels
NASA Astrophysics Data System (ADS)
Li, Bo; Chang, LongFei; Asaka, Kinji; Chen, Hualing; Li, Dichen
2016-12-01
Actuation deformation of a dielectric gel is attributed to solvent diffusion, electrical polarization, and material hyperelasticity. A multi-physical model coupling electrical and mechanical quantities is established based on thermodynamics. A set of constitutive relations is derived as an equation of state for characterization. The model is applied to specific cases for validation. Physical and chemical parameters affect the performance of the gel, which shows nonlinear deformation and instability. This model offers guidance for engineering applications.
Modelling transport phenomena in a multi-physics context
Marra, Francesco
2015-01-22
Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on the assessment of processing time, the evaluation of heating uniformity, the impact on quality attributes of the final product, and the energy efficiency of these heating processes. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwave - MW and ohmic - OH) gained wide interest in industrial food processing, and many applications using the above-mentioned technologies have been developed with the aim of reducing processing time, improving process efficiency and, in many cases, heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications. It can be subdivided into direct electro-heating (as in the case of OH heating), where electrical current is applied directly to the food, and indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation which subsequently generates heat within the product. New software packages, which ease the solution of PDE-based mathematical models, and new computers, with larger RAM and more efficient CPUs, have fed increasing interest in modelling transport phenomena in systems and processes - such as those encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in the case of electro-heating-assisted applications - in terms of interaction with other physical phenomena such as displacement of electric or magnetic fields. This paper describes approaches used in modelling transport phenomena in a multi-physics context such as RF-, MW- and OH-assisted heating.
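As a heavily simplified instance of the coupled modelling discussed here, the sketch below solves 1D transient conduction in a slab with a uniform volumetric ohmic source Q = sigma*|E|^2; all property values are illustrative placeholders, not data from the paper.

```python
import numpy as np

# 1D transient heating of a food slab with a volumetric ohmic source,
# explicit finite differences; values are illustrative placeholders.
L, n = 0.02, 41                   # slab thickness [m], grid points
k, rho, cp = 0.5, 1000.0, 4000.0  # conductivity, density, heat capacity
sigma, E = 2.0, 300.0             # electrical conductivity [S/m], field [V/m]
dx = L / (n - 1)
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha          # stable explicit step (dt < dx^2 / (2 alpha))
Q = sigma * E**2                  # uniform volumetric heat source [W/m^3]

T = np.full(n, 20.0)              # initial temperature [degC]
for _ in range(2000):
    Tn = T.copy()
    T[1:-1] = (Tn[1:-1]
               + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
               + dt * Q / (rho * cp))
    T[0] = T[-1] = 20.0           # slab surfaces held at ambient

print(T.max())  # centre heats most; steady-state rise is Q*L^2/(8k) = 18 degC
```

In RF/MW models the source term Q comes from an electromagnetic solve instead of the constant used here, but the coupling into the energy balance has this same form.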
A self-taught artificial agent for multi-physics computational model personalization.
Neumann, Dominik; Mansi, Tommaso; Itu, Lucian; Georgescu, Bogdan; Kayvanpour, Elham; Sedaghat-Hamedani, Farbod; Amr, Ali; Haas, Jan; Katus, Hugo; Meder, Benjamin; Steidl, Stefan; Hornegger, Joachim; Comaniciu, Dorin
2016-12-01
Personalization is the process of fitting a model to patient data, a critical step towards application of multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts manually perform it. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision process model through exploration of the computational model: it learns how the model behaves under change of parameters. The agent then automatically learns an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting a few hyper-parameters of the agent and defining the observations to match. Full knowledge of the model itself is not required. Vito was tested in a synthetic scenario, showing that it could learn to optimize cost functions generically. Vito was then applied to the inverse problem of cardiac electrophysiology and to the personalization of a whole-body circulation model. The results suggest that Vito can achieve equivalent, if not better, goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and converging faster (up to seven times).
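The reinforcement-learning formulation can be illustrated with a toy Q-learning agent that learns to personalize one parameter of a stand-in monotone model; the states, actions, and rewards below are our illustrative choices, not Vito's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "computational model": a simple monotone function of one
# parameter, not a cardiac model. The agent learns, by interacting with it
# off-line, which way to move the parameter to match a measurement.
def model(p):
    return 2.0 * p + 1.0

STEP, TOL = 0.05, 0.06
ACTIONS = (+STEP, -STEP)

def state(residual):                      # 0: too low, 1: matched, 2: too high
    return 0 if residual < -TOL else (2 if residual > TOL else 1)

Q = np.zeros((3, 2))
for _ in range(500):                      # off-line training episodes
    target = rng.uniform(0.0, 5.0)
    p = rng.uniform(-2.0, 2.0)
    for _ in range(100):
        s = state(model(p) - target)
        if s == 1:
            break
        a = rng.integers(2) if rng.random() < 0.2 else int(np.argmax(Q[s]))
        p += ACTIONS[a]
        r_new = model(p) - target
        s2 = state(r_new)
        Q[s, a] += 0.1 * (-abs(r_new) + 0.9 * Q[s2].max() - Q[s, a])

# On-line personalization with the greedy learned policy:
p, target = -1.0, 3.0                     # true parameter is 1.0
for _ in range(200):
    s = state(model(p) - target)
    if s == 1:
        break
    p += ACTIONS[int(np.argmax(Q[s]))]
print(p)  # ends within TOL of the true parameter 1.0
```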
Multi-Physics Analysis of the Fermilab Booster RF Cavity
Awida, M.; Reid, J.; Yakovlev, V.; Lebedev, V.; Khabiboulline, T.; Champion, M.; /Fermilab
2012-05-14
After about 40 years of operation, the RF accelerating cavities in the Fermilab Booster need an upgrade to improve their reliability and to increase the repetition rate in order to support a future experimental program. An increase in the repetition rate from 7 to 15 Hz entails increasing the power dissipation in the RF cavities, their ferrite-loaded tuners, and HOM dampers. The increased duty factor requires careful modelling of the RF heating effects in the cavity. A multi-physics analysis investigating both the RF and thermal properties of the Booster cavity under various operating conditions is presented in this paper.
IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.
Ha, Vi Q; Lykotrafitis, George
2016-12-08
We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations - an object-oriented, easy-to-use, high-performance C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space, while a network facilitates long-range particle interactions. The Message Passing Interface is used for inter-processor communication in all simulations.
Problems of applicability of statistical methods in cosmology
Levin, S. F.
2015-12-15
The problems arising from the incorrect formulation of measuring problems of identification for cosmological models and violations of conditions of applicability of statistical methods are considered.
Solid Oxide Fuel Cell - Multi-Physics and GUI
2013-10-10
SOFC-MP is a simulation tool developed at PNNL to evaluate the tightly coupled multi-physical phenomena in solid oxide fuel cells (SOFCs). The purpose of the tool is to allow SOFC manufacturers to numerically test changes in planar stack design to meet DOE technical targets. The SOFC-MP 2D module is designed for computational efficiency to enable rapid engineering evaluations of tall symmetric stacks in operation. It can quickly compute distributions of current density, voltage, temperature, and species composition in tall stacks with co-flow or counter-flow orientations. The 3D module computes distributions over the entire 3D domain and handles all planar configurations: co-flow, counter-flow, and cross-flow. The detailed data from a 3D simulation can be used as input for structural analysis. The SOFC-MP GUI integrates both the 2D and 3D modules, and it provides user-friendly pre-processing and post-processing capabilities.
NASA Astrophysics Data System (ADS)
Morsali, Seyedreza; Daryadel, Soheil; Zhou, Zhong; Behroozfar, Ali; Qian, Dong; Minary-Jolandan, Majid
2017-01-01
The capability to print metals at the micro/nanoscale in arbitrary 3D patterns at local points of interest will have applications in nano-electronics and sensors. Meniscus-confined electrodeposition (MCED) is a manufacturing process that deposits metals from an electrolyte-containing nozzle (pipette) in arbitrary 3D patterns. In this process, a meniscus (liquid bridge or capillary) between the pipette tip and the substrate governs the localized electrodeposition. Fabrication of metallic microstructures by MCED is a multi-physics process in which electrodeposition, fluid dynamics, and mass and heat transfer are simultaneously involved. We utilized multi-physics finite element simulation, guided by experimental data, to understand the effect of water evaporation from the liquid meniscus at the tip of the nozzle on the deposition of free-standing copper microwires in the MCED process.
Lithium-Ion Battery Safety Study Using Multi-Physics Internal Short-Circuit Model (Presentation)
Kim, G-.H.; Smith, K.; Pesaran, A.
2009-06-01
This presentation outlines NREL's multi-physics simulation study to characterize an internal short by linking and integrating electrochemical cell, electro-thermal, and abuse reaction kinetics models.
Nonlinear Multidimensional Assignment Problems Efficient Conic Optimization Methods and Applications
2015-06-24
AFRL-AFOSR-VA-TR-2015-0281. Hans Mittelmann; reporting period 2012 to March 2015. The size-16 three-dimensional quadratic assignment problem (Q3AP) from wireless communications was solved using a sophisticated approach.
Multi-physics optimization of three-dimensional microvascular polymeric components
NASA Astrophysics Data System (ADS)
Aragón, Alejandro M.; Saksena, Rajat; Kozola, Brian D.; Geubelle, Philippe H.; Christensen, Kenneth T.; White, Scott R.
2013-01-01
This work discusses the computational design of microvascular polymeric materials, which aim at mimicking the behavior found in some living organisms that contain a vascular system. The topology of the embedded three-dimensional microvascular network is optimized by coupling a multi-objective constrained genetic algorithm with a finite-element-based physics solver, the latter validated through experiments. The optimization is carried out on multiple conflicting objective functions, namely the void volume fraction left by the network, the energy required to drive the fluid through the network, and the maximum temperature when the material is subjected to thermal loads. The methodology presented in this work is a viable alternative for the multi-physics optimization of these materials for active-cooling applications.
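The selection kernel of such a multi-objective search can be sketched with a Pareto-dominance filter; the three minimized objectives below stand in for void volume fraction, pumping energy, and peak temperature, and the candidate values are random placeholders rather than network simulations.

```python
import numpy as np

# Pareto dominance and extraction of the non-dominated set, the core of
# selection in a multi-objective genetic algorithm (all objectives minimized).
def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in one.
    return np.all(a <= b) and np.any(a < b)

def pareto_front(points):
    front = []
    for i, p in enumerate(points):
        if not any(dominates(q, p) for j, q in enumerate(points) if j != i):
            front.append(i)
    return front

rng = np.random.default_rng(1)
objs = rng.random((50, 3))   # 50 candidate designs, 3 objective values each
front = pareto_front(objs)
print(len(front))            # indices of the non-dominated designs
```

A full GA would rank the remaining candidates into successive fronts and add a diversity measure; this filter is only the dominance test at its centre.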
Two-Step Multi-Physics Analysis of an Annular Linear Induction Pump for Fission Power Systems
NASA Technical Reports Server (NTRS)
Geng, Steven M.; Reid, Terry V.
2016-01-01
One of the key technologies associated with fission power systems (FPS) is the annular linear induction pump (ALIP). ALIPs are used to circulate liquid-metal fluid for transporting thermal energy from the nuclear reactor to the power conversion device. ALIPs designed and built to date for FPS project applications have not performed up to expectations. A unique, two-step approach was taken toward the multi-physics examination of an ALIP using ANSYS Maxwell 3D and Fluent. This multi-physics approach was developed so that engineers could investigate design variations that might improve pump performance. Of particular interest was whether simple geometric modifications could be made to the ALIP components with the goal of increasing the Lorentz forces acting on the liquid-metal fluid, which in turn would increase pumping capacity. The multi-physics model first calculates the Lorentz forces acting on the liquid-metal fluid in the ALIP annulus. These forces are then used in a computational fluid dynamics simulation as (a) internal boundary conditions and (b) source functions in the momentum equations within the Navier-Stokes equations. The end result of the two-step analysis is a predicted pump pressure rise that can be compared with experimental data.
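The second step of the coupling (a body force from an electromagnetic solve applied as a momentum source in a flow solve) can be sketched in one dimension, with the annulus idealized as a plane channel and a uniform axial Lorentz force density assumed; none of the numbers come from the pump in the paper.

```python
import numpy as np

# Fully developed channel flow driven by a uniform axial body force density
# f [N/m^3] standing in for the Lorentz force from the electromagnetic step:
#   mu * u''(y) + f = 0,  u(0) = u(h) = 0,
# solved by finite differences and checked against the exact parabola.
mu, h, f, n = 1e-3, 0.01, 50.0, 101   # viscosity, gap, force density, nodes
y = np.linspace(0.0, h, n)
dy = y[1] - y[0]

# Tridiagonal system for interior nodes: (u[i-1] - 2u[i] + u[i+1]) = -f*dy^2/mu
A = np.zeros((n - 2, n - 2))
np.fill_diagonal(A, -2.0)
np.fill_diagonal(A[1:], 1.0)
np.fill_diagonal(A[:, 1:], 1.0)
u = np.zeros(n)
u[1:-1] = np.linalg.solve(A, np.full(n - 2, -f * dy**2 / mu))

u_exact = f / (2 * mu) * y * (h - y)  # exact solution of the model problem
print(np.max(np.abs(u - u_exact)))    # near machine precision on this grid
```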
A multi-physics model for ultrasonically activated soft tissue.
Suvranu De, Rahul
2017-02-01
A multi-physics model has been developed to investigate the effects of cellular-level mechanisms on the thermomechanical response of ultrasonically activated soft tissue. Cellular-level cavitation effects have been incorporated in the tissue-level continuum model to accurately determine thermodynamic states such as temperature and pressure. A viscoelastic material model is assumed for the macromechanical response of the tissue. The cavitation-model-based equation of state supplies the continuum-level thermomechanical model with temperature and with the additional pressure arising from evaporation of intracellular and cellular water, which absorbs heat generated by structural and viscoelastic heating in the tissue. The thermomechanical response of soft tissue is studied over the operational range of oscillation frequencies and applied loads of typical ultrasonically activated surgical instruments. The model is shown to capture characteristics of ultrasonically activated soft tissue deformation and temperature evolution. At the cellular level, evaporation of water below the boiling temperature under ambient conditions is indicative of protein denaturation around the temperature threshold for coagulation of tissues. Further, with increasing operating frequency (or loading), the temperature rises faster, leading to rapid evaporation of tissue cavity water, which may accelerate protein denaturation and coagulation.
Application of boundary integral equations to elastoplastic problems
NASA Technical Reports Server (NTRS)
Mendelson, A.; Albers, L. U.
1975-01-01
The application of boundary integral equations to elastoplastic problems is reviewed. Details of the analysis as applied to torsion problems and to plane problems are discussed. Results are presented for the elastoplastic torsion of a bar of square cross section and for the plane problem of notched beams. A comparison of different formulations as well as comparisons with experimental results are presented.
Multi-Physics Simulation of TREAT Kinetics using MAMMOTH
DeHart, Mark; Gleicher, Frederick; Ortensi, Javier; Alberti, Anthony; Palmer, Todd
2015-11-01
With the advent of next-generation reactor systems and new fuel designs, the U.S. Department of Energy (DOE) has identified the need for the resumption of transient testing of nuclear fuels. DOE has decided that the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory (INL) is best suited for future testing. TREAT is a thermal-neutron-spectrum nuclear test facility designed to test nuclear fuels in transient scenarios. These fuel transient tests range from simple temperature transients to full fuel-melt accidents. The current TREAT core is driven by highly enriched uranium (HEU) dispersed in a graphite matrix (1:10000 U-235/C atom ratio). At the center of the core, fuel is removed, allowing for the insertion of an experimental test vehicle. TREAT's design provides experimental flexibility and inherent safety during neutron pulsing. This safety stems from the graphite in the driver fuel having a strong negative temperature coefficient of reactivity, resulting from a thermal Maxwellian shift with increased leakage, as well as from the graphite acting as a heat sink. Air cooling is available but is generally used post-transient for heat removal. DOE and INL have expressed a desire to develop a simulation capability that will accurately model the experiments before they are irradiated at the facility, with an emphasis on effective and safe operation while minimizing experimental time and cost. At INL, the Multi-physics Object Oriented Simulation Environment (MOOSE) has been selected as the model development framework for this work. This paper describes the results of preliminary simulations of a TREAT fuel element under transient conditions using the MOOSE-based MAMMOTH reactor physics tool.
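The self-limiting pulse behaviour described above can be illustrated with a prompt-only point-kinetics (Nordheim-Fuchs) sketch with adiabatic temperature feedback; the parameter values are made up for illustration and are not TREAT data.

```python
# Nordheim-Fuchs pulse sketch: power grows exponentially until the negative
# temperature coefficient cancels the inserted reactivity, then the pulse
# self-terminates. Illustrative parameters only.
rho0 = 0.01      # inserted reactivity (illustrative)
b = 1e-4         # negative temperature coefficient [1/K]
Lam = 1e-4       # prompt neutron generation time [s]
C = 1.0          # heat capacity [MJ/K]; adiabatic: C * dT/dt = P

dt = 1e-5
P, T = 1e-3, 0.0          # initial power [MW] and temperature rise [K]
history = []
for _ in range(200000):   # 2 s of simulated time, explicit Euler
    rho = rho0 - b * T    # reactivity feedback from heat-up
    P += dt * rho / Lam * P
    T += dt * P / C
    history.append(P)

# Classic analytical result for this model: final rise ~ 2*rho0/b = 200 K.
print(T, max(history) > history[-1])
```

The graphite heat sink and delayed neutrons in the real facility modify the numbers, but the self-limiting mechanism is this feedback loop.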
A novel medical image data-based multi-physics simulation platform for computational life sciences.
Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels
2013-04-06
Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models.
Applications of artificial intelligence to engineering problems
Adey, R.A.; Sriram, D.
1987-01-01
The conference covered general sessions on AI techniques suitable for engineering applications, e.g. knowledge representation, natural language, probability, design methodologies and constraints. These were followed by sessions covering applications in mechanical engineering, civil engineering, electrical engineering, and general engineering. Further sessions covered robotics as well as tools and techniques for building knowledge-based systems.
The Secretary Problem from the Applicant's Point of View
ERIC Educational Resources Information Center
Glass, Darren
2012-01-01
A 1960 "Mathematical Games" column describes the problem, now known as the Secretary Problem, which asks how someone interviewing candidates for a position should maximize the chance of hiring the best applicant. This note looks at how an applicant should respond, if they know the interviewer uses this optimal strategy. We show that all but the…
Design and Application of Learning Environments Based on Integrative Problems
ERIC Educational Resources Information Center
Sanchez, Ivan; Neriz, Liliana; Ramis, Francisco
2008-01-01
This work reports on the results obtained from the application of learning environments on the basis of one integrative problem and a series of other smaller problems that limit the contents to be investigated and learned by the students. This methodology, which is a variation to traditional problem-based learning approaches, is here illustrated…
Applications of NASTRAN to nuclear problems
NASA Technical Reports Server (NTRS)
Spreeuw, E.
1972-01-01
The extent to which suitable solutions may be obtained for one physics problem and two engineering-type problems is traced. NASTRAN appears to be a practical tool to solve one-group steady-state neutron diffusion equations. Transient diffusion analysis may be performed after new levels that allow time-dependent temperature calculations are developed. NASTRAN piecewise linear analysis may be applied to solve those plasticity problems for which a smooth stress-strain curve can be used to describe the nonlinear material behavior. The accuracy decreases when sharp transitions in the stress-strain relations are involved. Improved NASTRAN usefulness will be obtained when nonlinear material capabilities are extended to axisymmetric elements and to include provisions for time-dependent material properties and creep analysis. Rigid formats 3 and 5 proved to be very convenient for the buckling and normal-mode analysis of a nuclear fuel element.
The Atmospheric Sciences: Problems and Applications.
ERIC Educational Resources Information Center
National Academy of Sciences - National Research Council, Washington, DC. Committee on Atmospheric Sciences.
Over the years, the Committee on Atmospheric Sciences of the National Research Council has published a number of scientific and technical reports dealing with many aspects of the atmospheric sciences. This publication is an attempt to present to a broad audience this information about problems and research in the atmospheric sciences. Chapters…
Linear Programming and Its Application to Pattern Recognition Problems
NASA Technical Reports Server (NTRS)
Omalley, M. J.
1973-01-01
Linear programming and linear-programming-like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of the methods to particular problems, including computational results wherever given.
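One classic construction covered by such surveys is finding a separating hyperplane by linear programming, posed as a pure feasibility problem; the 2D points below are made up, and the formulation is a generic textbook one rather than any specific paper's.

```python
import numpy as np
from scipy.optimize import linprog

# Find w, b with w.x + b >= 1 on class +1 and w.x + b <= -1 on class -1,
# written as A_ub @ [w1, w2, b] <= b_ub for linprog (objective is zero:
# any feasible point is a separating hyperplane).
A_pts = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 1.8]])   # class +1
B_pts = np.array([[0.0, 0.0], [1.0, 0.5], [0.2, 1.0]])   # class -1

A_ub = np.vstack([np.hstack([-A_pts, -np.ones((len(A_pts), 1))]),
                  np.hstack([B_pts, np.ones((len(B_pts), 1))])])
b_ub = -np.ones(len(A_pts) + len(B_pts))
res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3)

w, b = res.x[:2], res.x[2]
print(res.success)                    # True: the two classes are separable
print(A_pts @ w + b, B_pts @ w + b)   # >= 1 on class +1, <= -1 on class -1
```

If the classes overlap, the LP is infeasible; the usual remedy is to add non-negative slack variables and minimize their sum.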
Oncologic applications of biophotonics: prospects and problems.
Chaudhury, N K; Chandra, S; Mathew, T L
2001-01-01
The understanding of various intrinsic photobiophysical processes has prompted researchers to develop different types of biodevices for health care. In the recent past, because of extensive contributions from various groups in the field of biophotonics, several important biomedical applications are emerging in the fields of both diagnostics and therapy. In this brief review, we discuss a few specific applications related to early detection and characterization of premalignant and malignant lesions using optical spectroscopic techniques, namely, fluorescence and Raman, and in management of cancer, the emerging scene of photodynamic therapy.
Multi-Scale, Multi-Physics Membrane Technology
Henshaw, W D
2009-02-19
Our objectives for this 10-week feasibility study were to gain an initial theoretical understanding of the numerical issues involved in modeling fluid-structure interface problems and to develop a prototype software infrastructure based on deforming composite grids to test the new approach on simple problems. For our first test case we considered a two-dimensional fluid-solid piston problem in which one half of the domain is occupied by fluid and the other half by a solid. We determined the exact solution to this problem using the method of characteristics and d'Alembert's solution to the wave equation. We solved this problem using our new numerical approximations and verified the results against the exact solution. As a second test case we considered a two-dimensional problem consisting of a shock in a fluid that strikes a cylindrically shaped solid.
Quantum game application to spectrum scarcity problems
NASA Astrophysics Data System (ADS)
Zabaleta, O. G.; Barrangú, J. P.; Arizmendi, C. M.
2017-01-01
Recent spectrum-sharing research has produced a strategy to address spectrum scarcity problems. This novel idea, named cognitive radio, considers that secondary users can opportunistically exploit spectrum holes left temporarily unused by primary users. This presents a competitive scenario among cognitive users, making it suitable for game theory treatment. In this work, we show that the spectrum-sharing benefits of cognitive radio can be increased by designing a medium access control based on quantum game theory. In this context, we propose a model to manage spectrum fairly and effectively, based on a multiple-users multiple-choice quantum minority game. By taking advantage of quantum entanglement and quantum interference, it is possible to reduce the probability of collision problems commonly associated with classic algorithms. Collision avoidance is an essential property for classic and quantum communications systems. In our model, two different scenarios are considered, to meet the requirements of different user strategies. The first considers sensor networks where the rational use of energy is a cornerstone; the second focuses on installations where the quality of service of the entire network is a priority.
Data-driven prognosis: a multi-physics approach verified via balloon burst experiment
Chandra, Abhijit; Kar, Oliva
2015-01-01
A multi-physics formulation for data-driven prognosis (DDP) is developed. Unlike traditional predictive strategies that require controlled offline measurements or ‘training’ for determination of constitutive parameters to derive the transitional statistics, the proposed DDP algorithm relies solely on in situ measurements. It uses a deterministic mechanics framework, but the stochastic nature of the solution arises naturally from the underlying assumptions regarding the order of the conservation potential as well as the number of dimensions involved. The proposed DDP scheme is capable of predicting onset of instabilities. Because the need for offline testing (or training) is obviated, it can be easily implemented for systems where such a priori testing is difficult or even impossible to conduct. The prognosis capability is demonstrated here via a balloon burst experiment where the instability is predicted using only online visual observations. The DDP scheme never failed to predict the incipient failure, and no false-positives were issued. The DDP algorithm is applicable to other types of datasets. Time horizons of DDP predictions can be adjusted by using memory over different time windows. Thus, a big dataset can be parsed in time to make a range of predictions over varying time horizons. PMID:27547071
Fractal applications to complex crustal problems
NASA Technical Reports Server (NTRS)
Turcotte, Donald L.
1989-01-01
Complex scale-invariant problems obey fractal statistics. The basic definition of a fractal distribution is that the number of objects with a characteristic linear dimension greater than r satisfies the relation N ∼ r^(−D), where D is the fractal dimension. Fragmentation often satisfies this relation. The distribution of earthquakes satisfies this relation. The classic relationship between the length of a rocky coast line and the step length can be derived from this relation. Power law relations for spectra can also be related to fractal dimensions. Topography and gravity are examples. Spectral techniques can be used to obtain maps of fractal dimension and roughness amplitude. These provide a quantitative measure of texture analysis. It is argued that the distribution of stress and strength in a complex crustal region, such as the Alps, is fractal. Based on this assumption, the observed frequency-magnitude relation for the seismicity in the region can be derived.
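As a quick illustrative sketch of the power-law relation in the abstract above (the size distribution and prefactor below are invented toy data, not from the report), the fractal dimension D can be recovered from cumulative counts by a log-log fit:

```python
import numpy as np

# Toy fragment counts obeying an exact fractal law N(r) = C * r**(-D)
# with D = 2.5; the fractal dimension is recovered by a log-log fit.
D_true = 2.5
r = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # characteristic linear dimensions
N = 1000.0 * r ** (-D_true)                # cumulative counts above size r

# Slope of log N versus log r is -D.
slope, intercept = np.polyfit(np.log(r), np.log(N), 1)
D_est = -slope
print(round(D_est, 3))
```

On real fragmentation or seismicity data the fit would be over binned cumulative counts, and scatter about the line indicates how well the fractal model holds.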
Application of energy stability theory to problems in crystal growth
NASA Technical Reports Server (NTRS)
Neitzel, G. P.; Jankowski, D. F.
1990-01-01
The use of energy stability theory to study problems in crystal growth is outlined and justified in terms of convection mechanisms. An application to the float zone process of crystal growth is given as an illustration.
NASA Astrophysics Data System (ADS)
Johnson, S.; Chiaramonte, L.; Cruz, L.; Izadi, G.
2016-12-01
Advances in the accuracy and fidelity of numerical methods have significantly improved our understanding of coupled processes in unconventional reservoirs. However, such multi-physics models are typically characterized by many parameters and require exceptional computational resources to evaluate systems of practical importance, making these models difficult to use for field analyses or uncertainty quantification. One approach to remove these limitations is through targeted complexity reduction and field data constrained parameterization. For the latter, a variety of field data streams may be available to engineers and asset teams, including micro-seismicity from proximate sites, well logs, and 3D surveys, which can constrain possible states of the reservoir as well as the distributions of parameters. We describe one such workflow, using the Argos multi-physics code and requisite geomechanical analysis to parameterize the underlying models. We illustrate with a field study involving a constraint analysis of various field data and details of the numerical optimizations and model reduction to demonstrate how complex models can be applied to operation design in hydraulic fracturing operations, including selection of controllable completion and fluid injection design properties. The implication of this work is that numerical methods are mature and computationally tractable enough to enable complex engineering analysis and deterministic field estimates and to advance research into stochastic analyses for uncertainty quantification and value of information applications.
Analytic semigroups: Applications to inverse problems for flexible structures
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rebnord, D. A.
1990-01-01
Convergence and stability results for least squares inverse problems involving systems described by analytic semigroups are presented. The practical importance of these results is demonstrated by application to several examples from problems of estimation of material parameters in flexible structures using accelerometer data.
Applications of Genetic Methods to NASA Design and Operations Problems
NASA Technical Reports Server (NTRS)
Laird, Philip D.
1996-01-01
We review four recent NASA-funded applications in which evolutionary/genetic methods are important. In the process we survey: the kinds of problems being solved today with these methods; techniques and tools used; problems encountered; and areas where research is needed. The presentation slides are annotated briefly at the top of each page.
Application of the Discontinuous Galerkin Method to Acoustic Scatter Problems
NASA Technical Reports Server (NTRS)
Atkins, H. L.
1997-01-01
The application of the quadrature-free form of the discontinuous Galerkin method to two problems from Category 1 of the Second Computational Aeroacoustics Workshop on Benchmark problems is presented. The method and boundary conditions relevant to this work are described followed by two test problems, both of which involve the scattering of an acoustic wave off a cylinder. The numerical test performed to evaluate mesh-resolution requirements and boundary-condition effectiveness are also described.
CT perfusion: principles, applications, and problems
NASA Astrophysics Data System (ADS)
Lee, Ting-Yim
2004-10-01
The fast scanning speed of current slip-ring CT scanners has enabled the development of perfusion imaging techniques with intravenous injection of contrast medium. In a typical CT perfusion study, contrast medium is injected and rapid scanning at a frequency of 1-2 Hz is used to monitor the first circulation of the injected contrast medium through a 1-2 cm thick slab of tissue. From the acquired time-series of CT images, arteries can be identified within the tissue slab to derive the arterial contrast concentration curve, Ca(t) while each individual voxel produces a tissue residue curve, Q(t) for the corresponding tissue region. Deconvolution between the measured Ca(t) and Q(t) leads to the determination of cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) in brain studies. In this presentation, an important application of CT perfusion in acute stroke studies - the identification of the ischemic penumbra via the CBF/CBV mismatch and factors affecting the quantitative accuracy of deconvolution, including partial volume averaging, arterial delay and dispersion are discussed.
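The deconvolution step described in this abstract can be sketched numerically. The following is an idealized illustration only: the gamma-shaped arterial curve, the boxcar residue function, and all parameter values are invented for the example, and clinical implementations apply regularized deconvolution to noisy data.

```python
import numpy as np

# Idealized CT-perfusion deconvolution: Q(t) = CBF * (Ca conv R)(t).
dt = 1.0
t = np.arange(0.0, 30.0, dt)
ca = t * np.exp(-t / 3.0)             # synthetic arterial concentration Ca(t)
cbf_true, mtt_true = 0.6, 4.0         # "true" flow and mean transit time
r = (t < mtt_true).astype(float)      # boxcar residue function R(t)
q = cbf_true * np.convolve(ca, r)[:len(t)] * dt   # tissue residue curve Q(t)

# Deconvolution: lower-triangular convolution matrix, least-squares inverse.
n = len(t)
A = dt * np.array([[ca[i - j] if i >= j else 0.0 for j in range(n)]
                   for i in range(n)])
k = np.linalg.lstsq(A, q, rcond=None)[0]   # k(t) = CBF * R(t)

cbf = k.max()                 # CBF is the peak of the flow-scaled residue
cbv = q.sum() / ca.sum()      # central volume principle: ratio of curve areas
mtt = cbv / cbf               # MTT = CBV / CBF
```

With noiseless synthetic curves the recovered CBF and MTT match the values used to build the tissue curve.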
AI techniques for a space application scheduling problem
NASA Technical Reports Server (NTRS)
Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.
1991-01-01
Scheduling is a very complex optimization problem which can be categorized as an NP-complete problem. NP-complete problems are quite diverse, as are the algorithms used in searching for an optimal solution. In most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal solutions. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. Some of the factors that make space application scheduling problems difficult are examined, and a fairly new AI-based technique called tabu search is presented as applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment which produces minimum impact on the other instruments and maximizes target observation times. The SOLSTICE instrument will fly on-board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the Earth Observing System (Eos).
Application of remote sensing to water resources problems
NASA Technical Reports Server (NTRS)
Clapp, J. L.
1972-01-01
The following conclusions were reached concerning the applications of remote sensing to water resources problems: (1) Remote sensing methods provide the most practical method of obtaining data for many water resources problems; (2) the multi-disciplinary approach is essential to the effective application of remote sensing to water resource problems; (3) there is a correlation between the amount of suspended solids in an effluent discharged into a water body and reflected energy; (4) remote sensing provides for more effective and accurate monitoring, discovery and characterization of the mixing zone of effluent discharged into a receiving water body; and (5) it is possible to differentiate between blue and blue-green algae.
Application of remote sensing to solution of ecological problems
NASA Technical Reports Server (NTRS)
Adelman, A.
1972-01-01
The application of remote sensing techniques to solving ecological problems is discussed. The three phases of environmental ecological management are examined. The differences between discovery and exploitation of natural resources and their ecological management are described. The specific application of remote sensing to water management is developed.
Application of computational aero-acoustics to real world problems
NASA Technical Reports Server (NTRS)
Hardin, Jay C.
1996-01-01
The application of computational aeroacoustics (CAA) to real problems is discussed in relation to the analysis performed with the aim of assessing the application of the various techniques. It is considered that the applications are limited by the inability of the computational resources to resolve the large range of scales involved in high Reynolds number flows. Possible simplifications are discussed. It is considered that problems remain to be solved in relation to the efficient use of the power of parallel computers and in the development of turbulent modeling schemes. The goal of CAA is stated as being the implementation of acoustic design studies on a computer terminal with reasonable run times.
Osiris: A Modern, High-Performance, Coupled, Multi-Physics Code For Nuclear Reactor Core Analysis
Procassini, R J; Chand, K K; Clouse, C J; Ferencz, R M; Grandy, J M; Henshaw, W D; Kramer, K J; Parsons, I D
2007-02-26
To meet the simulation needs of the GNEP program, LLNL is leveraging a suite of high-performance codes to be used in the development of a multi-physics tool for modeling nuclear reactor cores. The Osiris code project, which began last summer, is employing modern computational science techniques in the development of the individual physics modules and the coupling framework. Initial development is focused on coupling thermal-hydraulics and neutral-particle transport, while later phases of the project will add thermal-structural mechanics and isotope depletion. Osiris will be applicable to the design of existing and future reactor systems through the use of first-principles, coupled physics models with fine-scale spatial resolution in three dimensions and fine-scale particle-energy resolution. Our intent is to replace an existing set of legacy, serial codes which require significant approximations and assumptions, with an integrated, coupled code that permits the design of a reactor core using a first-principles physics approach on a wide range of computing platforms, including the world's most powerful parallel computers. A key research activity of this effort deals with the efficient and scalable coupling of physics modules which utilize rather disparate mesh topologies. Our approach allows each code module to use a mesh topology and resolution that is optimal for the physics being solved, and employs a mesh-mapping and data-transfer module to effect the coupling. Additional research is planned in the area of scalable, parallel thermal-hydraulics, high-spatial-accuracy depletion and coupled-physics simulation using Monte Carlo transport.
Research on TRIZ and CAIs Application Problems for Technology Innovation
NASA Astrophysics Data System (ADS)
Li, Xiangdong; Li, Qinghai; Bai, Zhonghang; Geng, Lixiao
In order to apply the theory of inventive problem solving (TRIZ) and computer-aided innovation software (CAIs), several key problems must be solved, such as choosing the mode of technology innovation, establishing a technology innovation organization network (TION), and realizing an innovation process based on TRIZ and CAIs. This paper derives the demands for TRIZ and CAIs from the characteristics and existing problems of manufacturing enterprises. It explains that manufacturing enterprises need to set up an open, enterprise-led TION and pursue longitudinal cooperative innovation with institutions of higher learning. A process of technology innovation based on TRIZ and CAIs is set up from a research-and-development point of view. The application of TRIZ and CAIs in the FY Company is summarized, and its effect is illustrated through the technology innovation of the close goggle valve product.
Harrison, Cyrus; Larsen, Matt; Brugger, Eric
2016-12-05
Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.
The application of three-dimensional photoelasticity to impact problems
Kostin, I.C.; Fedorov, A.V.
1995-12-31
A method is proposed for the solution of three-dimensional dynamic problems in geometrically complex structural configurations under impact. The methodology developed employs the generation of photoelastically observable stress wave propagation in a birefringent material applied to the external surfaces of a structure. This work demonstrated the extension of this technique to impact loading. Problems of practical engineering application, such as the gluing of birefringent material to test models were examined experimentally. Pulsed magnetic fields generated by capacitor discharge were employed on typical complex engineering models to demonstrate that the methodology is adequate for solving practical impact problems.
Overview of Krylov subspace methods with applications to control problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace and seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now just becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
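The idea summarized in this abstract, projecting an N-dimensional problem onto an m-dimensional Krylov subspace, can be sketched with a minimal Arnoldi iteration (a generic FOM-style illustration; the test matrix and the sizes N and m are arbitrary choices, not from the paper):

```python
import numpy as np

# Arnoldi iteration: build an orthonormal basis V of K_m(A, b) and a small
# (m+1) x m Hessenberg matrix H satisfying A V_m = V_{m+1} H.
def arnoldi(A, b, m):
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # Gram-Schmidt against the basis
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
N, m = 50, 10                              # full size N, subspace size m << N
A = 4.0 * np.eye(N) + 0.1 * rng.standard_normal((N, N))
b = rng.standard_normal(N)

V, H = arnoldi(A, b, m)
# Galerkin condition: solve the small m x m system H_m y = ||b|| e_1,
# then map back: x ~= V_m y.
e1 = np.zeros(m)
e1[0] = np.linalg.norm(b)
y = np.linalg.solve(H[:m, :m], e1)
x = V[:, :m] @ y
residual = np.linalg.norm(b - A @ x)
```

Because the test matrix has a tightly clustered spectrum, ten Krylov dimensions already drive the residual far below the norm of b, illustrating why a size-m projected problem can stand in for the size-N original.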
An application of the matching law to severe problem behavior.
Borrero, John C; Vollmer, Timothy R
2002-01-01
We evaluated problem behavior and appropriate behavior using the matching equation with 4 individuals with developmental disabilities. Descriptive observations were conducted during interactions between the participants and their primary care providers in either a clinical laboratory environment (3 participants) or the participant's home (1 participant). Data were recorded on potential reinforcers, problem behavior, and appropriate behavior. After identifying the reinforcers that maintained each participant's problem behavior by way of functional analysis, the descriptive data were analyzed retrospectively, based on the matching equation. Results showed that the proportional rate of problem behavior relative to appropriate behavior approximately matched the proportional rate of reinforcement for problem behavior for all participants. The results extend prior research because a functional analysis was conducted and because multiple sources of reinforcement (other than attention) were evaluated. Methodological constraints were identified, which may limit the application of the matching law on both practical and conceptual levels. PMID:11936543
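The matching equation referred to in this abstract compares the proportion of responding allocated to problem behavior, B1/(B1+B2), with the proportion of reinforcement it produces, R1/(R1+R2). A toy numeric illustration (hypothetical counts, not the study's data):

```python
# Hypothetical session counts: 30 problem behaviors earning 18 reinforcers,
# 10 appropriate behaviors earning 6 reinforcers.
problem_behavior, appropriate_behavior = 30, 10
reinf_problem, reinf_appropriate = 18, 6

behavior_prop = problem_behavior / (problem_behavior + appropriate_behavior)
reinf_prop = reinf_problem / (reinf_problem + reinf_appropriate)

# Matching: the two proportions are (approximately) equal.
print(behavior_prop, reinf_prop)
```

In these invented numbers both proportions equal 0.75, the pattern the study reports approximately holding for its participants.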
Problem solving in magnetic field: Animation in mobile application
NASA Astrophysics Data System (ADS)
Najib, A. S. M.; Othman, A. P.; Ibarahim, Z.
2014-09-01
This paper is focused on the development of a mobile application for smart phones and tablets (Android, iPhone, and iPad) as a problem-solving tool in magnetic field. The mobile application design consists of animations that were created using Flash8 software and could be imported and compiled into a prezi.com slide. The Prezi slide was then duplicated in PowerPoint format, and a question bank with a complete answer scheme was additionally generated as a menu in the application. The published mobile application can be viewed and downloaded at the Infinite Monkey website or at the Google Play Store from your gadgets. Statistics from the Google Play Developer Console show the high impact of the application usage all over the world.
A coupled multi-physics modeling framework for induced seismicity
NASA Astrophysics Data System (ADS)
Karra, S.; Dempsey, D. E.
2015-12-01
There is compelling evidence that moderate-magnitude seismicity in the central and eastern US is on the rise. Many of these earthquakes are attributable to anthropogenic injection of fluids into deep formations, resulting in incidents where state regulators have even intervened. Earthquakes occur when a high-pressure fluid (water or CO2) enters a fault, reducing its resistance to shear failure and causing runaway sliding. However, induced seismicity does not manifest as a solitary event, but rather as a sequence of earthquakes evolving in time and space. Additionally, one needs to consider the changes in permeability due to slip within a fault and the subsequent effects on fluid transport and pressure build-up. A modeling framework that addresses the complex two-way coupling between seismicity and fluid flow is thus needed. In this work, a new parallel physics-based coupled framework for induced seismicity that couples slip on faults and fluid flow is presented. The framework couples the highly parallel subsurface flow code PFLOTRAN (www.pflotran.org) and a fast Fourier transform based earthquake simulator QK3. Stresses in the fault are evaluated using Biot's formulation in PFLOTRAN and are used to calculate slip in QK3. Permeability is updated based on the slip in the fault, which in turn influences flow. Application of the framework to synthetic examples and datasets from Colorado and Oklahoma will also be discussed.
Innovative applications of genetic algorithms to problems in accelerator physics
NASA Astrophysics Data System (ADS)
Hofler, Alicia; Terzić, Balša; Kramer, Matthew; Zvezdin, Anton; Morozov, Vasiliy; Roblin, Yves; Lin, Fanglei; Jarvis, Colin
2013-01-01
The genetic algorithm (GA) is a powerful technique that implements the principles nature uses in biological evolution to optimize a multidimensional nonlinear problem. The GA works especially well for problems with a large number of local extrema, where traditional methods (such as conjugate gradient, steepest descent, and others) fail or, at best, underperform. The field of accelerator physics, among others, abounds with problems which lend themselves to optimization via GAs. In this paper, we report on the successful application of GAs in several problems related to the existing Continuous Electron Beam Accelerator Facility nuclear physics machine, the proposed Medium-energy Electron-Ion Collider at Jefferson Lab, and a radio frequency gun-based injector. These encouraging results are a step forward in optimizing accelerator design and provide an impetus for application of GAs to other problems in the field. To that end, we discuss the details of the GAs used, include a newly devised enhancement which leads to improved convergence to the optimum, and make recommendations for future GA developments and accelerator applications.
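A bare-bones sketch of the GA mechanism the abstract describes: selection, crossover, and mutation on a one-dimensional function with many local minima, the regime where the abstract says GAs outperform gradient methods. This is illustrative only; the accelerator applications in the paper use far richer encodings and objectives.

```python
import math
import random

def f(x):
    # Test function with many local minima; global minimum f = 0 at x = 3
    # (an arbitrary choice for this sketch).
    return (x - 3.0) ** 2 + 2.0 * (1.0 - math.cos(5.0 * (x - 3.0)))

random.seed(1)
pop = [random.uniform(-10.0, 10.0) for _ in range(40)]   # initial population
for generation in range(100):
    pop.sort(key=f)
    survivors = pop[:10]                  # truncation selection (elitist)
    children = []
    while len(children) < 30:
        a, b = random.sample(survivors, 2)
        child = 0.5 * (a + b)             # arithmetic crossover
        child += random.gauss(0.0, 0.5)   # Gaussian mutation
        children.append(child)
    pop = survivors + children

best = min(pop, key=f)
```

Keeping the survivors unchanged each generation makes the best fitness monotone non-increasing, while the mutation noise lets the population escape the local minima that trap descent methods.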
Conceptions of Efficiency: Applications in Learning and Problem Solving
ERIC Educational Resources Information Center
Hoffman, Bobby; Schraw, Gregory
2010-01-01
The purpose of this article is to clarify conceptions, definitions, and applications of learning and problem-solving efficiency. Conceptions of efficiency vary within the field of educational psychology, and there is little consensus as to how to define, measure, and interpret the efficiency construct. We compare three diverse models that differ…
An Application of Calculus: Optimum Parabolic Path Problem
ERIC Educational Resources Information Center
Atasever, Merve; Pakdemirli, Mehmet; Yurtsever, Hasan Ali
2009-01-01
A practical and technological application of calculus problem is posed to motivate freshman students or junior high school students. A variable coefficient of friction is used in modelling air friction. The case in which the coefficient of friction is a decreasing function of altitude is considered. The optimum parabolic path for a flying object…
The Application of Acceptance and Commitment Therapy to Problem Anger
ERIC Educational Resources Information Center
Eifert, Georg H.; Forsyth, John P.
2011-01-01
The goal of this paper is to familiarize clinicians with the use of Acceptance and Commitment Therapy (ACT) for problem anger by describing the application of ACT to a case of a 45-year-old man struggling with anger. ACT is an approach and set of intervention technologies that support acceptance and mindfulness processes linked with commitment and…
Applications of Wavelets to Radar, Imaging and Related Problems
1993-09-30
We developed several useful results in the specific areas of electromagnetic and acoustic bullets, signal design for Doppler ultrasound problems, and a computationally efficient matched filter processor for the sphere. This has direct application to directional data of various… (Report sections include: Feature Extraction for Recognition Tasks in Acoustic and Medical Data; Ultrasound Doppler Velocimetry.)
Applications and Problems of Computer Assisted Education in Turkey
ERIC Educational Resources Information Center
Usun, Salih
2006-01-01
This paper focuses on the Computer Assisted Education (CAE) in Turkey; reviews of the related literature; examines the projects, applications and problems on the Computer Assisted Education (CAE) in Turkey compares with the World; exposes the positive and negative aspects of the projects; a number of the suggestion presents on the effective use of…
Applications of polymeric smart materials to environmental problems.
Gray, H N; Bergbreiter, D E
1997-01-01
New methods for the reduction and remediation of hazardous wastes like carcinogenic organic solvents, toxic materials, and nuclear contamination are vital to environmental health. Procedures for effective waste reduction, detection, and removal are important components of any such methods. Toward this end, polymeric smart materials are finding useful applications. Polymer-bound smart catalysts are useful in waste minimization, catalyst recovery, and catalyst reuse. Polymeric smart coatings have been developed that are capable of both detecting and removing hazardous nuclear contaminants. Such applications of smart materials involving catalysis chemistry, sensor chemistry, and chemistry relevant to decontamination methodology are especially applicable to environmental problems. PMID:9114277
Fifth international conference on hyperbolic problems -- theory, numerics, applications: Abstracts
1994-12-31
The conference demonstrated that hyperbolic problems and conservation laws play an important role in many areas including industrial applications and the studying of elasto-plastic materials. Among the various topics covered in the conference, the authors mention: the big bang theory, general relativity, critical phenomena, deformation and fracture of solids, shock wave interactions, numerical simulation in three dimensions, the level set method, multidimensional Riemann problem, application of the front tracking in petroleum reservoir simulations, global solution of the Navier-Stokes equations in high dimensions, recent progress in granular flow, and the study of elastic plastic materials. The authors believe that the new ideas, tools, methods, problems, theoretical results, numerical solutions and computational algorithms presented or discussed at the conference will benefit the participants in their current and future research.
Development of High-Order Methods for Multi-Physics Problems Governed by Hyperbolic Equations
2010-10-01
the conservative variable state vector $U = (\rho,\ \rho u,\ \rho v,\ \rho E)^T$, and $F(U)$ is the inviscid flux tensor with vector components $f = (\rho u,\ \rho u^2 + p,\ \rho u v,\ (\rho E + p)u)^T$ and $g = (\rho v,\ \rho u v,\ \rho v^2 + p,\ (\rho E + p)v)^T$. The specific energy $E$ is the sum of the specific internal energy $e$ and the kinetic energy, with the constitutive relations $e = C_V T$ and $p = (\gamma - 1)\left[\rho E - \tfrac{\rho}{2}(u^2 + v^2)\right]$. 0.3 Discretization method: The governing equations of fluid motion, given
NASA Astrophysics Data System (ADS)
Docktor, Jennifer L.; Dornfeld, Jay; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Jackson, Koblar Alan; Mason, Andrew; Ryan, Qing X.; Yang, Jie
2016-06-01
Problem solving is a complex process valuable in everyday life and crucial for learning in the STEM fields. To support the development of problem-solving skills it is important for researchers and curriculum developers to have practical tools that can measure the difference between novice and expert problem-solving performance in authentic classroom work. It is also useful if such tools can be employed by instructors to guide their pedagogy. We describe the design, development, and testing of a simple rubric to assess written solutions to problems given in undergraduate introductory physics courses. In particular, we present evidence for the validity, reliability, and utility of the instrument. The rubric identifies five general problem-solving processes and defines the criteria to attain a score in each: organizing problem information into a Useful Description, selecting appropriate principles (Physics Approach), applying those principles to the specific conditions in the problem (Specific Application of Physics), using Mathematical Procedures appropriately, and displaying evidence of an organized reasoning pattern (Logical Progression).
Application of tabu search to deterministic and stochastic optimization problems
NASA Astrophysics Data System (ADS)
Gurtuna, Ozgur
During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well established and significant progress has been made on the theoretical side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites in Earth orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, the Tabu Search Monte Carlo (TSMC) method, is
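The tabu search heuristic the thesis builds on can be sketched in a few lines. The bit-flip neighborhood, fixed tenure, and aspiration rule below are generic textbook choices for illustration, not the thesis's actual problem-specific operators:

```python
import random
from collections import deque

def tabu_search(cost, n_bits, iters=200, tenure=5, seed=0):
    """Minimal tabu search over bit vectors with a single-bit-flip
    neighborhood. A flipped bit stays 'tabu' for `tenure` moves,
    which forces the search away from recently visited solutions.
    Illustrative sketch only."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = x[:], cost(x)
    tabu = deque(maxlen=tenure)          # recently flipped bit indices
    for _ in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            c = cost(y)
            # aspiration: accept a tabu move if it beats the global best
            if i not in tabu or c < best_cost:
                candidates.append((c, i, y))
        if not candidates:
            continue
        c, i, y = min(candidates, key=lambda t: t[0])
        x = y
        tabu.append(i)
        if c < best_cost:
            best, best_cost = y[:], c
    return best, best_cost
```

For example, minimizing the number of set bits (`cost=sum`) drives the search to the all-zeros vector.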
[Problems and countermeasures in the application of constructed wetlands].
Huang, Jin-Lou; Chen, Qin; Xu, Lian-Huang
2013-01-01
Constructed wetlands are a wastewater eco-treatment technology developed in recent decades. They combine sewage treatment with the eco-environment in an efficient way: they treat sewage effectively while beautifying the environment, creating ecological landscape, and bringing environmental and economic benefits. The unique advantages of constructed wetlands have attracted intensive attention since their development. Constructed wetlands are widely used in the treatment of domestic sewage, industrial wastewater, and wastewater from mining and petroleum production. However, many problems are found in their practical application, e.g., they are vulnerable to changes in climatic conditions and temperature, their substrates are easily saturated and plugged, they are readily affected by plant species, they often occupy large areas, and there are other problems including irrational management, non-standard design, and a single function of ecological service. These problems to a certain extent reduce the efficiency of constructed wetlands in wastewater treatment, shorten their service life, and hinder their application. The review presents correlation analysis and countermeasures for these problems, in order to improve the efficiency of constructed wetlands in wastewater treatment and to provide a reference for their application and promotion.
Application of boundary integral equations to elastoplastic problems
NASA Technical Reports Server (NTRS)
Mendelson, A.; Albers, L. U.
1975-01-01
The application of the boundary integral equation method (BIE) to the elastoplastic torsion problem is considered. It is found that the BIE is very suitable for the elastoplastic analysis of the torsion of prismatic bars. A comparison of the BIE with the finite difference method shows that the BIE requires fewer unknowns to be determined and also converges much faster. Attention is given to the problem of an edge-notched beam in pure bending, taking into account a biharmonic formulation and a displacement formulation.
NASA Astrophysics Data System (ADS)
Crestel, Benjamin; Alexanderian, Alen; Stadler, Georg; Ghattas, Omar
2017-07-01
The computational cost of solving an inverse problem governed by PDEs, using multiple experiments, increases linearly with the number of experiments. A recently proposed method to decrease this cost uses only a small number of random linear combinations of all experiments for solving the inverse problem. This approach applies to inverse problems where the PDE solution depends linearly on the right-hand side function that models the experiment. As this method is stochastic in essence, the quality of the obtained reconstructions can vary, in particular when only a small number of combinations are used. We develop a Bayesian formulation for the definition and computation of encoding weights that lead to a parameter reconstruction with the least uncertainty. We call these weights A-optimal encoding weights. Our framework applies to inverse problems where the governing PDE is nonlinear with respect to the inversion parameter field. We formulate the problem in infinite dimensions and follow the optimize-then-discretize approach, devoting special attention to the discretization and the choice of numerical methods in order to achieve a computational cost that is independent of the parameter discretization. We elaborate our method for a Helmholtz inverse problem, and derive the adjoint-based expressions for the gradient of the objective function of the optimization problem for finding the A-optimal encoding weights. The proposed method is potentially attractive for real-time monitoring applications, where one can invest the effort to compute optimal weights offline, to later solve an inverse problem repeatedly, over time, at a fraction of the initial cost.
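The random source-encoding baseline that the A-optimal weights improve upon can be sketched as follows. The i.i.d. Gaussian weights are the standard random-encoding choice, whereas the paper optimizes the weights; the matrix names `B`, `D`, `W` are illustrative, not from the paper:

```python
import numpy as np

def encode_experiments(B, D, n_comb, rng):
    """Replace the S experiments (columns of the right-hand-side
    matrix B and of the data matrix D) by n_comb random linear
    combinations, so the inverse problem is solved with n_comb
    'encoded' experiments instead of S, cutting cost roughly by
    the factor S / n_comb. Sketch of the random baseline only."""
    S = B.shape[1]
    W = rng.standard_normal((S, n_comb))  # i.i.d. Gaussian encoding weights
    return B @ W, D @ W                   # encoded sources and observations
```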
Potentials and problems in space applications of smart structures technology
NASA Astrophysics Data System (ADS)
Eaton, D. C.; Bashford, D. P.
1994-09-01
The well-known adage 'don't run before you can walk' applies to emerging materials. It typically takes ten years before a material is sufficiently well characterized for commercial aerospace application. Much has to be learnt not only about the material properties and their susceptibility to the effects of the working environment, but also about the manufacturing process and the most effective configuration for a given application. No project will accept a product which has no proven reliability and attractive cost effectiveness in its application. The writers firmly believe that smart structures and their related technologies must follow a similar development pattern. Indeed, faced with a range of interdisciplinary problems, it seems likely that 'partially smart' techniques may well be the first applications. These will place emphasis on the more readily realizable features for any structural application. Prior use may well have been achieved in other engineering sectors. Because ground-based applications are more readily accessible to check and maintain, these are generally the front runners of smart technology usage. Nevertheless, there is a strong potential for the use of smart techniques in space applications if their capabilities can be advantageously introduced when compared with traditional solutions. This paper endeavors to give a critical appraisal of the possibilities and the accompanying problems. A sample overview of related developing space technology is included. The reader is also referred to chapters 90 to 94 in ESA's Structural Materials Handbook (ESA PSS 03 203, issue 1). It is envisaged that future space applications may include the realization and maintenance of large deployable reflector profiles, the dimensional stability of optical payloads, active noise and vibration control, and in-orbit health monitoring and control for largely unmanned spacecraft. The possibility of monitoring the health of items such as large cryogenic fuel tanks is a typical longer
Application of essentially nonoscillatory methods to aeroacoustic flow problems
NASA Technical Reports Server (NTRS)
Atkins, Harold L.
1995-01-01
A finite-difference essentially nonoscillatory (ENO) method has been applied to several of the problems prescribed for the workshop sponsored jointly by the Institute for Computer Applications in Science and Engineering and by NASA Langley Research Center entitled 'Benchmark Problems in Computational Aeroacoustics'. The workshop focused on computational challenges specific to aeroacoustics. Among these are long-distance propagation of a short-wavelength disturbance, propagation of small-amplitude disturbances, and nonreflective boundary conditions. The shock capturing-capability inherent to the ENO method effectively eliminates oscillations near shock waves without the need to add and tune dissipation or filter terms. The method-of-lines approach allows the temporal and spatial operators to be chosen separately in accordance with the demands of a particular problem. The ENO method was robust and accurate for all problems in which the propagating wave was resolved with 8 or more points per wavelength. The finite-wave-model boundary condition, a local nonlinear acoustic boundary condition, performed well for the one-dimensional problems. The buffer-domain approach performed well for the two-dimensional test problem. The amplitude of nonphysical reflections were less than 1 percent of the exiting wave's amplitude.
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-09-06
Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, minimize the need for simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) and enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variation during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimates of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined by direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results.
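The forward-inverse coupling can be illustrated with toy surrogates standing in for the trained networks. The saturating-exponential curve below is invented for illustration, not the paper's biphasic-solute model, and the grid-search "inverse" is a stand-in for the inverse ANN:

```python
import math

def forward(D, times):
    """Toy stand-in for the forward surrogate: a saturating
    concentration-time curve c(t) = 1 - exp(-D*t)."""
    return [1.0 - math.exp(-D * t) for t in times]

def inverse(curve, times, grid):
    """Toy stand-in for the inverse surrogate: pick the diffusion
    coefficient on a grid whose forward curve best matches the data."""
    def err(D):
        f = forward(D, times)
        return sum((a - b) ** 2 for a, b in zip(f, curve))
    return min(grid, key=err)
```

The round trip (forward, then inverse, recovering the coefficient) mirrors how the coupled forward-inverse pair is validated in the paper.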
Progress on PRONGHORN Application to NGNP Related Problems
Dana A. Knoll
2009-08-01
We are developing a multiphysics simulation tool for Very High-Temperature gas-cooled Reactors (VHTR). The simulation tool, PRONGHORN, takes advantage of the Multiphysics Object-Oriented Simulation library and is capable of solving multidimensional thermal-fluid and neutronics problems implicitly in parallel. Expensive Jacobian matrix formation is alleviated by the Jacobian-free Newton-Krylov method, and physics-based preconditioning is applied to improve the convergence. The initial development of PRONGHORN has been focused on the pebble bed core concept; however, extensions required to simulate prismatic cores are underway. In this progress report we highlight progress on the application of PRONGHORN to PBMR400 benchmark problems, the extension and application of PRONGHORN to prismatic core reactors, and progress on simulations of 3-D transients.
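The Jacobian-free Newton-Krylov idea mentioned above rests on approximating the Jacobian-vector product with a one-sided finite difference, so the Jacobian is never formed. A minimal sketch (not PRONGHORN's implementation):

```python
import numpy as np

def jfnk_matvec(F, u, v, eps=1e-7):
    """Jacobian-free approximation of J(u) @ v used inside Krylov
    solvers in JFNK:  J v ~= (F(u + eps*v) - F(u)) / eps.
    Only residual evaluations of F are needed; the Jacobian matrix
    is never assembled."""
    return (F(u + eps * v) - F(u)) / eps
```

A Krylov method (e.g. GMRES) needs only such matrix-vector products, which is why this trick avoids the expensive Jacobian assembly.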
Application of the boundary integral method to immiscible displacement problems
Masukawa, J.; Horne, R.N.
1988-08-01
This paper presents an application of the boundary integral method (BIM) to fluid displacement problems to demonstrate its usefulness in reservoir simulation. A method for solving two-dimensional (2D), piston-like displacement for incompressible fluids with good accuracy has been developed. Several typical example problems with repeated five-spot patterns were solved for various mobility ratios. The solutions were compared with the analytical solutions to demonstrate accuracy. Singularity programming was found to be a major advantage in handling flow in the vicinity of wells. The BIM was found to be an excellent way to solve immiscible displacement problems. Unlike analytic methods, it can accommodate complex boundary shapes and does not suffer from numerical dispersion at the front.
SIAM conference on inverse problems: Geophysical applications. Final technical report
1995-12-31
This conference was the second in a series devoted to a particular area of inverse problems. The theme of this series is to discuss problems of major scientific importance in a specific area from a mathematical perspective. The theme of this symposium was geophysical applications. In putting together the program we tried to include a wide range of mathematical scientists and to interpret geophysics in as broad a sense as possible. Our speakers came from industry, government laboratories, and diverse departments in academia. We managed to attract a geographically diverse audience with participation from five continents. There were talks devoted to seismology, hydrology, determination of the earth's interior on a global scale as well as oceanographic and atmospheric inverse problems.
Application of simulated annealing to some seismic problems
NASA Astrophysics Data System (ADS)
Velis, Danilo Ruben
Wavelet estimation, ray tracing, and traveltime inversion are fundamental problems in seismic exploration. They can be finally reduced to minimizing a highly nonlinear cost function with respect to a certain set of unknown parameters. I use simulated annealing (SA) to avoid the local minima and inaccurate solutions that often arise from the use of linearizing methods. I illustrate all applications using numerical and/or real data examples. The first application concerns the 4th-order cumulant matching (CM) method for wavelet estimation. Here the reliability of the derived wavelets depends strongly on the amount of data. Tapering the trace cumulant estimate significantly reduces this dependency and allows for a trace-by-trace implementation. For this purpose, a hybrid strategy that combines SA and gradient-based techniques provides efficiency and accuracy. In the second application I present SART (SA ray tracing), which is a novel method for solving the two-point ray tracing problem. SART overcomes some well known difficulties in standard methods, such as the selection of new take-off angles, and the multipathing problem. SA finds the take-off angles so that the total traveltime between the endpoints is a global minimum. SART is suitable for tracing direct, reflected, and headwaves, through complex 2-D and 3-D media. I also develop a versatile model representation in terms of a number of regions delimited by curved interfaces. Traveltime tomography is the third SA application. I parameterize the subsurface geology by using adaptive-grid bicubic B-splines for smooth models, or parametric 2-D functions for anomaly bodies. The second approach may find application in archaeological and other near-surface studies. The nonlinear inversion process attempts to minimize the rms error between observed and predicted traveltimes.
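A minimal SA loop of the kind these applications build on looks like the following. The geometric cooling schedule, uniform step, and one-dimensional cost are generic illustrative choices, not the thesis's problem-specific moves:

```python
import math
import random

def simulated_annealing(cost, x0, step, iters=5000, t0=1.0, seed=0):
    """Minimal simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-delta/T), and cool
    T geometrically so the search can escape local minima early on
    and settle later. Illustrative sketch only."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    T = t0
    for _ in range(iters):
        y = x + rng.uniform(-step, step)   # random perturbation
        cy = cost(y)
        if cy < c or rng.random() < math.exp(-(cy - c) / T):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        T *= 0.999                         # geometric cooling
    return best, best_c
```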
Space Life Support Technology Applications to Terrestrial Environmental Problems
NASA Technical Reports Server (NTRS)
Schwartzkopf, Steven H.; Sleeper, Howard L.
1993-01-01
Many of the problems now facing the human race on Earth are, in fact, life support issues. Decline of air quality as a result of industrial and automotive emissions, pollution of ground water by organic pesticides or solvents, and the disposal of solid wastes are all examples of environmental problems that we must solve to sustain human life. The technologies currently under development to solve the problems of supporting human life for advanced space missions are extraordinarily synergistic with these environmental problems. The development of these technologies (including both physicochemical and bioregenerative types) is increasingly focused on closing the life support loop by removing and recycling contaminants and wastes to produce the materials necessary to sustain human life. By so doing, this technology development effort also focuses automatically on reducing resupply logistics requirements and increasing crew safety through increased self-sufficiency. This paper describes several technologies that have been developed to support human life in space and illustrates the applicability of the technologies to environmental problems including environmental remediation and pollution prevention.
Overview: Applications of numerical optimization methods to helicopter design problems
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
There are a number of helicopter design problems that are well suited to applications of numerical design optimization techniques. Adequate implementation of this technology will provide high pay-offs. There are a number of numerical optimization programs available, and there are many excellent response/performance analysis programs developed or being developed. But integration of these programs in a form that is usable in the design phase should be recognized as important. It is also necessary to attract the attention of engineers engaged in the development of analysis capabilities and to make them aware that analysis capabilities are much more powerful if integrated into design oriented codes. Frequently, the shortcomings of analysis capabilities are revealed by coupling them with an optimization code. Most of the published work has addressed problems in preliminary system design, rotor system/blade design or airframe design. Very few published results were found in acoustics, aerodynamics and control system design. Currently major efforts are focused on vibration reduction, and aerodynamics/acoustics applications appear to be growing fast. The development of a computer program system that integrates the multiple disciplines required in helicopter design with numerical optimization techniques is needed. Activities in Britain, Germany and Poland are identified, but no published results from France, Italy, the USSR or Japan were found.
Dynamic Grover search: applications in recommendation systems and optimization problems
NASA Astrophysics Data System (ADS)
Chakrabarty, Indranil; Khan, Shahzor; Singh, Vanshdeep
2017-06-01
In recent years, we have seen that the Grover search algorithm (Proceedings, 28th annual ACM symposium on the theory of computing, pp. 212-219, 1996), by using quantum parallelism, has revolutionized the solution of a huge class of NP problems in comparison to classical systems. In this work, we explore the idea of extending the Grover search algorithm to approximate algorithms. Here we analyze the applicability of Grover search to process an unstructured database with a dynamic selection function, in contrast to the static selection function used in the original work (Grover in Proceedings, 28th annual ACM symposium on the theory of computing, pp. 212-219, 1996). We show that this alteration allows us to extend the application of Grover search to the field of randomized search algorithms. Further, we use the dynamic Grover search algorithm to define the goals for a recommendation system, based on which we propose a recommendation algorithm that uses a binomial similarity distribution space, giving us a quadratic speedup over traditional classical unstructured recommendation systems. Finally, we see how dynamic Grover search can be used to tackle a wide range of optimization problems where we improve complexity over existing optimization algorithms.
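For a single marked item in an unstructured database of N entries, the success probability after k Grover iterations follows from simple amplitude bookkeeping. The sketch below is the standard (static) textbook analysis, not the paper's dynamic variant, and is classical arithmetic rather than a quantum run:

```python
import math

def grover_success_probability(n_items, n_iters):
    """Probability of measuring the single marked item after k Grover
    iterations on N items: sin^2((2k+1)*theta), sin(theta) = 1/sqrt(N).
    Classical bookkeeping of the quantum amplitudes."""
    theta = math.asin(1.0 / math.sqrt(n_items))
    return math.sin((2 * n_iters + 1) * theta) ** 2

def optimal_iterations(n_items):
    """Roughly (pi/4)*sqrt(N) iterations -- the quadratic speedup
    over the O(N) classical unstructured search."""
    return round(math.pi / 4 * math.sqrt(n_items) - 0.5)
```

With zero iterations the success probability is the classical 1/N; after about (π/4)√N iterations it is close to 1.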
Advanced graphical user interface for multi-physics simulations using AMST
NASA Astrophysics Data System (ADS)
Hoffmann, Florian; Vogel, Frank
2017-07-01
Numerical modelling of particulate matter has gained much popularity in recent decades. Advanced Multi-physics Simulation Technology (AMST) is a state-of-the-art three-dimensional numerical modelling technique combining the eXtended Discrete Element Method (XDEM) with Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) [1]. One major limitation of this code is the lack of a graphical user interface (GUI), meaning that all pre-processing has to be done directly in an HDF5 file. This contribution presents the first graphical pre-processor developed for AMST.
Application of invariant integrals to elastostatic inverse problems
NASA Astrophysics Data System (ADS)
Goldstein, Robert; Shifrin, Efim; Shushpannikov, Pavel
2008-01-01
A problem of parameter identification for embedded defects in a linear elastic body using the results of static tests is considered. A method based on the use of invariant integrals is developed for solving this problem. Identification of the parameters of a spherical inclusion is considered as an example of the proposed approach. It is shown that the radius, elastic moduli and center coordinates of a spherical inclusion are determined from one uniaxial tension (compression) test. Explicit formulae expressing the spherical inclusion parameters by means of the values of the corresponding invariant integrals are obtained. The values of the integrals can be calculated from the experimental data if both applied loads and displacements are measured on the surface of the body in the static test. A numerical analysis of the obtained explicit formulae is carried out. It is shown that the formulae give a good approximation of the spherical inclusion parameters even in the case when the inclusion is located close to the surface of the body. To cite this article: R. Goldstein et al., C. R. Mecanique 336 (2008).
On the Application of the Energy Method to Stability Problems
NASA Technical Reports Server (NTRS)
Marguerre, Karl
1947-01-01
Since stability problems have come into the field of vision of engineers, energy methods have proved to be one of the most powerful aids in mastering them. For finding the especially interesting critical loads, special procedures have evolved that depart somewhat from those customary in the usual elasticity theory. A clarification of the connections seemed desirable, especially with regard to the post-critical region, for the treatment of which these special methods are not suited as they are. The present investigation discusses this question-complex (made important by shell construction in aircraft) in the classical example of the Euler strut, because in this case - since the basic features are not hidden by difficulties of a mathematical nature - the problem is especially clear. The present treatment differs from that appearing in the Z.f.a.M.M. (1938) under the title "Über die Behandlung von Stabilitätsproblemen mit Hilfe der energetischen Methode" in that, in order to work out the basic ideas still more clearly, it dispenses with the investigation of behavior at large deflections and of the elastic foundation; in its place the present version gives an elaboration of the 6th section and (in its 7th and 8th secs.) a new example that shows the applicability of the general criterion to a stability problem that differs from that of Euler in many respects.
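For the Euler strut the energy method reduces to equating the bending strain energy with the work of the axial load. A one-term sketch, assuming a pinned-pinned strut with the ansatz $w(x) = a \sin(\pi x / L)$ (the ansatz and boundary conditions are illustrative assumptions, not taken from the report):

```latex
% One-term Rayleigh ansatz for the pinned-pinned strut, w(x) = a\sin(\pi x/L):
U = \frac{EI}{2}\int_0^L (w'')^2 \,\mathrm{d}x
  = \frac{EI\,a^2\pi^4}{4L^3},
\qquad
W = \frac{P}{2}\int_0^L (w')^2 \,\mathrm{d}x
  = \frac{P\,a^2\pi^2}{4L}.
```

Setting $U = W$ gives the classical critical load $P_{cr} = \pi^2 EI / L^2$, independent of the amplitude $a$; this is the kind of critical-load calculation the special energy procedures mentioned above are tailored to.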
The application of artificial intelligence to astronomical scheduling problems
NASA Technical Reports Server (NTRS)
Johnston, Mark D.
1992-01-01
Efficient utilization of expensive space- and ground-based observatories is an important goal for the astronomical community; the cost of modern observing facilities is enormous, and the available observing time is much less than the demand from astronomers around the world. The complexity and variety of scheduling constraints and goals has led several groups to investigate how artificial intelligence (AI) techniques might help solve these kinds of problems. The earliest and most successful of these projects was started at Space Telescope Science Institute in 1987 and has led to the development of the Spike scheduling system to support the scheduling of the Hubble Space Telescope (HST). The aim of Spike at STScI is to allocate observations on timescales of days to a week, observing all scheduling constraints and maximizing preferences that help ensure that observations are made at optimal times. Spike has been in use operationally for HST since shortly after the observatory was launched in April 1990. Although developed specifically for HST scheduling, Spike was carefully designed to provide a general framework for similar (activity-based) scheduling problems. In particular, the tasks to be scheduled are defined in the system in general terms, and no assumptions about the scheduling timescale are built in. The mechanisms for describing, combining, and propagating temporal and other constraints and preferences are quite general. The success of this approach has been demonstrated by the application of Spike to the scheduling of other satellite observatories: changes to the system are required only in the specific constraints that apply, and not in the framework itself. In particular, the Spike framework is sufficiently flexible to handle both long-term and short-term scheduling, on timescales of years down to minutes or less. This talk will discuss recent progress made in scheduling search techniques, the lessons learned from early HST operations, the application of Spike
A multi-physics study of Li-ion battery material Li1+xTi2O4
NASA Astrophysics Data System (ADS)
Jiang, Tonghu; Falk, Michael; Siva Shankar Rudraraju, Krishna; Garikipati, Krishna; van der Ven, Anton
2013-03-01
Recently, lithium ion batteries have been subject to intense scientific study due to growing demand arising from their utilization in portable electronics, electric vehicles and other applications. Most cathode materials in lithium ion batteries involve a two-phase process during charging and discharging, and the rate of these processes is typically limited by the slow interface mobility. We have undertaken modeling of how lithium diffusion in the interface region affects the motion of the phase boundary. We have developed a multi-physics computational method suitable for predicting the time evolution of the driven interface. In this method, we calculate formation energies and migration energy barriers by ab initio methods, which are then approximated by cluster expansions. Monte Carlo calculation is further employed to obtain thermodynamic and kinetic information, e.g., anisotropic interfacial energies and mobilities, which are used to parameterize continuum modeling of the charging and discharging processes. We test this methodology on spinel Li1+xTi2O4. Elastic effects are incorporated into the calculations to determine the effect of variations in modulus and strain on stress concentrations and failure modes within the material. We acknowledge support by the National Science Foundation Cyber Discovery and Innovation Program under Award No. 1027765.
Advanced computations of multi-physics, multi-scale effects in beam dynamics
Amundson, J.F.; Macridin, A.; Spentzouris, P.; Stern, E.G.; /Fermilab
2009-01-01
Current state-of-the-art beam dynamics simulations include multiple physical effects and multiple physical length and/or time scales. We present recent developments in Synergia2, an accelerator modeling framework designed for multi-physics, multi-scale simulations. We summarize several recent results in multi-physics beam dynamics, including simulations of three Fermilab accelerators: the Tevatron, the Main Injector and the Debuncher. Early accelerator simulations focused on single-particle dynamics. To a first approximation, the forces on the particles in an accelerator beam are dominated by the external fields due to magnets, RF cavities, etc., so the single-particle dynamics are the leading physical effects. Detailed simulations of accelerators must include collective effects such as the space-charge repulsion of the beam particles, the effects of wake fields in the beam pipe walls and beam-beam interactions in colliders. These simulations require the sort of massively parallel computers that have only become available in recent times. We give an overview of the accelerator framework Synergia2, which was designed to take advantage of the capabilities of modern computational resources and enable simulations of multiple physical effects. We also summarize some recent results utilizing Synergia2 and BeamBeam3d, a tool specialized for beam-beam simulations.
NASA Astrophysics Data System (ADS)
Clark, Martyn; Samaniego, Luis; Freer, Jim
2014-05-01
Multi-model and multi-physics approaches are popular tools in environmental modelling, with many studies focusing on optimally combining output from multiple model simulations to reduce predictive errors and better characterize predictive uncertainty. However, a careful and systematic analysis of different hydrological models reveals that individual models are simply small permutations of a master modeling template, and inter-model differences are overwhelmed by uncertainty in the choice of the parameter values in the model equations. Furthermore, inter-model differences do not explicitly represent the uncertainty in modeling a given process, leading to many situations where different models provide the wrong results for the same reasons. In other cases, the available morphological data do not support the very fine spatial discretization of the landscape that typifies many modern applications of process-based models. To make the uncertainty characterization problem worse, the uncertain parameter values in process-based models are often fixed (hard-coded), and the models lack the agility necessary to represent the tremendous heterogeneity in natural systems. This presentation summarizes results from a systematic analysis of uncertainty in process-based hydrological models, in which we explicitly analyze the myriad subjective decisions made throughout both the model development and parameter estimation process. Results show that much of the uncertainty is aleatory in nature: given a "complete" representation of dominant hydrologic processes, uncertainty in process parameterizations can be represented using an ensemble of model parameters. Epistemic uncertainty associated with process interactions and scaling behavior is still important, and these uncertainties can be represented using an ensemble of different spatial configurations. Finally, uncertainty in forcing data can be represented using ensemble methods for spatial meteorological analysis. Our systematic
A Geospatial Integrated Problem Solving Environment for Homeland Security Applications
Koch, Daniel B
2010-01-01
Effective planning, response, and recovery (PRR) involving terrorist attacks or natural disasters come with a vast array of information needs. Much of the required information originates from disparate sources in widely differing formats. However, one common attribute the information often possesses is physical location. The organization and visualization of this information can be critical to the success of the PRR mission. Organizing information geospatially is often the most intuitive for the user. In the course of developing a field tool for the U.S. Department of Homeland Security (DHS) Office for Bombing Prevention, a geospatial integrated problem solving environment software framework was developed by Oak Ridge National Laboratory. This framework has proven useful as well in a number of other DHS, Department of Defense, and Department of Energy projects. An overview of the software architecture along with application examples is presented.
The application of CFD to rotary wing flow problems
NASA Technical Reports Server (NTRS)
Caradonna, F. X.
1990-01-01
Rotorcraft aerodynamics is especially rich in unsolved problems, and for this reason the need for independent computational and experimental studies is great. Three-dimensional unsteady, nonlinear potential methods are becoming fast enough to enable their use in parametric design studies. At present, combined CAMRAD/FPR analyses for a complete trimmed rotor solution can be performed in about an hour on a CRAY Y-MP (or ten minutes, with multiple processors). These computational speeds indicate that in the near future many of the large CFD problems will no longer require a supercomputer. The ability to convect circulation is routine for integral methods, but only recently was it discovered how to do the same with differential methods. It is clear that the differential CFD rotor analyses are poised to enter the engineering workplace. Integral methods already constitute a mainstay. Ultimately, it is the users who will integrate CFD into the entire engineering process and provide a new measure of confidence in design and analysis. It should be recognized that the above classes of analyses do not include several major limiting phenomena which will continue to require empirical treatment because of computational time constraints and limited physical understanding. Such empirical treatment should be included, however, into the developing CFD, engineering level analyses. It is likely that properly constructed flow models containing corrections from physical testing will be able to fill in unavoidable gaps in the experimental data base, both for basic studies and for specific configuration testing. For these kinds of applications, computational cost is not an issue. Finally, it should be recognized that although rotorcraft are probably the most complex of aircraft, the rotorcraft engineering community is very small compared to the fixed-wing community. Likewise, rotorcraft CFD resources can never achieve fixed-wing proportions and must be used wisely. Therefore the fixed
Matrix iteration method for nonlinear eigenvalue problems with applications
NASA Astrophysics Data System (ADS)
Ram, Y. M.
2016-12-01
A simple and intuitive matrix iteration method for solving nonlinear eigenvalue problems is described and demonstrated in detail by two problems: (i) the boundary value problem associated with large deflection of a flexible rod, and (ii) the initial value problem associated with normal mode motion of a double pendulum. The two problems are solved by two approaches, the finite difference approach and a continuous realization approach which is similar in spirit to the Rayleigh-Ritz method.
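The fixed-point flavor of such a matrix iteration can be sketched as follows; the 2×2 matrices and the smallest-eigenvalue update rule below are illustrative assumptions, not the paper's rod or double-pendulum formulations:

```python
import numpy as np

def nonlinear_eig_iteration(A_of_lam, lam0=0.0, tol=1e-10, maxit=100):
    """Fixed-point matrix iteration for A(lam) v = lam v.
    At each step, freeze lam, solve the resulting linear eigenproblem,
    and update lam with the smallest eigenvalue found."""
    lam = lam0
    for _ in range(maxit):
        w, V = np.linalg.eigh(A_of_lam(lam))   # linear eigenproblem at frozen lam
        lam_new, v = w[0], V[:, 0]
        if abs(lam_new - lam) < tol:           # fixed point reached
            return lam_new, v
        lam = lam_new
    return lam, v

# toy eigenvalue-dependent (symmetric) matrix
A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
A1 = np.diag([0.1, 0.2])
A = lambda lam: A0 + lam * A1

lam, v = nonlinear_eig_iteration(A)
residual = np.linalg.norm(A(lam) @ v - lam * v)
```

The same freeze-and-resolve pattern applies whether the frozen subproblem comes from a finite-difference discretization or a Rayleigh-Ritz expansion.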
Multi-physics nuclear reactor simulator for advanced nuclear engineering education
Yamamoto, A.
2012-07-01
A multi-physics nuclear reactor simulator, intended for use in advanced nuclear engineering education, is being introduced at Nagoya University. The simulator consists of a 'macroscopic' physics simulator and a 'microscopic' physics simulator. The former performs real-time simulation of a whole nuclear power plant. The latter is responsible for more detailed numerical simulations based on sophisticated and precise numerical models, taking into account the plant conditions obtained from the macroscopic physics simulator. Steady-state and kinetics core analyses, fuel mechanical analysis, fluid dynamics analysis, and sub-channel analysis can be carried out in the microscopic physics simulator. Simulation calculations are carried out through a dedicated graphical user interface, and the simulation results, i.e., spatial and temporal behaviors of major plant parameters, are shown graphically. The simulator will provide a bridge between the 'theories' studied in textbooks and the 'physical behaviors' of actual nuclear power plants. (authors)
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied
Ensemble Smoother implemented in parallel for groundwater problems applications
NASA Astrophysics Data System (ADS)
Leyva, E.; Herrera, G. S.; de la Cruz, L. M.
2013-05-01
Data assimilation is a process that links forecasting models and measurements, drawing on the benefits of both sources. The Ensemble Kalman Filter (EnKF) is a sequential data-assimilation method that was designed to address two of the main problems related to the use of the Extended Kalman Filter (EKF) with nonlinear models in large state spaces, i.e., the closure problem and the massive computational requirements associated with the storage and subsequent integration of the error covariance matrix. The EnKF has gained popularity because of its simple conceptual formulation and relative ease of implementation. It has been used successfully in various applications in meteorology and oceanography and, more recently, in petroleum engineering and hydrogeology. The Ensemble Smoother (ES) is a method similar to the EnKF; it was proposed by Van Leeuwen and Evensen (1996). Herrera (1998) proposed a version of the ES which we call the Ensemble Smoother of Herrera (ESH) to distinguish it from the former. It was introduced for space-time optimization of groundwater monitoring networks. In recent years, this method has been used for data assimilation and parameter estimation in groundwater flow and transport models. The ES method uses Monte Carlo simulation, which consists of generating repeated realizations of the random variable considered, using a flow and transport model. However, a large number of model runs is often required for the moments of the variable to converge. Therefore, depending on the complexity of the problem, a serial computer may require many hours of continuous use to apply the ES. For this reason, the process must be parallelized in order to complete it in a reasonable time. In this work we present the results of a parallelization strategy to reduce the execution time for a high number of realizations. The software GWQMonitor by Herrera (1998) implements all the algorithms required for the ESH in Fortran 90. We develop a script in Python using mpi4py, in
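The analysis step shared by ensemble-smoother methods can be sketched in a few lines of NumPy; this is a textbook-style Monte Carlo update, not the ESH or GWQMonitor implementation, and the scalar toy problem at the end is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_smoother_update(X, Y, d, R):
    """One ensemble-smoother analysis step (Monte Carlo form).
    X : (n_state, n_ens) prior ensemble of model states/parameters
    Y : (n_obs, n_ens)   predicted observations for each member
    d : (n_obs,)         observed data
    R : (n_obs, n_obs)   observation-error covariance
    """
    n_ens = X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)      # state anomalies
    Yp = Y - Y.mean(axis=1, keepdims=True)      # predicted-obs anomalies
    Cxy = Xp @ Yp.T / (n_ens - 1)               # cross-covariance
    Cyy = Yp @ Yp.T / (n_ens - 1)               # predicted-obs covariance
    K = Cxy @ np.linalg.inv(Cyy + R)            # Kalman-type gain
    # perturb observations so the analysis ensemble keeps correct spread
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, n_ens).T
    return X + K @ (D - Y)

# toy problem: a scalar state observed directly
X = rng.normal(5.0, 2.0, size=(1, 200))   # prior ensemble centered near 5
Y = X.copy()                               # identity observation operator
d = np.array([1.0])                        # single observation
R = np.array([[0.5]])
Xa = ensemble_smoother_update(X, Y, d, R)
```

Because each ensemble member's forward run is independent, the expensive part (generating Y) is what a parallelization layer such as mpi4py distributes across processes.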
Jacobi elliptic functions: A review of nonlinear oscillatory application problems
NASA Astrophysics Data System (ADS)
Kovacic, Ivana; Cveticanin, Livija; Zukovic, Miodrag; Rakaric, Zvonko
2016-10-01
This review paper is concerned with the applications of Jacobi elliptic functions to nonlinear oscillators whose restoring force has a monomial or binomial form that involves cubic and/or quadratic nonlinearity. First, geometric interpretations of three basic Jacobi elliptic functions are given and their characteristics are discussed. It is shown then how their different forms can be utilized to express exact solutions for the response of certain free conservative oscillators. These forms are subsequently used as a starting point for a presentation of different quantitative techniques for obtaining an approximate response for free perturbed nonlinear oscillators. An illustrative example is provided. Further, two types of externally forced nonlinear oscillators are reviewed: (i) those that are excited by elliptic-type excitations with different exact and approximate solutions; (ii) those that are damped and excited by harmonic excitations, but their approximate response is expressed in terms of Jacobi elliptic functions. Characteristics of the steady-state response are discussed and certain qualitative differences with respect to the classical Duffing oscillator excited harmonically are pointed out. Parametric oscillations of the oscillators excited by an elliptic-type forcing are considered as well, and the differences with respect to the stability chart of the classical Mathieu equation are emphasized. The adjustment of the Melnikov method to derive the general condition for the onset of homoclinic bifurcations in a system parametrically excited by an elliptic-type forcing is provided and compared with those corresponding to harmonic excitations. Advantages and disadvantages of the use of Jacobi elliptic functions in nonlinear oscillatory application problems are discussed and some suggestions for future work are given.
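As a concrete instance of the exact elliptic-function solutions discussed above, the following sketch checks numerically that x(t) = A cn(ωt, m) solves the free conservative Duffing equation x″ + αx + βx³ = 0 when ω² = α + βA² and m = βA²/(2ω²); the parameter values are arbitrary assumptions:

```python
import numpy as np
from scipy.special import ellipj

alpha, beta, A = 1.0, 0.5, 1.0
w2 = alpha + beta * A**2           # w^2 = alpha + beta*A^2
w, m = np.sqrt(w2), beta * A**2 / (2 * w2)

def x(t):
    """Exact free-Duffing response in terms of the Jacobi cn function."""
    sn, cn, dn, ph = ellipj(w * t, m)
    return A * cn

# check the ODE residual with a central finite difference
t = np.linspace(0.1, 5.0, 40)
h = 1e-4
xdd = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
residual = np.max(np.abs(xdd + alpha * x(t) + beta * x(t)**3))

# basic identities relating the three Jacobi elliptic functions
sn, cn, dn, ph = ellipj(np.linspace(0, 4, 50), m)
```

Note that SciPy's `ellipj` takes the parameter m = k² (not the modulus k) and returns sn, cn, dn and the amplitude in that order.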
Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24
This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well
Multi-Physics Markov Chain Monte Carlo Methods for Subsurface Flows
NASA Astrophysics Data System (ADS)
Rigelo, J.; Ginting, V.; Rahunanthan, A.; Pereira, F.
2014-12-01
For CO2 sequestration in deep saline aquifers, contaminant transport in subsurface, and oil or gas recovery, we often need to forecast flow patterns. Subsurface characterization is a critical and challenging step in flow forecasting. To characterize subsurface properties we establish a statistical description of the subsurface properties that are conditioned to existing dynamic and static data. A Markov Chain Monte Carlo (MCMC) algorithm is used in a Bayesian statistical description to reconstruct the spatial distribution of rock permeability and porosity. The MCMC algorithm requires repeatedly solving a set of nonlinear partial differential equations describing displacement of fluids in porous media for different values of permeability and porosity. The time needed for the generation of a reliable MCMC chain using the algorithm can be too long to be practical for flow forecasting. In this work we develop fast and effective computational methods for generating MCMC chains in the Bayesian framework for the subsurface characterization. Our strategy consists of constructing a family of computationally inexpensive preconditioners based on simpler physics as well as on surrogate models such that the number of fine-grid simulations is drastically reduced in the generated MCMC chains. In particular, we introduce a huff-puff technique as a screening step in a three-stage multi-physics MCMC algorithm to reduce the number of expensive final stage simulations. The huff-puff technique in the algorithm enables a better characterization of subsurface near wells. We assess the quality of the proposed multi-physics MCMC methods by considering Monte Carlo simulations for forecasting oil production in an oil reservoir.
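The screening idea behind such multi-stage MCMC can be illustrated with a generic delayed-acceptance Metropolis sampler, where a cheap surrogate density filters proposals before the expensive model is evaluated; the one-dimensional Gaussian target and surrogate here are toy stand-ins, not the paper's flow simulations or huff-puff screening:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_fine(x):      # "expensive" model: exact log-density, here N(0, 1)
    return -0.5 * x**2

def log_coarse(x):    # cheap surrogate with a deliberately wrong variance
    return -0.5 * x**2 / 1.2

def two_stage_mcmc(n, step=1.0):
    """Delayed-acceptance Metropolis: stage 1 screens proposals with the
    surrogate; stage 2 applies a correction with the fine model, so the
    chain still targets the fine-model density exactly."""
    x, chain, fine_calls = 0.0, [], 0
    for _ in range(n):
        y = x + step * rng.standard_normal()
        # stage 1: cheap accept/reject with the surrogate
        if np.log(rng.random()) < log_coarse(y) - log_coarse(x):
            # stage 2: expensive correction step
            fine_calls += 1
            a = (log_fine(y) - log_fine(x)) - (log_coarse(y) - log_coarse(x))
            if np.log(rng.random()) < a:
                x = y
        chain.append(x)
    return np.array(chain), fine_calls

chain, fine_calls = two_stage_mcmc(50_000)
```

Proposals killed at stage 1 never trigger a fine-model run, which is exactly the saving the multi-stage algorithm exploits.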
Allouche, Mohamed Hatem; Bussone, Marco; Giacosa, Fausto; Bernard, Frédéric; Barigou, Mostafa
2017-01-01
We propose a mesh-free and discrete (particle-based) multi-physics approach for modelling the hydrodynamics in flexible biological valves. In the first part of this study, the method is successfully validated against both traditional modelling techniques and experimental data. In the second part, it is further developed to account for the formation of solid aggregates in the flow and at the membrane surface. Simulations of various types of aggregates highlight the main benefits of discrete multi-physics and indicate the potential of this approach for coupling the hydrodynamics with phenomena such as clotting and calcification in biological valves. PMID:28384341
Decompositions of information divergences: Recent development, open problems and applications
NASA Astrophysics Data System (ADS)
Stehlík, M.
2012-11-01
What is the optimal statistical decision, and how is it related to statistical information theory? In trying to answer these difficult questions, we will illustrate the necessity of understanding the structure of information divergences. This may be understood, in particular, through deconvolutions leading to an optimal statistical inference. We will illustrate the deconvolution of an information divergence in the exponential family, which gives us optimal tests (optimal in the sense of Bahadur; see [3, 4]). We discuss the results on the exact density of the I-divergence in the exponential family with gamma distributed observations (see [28]). Since the considered I-divergence is related to the likelihood ratio (LR) statistic, we deal with the exact distribution of the likelihood ratio tests and discuss the optimality of such exact tests. Both tests, the exact LR test of homogeneity and the exact LR test of the scale parameter, are asymptotically optimal in the Bahadur sense when the observations are distributed exponentially. We also discuss decompositions from a broader perspective. We recall the relationship between the f-divergence and statistical information in the sense of DeGroot, which was shown in [17]. We formulate an open problem of its generalization. Applications in reliability testing and hydrological prediction are mentioned.
ERIC Educational Resources Information Center
Docktor, Jennifer L.; Dornfeld, Jay; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Jackson, Koblar Alan; Mason, Andrew; Ryan, Qing X.; Yang, Jie
2016-01-01
Problem solving is a complex process valuable in everyday life and crucial for learning in the STEM fields. To support the development of problem-solving skills it is important for researchers and curriculum developers to have practical tools that can measure the difference between novice and expert problem-solving performance in authentic…
Application of the maximal covering location problem to habitat reserve site selection: a review
Stephanie A. Snyder; Robert G. Haight
2016-01-01
The Maximal Covering Location Problem (MCLP) is a classic model from the location science literature which has found wide application. One important application is to a fundamental problem in conservation biology, the Maximum Covering Species Problem (MCSP), which identifies land parcels to protect to maximize the number of species represented in the selected sites. We...
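The greedy heuristic commonly used for maximal covering problems can be sketched as follows; the unweighted species sets and the tiny instance are illustrative assumptions, not the MCSP instances reviewed above:

```python
def greedy_mclp(cover, p):
    """Greedy heuristic for a Maximum Covering Species Problem:
    choose p sites, each covering a set of species, so as to maximize
    the number of distinct species represented."""
    covered, chosen = set(), []
    for _ in range(p):
        # pick the remaining site that adds the most new species
        best = max((s for s in range(len(cover)) if s not in chosen),
                   key=lambda s: len(cover[s] - covered), default=None)
        if best is None or not (cover[best] - covered):
            break                       # no site adds anything new
        chosen.append(best)
        covered |= cover[best]
    return chosen, covered

# toy instance: 4 candidate parcels, species labelled a-f
sites = [{"a", "b"}, {"b", "c", "d"}, {"d", "e"}, {"a", "e", "f"}]
chosen, covered = greedy_mclp(sites, 2)
```

The greedy rule is not guaranteed optimal for covering problems in general, which is why the MCLP is usually also formulated as an integer program.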
NASA Astrophysics Data System (ADS)
Hong, S.; Park, S. K.; Choi, Y.; Myoung, B.
2013-12-01
As the importance of land surface models (LSMs) has been increasingly magnified due to their pivotal role in the complete Earth environmental system, linking the atmosphere, hydrosphere, and biosphere, modeling accuracy at regional scales has become important to ensure better representations of increased land surface heterogeneities as spatial resolutions increase. However, every model has its own weaknesses, induced by problems such as the fidelity of physical schemes built on uncertain parameterization methods and even structural unreality from simplified model designs. One of the major uncertainties is the interrelationship between implemented physical schemes and its impact on simulation accuracy. Using the new version of the Noah land surface model with multi-physics options (Noah-MP), which enables various scheme combinations to be created, we examined how each scheme in different scheme combinations contributes to better simulations and how their interrelationships vary with uncertain parameter changes. Targeting the long-term (5-year) monthly surface hydrology of the Han River watershed in South Korea, we mainly explored the simulation accuracy of runoff and evapotranspiration, and additionally that of leaf area index, in order to see the vegetation impact on surface water partitioning. The results indicate that the primary contributors to simulation accuracy were the surface heat exchange coefficient schemes. These schemes are very sensitive to vegetation amount due to their different treatment of heat transfer on bare and vegetated surfaces. Showing that further improvement is possible through calibration of uncertain parameters, this study also demonstrated that combining analyses of scheme interrelationships with parameter calibration promises improved model calibration. In addition, revealing remaining uncertainty about the vegetation effect on surface energy and water partitioning, this study also showed that the scheme interrelationship analyses are useful for model
Application of Problem Based Learning through Research Investigation
ERIC Educational Resources Information Center
Beringer, Jason
2007-01-01
Problem-based learning (PBL) is a teaching technique that uses problem-solving as the basis for student learning. The technique is student-centred with teachers taking the role of a facilitator. Its general aims are to construct a knowledge base, develop problem-solving skills, teach effective collaboration and provide the skills necessary to be a…
Problem Based Learning: Application to Technology Education in Three Countries
ERIC Educational Resources Information Center
Williams, P. John; Iglesias, Juan; Barak, Moshe
2008-01-01
An increasing variety of professional educational and training disciplines are now problem based (e.g., medicine, nursing, engineering, community health), and they may have a corresponding variety of educational objectives. However, they all have in common the use of problems in the instructional sequence. The problems may be as diverse as a…
NASA Astrophysics Data System (ADS)
Formosa, F.; Fréchette, L. G.
2015-12-01
An electrical circuit equivalent (ECE) approach has been set up that allows elementary oscillatory microengine components to be modelled. They cover gas channel/chamber thermodynamics, viscosity and thermal effects, mechanical structure and electromechanical transducers. The proposed tool has been validated on a centimeter-scale Free Piston membrane Stirling engine [1]. We propose here new developments taking into account scaling effects to establish models suitable for any microengine. They are based on simplifications derived from comparing the hydraulic radius with the viscous and thermal penetration depths, respectively.
Application of the method of maximum entropy in the mean to classification problems
NASA Astrophysics Data System (ADS)
Gzyl, Henryk; ter Horst, Enrique; Molina, German
2015-11-01
In this note we propose an application of the method of maximum entropy in the mean to solve a class of inverse problems comprising classification problems and feasibility problems appearing in optimization. Such problems may be thought of as linear inverse problems with convex constraints imposed on the solution as well as on the data. The method of maximum entropy in the mean proves to be a very useful tool for dealing with this type of problem.
[Telematics in geriatrics--potentials, problems and application experiences].
Mix, S; Borchelt, M; Nieczaj, R; Trilhof, G; Steinhagen-Thiessen, E
2000-06-01
Modern telecommunication technology (telematics) has the potential to improve the quality of life for elders with physical and mental impairments, as well as for their care-giving relatives. Videophones, internet resources, and multimedia computers can be used to network them together with social workers, nurse practitioners, physicians and therapeutic staff in service centers. This can be viewed as a unique opportunity to establish and maintain instant and personalized access to various medical services in a situation where increasing needs are opposed by decreasing resources. However, it is not yet clear whether telematics is adequate, efficient, and effective in supporting care for geriatric patients. Some studies have already shown its applicability and feasibility, but there are still no larger trials showing that maintenance or enhancement of autonomy can be achieved effectively by using new technologies. This article reviews the literature on telematics in geriatrics and presents data from a tele-rehabilitation project ("TeleReha", conducted at the Berlin Geriatric Center) which comprised mobility-impaired patients (N = 13, mean age 72 yrs), care-giving relatives (N = 8), and geriatric professionals. Networking was established using ISDN technology with videophones or PC-based videoconferencing systems. Results showed that participants regard telecommunication devices as a valuable resource for their informational and communicational needs. Use of the telecommunication systems was inversely related to physical mobility. Having access to professional service and counselling was rated as highly important, but so was the opportunity to establish reliable contacts with non-professionals (relatives, other participants). Despite the technical problems experienced, use of the telecommunication systems was evaluated more positively in the post-test than in the pre-test. In summary, current experience suggests that telematics can be used efficiently by geriatric patients and by
NASA Astrophysics Data System (ADS)
Gasymov, E. A.; Guseinova, A. O.; Gasanova, U. N.
2016-07-01
One of the methods for solving mixed problems is the classical separation of variables (the Fourier method). If the boundary conditions of the mixed problem are irregular, this method, generally speaking, is not applicable. In the present paper, a generalized separation of variables and a way of applying this method to some mixed problems with irregular boundary conditions are proposed. An analytical representation of the solution to this irregular mixed problem is obtained.
An Application of Wedelin's Method to Railway Crew Scheduling Problem
NASA Astrophysics Data System (ADS)
Miura, Rei; Imaizumi, Jun; Fukumura, Naoto; Morito, Susumu
Many scheduling problems arise in the railway industry. One typical scheduling problem is the Crew Scheduling Problem. Much attention has been paid to this problem by many researchers, but few studies have addressed the problems of the railway industry in Japan. In this paper, we consider a railway crew scheduling problem in Japan. The problem can be formulated as a Set Covering Problem (SCP). In an SCP, a row corresponds to a trip representing a minimal task and a column corresponds to a pairing representing a sequence of trips performed by a certain crew. Many algorithms have been developed and proposed for it. On the other hand, in practical use, it is important to investigate how these algorithms behave and work on a given problem. Therefore, we focus on Wedelin's algorithm, which is based on Lagrangian relaxation and is known as one of the high-performance algorithms for the SCP, and mainly examine the basic idea of this algorithm. Furthermore, we show the effectiveness of this procedure through computational experiments on instances from a Japanese railway.
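The Lagrangian relaxation underlying Wedelin-style approaches to the SCP can be sketched with a plain subgradient lower bound plus a greedy primal heuristic; this is not Wedelin's actual dual-ascent scheme, and the toy trip/pairing instance is an assumption:

```python
def lagrangian_lower_bound(costs, covers, n_rows, iters=200):
    """Subgradient optimization of the Lagrangian dual of a set covering
    problem: relax the covering constraints with multipliers u and take
    any column whose reduced cost goes negative."""
    u = [0.0] * n_rows                        # dual prices, one per trip
    best_lb = float("-inf")
    for k in range(1, iters + 1):
        red = [c - sum(u[i] for i in S) for c, S in zip(costs, covers)]
        x = [r < 0 for r in red]              # relaxed primal solution
        best_lb = max(best_lb, sum(u) + sum(r for r in red if r < 0))
        # subgradient: 1 minus coverage of each row by selected columns
        g = [1 - sum(x[j] for j, S in enumerate(covers) if i in S)
             for i in range(n_rows)]
        u = [max(0.0, ui + gi / k) for ui, gi in zip(u, g)]
    return best_lb

def greedy_cover(costs, covers, n_rows):
    """Greedy primal heuristic: repeatedly take the column with the
    best cost per newly covered row."""
    uncovered, total = set(range(n_rows)), 0
    while uncovered:
        j = min((j for j in range(len(costs)) if covers[j] & uncovered),
                key=lambda j: costs[j] / len(covers[j] & uncovered))
        total += costs[j]
        uncovered -= covers[j]
    return total

# toy instance: 4 trips, 5 candidate pairings (cost, set of trips covered)
costs = [3, 3, 3, 3, 5]
covers = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {0, 1, 2, 3}]
lb = lagrangian_lower_bound(costs, covers, 4)
ub = greedy_cover(costs, covers, 4)
```

Bracketing the optimum between the dual lower bound and a feasible upper bound is the standard way such algorithms certify solution quality on crew-scheduling instances.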
Periodically specified satisfiability problems with applications: An alternative to domino problems
Marathe, M.V.; Hunt, H.B., III; Rosenkrantz, D.J.; Stearns, R.E.; Radhakrishnann, V.
1995-12-31
We characterize the complexities of several basic generalized CNF satisfiability problems SAT(S), when instances are specified using various kinds of 1- and 2-dimensional periodic specifications. We outline how this characterization can be used to prove a number of new hardness results for the complexity classes DSPACE(n), NSPACE(n), DEXPTIME, NEXPTIME, EXPSPACE etc. The hardness results presented significantly extend the known hardness results for periodically specified problems. Several advantages are outlined of the use of periodically specified satisfiability problems over the use of domino problems in proving both hardness and easiness results. As one corollary, we show that a number of basic NP-hard problems become EXPSPACE-hard when inputs are represented using 1-dimensional infinite periodic wide specifications. This answers a long-standing open question posed by Orlin.
Bhardwaj, M.; Day, D.; Farhat, C.; Lesoinne, M; Pierson, K.; Rixen, D.
1999-04-01
We report on the application of the one-level FETI method to the solution of a class of substructural problems associated with the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). We focus on numerical and parallel scalability issues, and on preliminary performance results obtained on the ASCI Option Red supercomputer configured with as many as one thousand processors, for problems with as many as 5 million degrees of freedom.
Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes
NASA Astrophysics Data System (ADS)
Piro, Markus Hans Alexander
Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking in effect is collecting the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system
Application of TRIZ approach to machine vibration condition monitoring problems
NASA Astrophysics Data System (ADS)
Cempel, Czesław
2013-12-01
Up to now, machine condition monitoring has not been seriously approached by users of TRIZ (the Russian acronym for the theory of inventive problem solving, created by G. Altshuller ca. 50 years ago), and TRIZ methodology has not been applied intensively in this field. There are, however, some introductory papers by the present author presented at the Diagnostic Congress in Cracow (Cempel, in press [11]) and in the Diagnostyka Journal. But there seems to be a further need to approach the problem from different sides in order to see whether new knowledge and technology will emerge. In doing this, we need first to define the ideal final result (IFR) of our innovation problem. Next, we need a set of parameters to describe the problems of system condition monitoring (CM) in the TRIZ language, and a set of inventive principles that can be applied on the way to the IFR. This means we should present the machine CM problem by means of contradictions and the contradiction matrix. When specifying the problem parameters and inventive principles, one should use analogy and metaphorical thinking, which by definition is not exact but fuzzy, and sometimes leads to unexpected results and outcomes. The paper takes up this important problem again and brings some new insight into system and machine CM problems, for example the minimal dimensionality of the TRIZ engineering parameter set needed to describe machine CM problems, and the set of inventive principles most useful for a given engineering parameter and the contradictions of TRIZ.
Applications of decision analysis and related techniques to industrial engineering problems at KSC
NASA Technical Reports Server (NTRS)
Evans, Gerald W.
1995-01-01
This report provides: (1) a discussion of the origination of decision analysis problems (well-structured problems) from ill-structured problems; (2) a review of the various methodologies and software packages for decision analysis and related problem areas; (3) a discussion of how the characteristics of a decision analysis problem affect the choice of modeling methodologies, thus providing a guide as to when to choose a particular methodology; and (4) examples of applications of decision analysis to particular problems encountered by the IE Group at KSC. With respect to the specific applications at KSC, particular emphasis is placed on the use of the Demos software package (Lumina Decision Systems, 1993).
Harmony search algorithm: application to the redundancy optimization problem
NASA Astrophysics Data System (ADS)
Nahas, Nabil; Thien-My, Dao
2010-09-01
The redundancy optimization problem is a well-known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) for solving this optimization problem. The HSA is a recent nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm. The first is limited to the binary series-parallel system, where the problem consists of selecting elements and redundancy levels to maximize system reliability under various system-level constraints; the second concerns multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results showed that the HSA can provide very good solutions when compared with those obtained through other approaches.
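The improvisation loop at the core of HSA is compact enough to sketch. The following is a hypothetical, minimal continuous-variable illustration on a toy objective, not the article's binary redundancy formulation; the parameter names (`hmcr` for memory-considering rate, `par` for pitch-adjusting rate) and their values are common textbook choices, not taken from the paper.

```python
import random

def harmony_search(obj, dim, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Minimal harmony search: evolve a small memory of candidate 'harmonies'."""
    rng = random.Random(seed)
    lo, hi = bounds
    # initialise the harmony memory with random solutions
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [obj(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # draw from memory
                x = hm[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    x += rng.uniform(-0.05, 0.05) * (hi - lo)
                x = min(hi, max(lo, x))
            else:                                    # random improvisation
                x = rng.uniform(lo, hi)
            new.append(x)
        s = obj(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                        # replace the worst harmony
            hm[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return hm[best], scores[best]

hs_best, hs_val = harmony_search(lambda x: sum(v * v for v in x),
                                 dim=3, bounds=(-5.0, 5.0))
```

Each improvisation draws every decision variable either from the harmony memory (optionally pitch-adjusted) or at random, and replaces the worst stored harmony whenever it improves on it.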
Coupling between a multi-physics workflow engine and an optimization framework
NASA Astrophysics Data System (ADS)
Di Gallo, L.; Reux, C.; Imbeaux, F.; Artaud, J.-F.; Owsiak, M.; Saoutic, B.; Aiello, G.; Bernardi, P.; Ciraolo, G.; Bucalossi, J.; Duchateau, J.-L.; Fausser, C.; Galassi, D.; Hertout, P.; Jaboulay, J.-C.; Li-Puma, A.; Zani, L.
2016-03-01
A generic coupling method between a multi-physics workflow engine and an optimization framework is presented in this paper. The coupling architecture has been developed in order to preserve the integrity of the two frameworks. The objective is to make it possible to replace a framework, a workflow or an optimizer by another one without changing the whole coupling procedure or modifying the main content of each framework. The coupling is achieved by using a socket-based communication library for exchanging data between the two frameworks. Among the algorithms provided by optimization frameworks, Genetic Algorithms (GAs) have demonstrated their efficiency on single- and multiple-criteria optimization. In addition to their robustness, GAs can handle non-valid data which may appear during the optimization; consequently, GAs work in the most general cases. A parallelized framework has been developed to reduce the time spent on optimizations and on the evaluation of large samples. A test has shown good scaling efficiency of this parallelized framework. This coupling method has been applied to the case of SYCOMORE (SYstem COde for MOdeling tokamak REactor), a system code developed in the form of a modular workflow for designing magnetic fusion reactors. The coupling of SYCOMORE with the optimization platform URANIE enables design optimization along various figures of merit and constraints.
A multi-physical model for charge and mass transport in a flexible ionic polymer sensor
NASA Astrophysics Data System (ADS)
Zhu, Zicai; Asaka, Kinji; Takagi, Kentaro; Aabloo, Alvo; Horiuchi, Tetsuya
2016-04-01
An ionic polymer material can generate an electrical potential and function as a bio-sensor under non-uniform deformation. Ionic polymer-metal composite (IPMC) is a typical flexible ionic polymer sensor material. A multi-physical sensing model is first presented, based on the same physical equations as in the model for the IPMC actuator we obtained before. Under an applied bending deformation, water and cations immediately migrate toward the outer electrode. The redistribution of cations causes an electrical potential difference between the two electrodes. The cation migration is strongly restrained by the generated electrical potential, and the migrated cations then move back toward the inner electrode under the concentration diffusion effect, leading to a relaxation of the electrical potential. Over the whole sensing process, the transport and redistribution of charge and mass along the thickness direction are revealed by numerical analysis. The sensing process is a reversed physical process of actuation; however, its transport properties are quite different from those of the latter. The effective dielectric constant of IPMC, which is related to the morphology of the electrode-ionic polymer interface, is shown to have little influence on the sensing amplitude. These conclusions are significant for the design of ionic polymer sensing materials.
NASA Astrophysics Data System (ADS)
Ma, Z.; Hou, Z.; Zang, X.
2015-09-01
As a large-scale flexible inflatable structure with a huge inner lifting-gas volume of several hundred thousand cubic meters, the stratospheric airship's inner-gas thermal characteristics play an important role in its structural performance. During floating flight, the day-night variation of the combined thermal condition leads to fluctuation of the flow field inside the airship, which remarkably affects the pressure acting on the skin and the structural safety of the stratospheric airship. According to the multi-physics coupling mechanism mentioned above, a numerical procedure for structural safety analysis of stratospheric airships is developed, integrating the thermal model, CFD model, finite element code and a criterion of structural strength. Based on the computational models, the distributions of the deformations and stresses of the skin are calculated as day-night time varies. The effects of load conditions and structural configurations on the structural safety of stratospheric airships in the floating condition are evaluated. The numerical results can be used as a reference for the structural design of stratospheric airships.
Propagation of neutron-reaction uncertainties through multi-physics models of novel LWR's
NASA Astrophysics Data System (ADS)
Hernandez-Solis, Augusto; Sjöstrand, Henrik; Helgesson, Petter
2017-09-01
The novel design of the renewable boiling water reactor (RBWR) allows a breeding ratio greater than unity and thus, it aims at providing for a self-sustained fuel cycle. The neutron reactions that compose the different microscopic cross-sections and angular distributions are uncertain, so when they are employed in the determination of the spatial distribution of the neutron flux in a nuclear reactor, a methodology should be employed to account for these associated uncertainties. In this work, the Total Monte Carlo (TMC) method is used to propagate the different neutron-reactions (as well as angular distributions) covariances that are part of the TENDL-2014 nuclear data (ND) library. The main objective is to propagate them through coupled neutronic and thermal-hydraulic models in order to assess the uncertainty of important safety parameters related to multi-physics, such as peak cladding temperature along the axial direction of an RBWR fuel assembly. The objective of this study is to quantify the impact that ND covariances of important nuclides such as U-235, U-238, Pu-239 and the thermal scattering of hydrogen in H2O have in the deterministic safety analysis of novel nuclear reactors designs.
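The Total Monte Carlo idea can be illustrated independently of any reactor code: sample the uncertain nuclear-data input, rerun the model once per sample, and read the output uncertainty off the resulting distribution. The toy model and all numbers below are invented purely for illustration.

```python
import random
import statistics

rng = random.Random(0)

def model(sigma):
    # stand-in for one coupled neutronic / thermal-hydraulic run: maps a
    # normalized cross-section sample to a hypothetical peak cladding
    # temperature via a completely made-up linearized response
    return 600.0 + 40.0 * (sigma - 1.0)

# one model evaluation per random nuclear-data realisation, as in TMC
samples = [model(rng.gauss(1.0, 0.05)) for _ in range(10_000)]
mean = statistics.fmean(samples)
spread = statistics.stdev(samples)   # output uncertainty induced by the input covariance
```

The price of the method is one full model run per sample, which is why it pairs naturally with the coupled-model machinery the abstract describes.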
Application of nonlinear Krylov acceleration to radiative transfer problems
Till, A. T.; Adams, M. L.; Morel, J. E.
2013-07-01
The iterative solution technique used for radiative transfer is normally nested, with outer thermal iterations and inner transport iterations. We implement a nonlinear Krylov acceleration (NKA) method in the PDT code for radiative transfer problems that breaks nesting, resulting in more thermal iterations but significantly fewer total inner transport iterations. Using the metric of total inner transport iterations, we investigate a crooked-pipe-like problem and a pseudo-shock-tube problem. Using only sweep preconditioning, we compare NKA against a typical inner / outer method employing GMRES / Newton and find NKA to be comparable or superior. Finally, we demonstrate the efficacy of applying diffusion-based preconditioning to grey problems in conjunction with NKA. (authors)
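NKA is closely related to Anderson mixing of fixed-point iterates: keep a short window of recent residuals and take the linear combination that minimizes the predicted residual. The sketch below shows that generic idea on a toy linear fixed-point problem; it is an assumption-laden illustration of the acceleration technique, not the PDT implementation.

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, maxit=50):
    """Anderson-type acceleration of the fixed-point iteration x <- g(x)."""
    x = np.asarray(x0, float)
    X, F = [], []                        # short histories of g-values and residuals
    for k in range(maxit):
        gx = g(x)
        f = gx - x                       # residual f(x) = g(x) - x
        if np.linalg.norm(f) < tol:
            return x, k
        X.append(gx); F.append(f)
        if len(F) > m:                   # limited memory window, as in NKA
            X.pop(0); F.pop(0)
        if len(F) == 1:
            x = gx                       # plain Picard step to start
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
            # least-squares mixing coefficients minimising the predicted residual
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dX @ gamma
    return x, maxit

# toy "outer" fixed-point map: a linear contraction x <- A x + b
A = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([1.0, 2.0])
x, its = anderson(lambda v: A @ v + b, np.zeros(2))
```

On a linear problem this window-limited mixing reproduces a Krylov-subspace solve, which is the intuition for why breaking the outer/inner nesting can still keep the total number of transport sweeps low.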
Application of remote sensing to state and regional problems
NASA Technical Reports Server (NTRS)
Bouchillon, C. W.; Miller, W. F.; Landphair, H.; Zitta, V. L.
1974-01-01
The use of remote sensing techniques to help the state of Mississippi recognize and solve its environmental, resource, and socio-economic problems through inventory, analysis, and monitoring is suggested.
Application of genetics knowledge to the solution of pedigree problems
NASA Astrophysics Data System (ADS)
Hackling, Mark W.
1994-12-01
This paper reports on a study of undergraduate genetics students' conceptual and procedural knowledge and how that knowledge influences students' success in pedigree problem solving. Findings indicate that many students lack the knowledge needed to test hypotheses relating to X-linked modes of inheritance using either patterns of inheritance or genotypes. Case study data illustrate how these knowledge deficiencies acted as an impediment to correct and conclusive solutions of pedigree problems.
Application of remote sensing to hydrological problems and floods
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.
1983-01-01
The main applications of remote sensors to hydrology are identified, as well as the principal spectral bands and their advantages and disadvantages. Some examples of LANDSAT data applications to flooding-risk evaluation are cited. Because hydrology studies the amount of moisture and water involved in each phase of the hydrological cycle, remote sensing must be emphasized as a technique for hydrological data acquisition.
Publication misrepresentation among neurosurgery residency applicants: an increasing problem.
Kistka, Heather M; Nayeri, Arash; Wang, Li; Dow, Jamie; Chandrasekhar, Rameela; Chambless, Lola B
2016-01-01
OBJECT Misrepresentation of scholarly achievements is a recognized phenomenon, well documented in numerous fields, yet the accuracy of reporting remains dependent on the honor principle. Therefore, honest self-reporting is of paramount importance to maintain scientific integrity in neurosurgery. The authors had observed a trend toward increasing numbers of publications among applicants for neurosurgery residency at Vanderbilt University and undertook this study to determine whether this change was a result of increased academic productivity, inflated reporting, or both. They also aimed to identify application variables associated with inaccurate citations. METHODS The authors retrospectively reviewed the residency applications submitted to their neurosurgery department in 2006 (n = 148) and 2012 (n = 194). The applications from 2006 were made via SF Match and those from 2012 were made using the Electronic Residency Application Service. Publications reported as "accepted" or "in press" were verified via online search of Google Scholar, PubMed, journal websites, and direct journal contact. Works were considered misrepresented if they did not exist, incorrectly listed the applicant as first author, or were incorrectly listed as peer reviewed or published in a printed journal rather than an online only or non-peer-reviewed publication. Demographic data were collected, including applicant sex, medical school ranking and country, advanced degrees, Alpha Omega Alpha membership, and USMLE Step 1 score. Zero-inflated negative binomial regression was used to identify predictors of misrepresentation. RESULTS Using univariate analysis, between 2006 and 2012 the percentage of applicants reporting published works increased significantly (47% vs 97%, p < 0.001). However, the percentage of applicants with misrepresentations (33% vs 45%) also increased. In 2012, applicants with a greater total of reported works (p < 0.001) and applicants from unranked US medical schools (those not
Application of AN Asymptotic Method to Transient Dynamic Problems
NASA Astrophysics Data System (ADS)
Fafard, M.; Henchi, K.; Gendron, G.; Ammar, S.
1997-11-01
A new method to solve linear dynamics problems using an asymptotic method is presented. Asymptotic methods have been used efficiently for many decades to solve non-linear quasistatic structural problems. Generally, structural dynamics problems are solved using finite elements for the discretization of the space domain of the differential equations, and explicit or implicit schemes for the time domain. With the asymptotic method, time schemes are not necessary to solve the discretized (space) equations. Using the analytical solution of a single-degree-of-freedom (DOF) problem, it is demonstrated that the Dynamic Asymptotic Method (DAM) converges to the exact solution when an infinite series expansion is used. The stability of the method has been studied: DAM is conditionally stable for a finite series expansion and unconditionally stable for an infinite series expansion. The method is similar to the analytical method of undetermined coefficients or to the power-series method used to solve ordinary differential equations. For a multi-degree-of-freedom (MDOF) problem with a lumped mass matrix, no factorization or explicit inversion of global matrices is necessary. It is shown that this conditionally stable method is more efficient than other conditionally stable explicit central-difference integration techniques. The solution is continuous irrespective of the time segment (step), and the derivatives are continuous up to order N-1, where N is the order of the series expansion.
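For the single-DOF free vibration x'' = -ω²x, substituting a power series x(t) = Σ cₙtⁿ gives the recurrence c_{n+2} = -ω² cₙ / ((n+1)(n+2)), so a time step needs no integration scheme at all, only coefficient generation and series evaluation. A minimal hypothetical sketch of this idea (not the authors' DAM code):

```python
import math

def series_step(x0, v0, omega2, dt, order=12):
    """Advance x'' = -omega2 * x by one step with a truncated Taylor series."""
    c = [x0, v0]                                   # coefficients of x(t) about t = 0
    for n in range(order - 1):
        c.append(-omega2 * c[n] / ((n + 1) * (n + 2)))
    x = sum(cn * dt**n for n, cn in enumerate(c))
    v = sum(n * cn * dt**(n - 1) for n, cn in enumerate(c) if n > 0)
    return x, v

# free vibration with x(0) = 1, v(0) = 0 and omega = 2: exact solution cos(2t)
x, v = 1.0, 0.0
for _ in range(100):                               # march to t = 5 in steps of 0.05
    x, v = series_step(x, v, omega2=4.0, dt=0.05)
```

Because the series is re-expanded about each new state, the truncation error per step scales like (ω·dt)^(order+1), which is why a modest order already reproduces the exact solution to near machine precision here.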
Tangent bundle geometry from dynamics: Application to the Kepler problem
NASA Astrophysics Data System (ADS)
Cariñena, J. F.; Clemente-Gallardo, J.; Jover-Galtier, J. A.; Marmo, G.
In this paper, we consider a manifold with a dynamical vector field and enquire about the possible tangent bundle structures which would turn the starting vector field into a second-order one. The analysis is restricted to manifolds which are diffeomorphic with affine spaces. In particular, we consider the problem in connection with conformal vector fields of second-order and apply the procedure to vector fields conformally related with the harmonic oscillator (f-oscillators). We select one which covers the vector field describing the Kepler problem.
Application of decentralized cooperative problem solving in dynamic flexible scheduling
NASA Astrophysics Data System (ADS)
Guan, Zai-Lin; Lei, Ming; Wu, Bo; Wu, Ya; Yang, Shuzi
1995-08-01
The object of this study is to discuss an intelligent solution to the problem of task-allocation in shop floor scheduling. For this purpose, the technique of distributed artificial intelligence (DAI) is applied. Intelligent agents (IAs) are used to realize decentralized cooperation, and negotiation is realized by using message passing based on the contract net model. Multiple agents, such as manager agents, workcell agents, and workstation agents, make game-like decisions based on multiple criteria evaluations. This procedure of decentralized cooperative problem solving makes local scheduling possible. And by integrating such multiple local schedules, dynamic flexible scheduling for the whole shop floor production can be realized.
The Application of Physical Organic Chemistry to Biochemical Problems.
ERIC Educational Resources Information Center
Westheimer, Frank
1986-01-01
Presents the synthesis of the science of enzymology from application of the concepts of physical organic chemistry from a historical perspective. Summarizes enzyme and coenzyme mechanisms elucidated prior to 1963. (JM)
Application of remote sensing to state and regional problems
NASA Technical Reports Server (NTRS)
Miller, W. F.; Clark, J. R.; Solomon, J. L.; Duffy, B.; Minchew, K.; Wright, L. H. (Principal Investigator)
1981-01-01
The objectives, accomplishments, and future plans of several LANDSAT applications projects in Mississippi are discussed. The applications include land use planning in Lowandes County, strip mine inventory and reclamation, white tailed deer habitat evaluation, data analysis support systems, discrimination of forest habitats in potential lignite areas, changes in gravel operations, and determination of freshwater wetlands for inventory and monitoring. In addition, a conceptual design for a LANDSAT based information system is discussed.
Application of remote sensing to state and regional problems. [Mississippi
NASA Technical Reports Server (NTRS)
Miller, W. F.; Carter, B. D.; Solomon, J. L.; Williams, S. G.; Powers, J. S.; Clark, J. R. (Principal Investigator)
1980-01-01
Progress is reported in the following areas: remote sensing applications to land use planning Lowndes County, applications of LANDSAT data to strip mine inventory and reclamation, white tailed deer habitat evaluation using LANDSAT data, remote sensing data analysis support system, and discrimination of unique forest habitats in potential lignite areas of Mississippi. Other projects discussed include LANDSAT change discrimination in gravel operations, environmental impact modeling for highway corridors, and discrimination of fresh water wetlands for inventory and monitoring.
Applications of parallel global optimization to mechanics problems
NASA Astrophysics Data System (ADS)
Schutte, Jaco Francois
Global optimization of complex engineering problems, with a high number of variables and local minima, requires sophisticated algorithms with global search capabilities and high computational efficiency. With the growing availability of parallel processing, it makes sense to address these requirements by increasing the parallelism in optimization strategies. This study proposes three methods of concurrent processing. The first method entails exploiting the structure of population-based global algorithms such as the stochastic Particle Swarm Optimization (PSO) algorithm and the Genetic Algorithm (GA). As a demonstration of how such an algorithm may be adapted for concurrent processing we modify and apply the PSO to several mechanical optimization problems on a parallel processing machine. Desirable PSO algorithm features such as insensitivity to design variable scaling and modest sensitivity to algorithm parameters are demonstrated. A second approach to parallelism and improving algorithm efficiency is by utilizing multiple optimizations. With this method a budget of fitness evaluations is distributed among several independent sub-optimizations in place of a single extended optimization. Under certain conditions this strategy obtains a higher combined probability of converging to the global optimum than a single optimization which utilizes the full budget of fitness evaluations. The third and final method of parallelism addressed in this study is the use of quasiseparable decomposition, which is applied to decompose loosely coupled problems. This yields several sub-problems of lesser dimensionality which may be concurrently optimized with reduced effort.
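The population-based structure that makes PSO attractive for concurrent processing — within an iteration, every particle's fitness evaluation is independent — is visible even in a serial sketch. The toy version below uses a sphere objective and conventional parameter values (inertia `w`, cognitive/social weights `c1`, `c2`); it is an illustration of the algorithm, not the dissertation's parallel implementation.

```python
import random

def pso(obj, dim, bounds, n=20, iters=300, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer (serial; each particle is independent per iteration)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [x[:] for x in xs]                    # personal bests
    pbv = [obj(x) for x in xs]
    g = min(range(n), key=lambda i: pbv[i])
    gb, gbv = pb[g][:], pbv[g]                 # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pb[i][d] - xs[i][d])
                            + c2 * rng.random() * (gb[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            f = obj(xs[i])                     # the parallelizable step
            if f < pbv[i]:
                pb[i], pbv[i] = xs[i][:], f
                if f < gbv:
                    gb, gbv = xs[i][:], f
    return gb, gbv

pso_best, pso_val = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```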
Problem Analysis: Application in Developing Marketing Strategies for Colleges.
ERIC Educational Resources Information Center
Martin, John; Moore, Thomas
1991-01-01
The problem analysis technique can help colleges understand students' salient needs in a competitive market. A preliminary study demonstrates the usefulness of the approach for developing strategies aimed at maintaining student loyalty and improving word-of-mouth promotion to other prospective students. (Author/MSE)
Application of firefly algorithm to the dynamic model updating problem
NASA Astrophysics Data System (ADS)
Shabbir, Faisal; Omenzetter, Piotr
2015-04-01
Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multi-dimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention over the past decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem; to the authors' best knowledge, this is the first time FA has been applied to model updating. The working of FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned a brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model using FA. The algorithm aimed at minimizing the difference between the natural frequencies and mode shapes of the structure. The performance of the algorithm is analyzed in finding the optimal solution in a multi-dimensional search space. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
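The brightness-and-attraction rule of FA can be sketched on a toy two-variable objective standing in for the frequency/mode-shape misfit. The attractiveness `beta0`, absorption `gamma` and annealed random-walk weight `alpha` below are conventional textbook values, not those used in the study.

```python
import math
import random

def firefly(obj, dim, bounds, n=15, iters=200, beta0=1.0, gamma=1.0,
            alpha=0.2, seed=2):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter ones."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fs = [obj(x) for x in xs]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fs[j] < fs[i]:        # j is brighter (lower misfit): attract i
                    r2 = sum((xs[i][d] - xs[j][d]) ** 2 for d in range(dim))
                    beta = beta0 * math.exp(-gamma * r2)   # distance-dimmed attraction
                    for d in range(dim):
                        step = alpha * (rng.random() - 0.5) * (hi - lo)
                        xs[i][d] += beta * (xs[j][d] - xs[i][d]) + step
                        xs[i][d] = min(hi, max(lo, xs[i][d]))
                    fs[i] = obj(xs[i])
        alpha *= 0.97                    # anneal the random walk over iterations
    b = min(range(n), key=lambda i: fs[i])
    return xs[b], fs[b]

fa_best, fa_val = firefly(lambda x: x[0] ** 2 + x[1] ** 2, dim=2, bounds=(-5.0, 5.0))
```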
APPLICATIONS OF RESEARCH TO THE PROBLEM OF INSTRUCTIONAL FLEXIBILITY.
ERIC Educational Resources Information Center
SARTAIN, HARRY W.
Selected research on the problem of instructional flexibility is surveyed and discussed. Broad topics of discussion are departmentalization, homogeneous sectioning, interclass ability sectioning, the extent of variability in reading development, and practices that may increase flexibility. Among those practices to increase flexibility are team…
Constructive field theory and applications: Perspectives and open problems
NASA Astrophysics Data System (ADS)
Rivasseau, V.
2000-06-01
In this paper we review many interesting open problems in mathematical physics which may be attacked with the help of tools from constructive field theory. They could give work for future mathematical physicists trained with constructive methods well into the 21st century.
Application of Group Theory to Some Problems in Atomic Physics.
NASA Astrophysics Data System (ADS)
Suskin, Mark Albert
This work comprises three problems, each of which lends itself to investigation via the theory of groups and group representations. The first problem is to complete a set of operators used in the fitting of atomic energy levels of atoms whose ground configuration is f ^ 3. The role of group theory in the labelling of these operators and in their construction is explained. Values of parameters associated with a subset of the operators are also calculated via their group labels. The second problem is to explain the term inversion that occurs between states of the configuration of two equivalent electrons and certain of the states of the half-filled shell. This leads to generalizations that make it possible to investigate correspondences between matrix elements of effective operators taken between states of other configurations besides the two mentioned. This is made possible through the notion of quasispin. The third problem is the construction of recoupling coefficients for groups other than SO(3). Questions of phase convention and Kronecker-product multiplicities are taken up. Several methods of calculation are given and their relative advantages discussed. Tables of values of the calculated 6-j symbols are provided.
Application of University Resources to Local Government Problems. Final Report.
ERIC Educational Resources Information Center
Shamblin, James E.; And Others
The report details the results of a unique experimental demonstration of applying university resources to local government problems. Faculty-student teams worked with city and county personnel on projects chosen by mutual agreement, including work in areas of traffic management, law enforcement, waste heat utilization, solid waste conversion, and…
The Application of Problem Based Learning to Distance Education.
ERIC Educational Resources Information Center
Ostwald, M. J.; And Others
Since 1991, the problem-based learning (PBL) approach has been incorporated into the distance education program culminating in a Bachelor of Building degree from the Faculty of Architecture at the University of Newcastle, Australia. The Newcastle conceptual PBL model for on-campus courses was adapted to the special needs of distance learners. The…
On mean value iterations with application to variational inequality problems
Yao, Jen-Chih.
1989-12-01
In this report, we show that in a Hilbert space, a mean value iterative process generated by a continuous quasi-nonexpansive mapping always converges to a fixed point of the mapping without any precondition. We then employ this result to obtain approximating solutions to the variational inequality and the generalized complementarity problems. 7 refs.
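A mean value (Krasnoselskii-Mann) iteration averages each iterate with its image, x_{n+1} = (x_n + T(x_n))/2. The sketch below shows why the averaging matters: for a nonexpansive map such as a plane rotation, plain Picard iteration x ← T(x) cycles forever, while the averaged process converges to the fixed point. This is a finite-dimensional toy, not the Hilbert-space generality of the report.

```python
def mean_value_iteration(T, x0, iters=200):
    """Krasnoselskii-Mann mean value iteration x_{n+1} = (x_n + T(x_n)) / 2."""
    x = list(x0)
    for _ in range(iters):
        tx = T(x)
        x = [(a + b) / 2 for a, b in zip(x, tx)]
    return x

# T: rotation by 90 degrees, a nonexpansive map whose only fixed point is the
# origin. Picard iteration x <- T(x) cycles through four points forever; the
# averaged iteration contracts by a factor sqrt(2)/2 per step.
rot = lambda p: [-p[1], p[0]]
xm = mean_value_iteration(rot, [1.0, 1.0])
```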
NASA Astrophysics Data System (ADS)
Corrado, Cesare; Gerbeau, Jean-Frédéric; Moireau, Philippe
2015-02-01
This work addresses the inverse problem of electrocardiography from a new perspective, by combining electrical and mechanical measurements. Our strategy relies on the definition of a model of the electromechanical contraction which is registered on ECG data but also on measured mechanical displacements of the heart tissue, typically extracted from medical images. In this respect, we establish in this work the convergence of a sequential estimator which, for such coupled problems, combines various state-of-the-art sequential data assimilation methods in a unified, consistent and efficient framework. Indeed, we aggregate a Luenberger observer for the mechanical state, a Reduced-Order Unscented Kalman Filter applied to the parameters to be identified, and a POD projection of the electrical state. Then, using synthetic data, we show the benefits of our approach for the estimation of the electrical state of the ventricles along the heart beat, compared with more classical strategies which only consider an electrophysiological model with ECG measurements. Our numerical results show that the mechanical measurements improve the identifiability of the electrical problem, allowing the electrical state of the coupled system to be reconstructed more precisely. This work is therefore intended as a first proof of concept, with theoretical justifications and numerical investigations, of the advantage of using available multi-modal observations for the estimation and identification of an electromechanical model of the heart.
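Of the assimilation ingredients mentioned, the Luenberger observer is the simplest to sketch: feed the innovation y − Cx̂ back into the model with a gain L chosen so that A − LC is stable. The discrete-time toy system below uses made-up matrices and a hand-picked gain; it illustrates the observer principle, not the electromechanical heart model.

```python
import numpy as np

# toy discrete-time system x_{k+1} = A x_k with measurement y_k = C x_k
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])           # only the first state component is observed
L = np.array([[0.5], [0.3]])         # gain: eigenvalues of A - L C are 0.7 and 0.5

x_true = np.array([1.0, -1.0])       # true state, unknown to the observer
x_hat = np.zeros(2)                  # observer starts from a wrong guess
for _ in range(100):
    y = C @ x_true                   # measurement of the true state
    x_hat = A @ x_hat + L @ (y - C @ x_hat)   # Luenberger update
    x_true = A @ x_true
```

Since the estimation error obeys e_{k+1} = (A − LC) e_k, any gain that makes A − LC stable drives the observer onto the true trajectory regardless of the initial guess.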
Development of a multi-physics simulation framework for semiconductor materials and devices
NASA Astrophysics Data System (ADS)
Almeida, Nuno Sucena
Modern-day semiconductor devices face the ever-increasing issue of accounting for quantum mechanical effects in their modeling and performance assessment. The objective of this work is to create a user-friendly, extensible and powerful multi-physics simulation blackbox for nano-scale semiconductor devices. By using a graphical device modeller, this work provides a friendly environment where a user without deep knowledge of device physics can create a device, simulate it and extract the optical and electrical characteristics of interest to his engineering occupation. Using advanced template-based C++ object-oriented design from the start, this work implements algorithms to simulate 1-, 2- and 3-D devices, which, along with scripting in the well-known Python language, enables the user to create batch simulations to better optimize device performance. Higher-dimensional semiconductors, like wires and dots, incur a huge computational cost; MPI parallel libraries enable the software to tackle complex geometries which would otherwise be unfeasible on a small single-CPU computer. Quantum mechanical phenomena are described by Schrodinger's equation, which must be solved self-consistently with Poisson's equation for the electrostatic charge and, if required, make use of piezoelectric charge terms from elasticity constraints. Since the software implements a generic n-dimensional FEM engine, virtually any kind of partial differential equation can be solved, and in the future other required solvers besides the ones already implemented will also be included for ease of use. In particular, for the semiconductor device physics, we solve the quantum mechanical effective-mass conduction-valence band k·p approximation to the Schrodinger-Poisson system, in any crystal growth orientation (C, polar M, A and semi-polar planes, or any user-defined angle), and also include piezoelectric effects caused by strain in lattice-mismatched layers, where the implemented software
Towards a multi-physics modelling framework for thrombolysis under the influence of blood flow.
Piebalgs, Andris; Xu, X Yun
2015-12-06
Thrombolytic therapy is an effective means of treating thromboembolic diseases but can also give rise to life-threatening side effects. The infusion of a high drug concentration can provoke internal bleeding while an insufficient dose can lead to artery reocclusion. It is hoped that mathematical modelling of the process of clot lysis can lead to a better understanding and improvement of thrombolytic therapy. To this end, a multi-physics continuum model has been developed to simulate the dissolution of clot over time upon the addition of tissue plasminogen activator (tPA). The transport of tPA and other lytic proteins is modelled by a set of reaction-diffusion-convection equations, while blood flow is described by volume-averaged continuity and momentum equations. The clot is modelled as a fibrous porous medium with its properties being determined as a function of the fibrin fibre radius and voidage of the clot. A unique feature of the model is that it is capable of simulating the entire lytic process from the initial phase of lysis of an occlusive thrombus (diffusion-limited transport), the process of recanalization, to post-canalization thrombolysis under the influence of convective blood flow. The model has been used to examine the dissolution of a fully occluding clot in a simplified artery at different pressure drops. Our predicted lytic front velocities during the initial stage of lysis agree well with experimental and computational results reported by others. Following canalization, clot lysis patterns are strongly influenced by local flow patterns, which are symmetric at low pressure drops, but asymmetric at higher pressure drops, which give rise to larger recirculation regions and extended areas of intense drug accumulation.
Performance of multi-physics ensembles in convective precipitation events over northeastern Spain
NASA Astrophysics Data System (ADS)
García-Ortega, E.; Lorenzana, J.; Merino, A.; Fernández-González, S.; López, L.; Sánchez, J. L.
2017-07-01
Convective precipitation with hail greatly affects southwestern Europe, causing major economic losses. The local character of this meteorological phenomenon is a serious obstacle to forecasting. Therefore, the development of reliable short-term forecasts constitutes an essential challenge to minimizing and managing risks. However, deterministic outcomes are affected by different uncertainty sources, such as physics parameterizations. This study examines the performance of different combinations of physics schemes of the Weather Research and Forecasting model to describe the spatial distribution of precipitation in convective environments with hail falls. Two 30-member multi-physics ensembles, with two and three domains of maximum resolution 9 and 3 km respectively, were designed using various combinations of cumulus, microphysics and radiation schemes. The experiment was evaluated for 10 convective precipitation days with hail over 2005-2010 in northeastern Spain. Different indexes were used to evaluate the ability of each ensemble member to capture the precipitation patterns, which were compared with observations from a rain-gauge network. A standardized metric was constructed to identify optimal performers. Results show interesting differences between the two ensembles. In two-domain simulations, the selection of cumulus parameterizations was crucial, with the Betts-Miller-Janjic scheme performing best. In contrast, the Kain-Fritsch cumulus scheme gave the poorest results, suggesting that it should not be used in the study area. Nevertheless, in three-domain simulations, the cumulus schemes used in coarser domains were not critical and the best results depended mainly on microphysics schemes. The best performance was shown by the Morrison, New Thompson and Goddard microphysics.
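The "standardized metric ... to identify optimal performers" can be sketched generically: z-score each verification index across ensemble members, then average, so indexes on different scales contribute equally. The score values below are made up for illustration and are not the study's data.

```python
import numpy as np

def standardized_ranking(scores):
    """Combine several verification indexes into one standardized metric.
    scores: array of shape (members, indexes), higher = better.
    Each index column is z-scored across members, then averaged."""
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    return z.mean(axis=1)

# hypothetical skill scores for 3 ensemble members on 2 indexes
scores = np.array([[0.6, 0.5],
                   [0.8, 0.7],
                   [0.4, 0.3]])
metric = standardized_ranking(scores)
best = int(np.argmax(metric))   # index of the best-performing member
```

By construction the metric averages to zero over members, so positive values mark above-average performers.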
NASA Astrophysics Data System (ADS)
Nafari, Mona; Aizin, Gregory R.; Jornet, Josep M.
2017-05-01
Wireless data rates have doubled every eighteen months for the last three decades. Following this trend, Terabit-per-second links will become a reality within the next five years. In this context, Terahertz (THz) band (0.1-10 THz) communication is envisioned as a key technology of the next decade. Despite major progress towards developing THz sources, compact signal generators above 1 THz able to work efficiently at room temperature are still missing. Recently, the use of hybrid graphene/semiconductor high-electron-mobility transistors (HEMT) has been proposed as a way to generate Surface Plasmon Polariton (SPP) waves at THz frequencies. Compact size, room-temperature operation and tunability of the graphene layer, in addition to the possibility of large-scale integration, motivate the exploration of this approach. In this paper, a simulation model of hybrid graphene/semiconductor HEMT-based THz sources is developed. More specifically, first, the necessary conditions for the so-called Dyakonov-Shur instability to arise within the HEMT channel are derived, and the impact of imperfect boundary conditions is analyzed. Second, the required conditions for coupling between a confined plasma wave in the HEMT channel and an SPP wave in graphene are derived, starting from the coupling analysis between two 2DEGs. Multi-physics simulations are conducted by integrating the hydrodynamic equations for the description of the HEMT device with Maxwell's equations for SPP modeling. Extensive results are provided to analyze the impact of different design elements on the THz signal source. This work will guide the experimental fabrication and characterization of the devices.
Statistical Mechanics of the Community Detection Problem: Theory and Application
NASA Astrophysics Data System (ADS)
Hu, Dandan
We study phase transitions in spin glass type systems and in related computational problems. In the current work, we focus on the "community detection" problem when cast in terms of a general Potts spin glass type problem. We report on phase transitions between solvable and unsolvable regimes. The solvable region may further split into easy and hard phases. Spin glass type phase transitions appear at both low and high temperatures. Low temperature transitions correspond to an order by disorder type effect wherein fluctuations render the system ordered or solvable. Separate transitions appear at higher temperatures into a disordered (or unsolvable) phase. Different sorts of randomness lead to disparate behaviors. We illustrate the spin glass character of both transitions and report on memory effects. We further relate Potts type spin systems to mechanical analogs and suggest how chaotic-type behavior in general thermodynamic systems can indeed naturally arise in hard computational problems and spin glasses. In this work, we also examine large networks (with a power law distribution in cluster size) that have a large number of communities. We infer that large systems at a constant ratio of q to the number of nodes N asymptotically tend toward insolvability in the limit of large N for any positive temperature. We further employ multivariate Tutte polynomials to show that increasing q emulates increasing T for a general Potts model, leading to a similar stability region at low T. We further apply the replica inference based Potts model method to unsupervised image segmentation on multiple scales. This approach was inspired by the statistical mechanics problem of "community detection" and its phase diagram. The problem is cast as identifying tightly bound clusters against a background. Within our multiresolution approach, we compute information theory based correlations among multiple solutions of the same graph over a range of resolutions. Significant multiresolution
Hybrid Ant Algorithm and Applications for Vehicle Routing Problem
NASA Astrophysics Data System (ADS)
Xiao, Zhang; Jiang-qing, Wang
Ant colony optimization (ACO) is a metaheuristic method inspired by the behavior of real ant colonies. ACO has been successfully applied to several combinatorial optimization problems, but it has some shortcomings, such as slow computing speed and premature local convergence. For solving the Vehicle Routing Problem, we propose a Hybrid Ant Algorithm (HAA) in order to improve both the performance of the algorithm and the quality of solutions. The proposed algorithm takes advantage of the Nearest Neighbor (NN) heuristic and ACO for solving the VRP; it also expands the explored solution space and improves the global search ability of the algorithm by introducing a mutation operation, combining 2-opt heuristics, and adjusting the parameter configuration dynamically. Computational results indicate that the hybrid ant algorithm can obtain optimal solutions to the VRP effectively.
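Two of the ingredients named in this abstract, nearest-neighbor construction and 2-opt improvement, have a compact illustration on a single route (i.e., a plain TSP-style tour rather than a full capacitated VRP). The coordinates below are random stand-ins for customer locations.

```python
import math
import random

def tour_len(pts, tour):
    """Total length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(pts, start=0):
    """Greedy construction: always visit the closest unvisited point."""
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(pts, tour):
    """Local improvement: reverse segments while any reversal shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_len(pts, cand) < tour_len(pts, tour) - 1e-12:
                    tour, improved = cand, True
    return tour

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(12)]
tour = two_opt(pts, nearest_neighbor(pts))
```

In the hybrid scheme of the paper, moves like these refine the routes that the ant colony constructs; here they stand alone for clarity.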
Sectional methods for aggregation problems: application to volcanic eruptions
NASA Astrophysics Data System (ADS)
Rossi, E.
2016-12-01
Particle aggregation is a general problem that is common to several scientific disciplines such as planetary formation, food industry and aerosol sciences. So far the ordinary approach to this class of problems relies on the solution of the Smoluchowski Coagulation Equations (SCE), a set of Ordinary Differential Equations (ODEs) derived from the Population Balance Equations (PBE), which basically describe the change in time of an initial grain-size distribution due to the interaction of "single" particles. The frequency of particles collisions and their sticking efficiencies depend on the specific problem under analysis, but the mathematical framework and the possible solutions to the ODEs seem to be somehow discipline-independent and very general. In this work we will focus on the problem of volcanic ash aggregation, since it represents an extreme case of complexity that can be relevant also to other disciplines. In fact volcanic ash aggregates observed during the fallouts are characterized by relevant porosities and they do not fit with simplified descriptions based on monomer-like structures or fractal geometries. In this work we propose a bidimensional approach to the PBEs which uses additive (mass) and non-additive (volume) internal descriptors in order to better characterize the evolution of volcanic ash aggregation. In particular we used sectional methods (fixed-pivot) to discretize the internal parameters space. This algorithm has been applied to a one dimensional volcanic plume model in order to investigate how the Total Grain Size Distribution (TGSD) changes throughout the erupted column in real scenarios (i.e. Eyjafjallajokull 2010, Sakurajima 2013 and Mt. Saint Helens 1980).
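The Smoluchowski Coagulation Equations mentioned above have a standard discrete form: the population of k-mers gains from collisions of smaller pairs and loses by collisions with anything. The sketch below uses a constant collision kernel and a truncated set of size classes with explicit Euler time stepping; it is deliberately simpler than the bidimensional fixed-pivot scheme the abstract proposes.

```python
import numpy as np

def smoluchowski_step(n, K, dt):
    """One explicit Euler step of the discrete Smoluchowski equations.
    n[k] is the number density of (k+1)-mers; K is a constant kernel.
    dn_k/dt = 1/2 sum_{i+j=k} K n_i n_j  -  n_k sum_j K n_j
    """
    N = len(n)
    gain = np.zeros(N)
    for k in range(N):
        # a (k+1)-mer forms from an (i+1)-mer and a (k-i)-mer
        for i in range(k):
            gain[k] += 0.5 * K * n[i] * n[k - 1 - i]
    loss = K * n * n.sum()
    return n + dt * (gain - loss)

n = np.zeros(10)
n[0] = 1.0                       # start from monomers only
for _ in range(5):
    n = smoluchowski_step(n, K=1.0, dt=0.05)
```

A useful check on any such scheme is conservation of total mass (sum of size times density) while the largest size class remains essentially unpopulated; total particle number, by contrast, must decrease as aggregates form.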
Simulation, Control, and Applications for Flow and Scattering Problems
2015-04-10
...and Optimization (08 2011). Kazufumi Ito, Tomoya Takeuchi, CIP immersed interface methods for hyperbolic equations with discontinuous coefficients... augmented variables along the interface between the fluid flow and the porous media so that the problem can be decoupled as several Poisson equations... computational fluid dynamics and control of incompressible flows modeled by Navier-Stokes equations. Under the support of the current ARO grant, we
The application of bifurcation theory to physical problems
NASA Astrophysics Data System (ADS)
Joseph, D. D.
Reference is made to an observation by Lighthill (Thompson, 1982) of the one great complicating feature that introduces major difficulties into mechanics, physics, chemistry, engineering, astronomy, and biology. This is that an equilibrium can be stable but may become unstable and that a process can take place continuously but may become discontinuous. It is argued here that the complications noted by Lighthill occur even in the simplest problems. It is pointed out that a given physical system may have available many modes of operation and that the mathematical model of this system can have many solutions corresponding to the same prescribed data. In physical problems of even moderate complexity, the selection rules by which the actual realized solutions are determined are elusive. To illustrate this point, consideration is given to a simple scalar ordinary differential equation whose solution set is fully defined. It is shown that even in the simplest of problems, it is possible to have the highest degree of degeneracy with many solutions and many discontinuous changes as the control parameter is varied. Also discussed is the bifurcation of a periodic solution.
NASA Astrophysics Data System (ADS)
Han, Ping; Du, GuanLin
2017-04-01
Fiber Bragg Grating (FBG) sensors are applied to a Giant Magnetostrictive Actuator (GMA) to obtain the multi-physics field factors, which are the basis of a data-driven model. The real working circumstance of a GMA is complex and nonlinear, and the traditional theoretical physics model of the GMA cannot capture it. Hence, the multi-physics field factors of the components of the GMA in the real working process are gathered in real time by FBG sensors, such as the temperatures of the Giant Magnetostrictive Material (GMM) stick and the coil, the displacement and vibration of the GMM stick, the coil current, etc., which are utilized to represent the strong nonlinear characteristics of the GMA. Furthermore, the data-driven model of the GMA is built with the Least Squares Support Vector Machine (LS-SVM) method based on the multi-physics field factors. The performance of the novel GMA model is evaluated by experiment; its maximum error is 1.1% over the frequency range from 0 to 1000 Hz and the temperature range from 20 °C to 100 °C.
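The LS-SVM regression named in this abstract has the attractive property that training reduces to solving one linear system. The sketch below shows that system with an RBF kernel; the hyperparameters (`gamma`, `sigma`) and the sine-shaped target are illustrative stand-ins, not the paper's sensor data.

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=0.2):
    """Least-squares SVM regression: solve the bordered linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    with an RBF kernel, then predict via K(x, X) @ alpha + b."""
    K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
               / (2 * sigma ** 2))
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma      # ridge term from the LS loss
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]

    def predict(Xq):
        Kq = np.exp(-np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
                    / (2 * sigma ** 2))
        return Kq @ alpha + b
    return predict

# fit a smooth 1-D input-output mapping (stand-in for drive -> displacement)
X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
predict = lssvm_fit(X, y)
```

Unlike a standard SVM, every training point contributes a support value `alpha`, which is why the whole fit is one dense solve.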
Statistical Risk Assessment: Old Problems and New Applications
ERIC Educational Resources Information Center
Gottfredson, Stephen D.; Moriarty, Laura J.
2006-01-01
Statistically based risk assessment devices are widely used in criminal justice settings. Their promise remains largely unfulfilled, however, because assumptions and premises requisite to their development and application are routinely ignored and/or violated. This article provides a brief review of the most salient of these assumptions and…
Common Problems of Mobile Applications for Foreign Language Testing
ERIC Educational Resources Information Center
Garcia Laborda, Jesus; Magal-Royo, Teresa; Lopez, Jose Luis Gimenez
2011-01-01
As the use of mobile learning educational applications has become more common around the world, new concerns have appeared in the classroom, in human interaction in software engineering, and in ergonomics. New tests of foreign languages for a number of purposes have become more and more common recently. However, studies interrelating language tests…
The Application of Geocoded Data to Educational Problems.
ERIC Educational Resources Information Center
McIsaac, Donald N.; And Others
The papers presented at a symposium on geocoding describe the preparation of a geocoded data file, some basic applications for education planning, and its use in trend analysis to produce contour maps for any desired characteristic. Geocoding data involves locating each entity, such as students or schools, in terms of grid coordinates on a…
Li, Xia; Guo, Meifang; Su, Yongfu
2016-01-01
In this article, a new multidirectional monotone hybrid iteration algorithm for finding a solution to the split common fixed point problem is presented for two countable families of quasi-nonexpansive mappings in Banach spaces. Strong convergence theorems are proved. The application of the result is to consider the split common null point problem of maximal monotone operators in Banach spaces. Strong convergence theorems for finding a solution of the split common null point problem are derived. This iteration algorithm can accelerate the convergence speed of the iterative sequence. The results of this paper improve and extend the recent results of Takahashi and Yao (Fixed Point Theory Appl 2015:87, 2015) and many others.
Application of Genetic Algorithms in Nonlinear Heat Conduction Problems
Khan, Waqar A.
2014-01-01
Genetic algorithms are employed to optimize dimensionless temperature in nonlinear heat conduction problems. Three common geometries are selected for the analysis and the concept of minimum entropy generation is used to determine the optimum temperatures under the same constraints. The thermal conductivity is assumed to vary linearly with temperature while internal heat generation is assumed to be uniform. The dimensionless governing equations are obtained for each selected geometry and the dimensionless temperature distributions are obtained using MATLAB. It is observed that GA gives the minimum dimensionless temperature in each selected geometry. PMID:24695517
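As a generic illustration of the genetic-algorithm machinery this abstract relies on (not the paper's entropy-generation objective or its MATLAB implementation), a minimal real-coded GA with tournament selection, blend crossover and Gaussian mutation might look like:

```python
import random

def ga_minimize(f, bounds, pop_size=40, gens=80, seed=0):
    """Minimal real-coded genetic algorithm for 1-D minimization:
    tournament selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            # two size-3 tournaments pick the parents
            a = min(rng.sample(pop, 3), key=f)
            b = min(rng.sample(pop, 3), key=f)
            child = a + rng.random() * (b - a)      # blend crossover
            if rng.random() < 0.2:                  # occasional mutation
                child += rng.gauss(0.0, 0.1 * (hi - lo))
            nxt.append(min(max(child, lo), hi))     # clip to bounds
        pop = nxt
    return min(pop, key=f)

# toy objective with minimum at x = 1.5
best = ga_minimize(lambda x: (x - 1.5) ** 2, (-5.0, 5.0))
```

In the paper's setting, `f` would evaluate the dimensionless temperature (or entropy generation) from the discretized conduction equations rather than a closed-form expression.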
Application of computational fluid mechanics to atmospheric pollution problems
NASA Technical Reports Server (NTRS)
Hung, R. J.; Liaw, G. S.; Smith, R. E.
1986-01-01
One of the most noticeable effects of air pollution on the properties of the atmosphere is the reduction in visibility. This paper reports the results of investigations of the fluid dynamical and microphysical processes involved in the formation of advection fog on aerosols from combustion-related pollutants acting as condensation nuclei. The effects of a polydisperse aerosol distribution on the condensation/nucleation processes which cause the reduction in visibility are studied. This study demonstrates how computational fluid mechanics and heat transfer modeling can be applied to simulate the life cycle of atmospheric pollution problems.
Application of the particle method in problems of mechanics of deformable media
NASA Astrophysics Data System (ADS)
Berezhnoi, D. V.; Gabsalikova, N. F.; Miheev, V. V.
2016-11-01
This work implements a particle-based method for modelling ground deformation, in which the ground is represented as a collection of mineral grains linked by a system of contact forces acting on the contact areas between the mineral particles. The two-parameter Lennard-Jones potential and a modified version of it were selected to describe the behaviour of the ground. Several model problems of straining a layer of ground in a gravity field were solved. The calculations were performed on a heterogeneous computing cluster, each of whose seven nodes was equipped with three AMD Radeon HD 7970 GPUs.
Stability of charge inversion, Thomson problem, and application to electrophoresis
NASA Astrophysics Data System (ADS)
Patra, Michael; Patriarca, Marco; Karttunen, Mikko
2003-03-01
We analyze charge inversion in colloidal systems at zero temperature using stability concepts, and connect this to the classical Thomson problem of arranging electrons on a sphere. We show that for a finite microion charge, the globally stable, lowest-energy state of the complex formed by the colloid and the oppositely charged microions is always overcharged. This effect disappears in the continuous limit. Additionally, a layer of at least twice as many microions as required for charge neutrality is always locally stable. In an applied external electric field the stability of the microion cloud is reduced. Finally, this approach is applied to a system of two colloids at low but finite temperature.
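The classical Thomson problem invoked here has a compact numerical illustration: projected gradient descent on the Coulomb energy of n point charges constrained to the unit sphere. For n = 4 the known minimizer is the regular tetrahedron, with pair energy ≈ 3.674. This sketch is a generic relaxation, not the stability analysis of the paper.

```python
import numpy as np

def thomson_relax(n, steps=2000, lr=0.01, seed=0):
    """Relax n unit charges on the unit sphere by projected gradient
    descent on the Coulomb energy sum_{i<j} 1/|x_i - x_j|."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    for _ in range(steps):
        d = x[:, None, :] - x[None, :, :]
        r = np.linalg.norm(d, axis=-1)
        np.fill_diagonal(r, np.inf)                 # no self-interaction
        force = (d / r[..., None] ** 3).sum(axis=1)  # repulsion = -grad E
        x += lr * force
        x /= np.linalg.norm(x, axis=1, keepdims=True)  # project to sphere
    return x

def coulomb_energy(x):
    """Total pairwise Coulomb energy of the configuration."""
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    iu = np.triu_indices(len(x), k=1)
    return (1.0 / r[iu]).sum()

x4 = thomson_relax(4)
```

For larger n the energy landscape develops many near-degenerate local minima, which is exactly the regime where the stability arguments of the abstract become interesting.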
Application of clustering global optimization to thin film design problems.
Lemarchand, Fabien
2014-03-10
Refinement techniques usually calculate an optimized local solution, which is strongly dependent on the initial formula used for the thin film design. In the present study, a clustering global optimization method is used which can iteratively change this initial formula, thereby progressing further than in the case of local optimization techniques. A wide panel of local solutions is found using this procedure, resulting in a large range of optical thicknesses. The efficiency of this technique is illustrated by two thin film design problems, in particular an infrared antireflection coating, and a solar-selective absorber coating.
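In the same spirit as the clustering global optimization described above (though not the author's exact method), a multistart scheme that refines many random initial "formulas" by local descent and keeps the best local optimum can be sketched as follows; the multimodal objective is an illustrative stand-in for a merit function over optical thicknesses.

```python
import math
import random

def local_refine(f, x, lo, hi, iters=100):
    """Simple derivative-free local descent: try +/- step moves,
    halve the step when neither move improves."""
    step = (hi - lo) / 4
    fx = f(x)
    for _ in range(iters):
        for cand in (x - step, x + step):
            c = min(max(cand, lo), hi)
            fc = f(c)
            if fc < fx:
                x, fx = c, fc
                break
        else:
            step /= 2
    return x, fx

def multistart_minimize(f, lo, hi, starts=40, seed=0):
    """Refine many random starting points; keep the best local optimum."""
    rng = random.Random(seed)
    best = min((local_refine(f, rng.uniform(lo, hi), lo, hi)
                for _ in range(starts)), key=lambda t: t[1])
    return best[0]

# multimodal toy objective: many local minima, global one near x ~ 1.58
xstar = multistart_minimize(lambda x: math.sin(3 * x) + 0.1 * (x - 2) ** 2,
                            -4.0, 8.0)
```

A clustering method improves on plain multistart by grouping starts that would converge to the same local optimum and refining only one representative per cluster.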
Bayindir Çevik, Ayfer; Olgun, Nermin
2015-04-01
This study aimed to determine the relationship between the problem-solving and nursing process application skills of nursing students. This is a longitudinal and correlational study. The sample included 71 students. An information form, the Problem-Solving Inventory, and the nursing processes the students presented at the end of their clinical courses were used for data collection. Although there was no significant relationship between problem-solving skills and nursing process grades, improving problem-solving skills increased successful grades. Problem-solving skills and nursing process skills can be concomitantly increased. Students were advised to use critical thinking, practical approaches, and care plans, as well as to revise nursing processes, in order to improve their problem-solving and nursing process application skills. © 2014 NANDA International, Inc.
Application of remote sensing to state and regional problems
NASA Technical Reports Server (NTRS)
Miller, W. F. (Principal Investigator); Tingle, J.; Wright, L. H.; Tebbs, B.
1984-01-01
Progress was made in the hydroclimatology, habitat modeling and inventory, computer analysis, wildlife management, and data comparison programs that utilize LANDSAT and SEASAT data provided to Mississippi researchers through the remote sensing applications program. Specific topics include water runoff in central Mississippi; habitat models for the endangered gopher tortoise, coyote, and turkey; Geographic Information Systems (GIS) development; forest inventory along the Mississippi River; and the merging of LANDSAT and SEASAT data for enhanced forest type discrimination.
Remote sensing applications to resource problems in South Dakota
NASA Technical Reports Server (NTRS)
Myers, V. I. (Principal Investigator)
1981-01-01
The procedures used as well as the results obtained and conclusions derived are described for the following applications of remote sensing in South Dakota: (1) sage grouse management; (2) censusing Canada geese; (3) monitoring grasshopper infestation in rangeland; (4) detecting Dutch elm disease in an urban environment; (5) determining water usage from the Belle Fourche River; (6) resource management of the Lower James River; and (7) the National Model Implementation Program: Lake Herman watershed.
Application of Optical Computing to Problems with Symbolic Computations
1987-05-14
...relational-algebra operations. Compare-and-exchange can be implemented with a variety of optical technology, including analog optics, and digital optics with... applicability of analog implementations. However, digital approaches based on a direct mapping strategy are more flexible. Special-purpose, latching... different performance requirements: for example, telecommunication and interprocessor message routing.
Inference of Stochastic Nonlinear Oscillators with Applications to Physiological Problems
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Luchinsky, Dmitry G.
2004-01-01
A new method for the inference of coupled stochastic nonlinear oscillators is described. The technique does not require extensive global optimization, provides optimal compensation for noise-induced errors, and is robust across a broad range of dynamical models. We illustrate the main ideas of the technique by inferring a model of five globally and locally coupled noisy oscillators. Specific modifications of the technique for inferring hidden degrees of freedom of coupled nonlinear oscillators are discussed in the context of physiological applications.
Signature neural networks: definition and application to multidimensional sorting problems.
Latorre, Roberto; de Borja Rodriguez, Francisco; Varona, Pablo
2011-01-01
In this paper we present a self-organizing neural network paradigm that is able to discriminate information locally using a strategy for information coding and processing inspired by recent findings in living neural systems. The proposed neural network uses: 1) neural signatures to identify each unit in the network; 2) local discrimination of input information during the processing; and 3) a multicoding mechanism for information propagation regarding the who and the what of the information. The local discrimination implies a distinct processing as a function of the neural signature recognition and a local transient memory. In the context of artificial neural networks, none of these mechanisms has been analyzed in detail, and our goal is to demonstrate that they can be used to efficiently solve some specific problems. To illustrate the proposed paradigm, we apply it to the problem of multidimensional sorting, which can take advantage of the local information discrimination. In particular, we compare the results of this new approach with traditional methods to solve jigsaw puzzles and we analyze the situations where the new paradigm improves the performance.
Algorithmic differentiation: application to variational problems in computer vision.
Pock, Thomas; Pock, Michael; Bischof, Horst
2007-07-01
Many vision problems can be formulated as minimization of appropriate energy functionals. These energy functionals are usually minimized based on the calculus of variations (Euler-Lagrange equation). Once the Euler-Lagrange equation has been determined, it needs to be discretized in order to implement it on a digital computer. This is not a trivial task and is, moreover, error-prone. In this paper, we propose a flexible alternative. We discretize the energy functional and, subsequently, apply the mathematical concept of algorithmic differentiation to directly derive algorithms that implement the energy functional's derivatives. This approach has several advantages: First, the computed derivatives are exact with respect to the implementation of the energy functional. Second, it is basically straightforward to compute second-order derivatives and, thus, the Hessian matrix of the energy functional. Third, algorithmic differentiation is a process which can be automated. We demonstrate this novel approach on three representative vision problems (namely, denoising, segmentation, and stereo) and show that state-of-the-art results are obtained with little effort.
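The core idea of algorithmic differentiation described above, deriving exact derivatives of the implemented functional rather than hand-discretizing the Euler-Lagrange equation, can be shown in a few lines with forward-mode dual numbers. This toy class (not the paper's implementation) propagates a (value, derivative) pair through the arithmetic of a discretized energy term.

```python
class Dual:
    """Forward-mode algorithmic differentiation via dual numbers:
    every arithmetic operation propagates (value, derivative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    @staticmethod
    def _lift(x):
        return x if isinstance(x, Dual) else Dual(x)

    def __add__(self, o):
        o = Dual._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __sub__(self, o):
        o = Dual._lift(o)
        return Dual(self.val - o.val, self.dot - o.dot)

    def __mul__(self, o):
        o = Dual._lift(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Exact derivative of f at x: seed the dual part with 1."""
    return f(Dual(x, 1.0)).dot

# d/dx of a (toy) discretized energy term (x - 2)^2 + 3x at x = 5
g = derivative(lambda x: (x - 2) * (x - 2) + 3 * x, 5.0)
```

The derivative is exact with respect to the implemented expression, which is precisely the first advantage the abstract claims; reverse-mode tools extend the same principle to functionals with many variables.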
Application of Papkovich-Neuber potentials to a crack problem.
NASA Technical Reports Server (NTRS)
Kassir, M. K.; Sih, G. C.
1973-01-01
The problem of an elastic solid containing a semi-infinite plane crack subjected to concentrated shears parallel to the edge of the crack is considered in this paper. A closed form solution using four harmonic functions is found to satisfy the finite displacement and inverse square root stress singularity at the edge of the crack. Explicit expressions in terms of elementary functions are given for the distribution of stress and displacement in the solid. These are obtained by employing Fourier and Kontorovich-Lebedev integral transforms and certain singular solutions of Laplace equations in three dimensions. The variations of the intensity of the local stress field along the crack border are shown graphically.
Application of wave mechanics theory to fluid dynamics problems: Fundamentals
NASA Technical Reports Server (NTRS)
Krzywoblocki, M. Z. V.
1974-01-01
The application of the basic formalistic elements of wave mechanics theory is discussed. The theory is used to describe the physical phenomena on the microscopic level, the fluid dynamics of gases and liquids, and the analysis of physical phenomena on the macroscopic (visually observable) level. The practical advantages of relating the two fields of wave mechanics and fluid mechanics through the use of the Schroedinger equation constitute the approach to this relationship. Some of the subjects include: (1) fundamental aspects of wave mechanics theory, (2) laminarity of flow, (3) velocity potential, (4) disturbances in fluids, (5) introductory elements of the bifurcation theory, and (6) physiological aspects in fluid dynamics.
Applications of vacuum technology to novel accelerator problems
Garwin, E.L.
1983-01-01
Vacuum requirements for electron storage rings are most demanding to fulfill, due to the presence of gas desorption caused by large quantities of synchrotron radiation, the very limited area accessible for pumping ports, the need for 10^-9 torr pressures in the ring, and for pressures a decade lower in the interaction regions. Design features of a wide variety of distributed ion sublimation pumps (DIP) developed at SLAC to meet these requirements are discussed, as well as NEG (non-evaporable getter) pumps tested for use in the Large Electron Positron Collider at CERN. Application of DIP to much higher pressures in electron damping rings for the Stanford Linear Collider is discussed.
Application of remote sensing to state and regional problems. [mississippi
NASA Technical Reports Server (NTRS)
Miller, W. F.; Powers, J. S.; Clark, J. R.; Solomon, J. L.; Williams, S. G. (Principal Investigator)
1981-01-01
The methods and procedures used, accomplishments, current status, and future plans are discussed for each of the following applications of LANDSAT in Mississippi: (1) land use planning in Lowndes County; (2) strip mine inventory and reclamation; (3) white-tailed deer habitat evaluation; (4) remote sensing data analysis support systems; (5) discrimination of unique forest habitats in potential lignite areas; (6) changes in gravel operations; and (7) determining freshwater wetlands for inventory and monitoring. The documentation of all existing software and the integration of the image analysis and data base software into a single package are now considered very high priority items.
Applications of phylogenetics to solve practical problems in insect conservation.
Buckley, Thomas R
2016-12-01
Phylogenetic approaches hold much promise for the setting of conservation priorities and resource allocation. There has been significant development of analytical methods for the measurement of phylogenetic diversity within and among ecological communities as a way of setting conservation priorities. Application of these tools to insects has been low, as has their uptake by conservation managers. A critical reason for the lack of uptake is the scarcity of detailed phylogenetic and species distribution data for much of insect diversity. Environmental DNA technologies offer a means for the high-throughput collection of phylogenetic data across landscapes for conservation planning.
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
scalabilities showing almost linear speedup against the number of processors up to over ten thousand cores. Generally, this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids in a practical time (e.g., less than a second per time step).
Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement.
Yang, Bo; Hu, Di; Wu, Lei
2016-07-08
A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for the acceleration or the air flow sense, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation, respectively. All these frequencies fall into a reasonable modal distribution and are well separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor of the acceleration is 12.35 Hz/g, the scale factor of the angular velocity is 0.404 nm/deg/s and the sensitivity of the air flow is 1.075 Hz/(m/s)², which verifies the multifunction sensitive characteristics of the hair sensor. In addition, structural optimization of the hair post is used to improve the sensitivity to the air flow rate and the acceleration. The analysis results illustrate that a hollow circular hair post can increase the sensitivity to the air flow and a II-shaped hair post can increase the sensitivity to the acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer can effectively eliminate temperature influences on the measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post significantly improves the efficiency of signal transmission. In summary, the structure of the new hair sensor is proved to be feasible by comprehensive
[Current problems of information technologies application for forces medical service].
Ivanov, V V; Korneenkov, A A; Bogomolov, V D; Borisov, D N; Rezvantsev, M V
2013-06-01
The modern information technologies are key factors in upgrading the forces medical service. The aim of this article is to analyze the application of prospective information technologies for the upgrading of the forces medical service. The authors suggest three concepts of information support for Russian military health care, on the basis of data about information technology applications in foreign armed forces, analysis of the regulatory background, prospects of the military medical service, and the gathered experience of specialists. These three concepts are: development of a unified telecommunication network of the medical service of the Armed Forces of the Russian Federation; development and implementation of standard medical information systems for medical units and establishments; and monitoring of military personnel health state and military medical service resources. It is noted that, assuming sufficient centralized financing and industrial implementation of the military medical service's prospective information technologies, by the year 2020 the united information space of the military medical service will be created and the target information support effectiveness will be achieved.
Application of the artificial bee colony algorithm for solving the set covering problem.
Crawford, Broderick; Soto, Ricardo; Cuesta, Rodrigo; Paredes, Fernando
2014-01-01
The set covering problem is a formal model for many practical optimization problems. In the set covering problem the goal is to choose a subset of the columns of minimal cost that covers every row. Here, we present a novel application of the artificial bee colony algorithm to solve the non-unicost set covering problem. The artificial bee colony algorithm is a recent swarm metaheuristic technique based on the intelligent foraging behavior of honey bees. Experimental results show that our artificial bee colony algorithm is competitive in terms of solution quality with other recent metaheuristic approaches for the set covering problem.
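For readers unfamiliar with the problem these metaheuristics target, a minimal sketch of the weighted set covering problem and the classic greedy cost-effectiveness heuristic follows; this is only a baseline for context, not the authors' artificial bee colony algorithm, and the instance data are invented.

```python
# Minimal greedy heuristic for the weighted set covering problem.
# NOT the artificial bee colony algorithm of the paper; the instance
# below is invented purely for illustration.

def greedy_set_cover(rows, columns, costs):
    """rows: set of row ids; columns: dict col_id -> set of covered rows;
    costs: dict col_id -> cost. Returns the list of chosen columns."""
    uncovered = set(rows)
    chosen = []
    while uncovered:
        # pick the column with the lowest cost per newly covered row
        best = min(
            (c for c in columns if columns[c] & uncovered),
            key=lambda c: costs[c] / len(columns[c] & uncovered),
        )
        chosen.append(best)
        uncovered -= columns[best]
    return chosen

rows = {1, 2, 3, 4, 5}
columns = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {1, 5}}
costs = {"A": 3.0, "B": 2.0, "C": 2.0, "D": 2.5}
print(greedy_set_cover(rows, columns, costs))  # ['A', 'C']
```

Metaheuristics such as the artificial bee colony improve on this kind of greedy construction by exploring many candidate covers instead of committing to one.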
On the range of applicability of Baker's approach to the frame problem
Kartha, G.N.
1996-12-31
We investigate the range of applicability of Baker's approach to the frame problem using an action language. We show that for temporal projection and deterministic domains, Baker's approach gives the intuitively expected results.
NASA Technical Reports Server (NTRS)
Hidalgo, J. U.
1975-01-01
The applicability of remote sensing to transportation and traffic analysis, urban quality, and land use problems is discussed. Other topics discussed include preliminary user analysis, potential uses, traffic study by remote sensing, and urban condition analysis using ERTS.
Application of Lie groups to discretizing nuclear engineering problems
NASA Astrophysics Data System (ADS)
Grove, Travis Justin
A method utilizing groups of point transformations is applied to the three- and four-group time-independent neutron diffusion equations to obtain invariant difference equations for one-region and composite-region domains in one-dimensional Cartesian, cylindrical, and spherical geometries. The theory behind this particular method is also discussed. A comparison of the invariant difference equations is made to standard finite difference equations as well as to analytical results. From the analytical results, it is shown that the invariant difference technique gives exact analytical solutions for the grid point values. The construction-of-invariant-difference-operators technique is also applied to the one-dimensional P3 equations from neutron transport theory in Cartesian geometry, using the FLIP formulation, which allows PL equations to be written as sets of coupled ordinary differential equations. The use of finite transforms is examined to transform multi-dimensional problems into one dimension, where the construction-of-invariant-difference-operators technique can then be used to create difference equations. The solutions to the set of equations can then be transformed back into the multi-dimensional geometries. The use of finite transforms along with the construction-of-invariant-difference-operators technique is applied to a simple two-dimensional benchmark problem. In addition, a method using groups of point transformations along with Noether's theorem is shown to generate a conservation law that can be used to create a two-term recurrence relation which calculates numerically exact Green's functions in one dimension for the time-independent neutron diffusion equation for Cartesian, cylindrical, and spherical geometries. This method is expanded to constructing two-term recurrence relations for an arbitrary number of spatial regions, as well as detailing starting point values for type 2 and type 3 homogeneous endpoint
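As context for the comparison against standard finite differences mentioned above, here is a minimal sketch of a conventional finite-difference solve of the one-group, one-dimensional diffusion equation with a tridiagonal (Thomas) solver; the coefficients are illustrative, and this is not the invariant-difference construction developed in the thesis.

```python
# Standard finite-difference discretization of the one-group, 1-D
# neutron diffusion equation  -D u''(x) + Sigma_a u(x) = S,
# with u(0) = u(L) = 0. This is the conventional baseline the text
# compares against, NOT the invariant-difference construction.

def diffusion_fd(D, sigma_a, S, L, n):
    """Solve with n interior points via the Thomas (tridiagonal) algorithm."""
    h = L / (n + 1)
    a = -D / h**2                 # off-diagonal coefficient
    b = 2 * D / h**2 + sigma_a    # diagonal coefficient
    # forward sweep
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = a / b, S / b
    for i in range(1, n):
        denom = b - a * cp[i - 1]
        cp[i] = a / denom
        dp[i] = (S - a * dp[i - 1]) / denom
    # back substitution
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# illustrative data: the flux is symmetric and bounded by S/Sigma_a = 10
u = diffusion_fd(D=1.0, sigma_a=0.1, S=1.0, L=10.0, n=99)
```

The invariant-difference technique described in the abstract is claimed to reproduce the analytical grid-point values exactly, whereas a scheme like this one carries an O(h²) truncation error.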
Applications of mineral surface chemistry to environmental problems
NASA Astrophysics Data System (ADS)
White, Art F.
1995-07-01
Environmental surface chemistry involves processes that occur at the interface between the regolith, hydrosphere and atmosphere. The more limited scope of the present review addresses natural and anthropogenically-induced inorganic geochemical reactions between solutes in surface and ground waters and soil and aquifer substrates. Important surficial reactions include sorption, ion exchange, dissolution, precipitation and heterogeneous oxidation/reduction processes occurring at the solid/aqueous interface. Recent research advances in this field have addressed, both directly and indirectly, societal issues related to water quality, pollution, biogeochemical cycling, nutrient budgets and chemical weathering related to long term global climate change. This review will include recent advances in the fundamental and theoretical understanding of these surficial processes, breakthroughs in experimental and instrumental surface characterization, and development of methodologies for field applications.
Percutaneous devices: a review of applications, problems and possible solutions.
Affeld, Klaus; Grosshauser, Johannes; Goubergrits, Leonid; Kertzscher, Ulrich
2012-07-01
Percutaneous devices enable the transfer of mass, energy and forces through the skin. There is a wide clinical need for this, which is not likely to decrease or disappear. The emerging new artificial organs, such as wearable kidneys or lungs, will be in increased demand in the future. Any application lasting longer than days or weeks is endangered by infections entering the body via the exit site. The only carefree solution that has been found is for an exit site placed on the skull, where it can be securely immobilized. For the majority of the locations on the abdomen or chest, no solution for an infection-free device has been found. A solution may be possible with a better understanding of the physiology of keratinocytes as a barrier for microbes.
NASA Astrophysics Data System (ADS)
Jerez, Sonia; Montavez, Juan P.; Gomez-Navarro, Juan J.; Jimenez-Guerrero, Pedro; Lorente, Raquel; Garcia-Valero, Juan A.; Jimenez, Pedro A.; Gonzalez-Rouco, Jose F.; Zorita, Eduardo
2010-05-01
Regional climate change projections are affected by several sources of uncertainty. Some of them come from Global Circulation Models and scenarios; others come from the downscaling process. In the case of dynamical downscaling, mainly using Regional Climate Models (RCMs), the sources of uncertainty may involve nesting strategies related to the domain position and resolution, soil characterization, internal variability, methods of solving the equations, and the configuration of model physics. Therefore, a probabilistic approach seems recommendable when projecting regional climate change. This problem is usually faced by performing an ensemble of simulations. The aim of this study is to evaluate the range of uncertainty in regional climate projections associated with changing the physical configuration of an RCM (MM5), as well as its capability to reproduce the observed climate. This study is performed over the Iberian Peninsula and focuses on the reproduction of the Probability Density Functions (PDFs) of daily mean temperature. The experiments consist of a multi-physics ensemble of high-resolution climate simulations (30 km over the target region) for the periods 1970-1999 (present) and 2070-2099 (future). Two sets of simulations for the present have been performed using ERA40 (MM5-ERA40) and ECHAM5-3CM run1 (MM5-E5-PR) as boundary conditions. The future experiments are driven by ECHAM5-A2-run1 (MM5-E5-A2). The ensemble has a total of eight members, as the result of combining the schemes for PBL (MRF and ETA), cumulus (Grell and Kain-Fritsch) and microphysics (Simple Ice and Mixed Phase). In a previous work this multi-physics ensemble was analyzed focusing on the seasonal mean values of both temperature and precipitation. The main results indicate that those physics configurations that better reproduce the observed climate project the most dramatic changes for the future (i.e., the largest temperature increase and precipitation decrease). Among the
COAMPS Application to Global and Homeland Security Threat Problems
Chin, H S; Glascoe, L G
2004-09-14
Atmospheric dispersion problems have received more attention with regard to global and homeland security than in their conventional roles in air pollution and local hazard assessment in the post-9/11 era. Consequently, there is growing interest in characterizing meteorological uncertainty at both low and high altitudes (below and above 30 km, respectively). The 3-D Coupled Ocean Atmosphere Prediction System (COAMPS, developed by the Naval Research Laboratory; Hodur, 1997) is used to address LLNL's task. This report focuses on the effort to improve the COAMPS forecast to address the uncertainty issue, and to provide new capability for high-altitude forecasts. To assess atmospheric dispersion behavior in a wider range of meteorological conditions and to expand the model's vertical scope for potential threats at high altitudes, several modifications of COAMPS are needed to meet the project goal. These improvements include (1) a long-range forecast capability to show the variability of meteorological conditions on a much larger time scale (say, a year), and (2) model physics enhancements to provide new capability for high-altitude forecasts.
Topographic mapping of oral structures - problems and applications in prosthodontics
NASA Astrophysics Data System (ADS)
Young, John M.; Altschuler, Bruce R.
1981-10-01
The diagnosis and treatment of malocclusion, and the proper design of restorations and prostheses, requires the determination of surface topography of the teeth and related oral structures. Surface contour measurements involve not only affected teeth, but adjacent and opposing surface contours composing a complexly interacting occlusal system. No a priori knowledge can be assumed, as dental structures are largely asymmetrical, non-repetitive, non-uniform curvatures in 3-D space. Present diagnosis, treatment planning, and fabrication relies entirely on the generation of physical replicas during each stage of treatment. Fabrication is limited to materials that lend themselves to casting or coating, and to hand fitting and finishing. Inspection is primarily by vision and patient perceptual feedback. Production methods are time-consuming. Prostheses are entirely custom designed by manual methods, require costly skilled technical labor, and do not lend themselves to centralization. The potential improvement in diagnostic techniques, improved patient care, increased productivity, and cost savings in material and man-hours that could result, if rapid and accurate remote measurement and numerical (automated) fabrication methods were devised, would be significant. The unique problems of mapping oral structures, and specific limitations in materials and methods, are reviewed.
Multi-physics design and analyses of long life reactors for lunar outposts
NASA Astrophysics Data System (ADS)
Schriener, Timothy M.
event of a launch abort accident. Increasing the amount of fuel in the reactor core, and hence its operational life, would be possible by launching the reactor unfueled and fueling it on the Moon. Such a reactor would thus not be subject to launch criticality safety requirements. However, loading the reactor with fuel on the Moon presents a challenge, requiring special designs of the core and the fuel elements which lend themselves to fueling on the lunar surface. This research investigates examples of both a solid core reactor that would be fueled at launch and an advanced concept which could be fueled on the Moon. Increasing the operational life of a reactor fueled at launch is exercised for the NaK-78 cooled Sectored Compact Reactor (SCoRe). A multi-physics design and analysis methodology is developed which iteratively couples detailed Monte Carlo neutronics simulations with 3-D Computational Fluid Dynamics (CFD) and thermal-hydraulics analyses. Using this methodology, the operational life of this compact, fast-spectrum reactor is increased by reconfiguring the core geometry to reduce neutron leakage and parasitic absorption, for the same amount of HEU in the core, while meeting launch safety requirements. The multi-physics analyses determine the impacts of the various design changes on the reactor's neutronics and thermal-hydraulics performance. The option of increasing the operational life of a reactor by loading it on the Moon is exercised for the Pellet Bed Reactor (PeBR). The PeBR uses spherical fuel pellets and is cooled by He-Xe gas, allowing the reactor core to be loaded with fuel pellets and charged with working fluid on the lunar surface. The neutronics analyses performed ensure that the PeBR design achieves a long operational life, and safe launch canister designs are developed to transport the spherical fuel pellets to the lunar surface. The research also investigates loading the PeBR core with fuel pellets on the Moon using a transient Discrete
An Exploratory Application of Neural Networks to the Sortie Generation Forecasting Problem
1991-09-01
AD-A246 626. An Exploratory Application of Neural Networks to the Sortie Generation Forecasting Problem. Thesis, James M. Dagg, GS-12, AFIT/GLM/LSM/91S-11. Approved for public release; distribution unlimited. The views expressed in this thesis are those of the authors and do not reflect the official
Numerical Analysis of a Multi-Physics Model for Trace Gas Sensors
NASA Astrophysics Data System (ADS)
Brennan, Brian
Trace gas sensors are currently used in many applications from leak detection to national security, and may some day help with disease diagnosis. These sensors are modelled by a coupled system of complex elliptic partial differential equations for pressure and temperature. Solutions are approximated using the finite element method, which we show admits a continuous and coercive variational problem with optimal H1 and L2 error estimates. Numerically, the finite element discretization yields a skew-Hermitian dominant matrix for which classical algebraic preconditioners quickly degrade. To handle this, we explore three preconditioners for the resulting linear system. First we analyze the classical block Jacobi and block Gauss-Seidel preconditioners before presenting a custom, physics-based preconditioner that requires scalar Helmholtz solutions to apply but gives a very low outer iteration count. We also present analysis showing that the eigenvalues of the custom preconditioned system are mesh-dependent, but with a small coefficient. Numerical experiments confirm our theoretical discussion.
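The block relaxation idea behind the Jacobi and Gauss-Seidel preconditioners mentioned above can be sketched in miniature. Here the "blocks" are shrunk to scalars and the 2x2 system is invented for illustration, so this shows only the flavor of the method, not the paper's preconditioner.

```python
# Toy scalar-block version of block Gauss-Seidel: each "physics field"
# (here just the unknowns x1, x2) is relaxed in turn using the latest
# value of the other. The diagonally dominant 2x2 system is invented.

def block_gauss_seidel(iters=50):
    # system:  4*x1 + 1*x2 = 1
    #          1*x1 + 3*x2 = 2   (diagonal dominance -> convergence)
    x1 = x2 = 0.0
    for _ in range(iters):
        x1 = (1.0 - 1.0 * x2) / 4.0   # solve first "block" with x2 frozen
        x2 = (2.0 - 1.0 * x1) / 3.0   # then second block with the new x1
    return x1, x2

x1, x2 = block_gauss_seidel()
# converges to the exact solution (1/11, 7/11)
```

In a real block preconditioner the scalar divisions become inner solves with the diagonal blocks (here, scalar Helmholtz problems), applied once per outer Krylov iteration rather than iterated to convergence.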
Application Problem of Biomass Combustion in Greenhouses for Crop Production
NASA Astrophysics Data System (ADS)
Kawamura, Atsuhiro; Akisawa, Atsushi; Kashiwagi, Takao
Much energy from fossil fuels is consumed to produce crops in greenhouses in Japan, and flue gas is used as CO2 fertilization for growing crops in modern greenhouses. If biomass, as renewable energy, can be used to produce vegetables in greenhouses, more than 800,000 kl of energy a year (in crude oil equivalent) will be saved. In this study, we first built biomass combustion equipment and performed fundamental examinations of various pellet fuels. We then performed an examination that considered application to a real greenhouse. We considered biomass as both a source of energy and of CO2 gas for greenhouses, and the following findings were obtained: 1) Based on the standard of CO2 gas fertilization for greenhouses, it is difficult to apply biomass as a CO2 fertilizer, so biomass should be applied to energy use only, at least for the time being. 2) Practical biomass energy machinery that is economical, highly reliable, and easy to operate is necessary for greenhouses. 3) It is necessary to develop crop varieties and cultivation systems requiring less strict environmental control. 4) Effective practical use must be found for the abundant combustion ash.
Application of CHAD hydrodynamics to shock-wave problems
Trease, H.E.; O`Rourke, P.J.; Sahota, M.S.
1997-12-31
CHAD is the latest in a sequence of continually evolving computer codes written to effectively utilize massively parallel computer architectures and the latest grid generators for unstructured meshes. Its applications range from automotive design issues such as in-cylinder and manifold flows of internal combustion engines, vehicle aerodynamics, underhood cooling, and passenger compartment heating, ventilation, and air conditioning, to shock hydrodynamics and materials modeling. CHAD solves the full unsteady Navier-Stokes equations with the k-epsilon turbulence model in three space dimensions. The code has four major features that distinguish it from the earlier KIVA code, also developed at Los Alamos. First, it is based on a node-centered, finite-volume method in which, like finite element methods, all fluid variables are located at computational nodes. The computational mesh efficiently and accurately handles all element shapes ranging from tetrahedra to hexahedra. Second, it is written in standard Fortran 90 and relies on automatic domain decomposition and a universal communication library written in standard C and MPI for unstructured grids to effectively exploit distributed-memory parallel architectures. Thus the code is fully portable to a variety of computing platforms such as uniprocessor workstations, symmetric multiprocessors, clusters of workstations, and massively parallel platforms. Third, CHAD utilizes a variable explicit/implicit upwind method for convection that improves computational efficiency in flows that have large velocity Courant number variations due to velocity or mesh size variations. Fourth, CHAD is designed to also simulate shock hydrodynamics involving multimaterial anisotropic behavior under high shear. The authors discuss CHAD capabilities and show several sample calculations illustrating the strengths and weaknesses of CHAD.
Application of fluorescent dyes for some problems of bioelectromagnetics
NASA Astrophysics Data System (ADS)
Babich, Danylo; Kylsky, Alexandr; Pobiedina, Valentina; Yakunov, Andrey
2016-04-01
Fluorescent organic dye solutions are used for non-contact measurement of millimeter wave absorption in liquids simulating biological tissue. Despite the widespread use of microwave radiation in the food industry, biotechnology and medicine, there is still no settled idea of the physical mechanism describing this process. Creating an adequate physical model requires accurate knowledge of the relation between millimeter waves and the irradiated object. Three H-bonded liquids were selected as samples with different absorption coefficients in the millimeter range: water (strong absorption), glycerol (medium absorption) and ethylene glycol (light absorption). The measurements showed that the greatest response to the action of microwaves occurs for glycerol solutions: R6G (building-up luminescence) and RC (fading luminescence). For aqueous solutions the signal is lower due to the lower quantum efficiency of luminescence, and for ethylene glycol due to the low absorption of microwaves. In the area of exposure, a local increase of temperature was estimated. For aqueous solutions of both dyes, the maximum temperature increase caused by millimeter wave absorption is about 7° C, which coincides with direct radiophysical measurements and is confirmed by theoretical calculations. However, for the glycerol solution of R6G the temperature equivalent for the building-up luminescence is around 9° C, and for the ethylene glycol solution it is about 15° C. A non-thermal effect of microwaves on different processes and substances is assumed to be possible. This non-contact temperature sensing is a simple and novel method to detect temperature changes in small biological objects.
Application of the INSTANT-HPS PN Transport Code to the C5G7 Benchmark Problem
Y. Wang; H. Zhang; R. H. Szilard; R. C. Martineau
2011-06-01
INSTANT is the INL's next-generation neutron transport solver supporting high-fidelity multi-physics reactor simulation. INSTANT is in continuous development to extend its capability. The code is designed to take full advantage of middle to large clusters (10-1000 processors) and to focus on method adaptation, while mesh adaptation will also be possible. It utilizes the most modern computing techniques to provide a neutronics tool for full-core transport calculations for reactor analysis and design. It can perform calculations on unstructured 2D/3D triangular, hexagonal and Cartesian geometries. Calculations can easily be extended to more geometries because of the independent mesh framework, coded in modern Fortran. The code has a multigroup solver with thermal rebalance and Chebyshev acceleration. It employs a second-order PN and hybrid finite element method (PN-HFEM) discretization scheme. Three different in-group solvers - the preconditioned Conjugate Gradient (CG) method, the preconditioned Generalized Minimal Residual method (GMRES) and Red-Black iteration - have been implemented and parallelized with spatial domain decomposition. The input is managed in extensible markup language (XML) format. 3D variables, including the flux distributions, are output to VTK files, which can be visualized by tools such as VisIt and ParaView. An extension of the code named INSTANT-HPS provides the capability to perform 3D heterogeneous transport calculations within fuel pins. C5G7 is an OECD/NEA benchmark problem created to test the ability of modern deterministic transport methods and codes to treat reactor core problems without spatial homogenization. This benchmark problem has been widely analyzed with various code packages. In this transaction, results of applying the INSTANT-HPS code to the C5G7 problem are summarized.
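Of the three in-group solvers named above, the Conjugate Gradient method is the most compact to sketch. Below is a minimal, unpreconditioned CG iteration on an invented 2x2 symmetric positive-definite system; it is for orientation only and is unrelated to the INSTANT implementation.

```python
# Minimal (unpreconditioned) Conjugate Gradient sketch in pure Python.
# The 2x2 SPD system is invented for illustration; production solvers
# such as those in INSTANT add preconditioning and sparse storage.

def cg(A, b, iters=50, tol=1e-12):
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]          # residual b - A x  (x starts at zero)
    p = r[:]          # search direction
    rs = dot(r, r)
    for _ in range(iters):
        Ap = mv(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[3.0, 1.0], [1.0, 2.0]]
b = [5.0, 5.0]
x = cg(A, b)   # exact solution is [1, 2]; CG reaches it in 2 iterations
```

For an n-by-n SPD system, exact-arithmetic CG terminates in at most n iterations, which is why a good preconditioner (clustering the eigenvalues) matters so much at reactor-core scale.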
NASA Technical Reports Server (NTRS)
Rado, B. Q.
1975-01-01
Automatic classification techniques are described in relation to future information and natural resource planning systems, with emphasis on application to Georgia resource management problems. The concept, design, and purpose of Georgia's statewide Resource Assessment Program are reviewed, along with participation in a workshop at the Earth Resources Laboratory. Potential areas of application discussed include: agriculture, forestry, water resources, environmental planning, and geology.
Applicability of DSDS Simulation Modeling System to ESD System Acquisition Problems.
1981-02-01
AD-A096 172, MITRE Corp., Bedford, MA. Applicability of DSDS Simulation Modeling System to ESD System Acquisition Problems, by Jeffrey K. Fryer, February 1981 (ESD-TR-81-114, MTR-8187).
Rethinking the lecture: the application of problem based learning methods to atypical contexts.
Rogal, Sonya M M; Snider, Paul D
2008-05-01
Problem based learning is a teaching and learning strategy that uses a problematic stimulus as a means of motivating and directing students to develop and acquire knowledge. Problem based learning is a strategy that is typically used with small groups attending a series of sessions. This article describes the principles of problem based learning and its application in atypical contexts: large groups attending discrete, stand-alone sessions. The principles of problem based learning are based on Socratic teaching, constructivism and group facilitation. To demonstrate the application of problem based learning in an atypical setting, this article focuses on the graduate nurse intake of a teaching hospital. The groups are relatively large and meet for single-day sessions. The modified applications of problem based learning to meet the needs of atypical groups are described. This article contains a step-by-step guide to constructing a problem based learning package for large, single-session groups. Nurse educators facing similar groups will find they can modify problem based learning to suit their teaching context.
NASA Astrophysics Data System (ADS)
Takeda, Jun; Takagi, Kentaro; Zhu, Zicai; Asaka, Kinji
2017-04-01
Ionic polymer-metal composites (IPMCs) generate electrical potential under deformation and can be used as sensors. Recently, Zhu et al. have proposed a sensor model which describes distribution of cations, water molecules and electrical potential under bending deformation. In this paper, we discuss a simplification of the multi-physical sensor model, which is represented by a set of nonlinear partial differential equations. The nonlinear partial differential equations are simplified and approximated into a set of linear ordinary differential equations, i.e., a state-space equation model. At the end, the simplified model is validated by comparing the simulation results with those of the partial differential equation model.
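The kind of simplification described above, reducing a PDE model to a linear state-space equation, can be sketched as follows. The matrices and input below are illustrative placeholders, not the identified IPMC sensor parameters from the paper.

```python
# Sketch: simulating a linear state-space model x' = Ax + Bu, y = Cx,
# as obtained after approximating a nonlinear PDE sensor model.
# A, B, C are illustrative placeholders, not identified IPMC parameters.

def simulate(A, B, C, u, x0, dt, steps):
    """Forward-Euler integration of x' = Ax + Bu; returns outputs y_k = C x_k."""
    n = len(x0)
    x = list(x0)
    ys = []
    for k in range(steps):
        uk = u(k * dt)
        dx = [sum(A[i][j] * x[j] for j in range(n)) + B[i] * uk for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
        ys.append(sum(C[j] * x[j] for j in range(n)))
    return ys

# Toy two-state example: a stable system driven by a step input.
A = [[-1.0, 0.0], [1.0, -2.0]]
B = [1.0, 0.0]
C = [0.0, 1.0]
ys = simulate(A, B, C, u=lambda t: 1.0, x0=[0.0, 0.0], dt=0.001, steps=10000)
# Steady state solves Ax + Bu = 0: x1 -> 1, x2 -> x1/2, so y -> 0.5.
print(round(ys[-1], 3))
```

The appeal of the reduction in the abstract is exactly this: once the model is a state-space equation, simulation is a cheap time-stepping loop rather than a PDE solve.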
Application of symbolic and algebraic manipulation software in solving applied mechanics problems
NASA Technical Reports Server (NTRS)
Tsai, Wen-Lang; Kikuchi, Noboru
1993-01-01
As its name implies, symbolic and algebraic manipulation is an operational tool that can not only retain symbols throughout computations but also express results in terms of symbols. This report starts with a history of symbolic and algebraic manipulators and a review of the literature. With the help of selected examples, the capabilities of symbolic and algebraic manipulators are demonstrated. Their applications to problems of applied mechanics are then presented: the application of automatic formulation to applied mechanics problems, application to a materially nonlinear problem (rigid-plastic ring compression) by the finite element method (FEM), and application to plate problems by FEM. The advantages and difficulties, contributions, education, and perspectives of symbolic and algebraic manipulation are discussed. It is well known that there exist some fundamental difficulties in symbolic and algebraic manipulation, such as internal swelling and mathematical limitations. A remedy for these difficulties is proposed, and the three applications mentioned are solved successfully. For example, the closed-form solution of the stiffness matrix of the four-node isoparametric quadrilateral element for the 2-D elasticity problem was not available before; due to the work presented, its automatic construction becomes feasible. In addition, a newly found advantage of the application of symbolic and algebraic manipulation is believed to be crucial in improving the efficiency of program execution in the future. This will substantially shorten the response time of a system, which is very significant for certain systems, such as missile and high-speed aircraft systems, in which time plays an important role.
Applications of numerical optimization methods to helicopter design problems: A survey
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
This paper surveys applications of mathematical programming methods to improving the design of helicopters and their components. Applications of multivariable search techniques in finite-dimensional space are considered. Five categories of helicopter design problems are covered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.
NASA Astrophysics Data System (ADS)
Deniz, Sinan; Bildik, Necdet
2016-06-01
In this paper, we use the Adomian Decomposition Method (ADM) to solve the singularly perturbed fourth-order boundary value problem. To make the calculation process easier, the given problem is first transformed into a system of two second-order ODEs with suitable boundary conditions. Numerical illustrations are given to demonstrate the effectiveness and applicability of this method for solving such problems. The obtained results show that this technique provides a sequence of functions that converges rapidly to the accurate solution of the problem.
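The order-reduction step described in the abstract (not the ADM recursion itself) can be sketched numerically: a fourth-order equation u'''' = f(x) becomes the coupled pair u'' = v, v'' = f(x). The test problem, integrator, and step size below are illustrative choices with a known exact solution, not taken from the paper.

```python
# Sketch: reduce u'''' = f(x) to the pair u'' = v, v'' = f(x), then to the
# first-order system y = (u, u', v, v'), and integrate with classical RK4.
# Test problem: u'''' = 24 with zero initial data, exact solution u(x) = x^4.

def rhs(x, y):
    u, up, v, vp = y
    return [up, v, vp, 24.0]            # f(x) = 24 for the test problem

def rk4_step(y, x, h):
    def add(a, b, c):                   # componentwise a + c*b
        return [ai + c * bi for ai, bi in zip(a, b)]
    k1 = rhs(x, y)
    k2 = rhs(x + h / 2, add(y, k1, h / 2))
    k3 = rhs(x + h / 2, add(y, k2, h / 2))
    k4 = rhs(x + h, add(y, k3, h))
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

y, x, h = [0.0, 0.0, 0.0, 0.0], 0.0, 0.01
for _ in range(100):                    # integrate from x = 0 to x = 1
    y = rk4_step(y, x, h)
    x += h
err = abs(y[0] - 1.0)                   # error vs exact u(1) = 1
print(err < 1e-6)
```

For an initial value test problem this is direct; the boundary value setting of the paper would additionally require shooting or a BVP solver on the reduced system.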
ERIC Educational Resources Information Center
Dershem, Herbert L.
These modules view aspects of computer use in the problem-solving process, and introduce techniques and ideas that are applicable to other modes of problem solving. The first unit looks at algorithms, flowchart language, and problem-solving steps that apply this knowledge. The second unit describes ways in which computer iteration may be used…
NASA Astrophysics Data System (ADS)
Dorn, O.; Lesselier, D.
2010-07-01
practically relevant inverse problems. The contribution by M Li, A Abubakar and T Habashy, `Application of a two-and-a-half dimensional model-based algorithm to crosswell electromagnetic data inversion', deals with a model-based inversion technique for electromagnetic imaging which addresses novel challenges such as multi-physics inversion, and incorporation of prior knowledge, such as in hydrocarbon recovery. 10. Non-stationary inverse problems, considered as a special class of Bayesian inverse problems, are framed via an orthogonal decomposition representation in the contribution by A Lipponen, A Seppänen and J P Kaipio, `Reduced order estimation of nonstationary flows with electrical impedance tomography'. The goal is to simultaneously estimate, from electrical impedance tomography data, certain characteristics of the Navier--Stokes fluid flow model together with time-varying concentration distribution. 11. Non-iterative imaging methods of thin, penetrable cracks, based on asymptotic expansion of the scattering amplitude and analysis of the multi-static response matrix, are discussed in the contribution by W-K Park, `On the imaging of thin dielectric inclusions buried within a half-space', completing, for a shallow burial case at multiple frequencies, the direct imaging of small obstacles (here, along their transverse dimension), MUSIC and non-MUSIC type indicator functions being used for that purpose. 12. The contribution by R Potthast, `A study on orthogonality sampling' envisages quick localization and shaping of obstacles from (portions of) far-field scattering patterns collected at one or more time-harmonic frequencies, via the simple calculation (and summation) of scalar products between those patterns and a test function. This is numerically exemplified for Neumann/Dirichlet boundary conditions and homogeneous/heterogeneous embedding media. 13. 
The contribution by J D Shea, P Kosmas, B D Van Veen and S C Hagness, `Contrast-enhanced microwave imaging of breast
Necessary conditions for maximax problems with application to aeroglide of hypervelocity vehicles
NASA Technical Reports Server (NTRS)
Vinh, N. X.; Lu, P.
1986-01-01
This paper presents the necessary conditions for solving Chebyshev minimax (or maximax) problems with bounded control. The jump conditions obtained are applicable to problems with single or multiple maxima. By using the Contensou domain of maneuverability, it is shown that when the maxima are isolated single points, the control is generally continuous at the jump point in the minimax problems and discontinuous in the maximax problems in which the first time derivative of the maximax function contains the control variable. The theory is applied to the problem of maximizing the flight radius in a closed-circuit glide of a hypervelocity vehicle and to a maximax optimal control problem in which the control appears explicitly in the first time derivative of the maximax function.
NASA Astrophysics Data System (ADS)
Yaghmaie, Reza; Ghosh, Somnath
2017-07-01
This paper develops an accurate and efficient finite element model for simulating coupled transient electromagnetic and dynamic mechanical fields that differ widely in the frequency ranges. This coupled modeling framework is necessary for effective modeling and simulation of structures such as antennae that are governed by multi-physics problems operating in different frequency and temporal regimes. A key development is the wavelet transformation induced multi-time scaling or WATMUS method that is designed to overcome shortcomings of modeling coupled multi-physics problems that are governed by disparate frequencies. The WATMUS-based FE model is enhanced in this paper with a scaled and preconditioned Newton-GMRES solver for efficient solution. Results from the WATMUS-based FE model show the accuracy and highly improved computational efficiency in comparison with single time-scale methods. The coupled FE model is used to solve two different antenna problems with large electromagnetic to mechanical frequency ratios. The examples considered are a monopole antenna and a microstrip patch antenna. Comparing the electromagnetic fields with the progression of mechanical cycles demonstrate complex multi-physics relations in these applications.
Application of the SNoW machine learning paradigm to a set of transportation imaging problems
NASA Astrophysics Data System (ADS)
Paul, Peter; Burry, Aaron M.; Wang, Yuheng; Kozitsky, Vladimir
2012-01-01
Machine learning methods have been successfully applied to image object classification problems where there is a clear distinction between classes and where a comprehensive set of training samples and ground truth are readily available. The transportation domain is an area where machine learning methods are particularly applicable, since the classification problems typically have well-defined class boundaries and, due to high traffic volumes in most applications, massive amounts of roadway data are available. Though these classes tend to be well defined, the particular image noise and variations can be challenging. Another challenge is the extremely high accuracy typically required in most traffic applications: incorrect assignment of fines or tolls due to imaging mistakes is not acceptable. For the front-seat vehicle occupancy detection problem, classification amounts to determining whether one face (driver only) or two faces (driver + passenger) are detected in the front seat of a vehicle on a roadway. For automatic license plate recognition, the classification problem is a type of optical character recognition problem encompassing multi-class classification. The SNoW machine learning classifier using local SMQT features is shown to be successful in these two transportation imaging applications.
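SNoW-style sparse learners are built around multiplicative update rules such as Winnow. A minimal single-target Winnow sketch over sparse binary features (illustrating the update-rule family only, not the actual SNoW architecture or SMQT features) might look like:

```python
# Minimal Winnow sketch: multiplicative weight updates over sparse binary
# features. Illustrates the update family used by SNoW-style learners,
# not the actual SNoW network architecture or SMQT features.
import itertools

def winnow_train(samples, n_features, alpha=2.0, epochs=50):
    w = [1.0] * n_features
    theta = float(n_features)           # standard Winnow threshold
    for _ in range(epochs):
        mistakes = 0
        for x, label in samples:        # x: list of active feature indices
            pred = 1 if sum(w[i] for i in x) >= theta else 0
            if pred != label:
                mistakes += 1
                factor = alpha if label == 1 else 1.0 / alpha
                for i in x:
                    w[i] *= factor      # promote or demote active features
        if mistakes == 0:
            break
    return w, theta

# Toy concept: label = 1 iff feature 0 or feature 1 is active (a disjunction),
# a target class Winnow is known to learn with few mistakes.
samples = []
for bits in itertools.product([0, 1], repeat=6):
    x = [i for i, b in enumerate(bits) if b]
    samples.append((x, 1 if (0 in x or 1 in x) else 0))

w, theta = winnow_train(samples, n_features=6)
errors = sum(1 for x, label in samples
             if (1 if sum(w[i] for i in x) >= theta else 0) != label)
print(errors)
```

The multiplicative update is what makes this family attractive for the sparse, high-dimensional feature sets common in imaging: only active features are touched on each example.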
Applications of space teleoperator technology to the problems of the handicapped
NASA Technical Reports Server (NTRS)
Malone, T. B.; Deutsch, S.; Rubin, G.; Shenk, S. W.
1973-01-01
The identification of feasible and practical applications of space teleoperator technology for the problems of the handicapped were studied. A teleoperator system is defined by NASA as a remotely controlled, cybernetic, man-machine system designed to extend and augment man's sensory, manipulative, and locomotive capabilities. Based on a consideration of teleoperator systems, the scope of the study was limited to an investigation of these handicapped persons limited in sensory, manipulative, and locomotive capabilities. If the technology being developed for teleoperators has any direct application, it must be in these functional areas. Feasible and practical applications of teleoperator technology for the problems of the handicapped are described, and design criteria are presented with each application. A development plan is established to bring the application to the point of use.
Suzuki, Yuma; Shimizu, Tetsuhide; Yang, Ming
2017-01-01
Quantitative evaluation of multi-physics biomolecule transport at the nano/micro scale is needed in order to optimize the design of microfluidic devices for biomolecule detection with high sensitivity and rapid diagnosis. This paper investigates the effectiveness of computational simulation, using a numerical model of multi-physics biomolecule transport near a microchannel surface, for the development of biomolecule-detection devices. Biomolecule transport under fluid drag force, electric double layer (EDL) force, and van der Waals force was modeled by Newton's equation of motion. The model's validity was verified by comparing the influence of ionic strength and flow velocity on the biomolecule distribution near the surface with experimental results from previous studies. The influence of the acting forces on this distribution was then investigated by simulation. The trend of the distribution with ionic strength and flow velocity agreed with the experimental results when all acting forces were combined. Furthermore, the EDL force dominated the distribution near the surface relative to the fluid drag force, except at high velocity and low ionic strength. The knowledge gained from the simulation may be useful for the design of biomolecule-detection devices, and the simulation is expected to serve as a design tool for high detection sensitivity and rapid diagnosis in the future.
NASA Astrophysics Data System (ADS)
Zheng, Jiajia; Li, Yancheng; Li, Zhaochun; Wang, Jiong
2015-10-01
This paper presents multi-physics modeling of an MR absorber considering the magnetic hysteresis to capture the nonlinear relationship between the applied current and the generated force under impact loading. The magnetic field, temperature field, and fluid dynamics are represented by the Maxwell equations, conjugate heat transfer equations, and Navier-Stokes equations. These fields are coupled through the apparent viscosity and the magnetic force, both of which in turn depend on the magnetic flux density and the temperature. Based on a parametric study, an inverse Jiles-Atherton hysteresis model is used and implemented for the magnetic field simulation. The temperature rise of the MR fluid in the annular gap caused by core loss (i.e. eddy current loss and hysteresis loss) and fluid motion is computed to investigate the current-force behavior. A group of impulsive tests was performed for the manufactured MR absorber with step exciting currents. The numerical and experimental results showed good agreement, which validates the effectiveness of the proposed multi-physics FEA model.
Gradient vs. approximation design optimization techniques in low-dimensional convex problems
NASA Astrophysics Data System (ADS)
Fedorik, Filip
2013-10-01
Design optimization methods' application in structural design represents a suitable means of producing efficient designs for practical problems. The implementation of optimization techniques in multi-physics software packages permits designers to use them in a wide range of engineering problems. These methods are usually based on modified mathematical programming techniques and/or their combinations, to improve universality and robustness for various human and technical problems. The presented paper deals with the analysis of optimization methods and tools within the frame of one- to three-dimensional strictly convex optimization problems, which form a component of the Design Optimization module in the Ansys program. The First Order method, based on a combination of the steepest descent and conjugate gradient methods, and the Subproblem Approximation method, which uses approximations of the dependent variables' functions, supported by the Random, Sweep, Factorial and Gradient tools, are analyzed, and the different characteristics of the methods are observed.
The Application of an Etiological Model of Personality Disorders to Problem Gambling.
Brown, Meredith; Allen, J Sabura; Dowling, Nicki A
2015-12-01
Problem gambling is a significant mental health problem that creates a multitude of intrapersonal, interpersonal, and social difficulties. Recent empirical evidence suggests that personality disorders, and in particular borderline personality disorder (BPD), are commonly co-morbid with problem gambling. Despite this finding there has been very little research examining overlapping factors between these two disorders. The aim of this review is to summarise the literature exploring the relationship between problem gambling and personality disorders. The co-morbidity of personality disorders, particularly BPD, is reviewed and the characteristics of problem gamblers with co-morbid personality disorders are explored. An etiological model from the more advanced BPD literature-the biosocial developmental model of BPD-is used to review the similarities between problem gambling and BPD across four domains: early parent-child interactions, emotion regulation, co-morbid psychopathology and negative outcomes. It was concluded that personality disorders, in particular BPD are commonly co-morbid among problem gamblers and the presence of a personality disorder complicates the clinical picture. Furthermore BPD and problem gambling share similarities across the biosocial developmental model of BPD. Therefore clinicians working with problem gamblers should incorporate routine screening for personality disorders and pay careful attention to the therapeutic alliance, client motivations and therapeutic boundaries. Furthermore adjustments to therapy structure, goals and outcomes may be required. Directions for future research include further research into the applicability of the biosocial developmental model of BPD to problem gambling.
NASA Astrophysics Data System (ADS)
Yaakob, Shamshul Bahar; Watada, Junzo
In this paper, a hybrid neural network approach to solve mixed integer quadratic bilevel programming problems is proposed. Bilevel programming problems arise when one optimization problem, the upper problem, is constrained by another optimization, the lower problem. The mixed integer quadratic bilevel programming problem is transformed into a double-layered neural network. The combination of a genetic algorithm (GA) and a meta-controlled Boltzmann machine (BM) enables us to formulate a hybrid neural network approach to solving bilevel programming problems. The GA is used to generate the feasible partial solutions of the upper level and to provide the parameters for the lower level. The meta-controlled BM is employed to cope with the lower level problem. The lower level solution is transmitted to the upper level. This procedure enables us to obtain the whole upper level solution. The iterative processes can converge on the complete solution of this problem to generate an optimal one. The proposed method leads the mixed integer quadratic bilevel programming problem to a global optimal solution. Finally, a numerical example is used to illustrate the application of the method in a power system environment, which shows that the algorithm is feasible and advantageous.
NASA Astrophysics Data System (ADS)
Zhokh, Alexey A.; Trypolskyi, Andrey I.; Strizhak, Peter E.
2017-06-01
Asymptotic Green's functions for short and long times for the time-fractional diffusion equation, derived by a simple heuristic method, are provided for the case in which the fractional derivative is taken in the Caputo sense. The applicability of the asymptotic Green's functions to solving the anomalous diffusion problem on a semi-infinite rod is demonstrated. The initial value problem for the long-time solution of the time-fractional diffusion equation is resolved by the Green's function approach.
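For reference, the standard definitions behind the abstract's terminology (general textbook statements, not equations reproduced from the paper itself) are:

```latex
% Caputo fractional derivative of order 0 < \alpha < 1:
{}^{C}D_t^{\alpha} u(t) = \frac{1}{\Gamma(1-\alpha)}
    \int_0^t \frac{u'(\tau)}{(t-\tau)^{\alpha}} \, d\tau .
% Time-fractional diffusion equation on the semi-infinite rod x > 0:
{}^{C}D_t^{\alpha} u(x,t) = D \,\frac{\partial^2 u}{\partial x^2},
\qquad u(x,0) = u_0(x),
% whose solution is the convolution of u_0 with the Green's function
% G_\alpha(x,t); for \alpha = 1 this recovers classical diffusion.
```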
Szilard, Ronaldo Henriques; Coleman, Justin; Smith, Curtis L.; Prescott, Steven; Kammerer, Annie; Youngblood, Robert; Pope, Chad
2015-07-01
This report addresses a Risk-Informed Margin Management industry application on external events; more specifically, combined events: seismically induced external flooding analyses for a generic nuclear power plant with generic site soil and generic plant systems and structures. The focus of this report is to define the problem above, set up the analysis, describe the methods to be used and the tools to be applied to each problem, and present the associated data analysis and validation.
Generalized network flow model with application to power supply-demand problems
Liu, C.
1982-08-01
A generalization of the conventional network flow model to a very general F-flow model is provided. The max-flow-min-cut theorem is then generalized. The theorem is used to derive a necessary and sufficient condition for feasibility of the multi-terminal supply-demand problem based on the F-flow model. As an application, the electric power supply-demand problem is discussed from the F-flow point of view.
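The classical (non-generalized) supply-demand feasibility test that the F-flow model extends can be sketched with a super-source/super-sink reduction and an ordinary max-flow computation; the generator/load network below is an illustrative toy, not data from the report.

```python
# Sketch: multi-terminal supply-demand feasibility via ordinary max-flow.
# Attach a super-source S to all supplies and a super-sink T from all demands;
# the problem is feasible iff max-flow(S, T) equals total demand.
# (This is the classical model that the report's F-flow theory generalizes.)
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp: BFS augmenting paths on a residual-capacity dict."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                 # recover the augmenting path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:               # update residual capacities
            cap[u][v] -= aug
            cap[v][u] += aug
        flow += aug

cap = defaultdict(lambda: defaultdict(int))
supplies = {'g1': 3, 'g2': 2}           # generators (toy data)
demands = {'d1': 2, 'd2': 2}            # loads (toy data)
for u, v, c in [('g1', 'd1', 2), ('g1', 'd2', 2), ('g2', 'd1', 1)]:
    cap[u][v] += c                      # transmission lines
for g, amt in supplies.items():
    cap['S'][g] += amt
for d, amt in demands.items():
    cap[d]['T'] += amt

feasible = max_flow(cap, 'S', 'T') == sum(demands.values())
print(feasible)
```

By the max-flow-min-cut theorem, infeasibility is always certified by a cut whose capacity falls short of the demand it separates, which is the shape of condition the report generalizes to F-flows.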
NASA Technical Reports Server (NTRS)
Chesler, L.; Pierce, S.
1971-01-01
Generalized, cyclic, and modified multistep numerical integration methods are developed and evaluated for application to problems of satellite orbit computation. Generalized methods are compared with the presently utilized Cowell methods; new cyclic methods are developed for special second-order differential equations; and several modified methods are developed and applied to orbit computation problems. Special computer programs were written to generate coefficients for these methods, and subroutines were written which allow use of these methods with NASA's GEOSTAR computer program.
Phillips, Cara L; Iannaccone, Julia A; Rooker, Griffin W; Hagopian, Louis P
2017-04-01
Noncontingent reinforcement (NCR) is a commonly used treatment for severe problem behavior displayed by individuals with intellectual and developmental disabilities. The current study sought to extend the literature by reporting outcomes achieved with 27 consecutive applications of NCR as the primary treatment for severe problem behavior. All applications of NCR were included regardless of treatment outcome to minimize selection bias favoring successful cases. Participants ranged in age from 5 to 33 years. We analyzed the results across behavioral function and with regard to the use of functional versus alternative reinforcers. NCR effectively treated problem behavior maintained by social reinforcement in 14 of 15 applications, using either the functional reinforcer or alternative reinforcers. When we implemented NCR to treat problem behavior maintained by automatic reinforcement, we often had to add other treatment components to produce clinically significant effects (five of nine applications). Results provide information on the effectiveness and limitations of NCR as treatment for severe problem behavior. © 2017 Society for the Experimental Analysis of Behavior.
Application of the steepest ascent optimization method to a reentry trajectory problem
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
The direct optimization method is presented in detail. Nominal values of the control variables are input parameters. Perturbations are introduced into the control variables, and the resulting first-order predictions of changes in the payoff and constraint functions are then determined. Through a sequence of prescribed cycles, a trajectory is eventually obtained that is reasonably close to the optimum. The method is successfully applied to an Apollo three-dimensional reentry problem. The study of this Apollo application problem has resulted in the development of a highly flexible computer program that can be modified to consider other trajectory optimization problems.
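The perturb-and-predict loop described above can be sketched on a toy payoff function; the payoff, step size, and cycle count below are illustrative stand-ins for the reentry trajectory computation.

```python
# Sketch: steepest-ascent optimization with finite-difference gradients.
# The payoff function is a toy stand-in for the trajectory computation;
# step size and cycle count are illustrative.

def payoff(u):
    # Concave toy payoff, maximized at u = (2, -1).
    return -(u[0] - 2.0) ** 2 - 3.0 * (u[1] + 1.0) ** 2

def grad_fd(f, u, eps=1e-6):
    """First-order predictions of payoff change from small perturbations."""
    g = []
    for i in range(len(u)):
        up = list(u)
        up[i] += eps                    # perturb one control variable
        g.append((f(up) - f(u)) / eps)
    return g

u = [0.0, 0.0]                          # nominal control variables
for _ in range(200):                    # prescribed cycles of ascent
    g = grad_fd(payoff, u)
    u = [ui + 0.1 * gi for ui, gi in zip(u, g)]

print([round(v, 2) for v in u])         # approaches the optimum (2, -1)
```

In the actual trajectory setting each gradient entry costs one perturbed trajectory integration, which is why the method proceeds in a fixed sequence of prescribed cycles rather than iterating to full convergence.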
Boundary-value problems for elliptic functional-differential equations and their applications
NASA Astrophysics Data System (ADS)
Skubachevskii, A. L.
2016-10-01
Boundary-value problems are considered for strongly elliptic functional-differential equations in bounded domains. In contrast to the case of elliptic differential equations, smoothness of generalized solutions of such problems can be violated in the interior of the domain and may be preserved only on some subdomains, and the symbol of a self-adjoint semibounded functional-differential operator can change sign. Both necessary and sufficient conditions are obtained for the validity of a Gårding-type inequality in algebraic form. Spectral properties of strongly elliptic functional-differential operators are studied, and theorems are proved on smoothness of generalized solutions in certain subdomains and on preservation of smoothness on the boundaries of neighbouring subdomains. Applications of these results are found to the theory of non-local elliptic problems, to the Kato square-root problem for an operator, to elasticity theory, and to problems in non-linear optics. Bibliography: 137 titles.
Inverse problems with Poisson data: statistical regularization theory, applications and algorithms
NASA Astrophysics Data System (ADS)
Hohage, Thorsten; Werner, Frank
2016-09-01
Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results we will establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As most prominent applications we briefly introduce Positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years.
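The unpenalized maximum likelihood baseline that the penalized estimators in such reviews build on is the classical EM (Richardson-Lucy) iteration for Poisson data. A minimal sketch on a toy 2x2 forward operator (the operator and data are illustrative, not from the review):

```python
# Sketch: EM (Richardson-Lucy) iteration for the Poisson inverse problem
# y ~ Poisson(A x). This is the classical unpenalized ML iteration that
# penalized maximum likelihood estimators refine; A, x_true are toy values.

def richardson_lucy(A, y, iters=2000):
    m, n = len(A), len(A[0])
    col_sum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    x = [1.0] * n                       # positive initial guess
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / Ax[i] for i in range(m)]
        # Multiplicative update keeps x nonnegative automatically.
        x = [x[j] * sum(A[i][j] * ratio[i] for i in range(m)) / col_sum[j]
             for j in range(n)]
    return x

A = [[0.8, 0.2], [0.2, 0.8]]
x_true = [1.0, 3.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]  # noiseless
x = richardson_lucy(A, y)
print([round(v, 3) for v in x])
```

Each iteration increases the Poisson likelihood monotonically; the regularization theory surveyed in the paper addresses what this baseline lacks, namely stability under noise and early-stopping or penalty rules.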
Preface to foundations of information/decision fusion with applications to engineering problems
Madan, R.N.; Rao, N.S.V.
1996-10-01
In engineering design, it was shown by von Neumann that a reliable system can be built using unreliable components by employing simple majority rule fusers. If error densities are known for individual pattern recognizers then an optimal fuser was shown to be implementable as a threshold function. Many applications have been developed for distributed sensor systems, sensor-based robotics, face recognition, decision fusion, recognition of handwritten characters, and automatic target recognition. Recently, information/decision fusion has been recognized as an independently growing field with its own principles and methods. While some of the fusion problems in engineering systems could be solved by applying existing results from other domains, many others require original approaches and solutions. In turn, these new approaches would lead to new applications in other areas. There are two paradigms at the extrema of the spectrum of the information/decision methods: (i) Fusion as Problem: In certain applications, fusion is explicitly specified in the problem statement. Particularly in robotics applications, many researchers realized the fundamental limitations of single sensor systems, thereby motivating the deployment of multiple sensors. In more general engineering applications, similar sensors are employed for fault tolerance, while in several others, different sensor modalities are required to achieve the given task. In these scenarios, fusion methods have to be first designed to solve the problem at hand. (ii) Fusion as Solution: In many instances (e.g., DNA analysis), a number of different solutions to a particular problem already exist. Often these solutions can be combined to obtain solutions that outperform any individual one. The area of forecasting is a good example of such paradigm. Although fusion is not explicitly specified in these problems, it is used as an ingredient of the solution.
The application of geographical information systems to important public health problems in Africa.
Tanser, Frank C; Le Sueur, David
2002-12-09
Africa is generally held to be in crisis, and the quality of life for the majority of the continent's inhabitants has been declining in both relative and absolute terms. In addition, the majority of the world's disease burden is realised in Africa. Geographical information systems (GIS) technology, therefore, is a tool of great inherent potential for health research and management in Africa. The spatial modelling capacity offered by GIS is directly applicable to understanding the spatial variation of disease, and its relationship to environmental factors and the health care system. Whilst there have been numerous critiques of the application of GIS technology to developed world health problems it has been less clear whether the technology is both applicable and sustainable in an African setting. If the potential for GIS to contribute to health research and planning in Africa is to be properly evaluated then the technology must be applicable to the most pressing health problems in the continent. We briefly outline the work undertaken in HIV, malaria and tuberculosis (diseases of significant public health impact and contrasting modes of transmission), outline GIS trends relevant to Africa and describe some of the obstacles to the sustainable implementation of GIS. We discuss types of viable GIS applications and conclude with a discussion of the types of African health problems of particular relevance to the application of GIS.
Associations between rushed condom application and condom use errors and problems.
Crosby, Richard; Graham, Cynthia; Milhausen, Robin; Sanders, Stephanie; Yarber, William; Shrier, Lydia A
2015-06-01
To determine whether any of four condom use errors/problems occurred more frequently when condom application was 'rushed' among a clinic-based sample from three US states. A convenience sample (n=512) completed daily electronic assessments including questions about condom use being rushed and also assessed condom breakage, slippage, leakage and incomplete use. Of 8856 events, 6.5% (n=574) occurred when application was rushed. When events involved rushed condom application, the estimated odds of breakage and slippage were almost doubled (estimated OR (EOR)=1.90 and EOR=1.86). Rushed application increased the odds of not using condoms throughout sex (EOR=1.33) and nearly tripled the odds of leakage (EOR=2.96). With one exception, all tests for interactions between gender and rushed application and between age and rushed application were not significant (p values > 0.10). This event-level analysis suggests that women and men who perceive that condom application was rushed are more likely to experience errors/problems during the sexual event that substantially compromise the protective value of condoms against disease and pregnancy. Educational efforts emphasising the need to allow ample time for condom application may benefit this population.
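The estimated odds ratios (EORs) reported above come from 2×2 event counts; the computation itself is simple cross-multiplication. A sketch with hypothetical counts (invented for illustration, not the study's data):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    return (a * d) / (b * c)

# Hypothetical counts: 10 of 100 rushed events broke vs 5 of 100 unrushed.
est = odds_ratio(10, 90, 5, 95)   # odds of breakage roughly doubled
```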
NASA Astrophysics Data System (ADS)
Zheng, Xu; Hao, Zhiyong; Wang, Xu; Mao, Jie
2016-06-01
High-speed-railway-train (HST) interior noise at low, medium, and high frequencies can be simulated by finite element analysis (FEA) or boundary element analysis (BEA), hybrid finite element analysis-statistical energy analysis (FEA-SEA), and statistical energy analysis (SEA), respectively. First, a new method named statistical acoustic energy flow (SAEF) is proposed, which can be applied to full-spectrum HST interior noise simulation (including low, medium, and high frequencies) with only one model. In an SAEF model, the corresponding multi-physical-field coupling excitations are fully considered and coupled to excite the interior noise. The interior noise attenuated by the sound insulation panels of the carriage is simulated by modeling the inflow of acoustic energy from the exterior excitations into the interior acoustic cavities. Rigid multi-body dynamics, fast multi-pole BEA, and large-eddy simulation with indirect boundary element analysis are employed to extract the multi-physical-field excitations, which include the wheel-rail interaction forces/secondary suspension forces, the wheel-rail rolling noise, and the aerodynamic noise, respectively. All the peak values and their frequency bands of the simulated acoustic excitations are validated against those from a noise source identification test. In addition, the measured equipment noise inside the equipment compartment is used as one of the excitation sources contributing to the interior noise. Second, a fully trimmed FE carriage model is constructed, and the simulated modal shapes and frequencies agree well with the measured ones, validating the global FE carriage model as well as the local FE models of the aluminum alloy-trim composite panel. Thus, the sound transmission loss model of any composite panel has been indirectly validated. Finally, the SAEF model of the carriage is constructed based on the accurate FE model and driven by the multi-physical-field excitations. The results show…
Development and application of unified algorithms for problems in computational science
NASA Technical Reports Server (NTRS)
Shankar, Vijaya; Chakravarthy, Sukumar
1987-01-01
A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are developed for a hierarchy of gas dynamics equations, with applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some
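The Landweber iteration that Bot and Hein generalize reduces, in the classical Hilbert-space setting, to the gradient-type update x_{k+1} = x_k + ω Aᵀ(y − A x_k). A minimal sketch of that classical special case (the step size and problem data are illustrative; the Banach-space version discussed above replaces Aᵀ with duality mappings):

```python
import numpy as np

def landweber(A, y, omega, n_iter):
    """Classical Landweber iteration for A x = y:
    x_{k+1} = x_k + omega * A^T (y - A x_k),
    convergent for 0 < omega < 2 / ||A||^2."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + omega * A.T @ (y - A @ x)
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
y = A @ np.array([1.0, -1.0])        # exact data for x_true = (1, -1)
x = landweber(A, y, omega=0.2, n_iter=500)
```

With noisy data the iteration count itself acts as the regularization parameter, which is why stopping rules feature so prominently in this literature.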
The Views of Undergraduates about Problem-Based Learning Applications in a Biochemistry Course
ERIC Educational Resources Information Center
Tarhan, Leman; Ayyildiz, Yildizay
2015-01-01
The effect of problem-based learning (PBL) applications in an undergraduate biochemistry course on students' interest in this course was investigated through four modules during one semester. Students' views about active learning and improvement in social skills were also collected and evaluated. We conducted the study with 36 senior students from…
Applications of dynamic scheduling technique to space related problems: Some case studies
NASA Technical Reports Server (NTRS)
Nakasuka, Shinichi; Ninomiya, Tetsujiro
1994-01-01
The paper discusses the applications of 'Dynamic Scheduling' technique, which has been invented for the scheduling of Flexible Manufacturing System, to two space related scheduling problems: operation scheduling of a future space transportation system, and resource allocation in a space system with limited resources such as space station or space shuttle.
ERIC Educational Resources Information Center
Hamadneh, Iyad M.; Al-Masaeed, Aslan
2015-01-01
This study aimed at finding out mathematics teachers' attitudes towards a photo math application for solving mathematical problems using the mobile camera; it also aimed to identify significant differences in their attitudes according to their stage of teaching, educational qualifications, and teaching experience. The study used judgmental/purposive…
Thinking about Applications: Effects on Mental Models and Creative Problem-Solving
ERIC Educational Resources Information Center
Barrett, Jamie D.; Peterson, David R.; Hester, Kimberly S.; Robledo, Issac C.; Day, Eric A.; Hougen, Dean P.; Mumford, Michael D.
2013-01-01
Many techniques have been used to train creative problem-solving skills. Although the available techniques have often proven to be effective, creative training often discounts the value of thinking about applications. In this study, 248 undergraduates were asked to develop advertising campaigns for a new high-energy soft drink. Solutions to this…
ERIC Educational Resources Information Center
Wibawa, Kadek Adi; Nusantara, Toto; Subanji; Parta, I. Nengah
2017-01-01
This study aims to reveal the fragmentation of students' thinking structures in solving problems on the application of the definite integral to area. Fragmentation is a term from computer storage that is closely analogous to theoretical constructions that occur in the human brain (memory). Almost every student has a different way to…
ERIC Educational Resources Information Center
Yang, Eunice
2016-01-01
This paper discusses the use of a free mobile engineering application (app) called Autodesk® ForceEffect™ to provide students assistance with spatial visualization of forces and more practice in solving/visualizing statics problems compared to the traditional pencil-and-paper method. ForceEffect analyzes static rigid-body systems using free-body…
Application of NASA management approach to solve complex problems on earth
NASA Technical Reports Server (NTRS)
Potate, J. S.
1972-01-01
The application of NASA management approach to solving complex problems on earth is discussed. The management of the Apollo program is presented as an example of effective management techniques. Four key elements of effective management are analyzed. Photographs of the Cape Kennedy launch sites and supporting equipment are included to support the discussions.
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Hou, Gene J. W.
1994-01-01
A method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented. The approximate analysis method developed involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations to changes in the eigenvalues and the eigenvectors associated with the repeated eigenvalue problem. This work also presents a numerical technique that facilitates the definition of an eigenvector derivative for the case of repeated eigenvalues with repeated eigenvalue derivatives (of all orders). Examples are given which demonstrate the application of such equations for sensitivity and approximate analysis. Emphasis is placed on the application of sensitivity analysis to large-scale structural and controls-structures optimization problems.
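For the simple (non-repeated) eigenvalue case that the method above extends, the first-order sensitivity of a symmetric matrix's eigenvalue is dλᵢ/dp = vᵢᵀ (∂A/∂p) vᵢ. A sketch of that baseline formula, checked against finite differences (the matrices are illustrative, not a structural model):

```python
import numpy as np

def eigenvalue_sensitivities(A, dA):
    """First-order eigenvalue derivatives for symmetric A with simple
    eigenvalues: d lambda_i / dp = v_i^T (dA/dp) v_i."""
    _, V = np.linalg.eigh(A)           # eigenvalues ascending, matching columns
    return np.array([V[:, i] @ dA @ V[:, i] for i in range(A.shape[0])])

A = np.array([[2.0, 1.0], [1.0, 3.0]])
dA = np.array([[0.0, 1.0], [1.0, 0.0]])   # dA/dp for some design parameter p

d_exact = eigenvalue_sensitivities(A, dA)
h = 1e-6                                   # central finite-difference check
d_fd = (np.linalg.eigvalsh(A + h * dA) - np.linalg.eigvalsh(A - h * dA)) / (2 * h)
```

The repeated-eigenvalue case treated in the paper is harder precisely because the eigenvectors are not unique, so this simple formula no longer applies directly.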
Application of the Sinc method to a dynamic elasto-plastic problem
NASA Astrophysics Data System (ADS)
Abdella, K.; Yu, X.; Kucuk, I.
2009-01-01
This paper presents the application of Sinc bases to simulate numerically the dynamic behavior of a one-dimensional elastoplastic problem. The numerical methods that are traditionally employed to solve elastoplastic problems include finite difference, finite element and spectral methods. However, more recently, biorthogonal wavelet bases have been used to study the dynamic response of a uniaxial elasto-plastic rod [Giovanni F. Naldi, Karsten Urban, Paolo Venini, A wavelet-Galerkin method for elastoplasticity problems, Report 181, RWTH Aachen IGPM, and Math. Modelling and Scient. Computing, vol. 10, 2000]. In this paper the Sinc-Galerkin method is used to solve the straight elasto-plastic rod problem. Due to their exponential convergence rates and their need for relatively few nodal points, Sinc-based methods can significantly outperform traditional numerical methods [J. Lund, K.L. Bowers, Sinc Methods for Quadrature and Differential Equations, SIAM, Philadelphia, 1992]. However, the potential of Sinc-based methods for solving elastoplasticity problems has not yet been explored. The aim of this paper is to demonstrate the possible application of Sinc methods through the numerical investigation of the unsteady one-dimensional elastic-plastic rod problem.
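The exponential convergence cited above is easiest to see in sinc quadrature, the building block of Sinc-Galerkin methods: the trapezoidal sum h Σ f(kh) converges exponentially in 1/h for integrands analytic and decaying on the real line. A minimal sketch (step size and truncation are illustrative):

```python
import math

def sinc_quadrature(f, h, N):
    """Sinc (trapezoidal) rule on the real line: h * sum_{k=-N}^{N} f(k h).
    Exponentially accurate for analytic, rapidly decaying integrands."""
    return h * sum(f(k * h) for k in range(-N, N + 1))

# Gaussian integral over the real line; the exact value is sqrt(pi).
approx = sinc_quadrature(lambda x: math.exp(-x * x), h=0.5, N=20)
```

Even with only 41 nodes the result agrees with √π to near machine precision, which illustrates why Sinc methods need comparatively few nodal points.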
NASA Astrophysics Data System (ADS)
Ma, Wei; Lu, Liang; Liu, Ting; Xu, Xianbo; Sun, Liepeng; Li, Chenxing; Shi, Longbo; Wang, Wenbin; He, Yuan; Zhao, Hongwei
2017-09-01
The resonant frequency stability of the radio frequency quadrupole (RFQ) is an important concern during commissioning. The power dissipated on the RFQ internal surface will heat the cavity and lead to a temperature rise and a structural deformation, especially in the continuous wave (CW) RFQs, which will cause the resonant frequency shifts. It is important to simulate the temperature rise, the deformation and the frequency shift of the RFQ cavity. The cooling water takes away the power to maintain the frequency stability. Meanwhile, the RFQ resonant frequency can be tuned by adjusting the water temperature. In this paper, a detailed three-dimensional multi-physics analysis of the Low Energy Accelerator Facility (LEAF) RFQ will be presented and a commissioning frequency tuning strategy will be studied.
Espino, Daniel M; Shepherd, Duncan E T; Hukins, David W L
2014-01-01
A transient multi-physics model of the mitral heart valve has been developed, which allows simultaneous calculation of fluid flow and structural deformation. A recently developed contact method has been applied to enable simulation of systole (the stage when blood pressure is elevated within the heart to pump blood to the body). The geometry was simplified to represent the mitral valve within the heart walls in two dimensions. Only the mitral valve undergoes deformation. A moving arbitrary Lagrange-Euler mesh is used to allow true fluid-structure interaction (FSI). The FSI model requires blood flow to induce valve closure by inducing strains in the region of 10-20%. Model predictions were found to be consistent with existing literature and will undergo further development.
The 9-Step Problem Design Process for Problem-Based Learning: Application of the 3C3R Model
ERIC Educational Resources Information Center
Hung, Woei
2009-01-01
The design of problems is crucial for the effectiveness of problem-based learning (PBL). Research has shown that PBL problems have not always been effective. Ineffective PBL problems could affect whether students acquire sufficient domain knowledge, activate appropriate prior knowledge, and properly direct their own learning. This paper builds on…
Multi-scale and multi-physics simulations using the multi-fluid plasma model
2017-04-25
…relating higher-moment variables to the lower ones. The fluids are coupled to each other and to the electromagnetic fields through Maxwell's equations and… electromagnetic fields. An ideal numerical method for the MFPM should: be high-order accurate; capture shocks; couple the flux and the sources; not impose… For the electromagnetic plasma shock problem: a fast rarefaction wave (FR), a slow compound wave (SC), a contact discontinuity (CD), a slow shock (SS), and another…
An application of the Nash-Moser theorem to the vacuum boundary problem of gaseous stars
NASA Astrophysics Data System (ADS)
Makino, Tetu
2017-01-01
We have been studying spherically symmetric motions of gaseous stars with a physical vacuum boundary governed either by the Euler-Poisson equations in the non-relativistic theory or by the Einstein-Euler equations in the relativistic theory. The problems are to construct solutions whose first approximations are small time-periodic solutions to the linearized problem at an equilibrium, and to construct solutions to the Cauchy problem near an equilibrium. These problems can be solved, when 1/(γ - 1) is an integer (where γ is the adiabatic exponent of the gas near the vacuum), using R. Hamilton's formulation of the Nash-Moser theorem. We discuss an application of J. T. Schwartz's formulation of the Nash-Moser theorem to the case in which 1/(γ - 1) is not an integer but sufficiently large.
The Fractional Fourier Transform and Its Application to Energy Localization Problems
NASA Astrophysics Data System (ADS)
Oonincx, Patrick J.; ter Morsche, Hennie G.
2003-12-01
Applying the fractional Fourier transform (FRFT) and the Wigner distribution on a signal in a cascade fashion is equivalent to a rotation of the time and frequency parameters of the Wigner distribution. We presented in ter Morsche and Oonincx, 2002, an integral representation formula that yields affine transformations on the spatial and frequency parameters of the multi-dimensional Wigner distribution when applied to a signal, just as the FRFT yields rotations. In this paper, we show how this representation formula can be used to solve certain energy localization problems in phase space. Examples of such problems are given by means of some classical results. Although the results on localization problems are classical, the application of generalized Fourier transforms enlarges the class of problems that can be solved with traditional techniques.
NASA Astrophysics Data System (ADS)
Costner, Kelly Mitchell
This study developed and piloted the Problem-Solving Approach to program evaluation, which involves the direct application of the problem-solving process as a metaphor for program evaluation. A rationale for a mathematics-specific approach is presented, and relevant literature in both program evaluation and mathematics education is reviewed. The Problem-Solving Approach was piloted with a high-school level integrated course in mathematics and science that used graphing calculators and data collection devices with the goal of helping students to gain better understanding of relationships between mathematics and science. Twelve students participated in the course, which was co-taught by a mathematics teacher and a science teacher. Data collection for the evaluation included observations, a pre- and posttest, student questionnaires, student interviews, teacher interviews, principal interviews, and a focus group that involved both students and their teachers. Results of the evaluation of the course are presented as an evaluation report. Students showed improvement in their understandings of mathematics-science relationships, but also showed growth in terms of self-confidence, independence, and various social factors that were not expected outcomes. The teachers experienced a unique form of professional development by learning and relearning concepts in each other's respective fields and by gaining insights into each other's teaching strengths. Both the results of the evaluation and the evaluation process itself are discussed in light of the proposed problem-solving approach. The use of problem solving and of specific problem-solving strategies was found to be prevalent among the students and the teachers, as well as in the activities of the evaluator. Specific problem-solving strategies are highlighted for their potential value in program evaluation situations. The resulting Problem-Solving Approach, revised through the pilot application, employs problem solving as a
Complimentary single technique and multi-physics modeling tools for NDE challenges
NASA Astrophysics Data System (ADS)
Le Lostec, Nechtan; Budyn, Nicolas; Sartre, Bernard; Glass, S. W.
2014-02-01
The challenges of modeling and simulation for Non Destructive Examination (NDE) research and development at AREVA NDE Solutions Technical Center (NETEC) are presented. In particular, the choice of a relevant software suite covering different applications and techniques and the process/scripting tools required for simulation and modeling are discussed. The software portfolio currently in use is then presented along with the limitations of the different software: CIVA for ultrasound (UT) methods, PZFlex for UT probes, Flux for eddy current (ET) probes and methods, plus Abaqus for multiphysics modeling. The finite element code, Abaqus is also considered as the future direction for many of our NDE modeling and simulation tasks. Some application examples are given on modeling of a piezoelectric acoustic phased array transducer and preliminary thermography configurations.
NASA Astrophysics Data System (ADS)
Narwadi, Teguh; Subiyanto
2017-03-01
The Travelling Salesman Problem (TSP) is one of the best-known NP-hard problems, which means that no exact algorithm is known to solve it in polynomial time. This paper presents a new application of a genetic algorithm combined with a local search technique to solve the TSP. For the local search technique, an iterative hill climbing method is used. The system is implemented on the Android OS, because Android is now widely used around the world and is a mobile platform. It is also integrated with the Google API, which is used to obtain the geographical locations of the cities and the distances between them, and to display the route. We conducted experiments to test the behavior of the application. To assess its effectiveness, the hybrid genetic algorithm (HGA) application is compared with a simple GA application on 5 samples of cities in Central Java, Indonesia, with different numbers of cities. The experiments show that the HGA produces a better average solution than the simple GA in 5 tests out of 5 (100%). The results show that the hybrid genetic algorithm outperforms the genetic algorithm, especially on problems of higher complexity.
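A minimal sketch of the hybrid scheme this abstract describes: a genetic algorithm whose offspring are refined by iterative hill climbing (here a 2-opt segment-reversal local search). The population size, crossover, and mutation details are illustrative placeholders, not the paper's exact implementation:

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def hill_climb(tour, dist):
    """Iterative hill climbing via 2-opt: reverse segments while improving."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

def hybrid_ga_tsp(dist, pop_size=10, generations=10, seed=0):
    """GA with elitist selection, simplified order crossover, swap mutation,
    and hill climbing applied to every offspring (the memetic step)."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)               # simplified order crossover
            child = a[:cut] + [c for c in b if c not in a[:cut]]
            if rng.random() < 0.3:                  # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(hill_climb(child, dist))
        pop = survivors + children
    return min(pop, key=lambda t: tour_length(t, dist))

# Four cities on a unit square; the optimal tour is the perimeter (length 4).
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.hypot(px - qx, py - qy) for qx, qy in pts] for px, py in pts]
best = hybrid_ga_tsp(dist)
```

The local-search step is what distinguishes the HGA from the simple GA in the comparison above: each child is pushed to a local optimum before it competes for survival.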
On well-partial-order theory and its application to combinatorial problems of VLSI design
NASA Technical Reports Server (NTRS)
Fellows, M.; Langston, M.
1990-01-01
We nonconstructively prove the existence of decision algorithms with low-degree polynomial running times for a number of well-studied graph layout, placement, and routing problems. Some were not previously known to be in P at all; others were only known to be in P by way of brute force or dynamic programming formulations with unboundedly high-degree polynomial running times. Our methods include the application of the recent Robertson-Seymour theorems on the well-partial-ordering of graphs under both the minor and immersion orders. We also briefly address the complexity of search versions of these problems.
NASA Astrophysics Data System (ADS)
Simons, Neil Richard Samuel
In this thesis the development and application of general purpose computer simulation techniques for macroscopic electromagnetic phenomena are investigated. These techniques are applicable to a wide variety of practical problems pertaining to: Electromagnetic Compatibility and Interference, Radar-Cross-Section, and the analysis and design of antennas. The goal of this research is to examine methods that are applicable to a wide variety of problems rather than specialized approaches that are only useful for specific problems. A brief review of the computational electromagnetics literature indicates two general types of methods are applicable. These are numerical approximation of integral-equation formulations and numerical approximation of differential-equation formulations. Because of their relative efficiency for inhomogeneous geometries, the direction of the thesis proceeds with numerical approximations to differential-equation based formulations. The differential-equation based numerical methods include various finite-difference, finite-element, finite -volume, and transmission line matrix methods. A literature review and overview of these numerical methods is provided. The goal of the overview is to provide the capability for the classification for existing and future differential equation based numerical methods to identify relative advantages and disadvantages. Extensions to the two-dimensional transmission line matrix method are presented. The extensions are intended to provide some of the flexibility traditionally associated with finite-difference and finite-element methods. Three new two-dimensional models are presented. Two of the new models utilize triangular rather than the usual rectangular spatial discretization. The third model introduces the capability of higher-order spatial accuracy. The efficiency and application of the new models are discussed. The development of two general-purpose electromagnetic simulation programs is presented. Both are
NASA Astrophysics Data System (ADS)
Imran, H. M.; Kala, J.; Ng, A. W. M.; Muthukumaran, S.
2017-06-01
Appropriate choice of physics options among many physics parameterizations is important when using the Weather Research and Forecasting (WRF) model. The responses of different physics parameterizations of the WRF model may vary due to geographical locations, the application of interest, and the temporal and spatial scales being investigated. Several studies have evaluated the performance of the WRF model in simulating the mean climate and extreme rainfall events for various regions in Australia. However, no study has explicitly evaluated the sensitivity of the WRF model in simulating heatwaves. Therefore, this study evaluates the performance of a WRF multi-physics ensemble that comprises 27 model configurations for a series of heatwave events in Melbourne, Australia. Unlike most previous studies, we not only evaluate temperature, but also wind speed and relative humidity, which are key factors influencing heatwave dynamics. No specific ensemble member for all events explicitly showed the best performance, for all the variables, considering all evaluation metrics. This study also found that the choice of planetary boundary layer (PBL) scheme had largest influence, the radiation scheme had moderate influence, and the microphysics scheme had the least influence on temperature simulations. The PBL and microphysics schemes were found to be more sensitive than the radiation scheme for wind speed and relative humidity. Additionally, the study tested the role of Urban Canopy Model (UCM) and three Land Surface Models (LSMs). Although the UCM did not play significant role, the Noah-LSM showed better performance than the CLM4 and NOAH-MP LSMs in simulating the heatwave events. The study finally identifies an optimal configuration of WRF that will be a useful modelling tool for further investigations of heatwaves in Melbourne. Although our results are invariably region-specific, our results will be useful to WRF users investigating heatwave dynamics elsewhere.
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
The potential application of the blackboard model of problem solving to multidisciplinary design is discussed. Multidisciplinary design problems are complex, poorly structured, and lack a predetermined decision path from the initial starting point to the final solution. The final solution is achieved using data from different engineering disciplines. Ideally, for the final solution to be the optimum solution, there must be a significant amount of communication among the different disciplines plus intradisciplinary and interdisciplinary optimization. In reality, this is not what happens in today's sequential approach to multidisciplinary design. Therefore it is highly unlikely that the final solution is the true optimum solution from an interdisciplinary optimization standpoint. A multilevel decomposition approach is suggested as a technique to overcome the problems associated with the sequential approach, but no tool currently exists with which to fully implement this technique. A system based on the blackboard model of problem solving appears to be an ideal tool for implementing this technique because it offers an incremental problem solving approach that requires no a priori determined reasoning path. Thus it has the potential of finding a more optimum solution for the multidisciplinary design problems found in today's aerospace industries.
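The incremental, opportunistic control this passage describes can be sketched in a few lines: knowledge sources fire whenever the blackboard contains the data they need, with no predetermined reasoning path. All names, formulas, and values below are invented placeholders, not a real multidisciplinary design tool:

```python
class Blackboard:
    """Shared store that knowledge sources incrementally extend."""
    def __init__(self, **facts):
        self.data = dict(facts)

def aero_ks(bb):
    # Hypothetical aerodynamics source: posts lift once wing area is known.
    if "wing_area" in bb.data and "lift" not in bb.data:
        bb.data["lift"] = 0.5 * 1.2 * 50.0**2 * bb.data["wing_area"] * 0.4
        return True
    return False

def structures_ks(bb):
    # Hypothetical structures source: sizes a spar once lift is known.
    if "lift" in bb.data and "spar_mass" not in bb.data:
        bb.data["spar_mass"] = bb.data["lift"] / 981.0
        return True
    return False

def control_loop(bb, sources):
    """Fire any source that can contribute until none has anything to add."""
    while any(ks(bb) for ks in sources):
        pass

bb = Blackboard(wing_area=20.0)
control_loop(bb, [structures_ks, aero_ks])   # listed order doesn't matter
```

Because each source only reacts to the current state of the blackboard, disciplines contribute as soon as their inputs exist, which is exactly the alternative to the sequential design approach criticized above.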
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.
1999-01-01
In this paper, a method for obtaining nonlinear stiffness coefficients in modal coordinates for geometrically nonlinear finite-element models is developed. The method requires a finite-element program with a geometrically nonlinear static capability; the MSC/NASTRAN code is employed for this purpose. The equations of motion of an MDOF system are formulated in modal coordinates. A set of linear eigenvectors is used to approximate the solution of the nonlinear problem. The random vibration problem of the MDOF nonlinear system is then considered. The solutions obtained by application of two different versions of a stochastic linearization technique are compared with linear and exact (analytical) solutions in terms of root-mean-square (RMS) displacements and strains for a beam structure.
Optimization-based additive decomposition of weakly coercive problems with applications
Bochev, Pavel B.; Ridzal, Denis
2016-01-27
In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.
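The decomposition idea above can be illustrated on a toy linear problem: split a "monolithic" system (A1 + A2)u = b1 + b2 into two subproblems coupled by a control theta, and minimize the mismatch of their solutions by gradient descent using only subproblem solves. This is a minimal sketch with made-up operators and step size, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Two SPD "physics" operators and loads (hypothetical toy data)
M = rng.standard_normal((n, n)); A1 = M @ M.T + n * np.eye(n)
M = rng.standard_normal((n, n)); A2 = M @ M.T + n * np.eye(n)
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

u_mono = np.linalg.solve(A1 + A2, b1 + b2)   # monolithic reference solution

theta = np.zeros(n)                           # coupling control
for _ in range(3000):
    u1 = np.linalg.solve(A1, b1 + theta)      # subproblem 1 solve
    u2 = np.linalg.solve(A2, b2 - theta)      # subproblem 2 solve
    r = u1 - u2                               # mismatch to be minimized
    # gradient of 0.5*||u1-u2||^2 w.r.t. theta (A1, A2 symmetric)
    grad = np.linalg.solve(A1, r) + np.linalg.solve(A2, r)
    theta -= 5.0 * grad                       # plain gradient step
u1 = np.linalg.solve(A1, b1 + theta)          # final subproblem-1 solution

print(np.abs(u1 - u_mono).max())              # mismatch-driven iterate matches monolithic
```

At the optimum u1 = u2 = u, adding the two subproblem equations recovers (A1 + A2)u = b1 + b2, so the decomposed iteration reproduces the monolithic solution using only subproblem solves.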
NASA Astrophysics Data System (ADS)
Shestopaloff, Yu. K.
2011-05-01
The article solves the problem of finding the maximum number of solutions for equations composed of power functions and sums of exponential functions. It introduces the concept of corresponding functions and proves relationships between the properties of polynomial, power, and sums-of-exponential functions. One result is a generalization of the Descartes Rule of Signs to functions other than polynomials. The obtained results are applied to two practical problems: finding an adequate description of transition electrical signals, and finding the initial value for iterative algorithms used to solve one particular case of the IRR (internal rate of return) equation for mortgage calculations. Overall, the results prove beneficial for theoretical and practical applications in industry and in different areas of science and technology.
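Descartes' Rule of Signs itself (the polynomial case that the paper generalizes) is easy to state in code: the number of sign changes in the coefficient sequence bounds the number of positive real roots. A minimal sketch:

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, skipping zero coefficients."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(a != b for a, b in zip(signs, signs[1:]))

# p(x) = x^3 - 4x^2 + x + 6 = (x + 1)(x - 2)(x - 3): exactly two positive roots
print(sign_changes([1, -4, 1, 6]))  # 2 sign changes -> at most 2 positive roots
```

The rule states the count of positive roots equals the sign-change count or is less than it by an even number; here the bound is attained (roots 2 and 3).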
On the application of pseudo-spectral FFT technique to non-periodic problems
NASA Technical Reports Server (NTRS)
Biringen, S.; Kao, K. H.
1988-01-01
The reduction-to-periodicity method using the pseudo-spectral Fast Fourier Transform (FFT) technique is applied to the solution of nonperiodic problems including the two-dimensional Navier-Stokes equations. The accuracy of the method is demonstrated by calculating derivatives of given functions, by solving one- and two-dimensional convective-diffusive problems, and by comparing the relative errors due to the FFT method with second-order finite difference methods (FDM). Finally, the two-dimensional Navier-Stokes equations are solved by a fractional step procedure using both the FFT and the FDM methods for the driven cavity flow and the backward facing step problems. Comparisons of these solutions provide a realistic assessment of the FFT method indicating its range of applicability.
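The accuracy gap between a pseudo-spectral FFT derivative and a second-order finite difference is easy to reproduce on a periodic test function; this sketch (a generic illustration, not the paper's reduction-to-periodicity method) shows spectral accuracy versus the O(dx^2) error of central differences:

```python
import numpy as np

n = 64
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
f = np.sin(3 * x)                          # smooth periodic test function
ik = 1j * np.fft.fftfreq(n, d=1.0 / n)     # spectral multiplier i*k

df_fft = np.fft.ifft(ik * np.fft.fft(f)).real        # pseudo-spectral derivative
dx = x[1] - x[0]
df_fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)  # 2nd-order central difference

exact = 3 * np.cos(3 * x)
print(np.max(np.abs(df_fft - exact)))   # near machine precision
print(np.max(np.abs(df_fd - exact)))    # O(dx^2) error, orders of magnitude larger
```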
Crime Scene Investigation: Clinical Application of Chemical Shift Imaging as a Problem Solving Tool
2016-02-26
MDW/SGVU SUBJECT: Professional Presentation Approval, 26 FEB 2016. 1. Your paper, entitled "Crime Scene Investigation: Clinical Application of ... or technical information as a publication/presentation, a new 59 MDW Form 3039 must be submitted for review and approval.] Title of material to be published or presented: "Crime Scene Investigation: Clinical Application of Chemical Shift Imaging as a Problem Solving Tool."
Multi-Physic Stochastic Modeling of a High Speed Composite Flywheel Energy Storage System
NASA Astrophysics Data System (ADS)
Pettingill, Justin D.
High speed flywheel energy storage systems (FESS) are predicted to outperform other energy storage systems in energy density, environmental impact, and lifetime. Proper development could establish FESS as the new standard of energy storage for space and terrestrial applications. Maintaining the structural integrity of a hubless, high speed rotating machine requires the use of high strength, light weight composites. A field regulated reluctance machine (FRRM) requires a magnetically permeable material and irregular geometry to electromagnetically spin the flywheel. This thesis describes the modeling of the mechanical and electromagnetic changes created by the geometry and material necessary to produce rotation. The incorporation of permeable composite materials is examined to categorize characteristics that favor the design constraints. The design space is explored and mapped with the use of an interpolation process known as Kriging. After future constraints are determined, the resulting blueprint will be helpful in determining the optimal material and geometry to maximize the energy stored by the flywheel.
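A minimal Kriging-style interpolator of the kind used to map a design space can be sketched with a Gaussian covariance; the length scale, nugget, and test function below are arbitrary assumptions, not the thesis's actual surrogate:

```python
import numpy as np

def kriging_fit(X, y, length=0.2, nugget=1e-8):
    """Simple-Kriging-style interpolator with a Gaussian covariance (toy sketch)."""
    def cov(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-(d / length) ** 2)
    K = cov(X, X) + nugget * np.eye(len(X))   # nugget regularizes the solve
    w = np.linalg.solve(K, y)                 # covariance-weighted coefficients
    return lambda Xq: cov(np.atleast_1d(Xq), X) @ w

X = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * X)          # stand-in for expensive simulation outputs
predict = kriging_fit(X, y)
print(np.abs(predict(X) - y).max())  # interpolates the samples (up to the nugget)
```

The same weights give predictions at unsampled designs, which is what makes Kriging useful for mapping a design space from a limited number of simulations.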
NASA Astrophysics Data System (ADS)
Al-Zanaidi, M. A.; Grossmann, C.; Noack, A.
2006-04-01
As a rule, parabolic problems with nonsmooth data show rapid changes in their solutions or even possess solutions of reduced smoothness. While for smooth data various time integration methods, e.g. the trapezoidal rule or the backward Euler scheme, work efficiently, in the case of jumps high-frequency oscillations are observable over a long time horizon or steep changes are smeared out. Implicit Taylor methods (ITM), which are mostly applied in specific settings, such as interval methods, but are not commonly used for general cases, combine high accuracy with strong damping of unwanted oscillations. These properties make them a good choice in the case of nonsmooth data. In the present paper ITM are investigated in detail for semi-discrete linear parabolic problems. In ITM, at each time level a large-scale linear system has to be solved, and preconditioned conjugate gradient methods (PCG) can be applied efficiently. Here adapted preconditioners are constructed, and tight spectral bounds are derived which are independent of the discretization parameters of the parabolic problem. As an important application, ITM are considered in the case of boundary heat control. Occurring control constraints are handled by means of penalty functions. To solve the completely discretized problem, gradient-based numerical algorithms are used where the gradient of the objective is partially evaluated via discrete adjoints and partially by explicitly available terms corresponding to the penalties. Some test examples illustrate the efficiency of the considered algorithms.
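The oscillation behavior motivating ITM is easy to demonstrate on a stiff scalar decay problem: the trapezoidal rule's amplification factor approaches -1 as lam*h grows, producing slowly decaying oscillations, while a strongly damping scheme (backward Euler here, standing in for the damping property of ITM) suppresses them at once:

```python
import numpy as np

lam, h, n = 1000.0, 0.1, 50      # stiff decay u' = -lam*u, u(0) = 1
u_tr, u_be = 1.0, 1.0
tr, be = [], []
for _ in range(n):
    u_tr *= (1 - lam * h / 2) / (1 + lam * h / 2)  # trapezoidal amplification ~ -0.96
    u_be /= (1 + lam * h)                          # backward Euler amplification ~ 0.01
    tr.append(u_tr); be.append(u_be)

print(tr[:4])   # alternating signs, magnitude near 1: persistent oscillations
print(be[:4])   # strongly damped, monotone decay
```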
Splitting methods for split feasibility problems with application to Dantzig selectors
NASA Astrophysics Data System (ADS)
He, Hongjin; Xu, Hong-Kun
2017-05-01
The split feasibility problem (SFP), which refers to the task of finding a point that belongs to a given nonempty, closed and convex set, and whose image under a bounded linear operator belongs to another given nonempty, closed and convex set, has promising applicability in modeling a wide range of inverse problems. Motivated by the increasingly data-driven regularization in the areas of signal/image processing and statistical learning, in this paper, we study the regularized split feasibility problem (RSFP), which provides a unified model for treating many real-world problems. By exploiting the split nature of the RSFP, we shall gainfully employ several efficient splitting methods to solve the model under consideration. A remarkable advantage of our methods lies in their easier subproblems in the sense that the resulting subproblems have closed-form representations or can be efficiently solved up to a high precision. As an interesting application, we apply the proposed algorithms for finding Dantzig selectors, in addition to demonstrating the effectiveness of the splitting methods through some computational results on synthetic and real medical data sets.
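A classical splitting method for the SFP is the CQ iteration x_{k+1} = P_C(x_k - gamma * A^T (A x_k - P_Q(A x_k))); the sketch below uses toy sets C and Q with closed-form projections (not the paper's RSFP solvers) to show the easy subproblems such methods produce:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))                # bounded linear operator
proj_C = lambda x: np.clip(x, 0.0, None)       # C: nonnegative orthant in R^2
proj_Q = lambda y: np.clip(y, -1.0, 1.0)       # Q: box [-1, 1]^3 in the image space

gamma = 1.0 / np.linalg.norm(A, 2) ** 2        # step in (0, 2/||A||^2)
x = rng.standard_normal(2)
for _ in range(5000):
    y = A @ x
    x = proj_C(x - gamma * A.T @ (y - proj_Q(y)))  # CQ iteration

print(x, A @ x)   # x in C and A @ x in Q (to numerical tolerance)
```

Each step needs only the two projections and two matrix-vector products, which is the "easier subproblems" advantage the abstract refers to.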
Aditya, Satabdi; DasGupta, Bhaskar; Karpinski, Marek
2013-01-01
In this survey paper, we will present a number of core algorithmic questions concerning several transitive reduction problems on network that have applications in network synthesis and analysis involving cellular processes. Our starting point will be the so-called minimum equivalent digraph problem, a classic computational problem in combinatorial algorithms. We will subsequently consider a few non-trivial extensions or generalizations of this problem motivated by applications in systems biology. We will then discuss the applications of these algorithmic methodologies in the context of three major biological research questions: synthesizing and simplifying signal transduction networks, analyzing disease networks, and measuring redundancy of biological networks. PMID:24833332
NASA Astrophysics Data System (ADS)
Di Luca, Alejandro; Flaounas, Emmanouil; Drobinski, Philippe; Brossier, Cindy Lebeaupin
2014-11-01
The use of high resolution atmosphere-ocean coupled regional climate models to study possible future climate changes in the Mediterranean Sea requires an accurate simulation of the atmospheric component of the water budget (i.e., evaporation, precipitation and runoff). A specific configuration of the version 3.1 of the weather research and forecasting (WRF) regional climate model was shown to systematically overestimate the Mediterranean Sea water budget mainly due to an excess of evaporation (~1,450 mm yr-1) compared with observed estimations (~1,150 mm yr-1). In this article, a 70-member multi-physics ensemble is used to try to understand the relative importance of various sub-grid scale processes in the Mediterranean Sea water budget and to evaluate its representation by comparing simulated results with observed-based estimates. The physics ensemble was constructed by performing 70 1-year long simulations using version 3.3 of the WRF model by combining six cumulus, four surface/planetary boundary layer and three radiation schemes. Results show that evaporation variability across the multi-physics ensemble (˜10 % of the mean evaporation) is dominated by the choice of the surface layer scheme that explains more than ˜70 % of the total variance and that the overestimation of evaporation in WRF simulations is generally related with an overestimation of surface exchange coefficients due to too large values of the surface roughness parameter and/or the simulation of too unstable surface conditions. Although the influence of radiation schemes on evaporation variability is small (˜13 % of the total variance), radiation schemes strongly influence exchange coefficients and vertical humidity gradients near the surface due to modifications of temperature lapse rates. The precipitation variability across the physics ensemble (˜35 % of the mean precipitation) is dominated by the choice of both cumulus (˜55 % of the total variance) and planetary boundary layer (˜32 % of the total variance) schemes.
A parallel multi-block/multi-physics approach for multi-phase flow in porous media
NASA Astrophysics Data System (ADS)
Lu, Qin
The main purpose of this dissertation is to investigate accurate and efficient numerical techniques for simulation of multi-phase/multi-component flow and transport phenomena in porous media which are of major importance in the petroleum and environmental industries. We propose to emphasize a novel numerical methodology, which is called the multi-block algorithm. This algorithm is based on the decomposition of the simulation domain into multiple non-overlapping subdomains (blocks) according to the geological, geometric and physical/chemical properties. One then applies the most suitable grid, numerical scheme and physical model in each subdomain, so that the computational cost is reduced and accuracy is preserved. Across the interface of neighboring subdomains, the consistent primary variables and the continuity of the component mass fluxes are imposed in a weak sense. In this dissertation we first discuss the mathematical and numerical formulations of physical models, such as the implicit black-oil model, the implicit and IMPES two-phase hydrology models. We then formulate the multi-block black-oil model coupling different grids, which can be non-matching on the interface. In addition, we define the multi-model couplings; in particular, the coupling of the implicit and IMPES schemes for two-phase immiscible flow, and the coupling of the implicit three-phase black-oil model and the implicit two-phase hydrology model. Computational examples are presented to demonstrate the scalability of the multi-block/multi-model simulators over the traditional single-block/single-model simulators. Excellent agreements of the results between these two approaches are shown. Parallel computation issues, especially the MPI (Message Passing Interface) multi-communicator implementation and model-based load balancing strategies for the parallelism of the multi-model problem are also considered. Summary of these results is presented in the last chapter.
A two-phase multi-physics model for simulating plasma discharge in liquids
NASA Astrophysics Data System (ADS)
Charchi, Ali; Farouk, Tanvir
2014-10-01
Plasma discharge in liquids has been a topic of interest in recent years, both in terms of fundamental science and practical applications. Even though a large amount of experimental work has been reported in the literature, modeling and simulation studies on plasma discharges in liquids are limited. To obtain a more detailed model of plasma discharge in the liquid phase, a two-phase multiphysics model has been developed. The model resolves both the liquid and gas phases and solves the mass and momentum conservation of the averaged species in both phases. The fluid motion equation considers surface tension, the electric field force, and the gravitational force. To calculate the electric force, the charge conservation equations for positive and negative ions and for electrons are solved. Poisson's equation is solved at each time step to obtain a self-consistent electric field. The obtained electric field and charge distribution are used to calculate the electric body force exerted on the fluid. Simulations show that the coupled effect of plasma, surface tension, and gravity results in a time-evolving bubble shape. The influence of different plasma parameters on the bubble dynamics is studied.
Application of advanced plasma technology to energy materials and environmental problems
NASA Astrophysics Data System (ADS)
Kobayashi, Akira
2015-04-01
An advanced plasma system has been proposed for various energy materials and for application to environmental problems. The gas tunnel type plasma device developed by the author exhibits high energy density and high efficiency. Regarding applications to thermal processing, one example is the plasma spraying of ceramics such as Al2O3 and ZrO2 as thermal barrier coatings (TBCs). The performance of these ceramic coatings is superior to conventional ones; namely, properties such as the mechanical and chemical properties, thermal behavior, and high temperature oxidation resistance of the alumina/zirconia TBCs have been clarified and discussed. The ZrO2 composite coating offers a possibility for the development of highly functional graded TBCs. The results showed that the alumina/zirconia composite system exhibited improved mechanical properties and oxidation resistance. Another application of the gas tunnel type plasma to functional materials is the surface modification of metals. TiN films were formed in a short time of 5 s on Ti and its alloy. Also, thick TiN coatings were easily obtained by gas tunnel type plasma reactive spraying on any metal. Regarding applications to environmental problems, the decomposition of CO2 gas using the gas tunnel type plasma system is also introduced.
Solutions to the Inverse LQR Problem with Application to Biological Systems Analysis.
Priess, M Cody; Conway, Richard; Choi, Jongeun; Popovich, John M; Radcliffe, Clark
2015-03-01
In this paper, we present a set of techniques for finding a cost function to the time-invariant Linear Quadratic Regulator (LQR) problem in both continuous- and discrete-time cases. Our methodology is based on the solution to the inverse LQR problem, which can be stated as: does a given controller K describe the solution to a time-invariant LQR problem, and if so, what weights Q and R produce K as the optimal solution? Our motivation for investigating this problem is the analysis of motion goals in biological systems. We first describe an efficient Linear Matrix Inequality (LMI) method for determining a solution to the general case of this inverse LQR problem when both the weighting matrices Q and R are unknown. Our first LMI-based formulation provides a unique solution when it is feasible. Additionally, we propose a gradient-based, least-squares minimization method that can be applied to approximate a solution in cases when the LMIs are infeasible. This new method is very useful in practice since the estimated gain matrix K from the noisy experimental data could be perturbed by the estimation error, which may result in the infeasibility of the LMIs. We also provide an LMI minimization problem to find a good initial point for the minimization using the proposed gradient descent algorithm. We then provide a set of examples to illustrate how to apply our approaches to several different types of problems. An important result is the application of the technique to human subject posture control when seated on a moving robot. Results show that we can recover a cost function which may provide a useful insight on the human motor control goal.
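The forward LQR map (Q, R) -> K that the inverse problem runs backwards can be sketched for a double-integrator plant by integrating the differential Riccati equation to steady state (a numpy-only illustrative stand-in, not the paper's LMI machinery); for this plant the optimal gain has the closed form K = [sqrt(q1), sqrt(q2 + 2*sqrt(q1))]:

```python
import numpy as np

# Double-integrator plant (hypothetical example): x' = A x + B u
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.1])   # state weights
R = np.array([[1.0]])     # control weight

# March the differential Riccati equation to its steady state (the CARE solution)
P = np.zeros((2, 2))
for _ in range(20000):
    dP = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
    P += 0.01 * dP

K = np.linalg.solve(R, B.T @ P)   # optimal gain K = R^{-1} B^T P
print(K)                          # matches the closed form [1, sqrt(2.1)]
```

Given such a K from data, the inverse LQR question is which (Q, R) make this stationarity condition hold, which is what the paper's LMI formulation encodes.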
Misrepresentation of publications by radiology residency applicants: is it really a problem?
Eisenberg, Ronald L; Cunningham, Meredith; Kung, Justin W; Slanetz, Priscilla J
2013-03-01
The aim of this study was to determine whether the previous relatively high rate of misrepresentation of publications is still a problem with current applicants for radiology residency. The publications submitted by a sample of 300 applicants for a radiology residency in 2011 were assessed using PubMed and an extensive Internet search to verify whether the articles were in print and had the applicants listed as authors and in the same positions of authorship. Whether the applicants graduated from US or international medical schools was recorded. Of the 138 applicants (46.0%) who cited 1 or more publications, there were 5 misrepresentations (3.6%). These included 1 article not found in the cited journal, 1 journal that could not be found, 1 article in which the applicant was not listed as an author, and 2 instances in which the applicants were not in the same positions of authorship (listed as lead authors but actually second authors). The misrepresentation rate was 1.9% among US graduates and 8.8% among graduates of international medical schools. The low rate of misrepresentation of publications, especially among graduates of US medical schools, does not seem to warrant spending the time to check the citations of journal articles of all applicants for radiology residency positions. Nevertheless, it is reasonable to request that applicants bring to their interviews a copy of each cited article and to assess their knowledge of all other listed research activities. Copyright © 2013 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Regional climate hindcast simulations within EURO-CORDEX: evaluation of a WRF multi-physics ensemble
NASA Astrophysics Data System (ADS)
Katragkou, E.; García-Díez, M.; Vautard, R.; Sobolowski, S.; Zanis, P.; Alexandri, G.; Cardoso, R. M.; Colette, A.; Fernandez, J.; Gobiet, A.; Goergen, K.; Karacostas, T.; Knist, S.; Mayer, S.; Soares, P. M. M.; Pytharoulis, I.; Tegoulias, I.; Tsikerdekis, A.; Jacob, D.
2015-03-01
In the current work we present six hindcast WRF (Weather Research and Forecasting model) simulations for the EURO-CORDEX (European Coordinated Regional Climate Downscaling Experiment) domain with different configurations in microphysics, convection and radiation for the time period 1990-2008. All regional model simulations are forced by the ERA-Interim reanalysis and have the same spatial resolution (0.44°). These simulations are evaluated for surface temperature, precipitation, short- and longwave downward radiation at the surface and total cloud cover. The analysis of the WRF ensemble indicates systematic temperature and precipitation biases, which are linked to different physical mechanisms in the summer and winter seasons. Overestimation of total cloud cover and underestimation of downward shortwave radiation at the surface, mostly linked to the Grell-Devenyi convection and CAM (Community Atmosphere Model) radiation schemes, intensifies the negative bias in summer temperatures over northern Europe (max -2.5 °C). Conversely, a strong positive bias in downward shortwave radiation in summer over central (40-60%) and southern Europe mitigates the systematic cold bias over these regions, signifying a typical case of error compensation. Maximum winter cold biases are over northeastern Europe (-2.8 °C); this location suggests that land-atmosphere rather than cloud-radiation interactions are to blame. Precipitation is overestimated in summer by all model configurations, especially the higher quantiles which are associated with summertime deep cumulus convection. The largest precipitation biases are produced by the Kain-Fritsch convection scheme over the Mediterranean. Precipitation biases in winter are lower than those for summer in all model configurations (15-30%). The results of this study indicate the importance of evaluating not only the basic climatic parameters of interest for climate change applications (temperature and precipitation), but also other
A hierarchical multi-physics model for design of high toughness steels
NASA Astrophysics Data System (ADS)
Hao, Su; Moran, Brian; Kam Liu, Wing; Olson, Gregory B.
2003-05-01
In support of the computational design of high toughness steels as hierarchically structured materials, a multiscale, multiphysics methodology is developed for a 'ductile fracture simulator.' At the nanometer scale, the method unites continuum mechanics with quantum physics, using first-principles calculations to predict the force-distance laws for interfacial separation with both normal and plastic sliding components. The predicted adhesion behavior is applied to the description of interfacial decohesion for both micron-scale primary inclusions governing primary void formation and submicron-scale secondary particles governing microvoid-based shear localization that accelerates primary void coalescence. Fine scale deformation is described by a 'Particle Dynamics' method that extends the framework of molecular dynamics to multi-atom aggregates. This is combined with other meshfree and finite-element methods in two-level cell modeling to provide a hierarchical constitutive model for crack advance, combining conventional plasticity, microstructural damage, strain gradient effects and transformation plasticity from dispersed metastable austenite. Detailed results of a parallel experimental study of a commercial steel are used to calibrate the model at multiple scales. An initial application provides a Toughness-Strength-Adhesion diagram defining the relation among alloy strength, inclusion adhesion energy and fracture toughness as an aid to microstructural design. The analysis in this paper introduces an approach to creative steel design that can be stated as the exploration of effective connections among five key components: element selection, process design, micro/nanostructure optimization, desirable properties, and industrial performance, by virtue of innovations and inventions.
NASA Technical Reports Server (NTRS)
Jackson, C. E., Jr.
1977-01-01
A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.
Gasmi, A.; Sprague, M. A.; Jonkman, J. M.; Jones, W. B.
2013-02-01
In this paper we examine the stability and accuracy of numerical algorithms for coupling time-dependent multi-physics modules relevant to computer-aided engineering (CAE) of wind turbines. This work is motivated by an in-progress major revision of FAST, the National Renewable Energy Laboratory's (NREL's) premier aero-elastic CAE simulation tool. We employ two simple examples as test systems, while algorithm descriptions are kept general. Coupled-system governing equations are framed in monolithic and partitioned representations as differential-algebraic equations. Explicit and implicit loose partition coupling is examined. In explicit coupling, partitions are advanced in time from known information. In implicit coupling, there is dependence on other-partition data at the next time step; coupling is accomplished through a predictor-corrector (PC) approach. Numerical time integration of coupled ordinary-differential equations (ODEs) is accomplished with one of three, fourth-order fixed-time-increment methods: Runge-Kutta (RK), Adams-Bashforth (AB), and Adams-Bashforth-Moulton (ABM). Through numerical experiments it is shown that explicit coupling can be dramatically less stable and less accurate than simulations performed with the monolithic system. However, PC implicit coupling restored stability and fourth-order accuracy for ABM; only second-order accuracy was achieved with RK integration. For systems without constraints, explicit time integration with AB and explicit loose coupling exhibited desired accuracy and stability.
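The accuracy gap between monolithic integration and explicit loose coupling can be seen on a toy two-partition oscillator: the monolithic system keeps RK4's fourth-order accuracy, while advancing each partition from the other partition's lagged data degrades the result (a simplified first-order coupling sketch, not FAST's actual modules or the predictor-corrector scheme):

```python
import numpy as np

def monolithic_rk4(z, h):
    """One RK4 step of the coupled system z' = (-z[1], z[0])."""
    f = lambda z: np.array([-z[1], z[0]])
    k1 = f(z); k2 = f(z + h/2*k1); k3 = f(z + h/2*k2); k4 = f(z + h*k3)
    return z + h/6*(k1 + 2*k2 + 2*k3 + k4)

h, n = 0.01, 628                 # integrate to t ~ 2*pi
zm = np.array([1.0, 0.0])        # monolithic state
x, y = 1.0, 0.0                  # partitioned states
for _ in range(n):
    zm = monolithic_rk4(zm, h)
    # explicit loose coupling: each partition advances from the other's OLD value
    x_new = x + h * (-y)         # partition 1 uses lagged y
    y_new = y + h * x            # partition 2 uses lagged x
    x, y = x_new, y_new

exact = np.array([np.cos(h * n), np.sin(h * n)])
print(np.abs(zm - exact).max())                   # monolithic: ~1e-9
print(np.abs(np.array([x, y]) - exact).max())     # loosely coupled: ~1e-2
```

The lagged data reduces the coupled update to a first-order scheme regardless of the per-partition integrator, which is the loss of accuracy (and, in stiffer cases, stability) the paper quantifies.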
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
Le Pallec, J. C.; Crouzet, N.; Bergeaud, V.; Delavaud, C.
2012-07-01
The control of uncertainties in the field of reactor physics and their propagation in best-estimate modeling are a major issue in safety analysis. In this framework, the CEA is developing a methodology to perform multi-physics simulations that include uncertainty analysis. The present paper aims to present and apply this methodology to the analysis of an accidental situation such as a REA (Rod Ejection Accident). This accident is characterized by a strong interaction between the different areas of reactor physics (neutronics, fuel thermal behavior, and thermal hydraulics). The modeling is performed with the CRONOS2 code. The uncertainty analysis has been conducted with the URANIE platform developed by the CEA: for each identified response from the modeling (output), and considering a set of key parameters with their uncertainties (input), a surrogate model in the form of a neural network has been produced. The set of neural networks is then used to carry out a sensitivity analysis, which consists of a global variance analysis with the determination of the Sobol indices for all responses. The sensitivity indices are obtained for the input parameters by an approach based on the use of polynomial chaos. The present exercise helped to develop a methodological flow scheme and to consolidate the use of the URANIE tool in the framework of parallel calculations. Finally, the use of polynomial chaos allowed computing high order sensitivity indices and thus highlighting and classifying the influence of the identified uncertainties on each response of the analysis (single and interaction effects). (authors)
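First-order Sobol indices of the kind determined in the study can be estimated with a plain Monte Carlo pick-and-freeze scheme (a generic stand-in for the polynomial-chaos approach actually used); the toy model y = x1 + 2*x2 on uniform inputs has known indices S = (0.2, 0.8):

```python
import numpy as np

def first_order_sobol(model, d, n=100_000, seed=2):
    """Monte Carlo (Saltelli-style) estimate of first-order Sobol indices."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = model(A), model(B)
    V = np.var(np.concatenate([yA, yB]))      # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy(); ABi[:, i] = B[:, i]   # only x_i differs between yA and yABi
        S[i] = np.mean(yB * (model(ABi) - yA)) / V
    return S

# Linear toy model: variance splits as a_i^2/12, so S = (1/5, 4/5)
S = first_order_sobol(lambda X: X[:, 0] + 2 * X[:, 1], d=2)
print(S.round(2))
```

For this additive model the first-order indices sum to one; interaction effects, like the study's higher-order indices, would show up as a shortfall in that sum.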
NASA Astrophysics Data System (ADS)
Soleimani, Meisam; Wriggers, Peter; Rath, Henryke; Stiesch, Meike
2016-10-01
In this paper, a 3D computational model has been developed to investigate biofilms in a multi-physics framework using smoothed particle hydrodynamics (SPH) based on a continuum approach. Biofilm formation is a complex process in the sense that several physical phenomena are coupled and consequently different time-scales are involved. On one hand, biofilm growth is driven by biological reaction and nutrient diffusion and on the other hand, it is influenced by fluid flow causing biofilm deformation and interface erosion in the context of fluid and deformable solid interaction. The geometrical and numerical complexity arising from these phenomena poses serious complications and challenges in grid-based techniques such as finite element. Here the solution is based on SPH as one of the powerful meshless methods. SPH based computational modeling is quite new in the biological community and the method is uniquely robust in capturing the interface-related processes of biofilm formation such as erosion. The obtained results show a good agreement with experimental and published data which demonstrates that the model is capable of simulating and predicting overall spatial and temporal evolution of biofilm.
NASA Astrophysics Data System (ADS)
Kuroda, Shinjiro; Suzuki, Naoya; Tanigawa, Hiroshi; Suzuki, Kenichiro
2013-06-01
In this paper, we present and demonstrate the principle of variable resonance frequency selection using a fishbone-shaped microelectromechanical system (MEMS) resonator. To analyze the resonator displacement caused by an electrostatic force, a multi-physics simulation, which links the applied voltage load to the mechanical domain, is carried out. The simulation clearly shows that the resonators are driven by three kinds of electrostatic force exerted on the beam. A new frequency selection algorithm that selects only one among various resonant modes is also presented. The conversion matrix that transforms the voltages applied to each driving electrode into the resonant beam displacement at each resonant mode is first derived from experimental measurements. The matrix is then used to calculate a set of voltages that maximizes the rejection ratio in each resonant mode. This frequency selection method is applied to a fishbone-shaped MEMS resonator with five driving electrodes, and frequency selection from the 1st to the 5th resonant mode is successfully demonstrated. With fine adjustment of the voltage set, a 42 dB rejection ratio is obtained.
Powell, Adam; Pati, Soobhankar
2012-03-11
Solid Oxide Membrane (SOM) Electrolysis is a new energy-efficient zero-emissions process for producing high-purity magnesium and high-purity oxygen directly from industrial-grade MgO. SOM Recycling combines SOM electrolysis with electrorefining, continuously and efficiently producing high-purity magnesium from low-purity partially oxidized scrap. In both processes, electrolysis and/or electrorefining take place in the crucible, where raw material is continuously fed into the molten salt electrolyte, producing magnesium vapor at the cathode and oxygen at the inert anode inside the SOM. This paper describes a three-dimensional multi-physics finite-element model of ionic current, fluid flow driven by argon bubbling and thermal buoyancy, and heat and mass transport in the crucible. The model predicts the effects of stirring on the anode boundary layer and its time scale of formation, and the effect of natural convection at the outer wall. MOxST has developed this model as a tool for scale-up design of these closely-related processes.
Applicability of the flow-net program to solution of Space Station fluid dynamics problems
NASA Astrophysics Data System (ADS)
Navickas, J.; Rivard, W. C.
The Space Station design encompasses a variety of fluid systems that require extensive flow and combined flow-thermal analyses. The types of problems encountered range from two-phase cryogenic to high-pressure gaseous systems. Design of such systems requires the most advanced analytical tools. Because Space Station applications are a new area for existing two-phase flow programs, typically developed for nuclear safety applications, a careful evaluation of their capabilities to treat generic Space Station flows is appropriate. The results from an assessment of one particular program, FLOW-NET, developed by Flow Science, Inc., are presented. Three typical problems are analyzed: (1) fill of a hyperbaric module with gaseous nitrogen from a high-pressure supply system, (2) response of a liquid ammonia line to a rapid pressure decrease, and (3) performance of a basic two-phase thermal control network. The three problems were solved successfully. Comparison of the results with those obtained by analytical methods supports the FLOW-NET calculations.
Fujikake, K; Tago, S; Plasson, R; Nakazawa, R; Okano, K; Maezawa, D; Mukawa, T; Kuroda, A; Asakura, K
2014-01-01
To date, no worldwide standard in vitro method has been established for determining the sun protection factor (SPF), since many problems remain in terms of repeatability and reliability. Here, we have studied the problems in in vitro SPF measurements brought about by the phenomenon called viscous fingering. A spatially periodic stripe pattern usually forms spontaneously when a viscous fluid is applied onto a solid substrate. For in vitro SPF measurements, the recommended amount of sunscreen is applied onto a substrate, and the intensity of the UV light transmitted through the sunscreen layer is evaluated. Our theoretical analysis indicated that nonuniformity of the thickness of the sunscreen layer alters the net UV absorbance. Pseudo-sunscreen composites having no phase separation structures were prepared and applied on a quartz plate for measurements of the UV absorbance. Two types of applicators, a block applicator and a 4-sided applicator, were used. A flat surface was always obtained when the 4-sided applicator was used, while the spatially periodic stripe pattern was always generated spontaneously when the block applicator was used. The net UV absorbance of the layer on which the stripe pattern had formed was found to be lower than that of a flat layer having the same average thickness. Theoretical simulations quantitatively reproduced the variation of the net UV absorbance caused by the change in the geometry of the layer. The results of this study demonstrate the clear need for strict regulation of the sunscreen coating method in establishing an in vitro SPF test method.
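The effect reported above, a striped layer transmitting more UV than a flat layer of the same mean thickness, follows directly from Beer-Lambert averaging: transmittance, not absorbance, averages over the film area, so thin valleys dominate. A minimal sketch with illustrative thickness and absorption values (not the paper's formulation):

```python
import math

def net_absorbance(thicknesses, k=1.0):
    """Net absorbance of a film sampled at many points: transmitted light
    averages T = 10^(-k*d) pointwise, so A_net = -log10(mean T)."""
    t = sum(10 ** (-k * d) for d in thicknesses) / len(thicknesses)
    return -math.log10(t)

n = 1000
mean_d = 2.0                                    # mean film thickness (a.u.)
flat = [mean_d] * n
# sinusoidal stripe pattern with the same average thickness
striped = [mean_d + 1.5 * math.sin(2 * math.pi * i / n) for i in range(n)]

a_flat = net_absorbance(flat)        # equals k * mean_d = 2.0
a_striped = net_absorbance(striped)  # strictly lower: thin valleys leak UV
```

By Jensen's inequality the striped film always transmits more than the flat one, which is why the block-applicator samples read a lower net absorbance.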
NASA Astrophysics Data System (ADS)
Yoon, Young-Cheol; Song, Jeong-Hoon
2014-06-01
The Extended Particle Difference Method is developed for interfacial singularity problems based on the Extended Particle Derivative Approximation scheme. Node-wise strong formulations are adopted for transient heat transfer problems, potential problems, and elasticity problems with various interfacial boundaries. The governing partial differential equations are directly discretized at interior and boundary nodes, and the interface condition is either immersed in the derivative approximation or enforced at interfacial points. Assembling the discretized equations generates a linear algebraic system of equations, which speeds up computation because numerical integration is avoided. Solving the system gives the nodal solution together with the jump solutions. We also demonstrate the robustness and effectiveness of the developed method with various numerical examples. Despite the singularity in the solution fields, the method overcomes the geometrical complexity inherent in interface modeling and achieves second-order accuracy.
NASA Astrophysics Data System (ADS)
Ivanyshyn Yaman, Olha; Le Louër, Frédérique
2016-09-01
This paper deals with the material derivative analysis of the boundary integral operators arising from the scattering theory of time-harmonic electromagnetic waves and its application to inverse problems. We present new results using the Piola transform of the boundary parametrisation to transport the integral operators on a fixed reference boundary. The transported integral operators are infinitely differentiable with respect to the parametrisations and simplified expressions of the material derivatives are obtained. Using these results, we extend a nonlinear integral equations approach developed for solving acoustic inverse obstacle scattering problems to electromagnetism. The inverse problem is formulated as a pair of nonlinear and ill-posed integral equations for the unknown boundary, representing the boundary condition and the measurements, to which the iteratively regularized Gauss-Newton method can be applied. The algorithm has the interesting feature that it avoids numerous numerical solutions of boundary value problems at each iteration step. Numerical experiments are presented in the special case of star-shaped obstacles.
Application of different variants of the BEM in numerical modeling of bioheat transfer problems.
Majchrzak, Ewa
2013-09-01
Heat transfer processes in living organisms are described by different mathematical models. In particular, the typical continuous model of bioheat transfer is based on the popular Pennes equation, but the Cattaneo-Vernotte equation and the dual phase lag equation are also used. Vascular models are examined in parallel; in these, separate energy equations are formulated for the large blood vessels and for the tissue domain. In the paper, different variants of the boundary element method as a tool for the numerical solution of bioheat transfer problems are discussed. For steady-state problems and the vascular models, the classical BEM algorithm and the multiple reciprocity BEM are presented. For transient problems connected with the heating of tissue, various tissue models are considered, for which the 1st scheme of the BEM, the BEM using discretization in time, and the general BEM are applied. Examples of computations illustrate the possibilities of practical application of the boundary element method in the scope of bioheat transfer problems.
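As a rough illustration of the continuum Pennes model mentioned above (the paper itself uses boundary element methods, not finite differences), a minimal explicit finite-difference solution of the 1D Pennes equation with illustrative material values:

```python
def pennes_1d(n=51, L=0.05, steps=5000, dt=0.05):
    """Explicit finite-difference solution of the 1D Pennes equation:
    rho*c*dT/dt = k*d2T/dx2 + w_b*c_b*(T_a - T) + q_m
    All coefficient values are illustrative, not tissue data from the paper."""
    k, rho, c = 0.5, 1000.0, 4000.0      # conduction / density / specific heat
    wb, cb, Ta = 0.5, 4000.0, 37.0       # blood perfusion term
    qm = 400.0                            # metabolic heat source (W/m^3)
    dx = L / (n - 1)
    T = [37.0] * n
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            cond = k * (T[i + 1] - 2 * T[i] + T[i - 1]) / dx ** 2
            perf = wb * cb * (Ta - T[i])
            Tn[i] = T[i] + dt * (cond + perf + qm) / (rho * c)
        Tn[0], Tn[-1] = 45.0, 37.0        # heated surface / core boundary
        T = Tn
    return T

T = pennes_1d()   # temperature profile after the simulated heating interval
```

The perfusion term is what distinguishes Pennes from plain heat conduction: it pulls tissue back toward the arterial temperature and limits the penetration depth of surface heating.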
Disc margins of the discrete-time LQR and its application to consensus problem
NASA Astrophysics Data System (ADS)
Lee, Jinyoung; Kim, Jung-Su; Shim, Hyungbo
2012-10-01
This article presents a complex gain margin of the discrete-time linear quadratic regulator (DLQR) and its application to a consensus problem for higher-order linear multi-agent systems. Since the consensus problem can be converted into a robust control problem with perturbations expressed by complex numbers, and since the classical gain and phase margins are not sufficient to handle this case, we study the so-called 'disc margin', which combines aspects of the gain and phase margins. We first compute the disc margin of the DLQR controller based on a Lyapunov argument, which is simple but yields a less conservative result than those previously reported in the literature. Then, it is shown that the disc margin can be enlarged arbitrarily when the system is asymptotically null controllable with bounded controls and a low-gain feedback is employed. Based on this fact, the discrete-time consensus problem is solved by a DLQR-based consensus controller. A simulation study shows that the DLQR-based consensus controller has better robustness against model uncertainties in the input channel.
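The DLQR gain itself comes from the discrete algebraic Riccati equation. A minimal scalar sketch of computing the gain and checking closed-loop stability (the plant and weights are illustrative; the paper's disc-margin analysis is not reproduced here):

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Scalar discrete-time LQR: iterate the Riccati recursion
    P <- q + a*P*a - (a*P*b)^2 / (r + b*P*b),
    then form the gain k = (r + b*P*b)^-1 * b*P*a."""
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    k = b * p * a / (r + b * p * b)
    return k, p

# unstable plant x+ = 1.2 x + u, state weight 1, input weight 1
k, p = dlqr_scalar(1.2, 1.0, 1.0, 1.0)
closed_loop = 1.2 - k        # must lie strictly inside the unit circle
```

For this example the Riccati fixed point satisfies p^2 - 1.44 p - 1 = 0, so p ≈ 1.952 and k ≈ 0.794; a disc margin then quantifies how much the loop gain may be perturbed by a complex factor before |a - b*k| leaves the unit disc.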
Applications of Fourier Analysis in Homogenization of the Dirichlet Problem: L p Estimates
NASA Astrophysics Data System (ADS)
Aleksanyan, Hayk; Shahgholian, Henrik; Sjölin, Per
2015-01-01
Let u ɛ be a solution to the system where , is a smooth uniformly convex domain, and g is 1-periodic in its second variable, and both A ɛ and g are sufficiently smooth. Our results in this paper are twofold. First we prove L p convergence results for solutions of the above system and for the non oscillating operator , with the following convergence rate for all which we prove is (generically) sharp for . Here u 0 is the solution to the averaging problem. Second, combining our method with the recent results due to Kenig, Lin and Shen (Commun Pure Appl Math 67(8):1219-1262, 2014), we prove (for certain class of operators and when ) for both the oscillating operator and boundary data. For this case, we take , where A is 1-periodic as well. Some further applications of the method to the homogenization of the Neumann problem with oscillating boundary data are also considered.
NASA Astrophysics Data System (ADS)
Marwati, Rini; Yulianti, Kartika; Pangestu, Herny Wulandari
2016-02-01
A fuzzy evolutionary algorithm is an integration of an evolutionary algorithm and a fuzzy system. In this paper, we present an application of a genetic algorithm within a fuzzy evolutionary algorithm to detect and resolve chromosome conflicts. A chromosome conflict is identified by the existence of two genes in one chromosome that have the same values as two genes in another chromosome. Based on this approach, we construct an algorithm to solve a lecture scheduling problem. Time codes, lecture codes, lecturer codes, and room codes are defined as genes, which are collected into chromosomes. As a result, a conflicting schedule appears as a chromosome conflict. Implemented in Delphi, the algorithm is shown to solve the conflicting lecture schedule problem.
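The chromosome-conflict test described above can be sketched as follows (in Python rather than the paper's Delphi); the gene tuple layout and the two-gene threshold are assumptions for illustration:

```python
def conflicting(ch_a, ch_b):
    """Flag a conflict when two chromosomes share the values of at least
    two genes, e.g. the same time slot and the same room (double booking)."""
    shared = [i for i, (ga, gb) in enumerate(zip(ch_a, ch_b)) if ga == gb]
    return len(shared) >= 2

# genes: (time code, lecture code, lecturer code, room code) -- illustrative
a = ("T1", "L01", "P3", "R101")
b = ("T1", "L07", "P5", "R101")   # same time and room as a: conflict
c = ("T2", "L07", "P5", "R101")   # same room but a different time slot: fine
assert conflicting(a, b) is True
assert conflicting(a, c) is False
```

In the genetic algorithm, such a predicate becomes part of the fitness evaluation: schedules whose chromosomes conflict are penalized and evolved away.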
Application of High Order Acoustic Finite Elements to Transmission Losses and Enclosure Problems
NASA Technical Reports Server (NTRS)
Craggs, A.; Stevenson, G.
1985-01-01
A family of acoustic finite elements was developed based on C continuity (acoustic pressure being the nodal variable) and the no-flow condition. The family includes triangular, quadrilateral and hexahedral isoparametric elements with linear, quadratic and cubic variation in modelling and distortion. Of greatest use in problems with irregular boundaries are the cubic isoparametric elements: the 32-node hexahedral element for three-dimensional systems, and the twelve-node quadrilateral and ten-node triangular elements for two-dimensional/axisymmetric applications. These elements were applied to problems involving cavity resonances, transmission loss in silencers and the study of end effects, using a Floating Point Systems 164 attached array processor accessed through an Amdahl 5860 mainframe. The elements are presently being used to study the end effects associated with duct terminations within finite enclosures. The transmission losses with various silencers and sidebranches in ducts are also being studied using the same elements.
Lande subtraction method with finite integration limits and application to strong-field problems.
Jiang, Tsin-Fu; Jheng, Shih-Da; Lee, Yun-Min; Su, Zheng-Yao
2012-12-01
The Lande subtraction method has been widely used in Coulomb problems, but the momentum coordinate p∈(0,∞) is assumed. In past applications, a very large range of p was used for accuracy. We derive a supplementary formulation with p∈(0,p_{max}) at reasonably small p_{max} for practical calculations. With this recipe, the accuracy of the hydrogenic eigenspectrum is dramatically improved over the ordinary Lande formula on the same momentum grids. We apply the present formulation to strong-field atomic above-threshold ionization and high-order harmonic generation. We demonstrate that the proposed momentum-space method can be another practical theoretical tool for atomic strong-field problems in addition to the existing methods.
Bíró, Oszkár; Koczka, Gergely; Preis, Kurt
2014-05-01
An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer.
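The fixed-point linearization at the heart of the harmonic balance scheme above can be illustrated on a toy scalar magnetics relation H = ν(B)·B: a constant "fixed-point reluctivity" replaces the field-dependent one, and the nonlinearity is moved to the right-hand side and iterated. The reluctivity model and fixed-point value below are assumptions for illustration, not the paper's transformer model:

```python
def solve_fixed_point(h, nu_fp=400.0, iters=200):
    """Fixed-point solution of the nonlinear scalar relation H = nu(B)*B,
    a toy analogue of the linearization used inside each harmonic-balance
    step: freeze a constant nu_fp, push the nonlinearity to the RHS."""
    nu = lambda b: 100.0 + 50.0 * b * b   # illustrative B-dependent reluctivity
    b = 0.0
    for _ in range(iters):
        # linear problem with frozen nu_fp; nonlinear remainder on the RHS
        b = (h - (nu(b) - nu_fp) * b) / nu_fp
    return b

b = solve_fixed_point(300.0)
residual = (100.0 + 50.0 * b * b) * b - 300.0   # should vanish at convergence
```

Because the frozen ν_fp is time-independent, the harmonics decouple within each such iteration, which is exactly what makes the method attractive for the periodic steady-state transformer analyses described above.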
Application of remote sensing to state and regional problems. [for Mississippi
NASA Technical Reports Server (NTRS)
Miller, W. F.; Bouchillon, C. W.; Harris, J. C.; Carter, B.; Whisler, F. D.; Robinette, R.
1974-01-01
The primary purpose of the remote sensing applications program is for various members of the university community to participate in activities that improve the effective communication between the scientific community engaged in remote sensing research and development and the potential users of modern remote sensing technology. Activities of this program are assisting the State of Mississippi in recognizing and solving its environmental, resource and socio-economic problems through inventory, analysis, and monitoring by appropriate remote sensing systems. Objectives, accomplishments, and current status of the following individual projects are reported: (1) bark beetle project; (2) state park location planning; and (3) waste source location and stream channel geometry monitoring.
Recently available techniques applicable to genetic problems in the Middle East.
Ozand, Pinar T; Odaib, Ali Al; Sakati, Nadia; Al-Hellani, Ali M
2005-01-01
In this paper, we address the preventive health aspects of genetic problems in the Middle East and provide guidelines to prioritize preventive strategies. Applications of various novel genetic techniques such as comprehensive neonatal screening, high throughput heterozygote detection, preimplantation genetic diagnosis, Affymetrix systems, the NanoChip system and a new way of sensitive karyotyping for single-cell chromosome abnormalities are discussed. In conclusion, from the various genetic techniques available, each country should adopt strategies most suitable to its genetic needs and should prioritize the programs to be used in prevention. Copyright 2005 S. Karger AG, Basel.
NASA Technical Reports Server (NTRS)
Horton, F. E.
1970-01-01
The utility of remote sensing techniques to urban data acquisition problems in several distinct areas was identified. This endeavor included a comparison of remote sensing systems for urban data collection, the extraction of housing quality data from aerial photography, utilization of photographic sensors in urban transportation studies, urban change detection, space photography utilization, and an application of remote sensing techniques to the acquisition of data concerning intra-urban commercial centers. The systematic evaluation of variable extraction for urban modeling and planning at several different scales, and the model derivation for identifying and predicting economic growth and change within a regional system of cities are also studied.
Scattering by randomly oriented ellipsoids: Application to aerosol and cloud problems
NASA Technical Reports Server (NTRS)
Asano, S.; Sato, M.; Hansen, J. E.
1979-01-01
A program was developed for computing the scattering and absorption by arbitrarily oriented and randomly oriented prolate and oblate spheroids. This permits examination of the effect of particle shape for cases ranging from needles through spheres to platelets. Applications of this capability to aerosol and cloud problems are discussed. Initial results suggest that the effect of nonspherical particle shape on transfer of radiation through aerosol layers and cirrus clouds, as required for many climate studies, can be readily accounted for by defining an appropriate effective spherical particle radius.
Application of a substructuring technique to the problem of crack extension and closure
NASA Technical Reports Server (NTRS)
Armen, H., Jr.
1974-01-01
A substructuring technique, originally developed for the efficient reanalysis of structures, is incorporated into the methodology associated with the plastic analysis of structures. An existing finite-element computer program that accounts for elastic-plastic material behavior under cyclic loading was modified to account for changing kinematic constraint conditions - crack growth and intermittent contact of crack surfaces in two-dimensional regions. Application of the analysis is presented for a center-crack panel problem to demonstrate the efficiency and accuracy of the technique.
Vuceljic, M. J.
2007-04-23
Many methods exist for recovering the local radial intensity of a spectral line from its measured lateral intensity. All of them need some a priori information and often a preliminary filtering of the signal, so there is always a risk of losing useful information. One method for determining the radial intensity is the Tikhonov regularization method. This method requires minimal a priori information, namely that the intensity is a monotone positive function. To check the applicability limits of the method, some model functions have been introduced. Special attention was devoted to the model function with fine structure.
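The stabilizing role of Tikhonov regularization in ill-posed inversions like this one can be sketched on a tiny nearly singular linear system; the matrix, data, and regularization parameter are illustrative, not the lateral-to-radial inversion itself:

```python
def tikhonov_2x2(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 for a 2x2 system via the
    normal equations (A^T A + lam I) x = A^T b, inverted in closed form."""
    (a11, a12), (a21, a22) = A
    m11 = a11 * a11 + a21 * a21 + lam       # M = A^T A + lam I
    m12 = a11 * a12 + a21 * a22
    m22 = a12 * a12 + a22 * a22 + lam
    r1 = a11 * b[0] + a21 * b[1]            # A^T b
    r2 = a12 * b[0] + a22 * b[1]
    det = m11 * m22 - m12 * m12
    return ((m22 * r1 - m12 * r2) / det, (m11 * r2 - m12 * r1) / det)

A = ((1.0, 1.0), (1.0, 1.0001))            # nearly singular operator
b_noisy = (2.0, 2.0)                        # slightly perturbed data; the
                                            # clean answer would be (1, 1)
naive = tikhonov_2x2(A, b_noisy, 0.0)       # unregularized: jumps to (2, 0)
damped = tikhonov_2x2(A, b_noisy, 1e-4)     # regularized: stays near (1, 1)
```

A tiny perturbation of the data throws the unregularized solution far from the true one, while a small penalty λ restores it: the same mechanism that makes regularization essential when inverting noisy lateral intensity profiles.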
Application of a hybrid generation/utility assessment heuristic to a class of scheduling problems
NASA Technical Reports Server (NTRS)
Heyward, Ann O.
1989-01-01
A two-stage heuristic solution approach for a class of multiobjective, n-job, 1-machine scheduling problems is described. Minimization of job-to-job interference for n jobs is sought. The first stage generates alternative schedule sequences by interchanging pairs of schedule elements. The set of alternative sequences can represent nodes of a decision tree; each node is reached via decision to interchange job elements. The second stage selects the parent node for the next generation of alternative sequences through automated paired comparison of objective performance for all current nodes. An application of the heuristic approach to communications satellite systems planning is presented.
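The two-stage generate/select loop described above can be sketched as follows; the "interference" objective below is a toy stand-in for the paper's job-to-job interference measure, and the exhaustive pairwise selection replaces its automated paired comparison:

```python
from itertools import combinations

def interference(seq):
    """Toy objective: penalize adjacent jobs whose values are close
    (a stand-in for job-to-job interference in the schedule)."""
    return sum(1.0 / (1 + abs(a - b)) for a, b in zip(seq, seq[1:]))

def improve(seq, rounds=10):
    """Stage 1: generate alternative sequences by interchanging pairs of
    schedule elements. Stage 2: pick the best alternative as the parent
    node for the next generation; stop when no interchange improves."""
    best = list(seq)
    for _ in range(rounds):
        candidates = []
        for i, j in combinations(range(len(best)), 2):
            alt = best[:]
            alt[i], alt[j] = alt[j], alt[i]
            candidates.append(alt)
        winner = min(candidates, key=interference)
        if interference(winner) >= interference(best):
            break
        best = winner
    return best

jobs = [1, 2, 3, 4, 10, 11, 12, 13]      # similar jobs clustered: high interference
sched = improve(jobs)
assert interference(sched) <= interference(jobs)
```

Each accepted interchange corresponds to descending one level of the decision tree mentioned above; the heuristic explores only the children of the current best node rather than the full tree.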
Doyle, Scott; Monaco, James; Feldman, Michael; Tomaszewski, John; Madabhushi, Anant
2011-10-28
to obtain balanced classes. The accuracy of our prediction is verified by empirically-observed costs. Finally, we find that over-sampling the minority class yields a marginal improvement in classifier accuracy but the improved performance comes at the expense of greater annotation cost. We have combined AL with class balancing to yield a general training strategy applicable to most supervised classification problems where the dataset is expensive to obtain and which suffers from the minority class problem. An intelligent training strategy is a critical component of supervised classification, but the integration of AL and intelligent choice of class ratios, as well as the application of a general cost model, will help researchers to plan the training process more quickly and effectively.
ERIC Educational Resources Information Center
Seyhan, Hatice Güngör
2015-01-01
This study was conducted with 98 prospective science teachers, who were composed of 50 prospective teachers that had participated in problem-solving applications and 48 prospective teachers who were taught within a more researcher-oriented teaching method in science laboratories. The first aim of this study was to determine the levels of…
METLIN-PC: An applications-program package for problems of mathematical programming
Pshenichnyi, B.N.; Sobolenko, L.A.; Sosnovskii, A.A.; Aleksandrova, V.M.; Shul'zhenko, Yu.V.
1994-05-01
The METLIN-PC applications-program package (APP) was developed at the V.M. Glushkov Institute of Cybernetics of the Academy of Sciences of Ukraine on IBM PC XT and AT computers. The present version of the package was written in Turbo Pascal and Fortran-77. METLIN-PC is chiefly designed for the solution of smooth problems of mathematical programming and is a further development of the METLIN prototype, which was created earlier on a BESM-6 computer. The principal property of the previous package is retained: the applications modules employ a single approach based on the linearization method of B.N. Pshenichnyi, hence the name "METLIN".
ALE-AMR: A new 3D multi-physics code for modeling laser/target effects
NASA Astrophysics Data System (ADS)
Koniges, A. E.; Masters, N. D.; Fisher, A. C.; Anderson, R. W.; Eder, D. C.; Kaiser, T. B.; Bailey, D. S.; Gunney, B.; Wang, P.; Brown, B.; Fisher, K.; Hansen, F.; Maddox, B. R.; Benson, D. J.; Meyers, M.; Geille, A.
2010-08-01
We have developed a new 3D multi-physics multi-material code, ALE-AMR, for modeling laser/target effects including debris/shrapnel generation. The code combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR) to connect the continuum to microstructural regimes. The code is unique in its ability to model hot radiating plasmas and cold fragmenting solids. New numerical techniques were developed for many of the physics packages to work efficiently on a dynamically moving and adapting mesh. A flexible strength/failure framework allows for pluggable material models. Material history arrays are used to store persistent data required by the material models, for instance, the level of accumulated damage or the evolving yield stress in J2 plasticity models. We model ductile metals as well as brittle materials such as Si, Be, and B4C. We use interface reconstruction based on volume fractions of the material components within mixed zones and reconstruct interfaces as needed. This interface reconstruction model is also used for void coalescence and fragmentation. The AMR framework allows for hierarchical material modeling (HMM) with different material models at different levels of refinement. Laser rays are propagated through a virtual composite mesh consisting of the finest resolution representation of the modeled space. A new 2nd order accurate diffusion solver has been implemented for the thermal conduction and radiation transport packages. The code is validated using laser and x-ray driven spall experiments in the US and France. We present an overview of the code and simulation results.
Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP
Downar, Thomas; Seker, Volkan
2013-04-30
Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local "hot" spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length that describes the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor and ten to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale--the level of a fuel kernel of the pebble in a PBR and fuel compact in a PMR--which needs to be resolved in order to calculate the peak temperature in a fuel kernel.
Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R.
2017-01-01
Background: We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). Methods: We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. Results: We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained on different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving (by above 60%) the performance with respect to a non-cooperative parallel scheme. The scalability of the
ERIC Educational Resources Information Center
Dogru, Mustafa
2008-01-01
Helping students to improve their problem-solving skills is a primary goal of science teacher trainees. In modern science teaching, methods that develop students' thinking skills, their ability to connect events with concepts, and their science-process skills should be used, rather than the mere delivery of information and definitions. One of…
NASA Astrophysics Data System (ADS)
Mo, Chao-jie; Qin, Li-zi; Zhao, Fei; Yang, Li-jun
2016-12-01
We investigate the application of the dissipative particle dynamics method to the instability problem of a long liquid thread surrounded by another fluid. The dispersion curves obtained from simulations are compared with classic theoretical predictions. The results from standard dissipative particle dynamics (DPD) simulations at first gradually approach Tomotika's Stokes flow prediction as the Reynolds number is decreased, but then deviate anomalously when the viscosity is very large. The same phenomenon is also confirmed in droplet retraction simulations compared with theoretical Stokes flow results. On the other hand, when a hard-core DPD model is used, the simulation results do approach Tomotika's predictions as the Reynolds number decreases to Re ≈0.1. A combined presentation of the hard-core DPD results and the standard DPD results, excluding the anomalous ones, demonstrates that they lie approximately on a continuum when labeled with Reynolds number. These results suggest that the standard DPD method is suitable for investigating the instability problem of an immersed liquid thread in the inertioviscous regime (0.1
Yang, Pengyi; Yoo, Paul D; Fernando, Juanita; Zhou, Bing B; Zhang, Zili; Zomaya, Albert Y
2014-03-01
Data sampling is a widely used technique in a broad range of machine learning problems. Traditional sampling approaches generally rely on random resampling from a given dataset. However, these approaches do not take into consideration additional information, such as sample quality and usefulness. We recently proposed a data sampling technique, called sample subset optimization (SSO). The SSO technique relies on a cross-validation procedure for identifying and selecting the most useful samples as subsets. In this paper, we describe the application of SSO techniques to imbalanced and ensemble learning problems, respectively. For imbalanced learning, the SSO technique is employed as an under-sampling technique for identifying a subset of highly discriminative samples in the majority class. In ensemble learning, the SSO technique is utilized as a generic ensemble technique where multiple optimized subsets of samples from each class are selected for building an ensemble classifier. We demonstrate the utilities and advantages of the proposed techniques on a variety of bioinformatics applications where class imbalance, small sample size, and noisy data are prevalent.
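The under-sampling side of this idea can be sketched in a few lines. The sketch below balances a two-class dataset by plain random selection, which is only the baseline that the authors' SSO technique improves upon (SSO instead chooses the subset via a cross-validation procedure); the function name and the toy data are illustrative.

```python
import random

def undersample(majority, minority, seed=0):
    """Balance two classes by randomly under-sampling the majority class
    down to the minority size.  (The paper's SSO technique instead
    *optimizes* which subset to keep via cross-validation; plain random
    selection shown here is the baseline it improves on.)"""
    rng = random.Random(seed)
    kept = rng.sample(majority, len(minority))
    return kept + list(minority)

# toy data: sample ids 0-99 are the majority class, 100-109 the minority
balanced = undersample(list(range(100)), list(range(100, 110)))
```

After balancing, each class contributes the same number of samples to the training set, which is the precondition the SSO under-sampler shares with this baseline.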
Non-Linear Problems in NMR: Application of the DFM Variation of Parameters Method
NASA Astrophysics Data System (ADS)
Erker, Jay Charles
This dissertation introduces, develops, and applies the Dirac-Frenkel-McLachlan (DFM) time-dependent variation of parameters approach to nuclear magnetic resonance (NMR) problems. Although never explicitly used in the treatment of time domain NMR problems to date, the DFM approach has successfully predicted the dynamics of optically prepared wave packets on excited-state molecular energy surfaces. Unlike the Floquet, average Hamiltonian, and Van Vleck transformation methods, the DFM approach is restricted by neither the size nor the symmetry of the time domain perturbation. A particularly attractive feature of the DFM method is that measured data can be used to motivate a parameterized trial function choice, and the DFM theory provides the machinery to obtain the optimum, minimum-error choices for these parameters. Indeed, a poor parameterized trial function will lead to a poor match with real experiments, even with optimized parameters. Although there are many NMR problems available to demonstrate the application of the DFM variation of parameters, five separate cases that have escaped analytical solution and thus require numerical methods are considered here: molecular diffusion in a magnetic field gradient, radiation damping in the presence of inhomogeneous broadening, multi-site chemical exchange, and the combination of molecular diffusion in a magnetic field gradient with chemical exchange. The application to diffusion in a gradient is used as an example to develop the DFM method for application to NMR. The existence of a known analytical solution and experimental results allows for direct comparison between the theoretical results of the DFM method and Torrey's solution to the Bloch equations corrected for molecular diffusion. The framework of writing classical Bloch equations in matrix notation is then applied to problems without analytical solution. The second example includes the generation of a semi-analytical functional form for the free
NASA Astrophysics Data System (ADS)
Luo, Shunlong; Sun, Yuan
2017-08-01
Quantifications of coherence have been intensively studied in recent years in the context of completely decoherent operations (i.e., von Neumann measurements or, equivalently, orthonormal bases). Here we investigate partial coherence (i.e., coherence in the context of partially decoherent operations such as Lüders measurements). A bona fide measure of partial coherence is introduced. As an application, we address the monotonicity problem of K-coherence (a quantifier of coherence in terms of Wigner-Yanase skew information) [Girolami, Phys. Rev. Lett. 113, 170401 (2014), 10.1103/PhysRevLett.113.170401], which was introduced to realize a measure of coherence as axiomatized by Baumgratz, Cramer, and Plenio [Phys. Rev. Lett. 113, 140401 (2014), 10.1103/PhysRevLett.113.140401]. Since K-coherence fails to meet the necessary requirement of monotonicity under incoherent operations, it is desirable to remedy this monotonicity problem. We show that if we modify the original measure by taking skew information with respect to the spectral decomposition of an observable, rather than the observable itself, as a measure of coherence, then the problem disappears, and the resultant coherence measure satisfies monotonicity. Some concrete examples are discussed and related open issues are indicated.
A special application of absolute value techniques in authentic problem solving
NASA Astrophysics Data System (ADS)
Stupel, Moshe
2013-06-01
There are at least five different equivalent definitions of the absolute value concept. In instances where the task is an equation or inequality with only one or two absolute value expressions, it is a worthy educational experience for learners to solve the task using each one of the definitions. On the other hand, if more than two absolute value expressions are involved, the definition that is most helpful is the one involving solving by intervals and evaluating critical points. In point of fact, application of this technique is one reason that the topic of absolute value is important in mathematics in general and in mathematics teaching in particular. We present here an authentic practical problem that is solved using absolute values and the 'intervals' method, after which the solution is generalized with surprising results. This authentic problem also lends itself to investigation using educational technological tools such as GeoGebra dynamic geometry software: mathematics teachers can allow their students to initially cope with the problem by working in an inductive environment in which they conduct virtual experiments until a solid conjecture has been reached, after which they should prove the conjecture deductively, using classic theoretical mathematical tools.
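The "intervals" method described above mechanizes cleanly: the critical points split the real line into intervals on which the expression is linear. A minimal sketch of that procedure for equations of the form Σ|x − pᵢ| = c (the function name is illustrative; this is not tied to any particular teaching tool):

```python
def solve_abs_sum(points, target):
    """Solve sum(|x - p| for p in points) == target by the intervals
    method: the critical points split the line into intervals on which
    the expression is linear, so solve the linear equation on each
    interval and keep roots that actually fall inside it."""
    pts = sorted(points)
    bounds = [float("-inf")] + pts + [float("inf")]
    sols = []
    for lo, hi in zip(bounds, bounds[1:]):
        if lo == float("-inf"):
            sample = hi - 1.0          # any point inside the interval
        elif hi == float("inf"):
            sample = lo + 1.0
        else:
            sample = (lo + hi) / 2.0
        signs = [1.0 if sample >= p else -1.0 for p in pts]
        slope = sum(signs)
        intercept = -sum(s * p for s, p in zip(signs, pts))
        if slope == 0:
            continue                   # expression is constant here
        x = (target - intercept) / slope
        if lo <= x <= hi:
            sols.append(x)
    return sorted(set(sols))

print(solve_abs_sum([1, 3], 4))   # roots of |x-1| + |x-3| = 4 -> [0.0, 4.0]
```

On the middle interval [1, 3] the expression |x−1| + |x−3| is constantly 2, so no root lies there; the two outer intervals each contribute one root, exactly as the hand calculation by critical points gives.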
NASA Astrophysics Data System (ADS)
Yang, Eunice
2016-02-01
This paper discusses the use of a free mobile engineering application (app) called Autodesk® ForceEffect™ to provide students assistance with spatial visualization of forces and more practice in solving/visualizing statics problems compared to the traditional pencil-and-paper method. ForceEffect analyzes static rigid-body systems using free-body diagrams (FBDs) and provides solutions in real time. It is cost-free software that is available for download on the Internet. It is easy to use, and the learning curve is approximately two hours using the tutorial provided within the app. ForceEffect can provide students with different problem modalities (textbook, real-world, and design) to help them acquire and improve the skills needed to solve force equilibrium problems. Although this paper focuses on the engineering mechanics statics course, the technology discussed is also relevant to the introductory physics course.
The optimal solution of a non-convex state-dependent LQR problem and its applications.
Xu, Xudan; Zhu, J Jim; Zhang, Ping
2014-01-01
This paper studies a Non-convex State-dependent Linear Quadratic Regulator (NSLQR) problem, in which the control penalty weighting matrix [Formula: see text] in the performance index is state-dependent. A necessary and sufficient condition for the optimal solution is established, with a rigorous proof via the Euler-Lagrange equation. It is found that the optimal solution of the NSLQR problem can be obtained by solving a Pseudo-Differential-Riccati-Equation (PDRE) simultaneously with the closed-loop system equation. A comparison theorem for the PDRE is given to facilitate solution methods for the PDRE. A linear time-variant system is employed as an example in simulation to verify the proposed optimal solution. As a non-trivial application, a goal pursuit process in psychology is modeled as an NSLQR problem, and two typical goal pursuit behaviors found in humans and animals are reproduced using different control weightings [Formula: see text]. It is found that these two behaviors save control energy and cause less stress than the Conventional Control Behavior typified by LQR control with a constant control weighting [Formula: see text], in situations where only the goal discrepancy at the terminal time is of concern, such as in marathon races and target-hitting missions.
Resolving all-order method convergence problems for atomic physics applications
Gharibnejad, H.; Derevianko, A.; Eliav, E.; Safronova, M. S.
2011-05-15
The development of the relativistic all-order method where all single, double, and partial triple excitations of the Dirac-Hartree-Fock wave function are included to all orders of perturbation theory led to many important results for the study of fundamental symmetries, development of atomic clocks, ultracold atom physics, and others, as well as provided recommended values of many atomic properties critically evaluated for their accuracy for a large number of monovalent systems. This approach requires iterative solutions of the linearized coupled-cluster equations leading to convergence issues in some cases where correlation corrections are particularly large or lead to an oscillating pattern. Moreover, these issues also lead to similar problems in the configuration-interaction (CI)+all-order method for many-particle systems. In this work, we have resolved most of the known convergence problems by applying two different convergence stabilizer methods, namely, reduced linear equation and direct inversion of iterative subspace. Examples are presented for B, Al, Zn⁺, and Yb⁺. Solving these convergence problems greatly expands the number of atomic species that can be treated with the all-order methods and is anticipated to facilitate many interesting future applications.
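The "direct inversion of the iterative subspace" (DIIS) idea can be illustrated on a scalar fixed-point iteration. The sketch below is only a two-term version of the general scheme, not the authors' coupled-cluster implementation; the relaxation step and the function names are illustrative assumptions.

```python
import math

def fixed_point_diis(g, x0, tol=1e-12, maxit=100):
    """Two-term DIIS sketch for a scalar fixed-point iteration x <- g(x):
    mix the last two iterates with weights chosen to zero out the
    extrapolated residual, then relax the extrapolation through g again."""
    x_prev = x0
    r_prev = g(x_prev) - x_prev          # residual of the starting point
    x = x_prev + r_prev                  # one plain step to get started
    for _ in range(maxit):
        r = g(x) - x
        if abs(r) < tol:
            return x
        denom = r - r_prev
        c = r / denom if denom != 0 else 0.0   # weight on the older iterate
        x_mix = c * x_prev + (1.0 - c) * x     # extrapolated iterate
        x_prev, r_prev = x, r
        x = g(x_mix)                           # relax the extrapolation
    return x

# demo: the classic x = cos(x) fixed point
root = fixed_point_diis(math.cos, 1.0)
```

The weight c solves the one-dimensional least-squares problem min |c·r_prev + (1−c)·r|, which is the scalar shadow of the DIIS linear system; for x = cos x this reaches machine-level residuals in a handful of iterations, where the plain iteration converges only linearly.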
NASA Technical Reports Server (NTRS)
Johnson, O. W.
1964-01-01
A modified spray gun, with separate containers for resin and additive components, solves the problems of quick hardening and nozzle clogging. At application, separate atomizers spray the liquids in front of the nozzle face where they blend.
NASA Astrophysics Data System (ADS)
Mandò, Pier Andrea
1994-03-01
The physical and technological problems associated with an external beam setup are discussed, together with advantages and limitations in IBA applications. As far as exit windows are concerned, presently the best choice seems to be 8 μm Kapton® foils. They can last for over one week of beam irradiation under standard conditions and, in any case, do not rupture suddenly. Aluminized Mylar® windows can indeed be obtained in thinner foils, but their resistance under beam bombardment is much poorer. Other possible choices for the window material, which are briefly discussed, are nickel and zirconium foils. In a helium atmosphere, Si(Li) detectors with very thin Be windows (8 μm), used for PIXE analysis, have suffered from gas permeation into the cryostat, but they always recovered their original condition with a simple pump-and-bake procedure. The particle detectors we used for external RBS analysis are cheap, standard silicon junctions, which have shown no significant performance deterioration even after weeks of use. The difficulty of a correct current measurement when operating with an external beam is pointed out. Solutions which have been adopted are either external rotating choppers, on which the yield of beam-induced interactions is sampled, or in-vacuum particle detectors monitoring the RBS spectrum of the exit window itself. The possibility of extracting microbeams as small as 10 μm, e.g. for geological applications, or diffused beams of some mm², e.g. for environmental applications, is also briefly discussed. In the final part of the paper, some examples are given of recent external PIXE-RBS applications to the analysis of paints and inks of ancient manuscripts. Attributions of miniatures to different artists, tentatively suggested by art historians, have been strengthened by the IBA measurements. These have shown in some cases that the sources of supply of the raw material were different even though the kind of pigment was the same
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Namburu, Raju R.
1990-01-01
The present paper describes recent advances and trends in finite element developments and applications for solidification problems. In particular, in comparison to traditional methods of approach, new enthalpy-based architectures based on a generalized trapezoidal family of representations are presented which provide different perspectives, physical interpretation and solution architectures for effective numerical simulation of phase change processes encountered in solidification problems. Various numerical test models are presented and the results support the proposition for employing such formulations for general phase change applications.
NASA Astrophysics Data System (ADS)
Champaney, L.; Boucard, P.-A.; Guinard, S.
2008-07-01
The objective of the work presented here is to develop an efficient strategy for the parametric analysis of bolted joints designed for aerospace applications. These joints are used in elastic structural assemblies with local nonlinearities (such as unilateral contact with friction) under quasi-static loading. Our approach is based on a decomposition of an assembly into substructures (representing the parts) and interfaces (representing the connections). The problem within each substructure is solved by the finite element method, while an iterative scheme based on the LATIN method (Ladevèze in Nonlinear computational structural mechanics—new approaches and non-incremental methods of calculation, 1999) is used for the global resolution. The proposed strategy consists in calculating response surfaces (Rajashekhar and Ellingwood in Struct Saf 12:205-220, 1993) such that each point of a surface is associated with a design configuration. Each design configuration corresponds to a set of values of all the variable parameters (friction coefficients, prestresses) which are introduced into the mechanical analysis. Here, instead of carrying out a full calculation for each point of the surface, we propose to use the capabilities of the LATIN method and reutilize the solution of one problem (for one set of parameters) in order to solve similar problems (for the other sets of parameters) (Boucard and Champaney in Int J Numer Methods Eng 57:1259-1281, 2003). The strategy is adaptive in the sense that it takes into account the results of the previous calculations. The method presented can be used for several types of nonlinear problems requiring multiple analyses: for example, it has already been used for structural identification (Allix and Vidal in Comput Methods Appl Mech Eng 191:2727-2758, 2001).
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
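For intuition about the underlying problem, a brute-force checker for tiny instances is easy to write; the paper's contribution is compact integer-programming models that scale far beyond this. The sketch below simply maximizes the minimum Hamming distance over all candidate strings (function names and the toy instance are illustrative):

```python
from itertools import product

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def farthest_string(strings, alphabet="ab"):
    """Brute-force farthest string: the string over the alphabet that
    maximizes the minimum Hamming distance to every given string.
    Exponential in the length -- usable only as a ground-truth checker
    for tiny instances; the paper's integer programs handle real ones."""
    length = len(strings[0])
    best = max(product(alphabet, repeat=length),
               key=lambda cand: min(hamming(cand, s) for s in strings))
    return "".join(best)

best = farthest_string(["aaaa", "abab"])
```

For the instance {"aaaa", "abab"} over the alphabet {a, b}, the optimum minimum distance is 3 (for example "bbba" is at distance 3 from both inputs), which such a checker can confirm exhaustively.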
NASA Astrophysics Data System (ADS)
Zheng, Q.
2011-12-01
On the application of the genetic algorithm to the predictability problems involving "on-off" switches. The lower bound of maximum predictable time can be formulated as a constrained nonlinear optimization problem, and the traditional solutions to this problem are the filtering method and the conditional nonlinear optimal perturbation (CNOP) method. Usually, the CNOP method is implemented with a gradient descent algorithm based on the adjoint method, hereinafter named ADJ-CNOP. However, as prediction models improve, more and more physical processes are included in models in the form of parameterizations, giving rise to the "on-off" switch problem, which severely affects the effectiveness of the conventional gradient descent algorithm based on the adjoint method. This paper applies a genetic algorithm (GA) to the CNOP method, named GA-CNOP, to solve predictability problems involving "on-off" switches. As the precision of the filtering method depends only on the division of the constraint region, its results are taken as benchmarks, and a series of comparisons between the ADJ-CNOP and the GA-CNOP are performed. It is revealed that the GA-CNOP can always determine an accurate lower bound of maximum predictable time, even in discontinuous cases, while the ADJ-CNOP, owing to the effect of "on-off" switches, often yields an incorrect lower bound. This suggests that in non-smooth cases, using a GA to solve predictability problems is more effective than using the conventional
Bhardwaj, M.; Day, D.; Farhat, C.; Lesoinne, M.; Pierson, K; Rixen, D.
1999-04-01
We report on the application of the one-level FETI method to the solution of a class of structural problems associated with the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). We focus on numerical and parallel scalability issues, and discuss the treatment by FETI of severe structural heterogeneities. We also report on preliminary performance results obtained on the ASCI Option Red supercomputer configured with as many as one thousand processors, for problems with as many as 5 million degrees of freedom.
Nash, Stephen G.
2013-11-11
The research focuses on the modeling and optimization of nanoporous materials. In the systems with hierarchical structure that we consider, the physics changes as the scale of the problem is reduced, and it can be important to account for physics at the fine level to obtain accurate approximations at coarser levels. For example, nanoporous materials hold promise for energy production and storage. A significant issue is the fabrication of channels within these materials to allow rapid diffusion through the material. One goal of our research is to apply optimization methods to the design of nanoporous materials. Such problems are large and challenging, with hierarchical structure that we believe can be exploited, and with a large range of important scales, down to atomistic. This requires research on large-scale optimization for systems that exhibit different physics at different scales, and the development of algorithms applicable to designing nanoporous materials for many important applications in energy production, storage, distribution, and use. Our research has two major thrusts. The first is hierarchical modeling: we develop and study hierarchical optimization models for nanoporous materials. The models have hierarchical structure and attempt to balance the conflicting aims of model fidelity and computational tractability. In addition, we analyze the general hierarchical model, as well as the specific application models, to determine their properties, particularly those relevant to the hierarchical optimization algorithms. The second thrust is to develop, analyze, and implement a class of hierarchical optimization algorithms and apply them to the hierarchical models we have developed. We adapted and extended the optimization-based multigrid algorithms of Lewis and Nash to the optimization models exemplified by the hierarchical optimization model. This class of multigrid algorithms has been shown to be a powerful tool for
Application of the spectral Lanczos decomposition method to large scale problems arising in geophysics
Tamarchenko, T.
1996-12-31
This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution of this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large-scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).
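The core SLDM approximation can be sketched compactly: build an orthonormal Krylov basis by the Lanczos recurrence, project the operator onto it, and evaluate the matrix function on the small projected tridiagonal matrix. The sketch below (symmetric A, f(λ) = exp(−tλ)) is a minimal illustration, not the paper's finite-difference electromagnetic or elastic-wave solver; the function name is an assumption, and breakdown handling and reorthogonalization are omitted for brevity.

```python
import numpy as np

def lanczos_expm_action(A, b, t, m=20):
    """Approximate exp(-t*A) @ b for a symmetric matrix A with m Lanczos
    steps: build an orthonormal Krylov basis V and the tridiagonal
    projection T = V.T @ A @ V, then evaluate the function via T's
    (cheap) eigendecomposition.  Minimal sketch: no reorthogonalization,
    no breakdown (beta == 0) handling."""
    V = np.zeros((len(b), m))
    alpha, beta = np.zeros(m), np.zeros(max(m - 1, 1))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (np.exp(-t * evals) * evecs[0, :])   # f(T) @ e1
    return np.linalg.norm(b) * (V @ fT_e1)

# demo: diagonal SPD matrix, where exp(-t*A) @ b is known in closed form
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.ones(5)
y = lanczos_expm_action(A, b, t=0.3, m=5)
```

The exponential here plays the role of the time-evolution functions in the abstract (exponential for diffusion, sine/cosine for waves); only the scalar function applied to the eigenvalues of T changes.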
NASA Technical Reports Server (NTRS)
Rabitz, Herschel
1987-01-01
The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominantly strong coupled dependent variables will result in the overall system sensitivity behavior collapsing into a simple set of scaling and self similarity relations amongst elements of the entire matrix of sensitivity coefficients. These general tools are generic in nature, but herein their application to problems arising in selected areas of physics and chemistry is presented.
A review of vector convergence acceleration methods, with applications to linear algebra problems
NASA Astrophysics Data System (ADS)
Brezinski, C.; Redivo-Zaglia, M.
In this article, in a few pages, we will try to give an idea of convergence acceleration methods and extrapolation procedures for vector sequences, and to present some applications to linear algebra problems and to the treatment of the Gibbs phenomenon for Fourier series in order to show their effectiveness. The interested reader is referred to the literature for more details. In the bibliography, due to space limitations, we give only the more recent items; for older ones, we refer to Brezinski and Redivo-Zaglia (Extrapolation Methods. Theory and Practice, North-Holland, 1991). This book also contains, on a magnetic support, a library (in Fortran 77) of convergence acceleration algorithms and extrapolation methods.
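The flavor of such extrapolation procedures is easy to convey with the scalar Aitken Δ² process, the simplest ancestor of the vector algorithms surveyed (a sketch only; the vector epsilon algorithm and its relatives generalize the same idea to sequences of vectors):

```python
import math

def aitken(seq):
    """Aitken's delta-squared extrapolation of a scalar sequence; the
    vector-sequence methods surveyed in the article generalize this idea."""
    return [s0 - (s1 - s0) ** 2 / (s2 - 2.0 * s1 + s0)
            for s0, s1, s2 in zip(seq, seq[1:], seq[2:])]

# partial sums of the slowly convergent series ln 2 = 1 - 1/2 + 1/3 - ...
partial, s = [], 0.0
for k in range(1, 12):
    s += (-1.0) ** (k + 1) / k
    partial.append(s)
acc = aitken(partial)
print(abs(partial[-1] - math.log(2)), abs(acc[-1] - math.log(2)))
```

With only eleven terms the raw partial sums are still off by a few percent, while one pass of extrapolation reduces the error by roughly two orders of magnitude, which is the kind of gain these methods deliver on linearly convergent sequences.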
On multidisciplinary research on the application of remote sensing to water resources problems
NASA Technical Reports Server (NTRS)
1972-01-01
This research is directed toward development of a practical, operational remote sensing water quality monitoring system. To accomplish this, five fundamental aspects of the problem have been under investigation during the past three years. These are: (1) development of practical and economical methods of obtaining, handling and analyzing remote sensing data; (2) determination of the correlation between remote sensed imagery and actual water quality parameters; (3) determination of the optimum technique for monitoring specific water pollution parameters and for evaluating the reliability with which this can be accomplished; (4) determination of the extent of masking due to depth of penetration, bottom effects, film development effects, and angle falloff, and development of techniques to eliminate or minimize them; and (5) development of operational procedures which might be employed by a municipal, state or federal agency for the application of remote sensing to water quality monitoring, including space-generated data.
Applications of Quantum Theory of Atomic and Molecular Scattering to Problems in Hypersonic Flow
NASA Technical Reports Server (NTRS)
Malik, F. Bary
1995-01-01
The general status of a grant to investigate the applications of quantum theory in atomic and molecular scattering problems in hypersonic flow is summarized. Abstracts of five articles and eleven full-length articles published or submitted for publication are included as attachments. The following topics are addressed in these articles: fragmentation of heavy ions (HZE particles); parameterization of absorption cross sections; light ion transport; emission of light fragments as an indicator of equilibrated populations; quantum mechanical, optical model methods for calculating cross sections for particle fragmentation by hydrogen; evaluation of NUCFRG2, the semi-empirical nuclear fragmentation database; investigation of the single- and double-ionization of He by proton and anti-proton collisions; Bose-Einstein condensation of nuclei; and a liquid drop model in HZE particle fragmentation by hydrogen.
Development and application of fluorescent diagnostics to fundamental droplet and spray problems
NASA Astrophysics Data System (ADS)
Melton, Lynn A.
1994-09-01
This final report describes work carried out under ARO grant DAALO3-91-G-0033 for the development and application of fluorescent diagnostics to fundamental droplet problems. Particular emphasis has been placed on attempts to understand the heating, evaporation, and internal circulation processes of sub-millimeter droplets. At the University of Texas at Dallas, a new type of exciplex fluorescence thermometer, based on the temperature dependent shift of the exciplex band, has been developed and applied to thermometry of evaporating droplets and surface liquids. Algorithms and programs have been developed and disseminated for the correction of 'droplet slicing images' (DSI) for the effects of refraction by the front hemisphere of the droplet. At United Technologies Research Center, DSI techniques have been used to demonstrate unequivocally that aerodynamic shear can induce internal circulation in sub-millimeter droplets and to show that droplets rotate and interact with the surrounding gas phase flow field.
The application of the statistical theory of extreme values to gust-load problems
NASA Technical Reports Server (NTRS)
Press, Harry
1950-01-01
An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problems of predicting the frequency of encountering the larger gust loads and gust velocities, both for specific test conditions and for commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given, along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data and more reliable than those obtained in previous analyses.
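The fitting step can be sketched with a method-of-moments fit of the Gumbel (Type I extreme-value) distribution, which is the analytic form the extreme-value theory supplies for per-record maxima. The synthetic sample and the function names below are illustrative assumptions, not the report's flight-load data:

```python
import math, random

def fit_gumbel(maxima):
    """Method-of-moments fit of the Gumbel (Type I extreme-value)
    distribution to a sample of per-record maxima.
    Returns (mu, beta) = (location, scale)."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi          # scale from the variance
    mu = mean - 0.5772156649 * beta                # Euler-Mascheroni constant
    return mu, beta

def exceedance(x, mu, beta):
    """P(maximum > x) under the fitted Gumbel distribution."""
    return 1.0 - math.exp(-math.exp(-(x - mu) / beta))

# synthetic stand-in for 'largest gust load per flight': maxima of
# 500 standard-normal draws (illustrative data, not flight records)
rng = random.Random(1)
sample = [max(rng.gauss(0.0, 1.0) for _ in range(500)) for _ in range(200)]
mu, beta = fit_gumbel(sample)
```

Once (mu, beta) are fitted, the exceedance function directly answers the report's question, the expected frequency of encountering loads larger than a given level; at x = mu the exceedance probability is 1 − e⁻¹ ≈ 0.632 by construction.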
NASA Astrophysics Data System (ADS)
Fatkhutdinov, Aybulat
2017-04-01
Decision support in many research fields, including surface water and groundwater management, often relies on various optimization algorithms. However, automated model optimization may require significant computational resources and be very time consuming. On the other hand, each scenario simulation produces a large amount of data that can potentially be used to train a data-driven model to help solve similar optimization problems more efficiently, e.g. by providing a preliminary likelihood distribution of the optimized variables. The main problem in applying any machine learning technique to the characterization of hydrogeological situations is the high variability of conditions, including aquifer hydraulic properties and geometries, interaction with surface water objects, and artificial disturbance. The aim of this study is to find parameters that can be used as a training set for model learning, to apply them to various learning algorithms, and to test how strongly the performance of the subsequent optimization algorithm can be improved by supplementing it with a trained model. For the purposes of the experiment, synthetically generated groundwater models with varying parameters are used. The generated models simulate a common situation in which the optimum position and parameters of a designed well site have to be found. The parameters that compose the set of model predictors include the types, relative positions and properties of boundary conditions, as well as aquifer properties and configuration. The target variables are the relative positions of the wells and the ranges of their pumping/injection rates. The tested learning algorithms include neural networks, support vector machines and classification trees, supplemented by posterior likelihood estimation. A variation of an evolutionary algorithm is used for optimization purposes.
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.; Lytle, John K.
1989-01-01
An algebraic adaptive grid scheme based on the concept of arc equidistribution is presented. The scheme locally adjusts the grid density based on gradients of selected flow variables from either finite difference or finite volume calculations. A user-prescribed grid stretching can be specified such that control of the grid spacing can be maintained in areas of known flowfield behavior. For example, the grid can be clustered near a wall for boundary layer resolution and made coarse near the outer boundary of an external flow. A grid smoothing technique is incorporated into the adaptive grid routine, which is found to be more robust and efficient than the weight function filtering technique employed by other researchers. Since the present algebraic scheme requires no iteration or solution of differential equations, the computer time needed for grid adaptation is trivial, making the scheme useful for three-dimensional flow problems. Applications to two- and three-dimensional flow problems show that a considerable improvement in flowfield resolution can be achieved by using the proposed adaptive grid scheme. Although the scheme was developed with steady flow in mind, it is a good candidate for unsteady flow computations because of its efficiency.
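A minimal 1-D sketch of the arc-equidistribution idea behind the scheme (the monitor function, test profile, and parameters here are assumptions for illustration, not taken from the report): redistribute the points so that each interval carries an equal share of the arc length of the solution curve, clustering the grid where gradients are steep.

```python
import numpy as np

def adapt_grid(x, f, n_new, alpha=1.0):
    """Redistribute n_new grid points so each interval carries an equal
    share of the arc length of the monitor w = sqrt(1 + (alpha*df/dx)^2)."""
    dfdx = np.gradient(f, x)
    w = np.sqrt(1.0 + (alpha * dfdx) ** 2)
    # cumulative arc length (trapezoidal rule); strictly increasing since w > 0
    s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    # equidistribute: invert s(x) at equally spaced arc-length values
    s_new = np.linspace(0.0, s[-1], n_new)
    return np.interp(s_new, s, x)

# Steep tanh layer at x = 0.5: the adapted grid should cluster there
x = np.linspace(0.0, 1.0, 401)
f = np.tanh(40.0 * (x - 0.5))
xa = adapt_grid(x, f, 101)

near = np.diff(xa)[np.abs(xa[:-1] - 0.5) < 0.05].mean()   # spacing at the layer
far = np.diff(xa)[np.abs(xa[:-1] - 0.5) > 0.3].mean()     # spacing away from it
print(near, far)
```

As in the report's scheme, no iteration or differential equation is solved; the adaptation is a single algebraic pass, which is what keeps the cost trivial for 3-D problems.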
Cao, Jianping; Du, Zhengjian; Mo, Jinhan; Li, Xinxiao; Xu, Qiujian; Zhang, Yinping
2016-12-20
Passive sampling is an alternative to active sampling for measuring concentrations of gas-phase volatile organic compounds (VOCs). However, the uncertainty or relative error of the measurements has not been minimized due to the limitations of existing design methods. In this paper, we have developed a novel method, the inverse problem optimization method, to address the problems associated with designing accurate passive samplers. The principle is to determine the most appropriate physical properties of the materials, and the optimal geometry of a passive sampler, by minimizing the relative sampling error based on the mass transfer model of VOCs for a passive sampler. As an example application, we used our proposed method to optimize radial passive samplers for the sampling of benzene and formaldehyde in a normal indoor environment. A new passive sampler, which we have called the Tsinghua Passive Diffusive Sampler (THPDS), for indoor benzene measurement was developed according to the optimized results. Silica zeolite was selected as the sorbent for the THPDS. The measured overall uncertainty of the THPDS (22% for benzene) is lower than that of most commercially available passive samplers but is considerably larger than the modeled uncertainty (4.8% for benzene, the optimized result), suggesting that further research is required.
NASA Astrophysics Data System (ADS)
Lawrence, Chris C.; Febbraro, Michael; Flaska, Marek; Pozzi, Sara A.; Becchetti, F. D.
2016-08-01
Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as for other treaty-verification challenges.
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Menon, P. K. A.
1991-01-01
The design of nonlinear controllers has relied on the use of detailed aerodynamic and engine models that must be associated with the control law in the flight system implementation. Many of these controllers were applied to vehicle flight path control problems and have attempted to combine both inner- and outer-loop control functions in a single controller. An approach to the nonlinear trajectory control problem is presented. This approach uses linearizing transformations with measurement feedback to eliminate the need for detailed aircraft models in outer-loop control applications. By applying this approach and separating the inner-loop and outer-loop functions, two things were achieved: (1) the need for incorporating detailed aerodynamic models in the controller is obviated; and (2) the controller is more easily incorporated into existing aircraft flight control systems. An implementation of the controller is discussed, and this controller is tested on a six degree-of-freedom F-15 simulation and in flight on an F-15 aircraft. Simulation data are presented which validate this approach over a large portion of the F-15 flight envelope. Proof of this concept is provided by flight-test data, which are also presented and closely match the simulation results.
A Bayesian approach to Fourier Synthesis inverse problem with application in SAR imaging
NASA Astrophysics Data System (ADS)
Zhu, Sha; Mohammad-Djafari, Ali
2011-03-01
In this paper we propose a Bayesian approach to the ill-posed inverse problem of Fourier synthesis (FS), which consists in reconstructing a function from partial knowledge of its Fourier transform (FT), with application in SAR (Synthetic Aperture Radar) imaging. The function to be estimated represents an image of the observed scene. Considering that this observed scene is mainly composed of point sources, we propose to use a Generalized Gaussian (GG) prior model, and then the maximum a posteriori (MAP) estimator as the desired solution. In particular, we are interested in the bi-static case of spotlight-mode SAR data. In a first step, we consider real-valued reflectivities but account for the complex value of the measured data. The relation between the Fourier transform of the measured data and the unknown scene reflectivity is modeled by a 2D spatial FT. The inverse problem then becomes one of FS, and depending on the geometry of the data acquisition, only the set of sampled locations in Fourier space differs. We give a detailed model of the simulated data acquisition process, then apply the proposed method to those synthetic data and measure its performance against some other classical methods. Finally, we demonstrate the performance of the method on experimental SAR data obtained in a collaborative work with ONERA.
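A toy version of the Fourier-synthesis MAP idea can be sketched in 1-D, under stated assumptions: a sparse real-valued scene of point reflectors, a random set of observed frequencies, and iterative soft-thresholding standing in for the authors' optimizer (a GG prior with p = 1 reduces to a Laplacian, i.e. sparsity, prior):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
# Hypothetical sparse "scene" of point reflectors (real-valued, as in the
# authors' first step); positions and amplitudes are arbitrary.
x_true = np.zeros(n)
x_true[rng.choice(n, 4, replace=False)] = rng.uniform(1.0, 2.0, 4)

# Partial knowledge of the FT: keep a random half of the frequencies
mask = rng.random(n) < 0.5
mask[0] = True
y = np.fft.fft(x_true)[mask]

def grad(x):
    """Gradient of the data-fit term (1/2n)||F_mask x - y||^2."""
    r = np.zeros(n, dtype=complex)
    r[mask] = np.fft.fft(x)[mask] - y
    return np.fft.ifft(r).real

# MAP with a GG prior at p = 1 -> iterative soft-thresholding (ISTA)
x = np.zeros(n)
step, lam = 0.9, 0.01
for _ in range(500):
    x = x - step * grad(x)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(err)
```

The acquisition geometry enters only through `mask`, mirroring the paper's observation that different geometries change only the set of sampled Fourier locations.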
What is the problem in clinical application of sentinel node concept to gastric cancer surgery?
Miyashiro, Isao
2012-03-01
More than ten years have passed since the sentinel node (SN) concept for gastric cancer surgery was first discussed. Less invasive modified surgical approaches based on the SN concept have already been put into practice for malignant melanoma and breast cancer; however, the SN concept has yet to attain a standard position in gastric cancer surgery, even after two multi-institutional prospective clinical trials, the Japan Clinical Oncology Group trial (JCOG0302) and the Japanese Society for Sentinel Node Navigation Surgery (SNNS) trial. What is the problem in the clinical application of the SN concept to gastric cancer surgery? There is no doubt that we need reliable indicator(s) to determine with certainty the absence of metastasis in the lymph nodes in order to avoid unnecessary lymphadenectomy. There are several matters of debate in performing the actual procedure, such as the type of tracer, the site of injection, how to detect and harvest SNs, how to detect metastases in SNs, and the learning period. These issues have to be addressed further to establish the most suitable procedure. Novel technologies such as indocyanine green (ICG) fluorescence imaging and one-step nucleic acid amplification (OSNA) may overcome the current difficulties. Once we know what the problems are and how to tackle them, we can pursue the goal.
Design of fiber optic communication systems for IVHS applications: problems and recommendations
NASA Astrophysics Data System (ADS)
Arya, Vivek; Hobeika, Antoine G.; de Vries, Marten J.; Claus, Richard O.
1995-01-01
The objective of this paper is to highlight recommendations made by fifteen experts from industry and State Departments of Transportation (DOT) regarding the design, implementation, operations, and maintenance of fiber optic communication links presently being used in their transportation management systems. This paper also brings forth the problems faced during the deployment of these systems. The procedure followed for this research was to review the specifications and design guidelines for various Federal Highway Administration (FHWA) projects which have implemented, or are in the process of implementing, fiber optic communication links for their traffic management systems. DOT officials and industry design consultants who were directly involved in the implementation of the FHWA projects were then interviewed on issues concerning system design, operations and management, bidding, and other institutional aspects. The result of these interviews is a set of recommendations ranging from increased use of the latest standards to suggestions for more efficient planning of the traffic management center. These problems and recommendations are presented in this paper. This paper thus offers valuable guidelines for the design and implementation of fiber optic communication systems for future IVHS and transportation management applications.
An extended theory of thin airfoils and its application to the biplane problem
NASA Technical Reports Server (NTRS)
Millikan, Clark B
1931-01-01
The report presents a new treatment, due essentially to von Karman, of the problem of the thin airfoil. The standard formulae for the angle of zero lift and zero moment are first developed and the analysis is then extended to give the effect of disturbing or interference velocities, corresponding to an arbitrary potential flow, which are superimposed on a normal rectilinear flow over the airfoil. An approximate method is presented for obtaining the velocities induced by a 2-dimensional airfoil at a point some distance away. In certain cases this method has considerable advantage over the simple "lifting line" procedure usually adopted. The interference effects for a 2-dimensional biplane are considered in the light of the previous analysis. The results of the earlier sections are then applied to the general problem of the interference effects for a 3-dimensional biplane, and formulae and charts are given which permit the characteristics of the individual wings of an arbitrary biplane without sweepback or dihedral to be calculated. In the final section the conclusions drawn from the application of the theory to a considerable number of special cases are discussed, and curves are given illustrating certain of these conclusions and serving as examples to indicate the nature of the agreement between the theory and experiment.
Applicability extent of 2-D heat equation for numerical analysis of a multiphysics problem
NASA Astrophysics Data System (ADS)
Khawaja, H.
2017-01-01
This work focuses on thermal problems solvable using the heat equation. The fundamental question answered here is: what are the limits of the dimensions that allow a 3-D thermal problem to be accurately modelled using a 2-D heat equation? The presented work solves the 2-D and 3-D heat equations using the Finite Difference Method in its Forward-Time Central-Space (FTCS) form, in MATLAB®. For this study, a cuboidal domain with a square cross-section is assumed. The boundary conditions are set such that there is a constant temperature at its center and outside its boundaries. The 2-D and 3-D heat equations are solved in time until a steady-state temperature profile develops. The method is tested for stability using the Courant-Friedrichs-Lewy (CFL) criterion. The results are compared by varying the thickness of the 3-D domain. The maximum error is calculated, and recommendations are given on the applicability of the 2-D heat equation.
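The 2-D FTCS update and its stability limit can be sketched as follows; the grid size, temperatures, and Dirichlet layout are illustrative rather than the paper's, and Python stands in for the MATLAB® implementation:

```python
import numpy as np

def ftcs_2d(T, alpha, dx, dt, n_steps, hold):
    """Advance T by n_steps of Forward-Time Central-Space; `hold` is a boolean
    mask of nodes kept at fixed temperature (centre node and outer boundary)."""
    r = alpha * dt / dx**2
    # CFL-style stability limit for the explicit 2-D scheme
    assert r <= 0.25, "FTCS is unstable in 2-D unless alpha*dt/dx^2 <= 1/4"
    T = T.copy()
    fixed = T[hold]
    for _ in range(n_steps):
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0)
               + np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T)
        T = T + r * lap
        T[hold] = fixed          # re-impose the Dirichlet conditions
    return T

n = 41
T = np.zeros((n, n))
hold = np.zeros((n, n), dtype=bool)
hold[0, :] = hold[-1, :] = hold[:, 0] = hold[:, -1] = True   # boundary at 0
hold[n // 2, n // 2] = True
T[n // 2, n // 2] = 100.0        # constant temperature at the centre
T = ftcs_2d(T, alpha=1.0, dx=1.0, dt=0.25, n_steps=5000, hold=hold)
print(T.max(), T.min())
```

Marching until the field stops changing reproduces the steady-state profile the paper compares across 3-D thicknesses.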
Lawrence, Chris C.; Flaska, Marek; Pozzi, Sara A.; Febbraro, Michael; Becchetti, F. D.
2016-08-14
Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as for other treaty-verification challenges.
Parallel satellite orbital situational problems solver for space missions design and control
NASA Astrophysics Data System (ADS)
Atanassov, Atanas Marinov
2016-11-01
Solving different scientific problems for space applications demands carrying out observations, measurements or active experiments during time intervals in which specific geometric and physical conditions are fulfilled. Solving the situational problems that determine the time intervals when the satellite instruments work optimally is a very important part of all activities at every stage of the preparation and realization of space missions. The elaboration of a universal, flexible and robust approach to situation analysis, easily portable to new satellite missions, is significant for reducing missions' preparation times and costs. Every situational problem can be based on one or more situational conditions. Simultaneously solving different kinds of situational problems, based on different numbers and types of situational conditions, each satisfied on different segments of the satellite orbit, requires irregular calculations. Three formal approaches are presented. The first relates to the description of situational problems, which allows flexibility in assembling situational problems and representing them in computer memory. The second concerns a situational-problem solver organized as a processor that executes specific code for every particular situational condition. The third relates to parallelizing the solver using threads and dynamic scheduling based on a "pool of threads" abstraction, which ensures a good load balance. The developed situational-problem solver is intended for incorporation into multi-physics, multi-satellite space mission design and simulation tools.
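The "pool of threads" idea can be illustrated with a minimal sketch. The two situational conditions below are hypothetical geometric tests invented for the example, not the mission conditions of the paper:

```python
import math
from concurrent.futures import ThreadPoolExecutor

times = [i * 60.0 for i in range(1440)]   # one day in 1-minute steps (notional)

def sunlit(t):
    # Hypothetical geometric condition: a 90-minute day/night cycle
    return math.sin(2 * math.pi * t / 5400.0) > 0.0

def over_station(t):
    # Hypothetical ground-station visibility window near the start of the day
    return math.cos(2 * math.pi * t / 86400.0) > 0.9

def solve(problem):
    """A situational problem = (name, conditions); returns index spans of
    `times` on which ALL of its situational conditions hold simultaneously."""
    name, conds = problem
    ok = [all(c(t) for c in conds) for t in times]
    spans, start = [], None
    for i, flag in enumerate(ok + [False]):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            spans.append((start, i - 1))
            start = None
    return name, spans

problems = [("eclipse-free", [sunlit]),
            ("downlink", [sunlit, over_station])]

# "Pool of threads" with dynamic scheduling: each problem is picked up by a
# free worker, balancing the irregular cost of different problems.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(solve, problems))
print(results["downlink"])
```

Each problem is assembled from a list of conditions, echoing the paper's first formal approach, while the executor plays the role of the dynamic scheduler.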
NASA Astrophysics Data System (ADS)
Ebrahimnejad, Ali
2015-08-01
There are several methods in the literature for solving fuzzy variable linear programming problems (fuzzy linear programming in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out, and to overcome them a new method based on the bounded dual simplex method is proposed to determine the fuzzy optimal solution of those fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. Also, an application of this algorithm to solving bounded transportation problems with fuzzy supplies and demands is presented. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.
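The crisp core of such a bounded transportation problem can be sketched as an ordinary bounded-variable LP. The costs, supplies, demands, and route bounds below are invented for illustration; a fuzzy version in the article's spirit would solve LPs of this form on the bounds of the trapezoidal numbers:

```python
import numpy as np
from scipy.optimize import linprog

# Invented 2-source, 3-sink instance; each route carries between 0 and 35 units
cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0]])
supply = [50.0, 60.0]
demand = [30.0, 40.0, 40.0]

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                       # each source ships exactly its supply
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row)
    b_eq.append(supply[i])
for j in range(n):                       # each sink receives exactly its demand
    row = np.zeros(m * n)
    row[j::n] = 1.0
    A_eq.append(row)
    b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 35)] * (m * n))
print(res.status, res.fun)
```

The variable bounds are handled directly by the LP solver, which is the role the bounded dual simplex method plays in the article's fuzzy setting.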
On the spectra of certain integro-differential-delay problems with applications in neurodynamics
NASA Astrophysics Data System (ADS)
Grindrod, P.; Pinotsis, D. A.
2011-01-01
We investigate the spectrum of certain integro-differential-delay equations (IDDEs) which arise naturally within spatially distributed, nonlocal, pattern formation problems. Our approach is based on the reformulation of the relevant dispersion relations with the use of the Lambert function. As a particular application of this approach, we consider the case of the Amari delay neural field equation, which describes the local activity of a population of neurons taking into consideration the finite propagation speed of the electric signal. We show that if the kernel appearing in this equation is symmetric around some point a≠0 or consists of a sum of such terms, then the relevant dispersion relation yields spectra with an infinite number of branches, as opposed to the finite sets of eigenvalues considered in previous works. Also, in earlier works the focus has been on the most rightward part of the spectrum and the possibility of instability-driven pattern formation. Here, we numerically survey the structure of the entire spectra and argue that a detailed knowledge of this structure is important within neurodynamical applications. Indeed, the Amari IDDE acts as a filter with the ability to recognise and respond whenever it is excited in such a way as to resonate with one of its rightward modes, thereby amplifying such inputs and dampening others. Finally, we discuss how these results can be generalised to the case of systems of IDDEs.
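The Lambert-function reformulation can be checked directly on the simplest delay equation x'(t) = a x(t) + b x(t - tau), whose eigenvalues solve lam = a + b*exp(-lam*tau). The coefficients below are arbitrary; each branch k of W yields one branch of the infinite spectrum, mirroring the infinitely many branches discussed for the IDDE dispersion relations:

```python
import numpy as np
from scipy.special import lambertw

a, b, tau = -1.0, 0.5, 2.0   # arbitrary illustrative coefficients

def eigenvalue(k):
    # Setting mu = (lam - a)*tau turns lam = a + b*exp(-lam*tau) into
    # mu*exp(mu) = b*tau*exp(-a*tau), solved by branch k of the Lambert W.
    return a + lambertw(b * tau * np.exp(-a * tau), k=k) / tau

branches = [eigenvalue(k) for k in range(-3, 4)]
# every branch satisfies the original characteristic equation
resid = [abs(lam - (a + b * np.exp(-lam * tau))) for lam in branches]
print(max(resid))
```

The k = 0 branch carries the rightmost eigenvalue, which is why earlier works could focus on it alone; surveying other k values exposes the rest of the spectrum.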
Localized suffix array and its application to genome mapping problems for paired-end short reads.
Kimura, Kouichi; Koike, Asako
2009-10-01
We introduce a new data structure, a localized suffix array, based on which occurrence information is dynamically represented as the combination of global positional information and local lexicographic order information in text search applications. For the search of a pair of words within a given distance, many candidate positions that share a coarse-grained global position can be compactly represented in terms of local lexicographic orders as in the conventional suffix array, and they can be simultaneously examined for violation of the distance constraint at the coarse-grained resolution. The trade-off between positional and lexicographical information is progressively shifted towards finer positional resolution, and the distance constraint is reexamined accordingly. Thus the paired search can be performed efficiently even if there are a large number of occurrences for each word. The localized suffix array itself is in fact a reordering of the bits inside the conventional suffix array, and their memory requirements are essentially the same. We demonstrate an application to genome mapping problems for paired-end short reads generated by new-generation DNA sequencers. When paired reads are highly repetitive, it is time-consuming to naïvely calculate, sort, and compare all of the coordinates. For human genome re-sequencing data with 36-base-pair reads, speedups of more than 10 times over the naïve method were observed in almost half of the cases where the sums of redundancies (numbers of individual occurrences) of paired reads were greater than 2,000.
Fujisaki, Keisuke; Ikeda, Tomoyuki
2013-01-01
To connect different scale models in the multi-scale problem of microwave use, equivalent material constants were researched numerically by a three-dimensional electromagnetic field, taking into account eddy current and displacement current. A volume averaged method and a standing wave method were used to introduce the equivalent material constants; water particles and aluminum particles are used as composite materials. Consumed electrical power is used for the evaluation. Water particles have the same equivalent material constants for both methods; the same electrical power is obtained for both the precise model (micro-model) and the homogeneous model (macro-model). However, aluminum particles have dissimilar equivalent material constants for both methods; different electric power is obtained for both models. The varying electromagnetic phenomena are derived from the expression of eddy current. For small electrical conductivity such as water, the macro-current which flows in the macro-model and the micro-current which flows in the micro-model express the same electromagnetic phenomena. However, for large electrical conductivity such as aluminum, the macro-current and micro-current express different electromagnetic phenomena. The eddy current which is observed in the micro-model is not expressed by the macro-model. Therefore, the equivalent material constant derived from the volume averaged method and the standing wave method is applicable to water with a small electrical conductivity, although not applicable to aluminum with a large electrical conductivity. PMID:28788395
Trajectory evolution in the multi-body problem with applications in the Saturnian system
NASA Astrophysics Data System (ADS)
Craig Davis, Diane; Howell, Kathleen C.
2011-12-01
Recent discoveries by the Cassini spacecraft have generated interest in future missions to further explore the moons of Saturn as well as other small bodies in the solar system. Incorporating multi-body dynamics into the preliminary design can aid the design process and potentially reduce the cost of maneuvers that are required to achieve certain objectives. The focus in this investigation is the development and application of additional design tools to facilitate preliminary trajectory design in a multi-body environment where the gravitational influence of both primaries is quite significant. Within the context of the circular restricted 3-body problem, then, the evolution of trajectories in the vicinity of the smaller primary (P2) that are strongly influenced by the distant larger primary (P1) is investigated. By parameterizing the orbits in terms of radius and periapse orientation relative to the P1-P2 line, the short- and long-term behaviors of the trajectories are predictable. Initial conditions that yield a trajectory with a particular set of desired characteristics are easily selected from periapsis Poincaré maps for both short- and long-term orbits. Analysis in the Sun-Saturn and Saturn-Titan systems serves as the basis for examples of mission design applications.
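A minimal planar integration of the circular restricted 3-body problem in the rotating frame, the model behind such periapsis Poincaré maps, can be sketched as follows; the mass parameter and initial periapsis state are illustrative, not taken from the thesis, and conservation of the Jacobi constant serves as the sanity check:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 2.366e-4   # illustrative mass parameter, of the order of Saturn-Titan

def crtbp(t, s):
    """Planar CR3BP equations of motion in the rotating frame."""
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)        # distance to the larger primary P1
    r2 = np.hypot(x - 1 + mu, y)    # distance to the smaller primary P2
    ax = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = y - 2 * vx - (1 - mu) * y / r1**3 - mu * y / r2**3
    return [vx, vy, ax, ay]

def jacobi(s):
    """Jacobi constant, the integral of motion of the CR3BP."""
    x, y, vx, vy = s
    r1, r2 = np.hypot(x + mu, y), np.hypot(x - 1 + mu, y)
    return x**2 + y**2 + 2 * (1 - mu) / r1 + 2 * mu / r2 - vx**2 - vy**2

# Illustrative periapsis state near P2, on the P1-P2 line
s0 = [1.02, 0.0, 0.0, 0.11]
sol = solve_ivp(crtbp, (0.0, 5.0), s0, rtol=1e-10, atol=1e-12)
drift = abs(jacobi(sol.y[:, -1]) - jacobi(np.array(s0)))
print(drift)
```

Recording radius and orientation at each periapsis passage of such trajectories is what builds up the periapsis Poincaré maps used for initial-condition selection.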
An Application of Context- and Problem-Based Learning (C-PBL) into Teaching Thermodynamics
NASA Astrophysics Data System (ADS)
Baran, Mukadder; Sozbilir, Mustafa
2017-05-01
This study aims to investigate the applicability of context- and problem-based learning (C-PBL) to teaching thermodynamics and to examine its influence on the students' achievement in chemistry, retention of knowledge, and attitudes, motivation and interest towards chemistry. The embedded mixed method design was utilized with a group of 13 chemistry students in a 2-year program of "Medical Laboratory and Techniques" at a state university in an underdeveloped city in the southeastern region of Turkey. The research data were collected via questionnaires regarding the students' attitudes, motivation and interest in chemistry, an achievement test on thermodynamics, and interviews used to assess the applicability of C-PBL to thermodynamics. The findings demonstrated that C-PBL led to a statistically significant increase in the students' achievement in thermodynamics and their interest in chemistry, while no statistically significant difference was observed in the students' attitudes and motivation towards chemistry before and after the intervention. The interviews revealed that C-PBL developed not only the students' communication skills but also their skills in using time effectively, making presentations, reporting research results and using technology. It was also found to increase their self-confidence together with positive attitudes towards C-PBL and the ability to associate chemistry with daily life. In light of these findings, it could be stated that it would be beneficial to increase the use of C-PBL in teaching chemistry.
NASA Astrophysics Data System (ADS)
Schlosser, Peter; Smethie, William M.; Toggweiler, J. Robert
1998-07-01
On October 16-20, 1995, a Maurice Ewing Symposium on Applications of Trace Substance Measurements to Oceanographic Problems was held at Biosphere 2 in Oracle, Arizona. The objectives of this symposium were (1) to review the status of tracer methodology for oceanographic research (technological advances and progress in applications), (2) to evaluate the potential of the individual tracers for regional and global studies of water mass formation and circulation in the ocean, and (3) to outline the role of tracers in calibration and improvement of global circulation models. Trace substances of natural and anthropogenic origin have been used to study circulation and mixing in the ocean for roughly the past 4 decades. In such studies the penetration and subsequent spreading of anthropogenic trace substances released to the ocean are observed and evaluated in terms of flow paths and mean transit and residence times of specific water masses. These studies are basically regional or global dye experiments. Additionally, the radioactive character of several natural tracers is used to determine the mean residence times of deep waters.
NASA Astrophysics Data System (ADS)
Andersen, Anders H.; Rayens, William S.; Li, Ren-Cang; Blonder, Lee X.
2000-10-01
In this paper we describe the enormous potential that multilinear models hold for the analysis of data from neuroimaging experiments that rely on functional magnetic resonance imaging (MRI) or other imaging modalities. A case is made for why one might fully expect that the successful introduction of these models to the neuroscience community could define the next generation of structure-seeking paradigms in the area. In spite of the potential for immediate application, there is much to do from the perspective of statistical science. That is, although multilinear models have already been particularly successful in chemistry and psychology, relatively little is known about their statistical properties. To that end, our research group at the University of Kentucky has made significant progress. In particular, we are in the process of developing formal influence measures for multilinear methods as well as associated classification models and effective implementations. We believe that these problems will be among the most important and useful to the scientific community. Details are presented herein and an application is given in the context of facial emotion processing experiments.
Multi-fluid problems in magnetohydrodynamics with applications to astrophysical processes
NASA Astrophysics Data System (ADS)
Greenfield, Eric John
2016-01-01
I begin this study by presenting an overview of the theory of magnetohydrodynamics and the necessary conditions to justify the fluid treatment of a plasma. Upon establishing the fluid description of a plasma, we move on to a discussion of magnetohydrodynamics in both the ideal and Hall regimes. This framework is then extended to include multiple plasmas in order to consider two problems of interest in the field of theoretical space physics. The first is a study of the evolution of a partially ionized plasma, a topic with many applications in space physics. A multi-fluid approach is necessary in this case to account for the motions of an ion fluid, electron fluid and neutral atom fluid, all of which are coupled to one another by collisions and/or electromagnetic forces. The results of this study have direct application to an open question concerning the cascade of Kolmogorov-like turbulence in the interstellar plasma, which we discuss below. The second application of multi-fluid magnetohydrodynamics that we consider in this thesis concerns the amplification of magnetic field upstream of a collisionless, parallel shock. The relevant fluids here are the ions and electrons comprising the interstellar plasma and the galactic cosmic-ray ions. Previous works predict that the streaming of cosmic rays leads to an instability resulting in significant amplification of the interstellar magnetic field at supernova blastwaves. This prediction is routinely invoked to explain the acceleration of galactic cosmic rays up to energies of 10^15 eV. I will examine this phenomenon in detail using the multi-fluid framework outlined below. The purpose of this work is first to confirm the existence of the instability using a purely fluid approach with no additional approximations and, if it is confirmed, to determine the necessary conditions for it to operate.
Ultrasonic focusing through inhomogeneous media by application of the inverse scattering problem
Haddadin, Osama S.; Ebbini, Emad S.
2010-01-01
A new approach is introduced for self-focusing phased arrays through inhomogeneous media for therapeutic and imaging applications. This algorithm utilizes solutions to the inverse scattering problem to estimate the impulse response (Green's function) of the desired focal point(s) at the elements of the array. This approach is a two-stage procedure: in the first stage, the Green's function is estimated from measurements of the scattered field taken outside the region of interest. In the second stage, these estimates are used in the pseudoinverse method to compute excitation weights satisfying a predefined set of constraints on the structure of the field at the focus points. These scalar, complex-valued excitation weights are used to modulate the incident field for retransmission. The pseudoinverse pattern synthesis method requires knowing the Green's function between the focus points and the array, which is difficult to attain for an unknown inhomogeneous medium. However, the solution to the inverse scattering problem, the scattering function, can be used directly to compute the required inhomogeneous Green's function. This inverse scattering based self-focusing is noninvasive and does not require a strong point scatterer at or near the desired focus point; it simply requires measurements of the scattered field outside the region of interest. It can be used for high-resolution imaging and enhanced therapeutic effects through inhomogeneous media without making any assumptions on the shape, size, or location of the inhomogeneity. This technique is outlined and numerical simulations are shown which validate it for single and multiple focusing using a circular array. PMID:9670525
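The second-stage synthesis described in this abstract reduces, in its simplest form, to a pseudoinverse solve. A hedged sketch under the assumption that stage one has already produced the Green's functions between array elements and focus points; all values below are synthetic, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_elements, n_foci = 16, 2

# G[m, n]: (synthetic) complex Green's function from array element n
# to focus point m, standing in for the stage-one estimates.
G = rng.standard_normal((n_foci, n_elements)) \
    + 1j * rng.standard_normal((n_foci, n_elements))

# Desired complex field values at the focus points.
f = np.array([1.0 + 0j, 1.0 + 0j])

# Minimum-norm excitation weights satisfying the focal constraints G @ w = f.
w = np.linalg.pinv(G) @ f
print(np.allclose(G @ w, f))
```

With more constraints than the array can satisfy exactly, the same pseudoinverse instead yields the least-squares compromise.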
An analytic solution of the stochastic storage problem applicable to soil water
Milly, P.C.D.
1993-01-01
The accumulation of soil water during rainfall events and the subsequent depletion of soil water by evaporation between storms can be described, to first order, by simple accounting models. When the alternating supplies (precipitation) and demands (potential evaporation) are viewed as random variables, it follows that soil-water storage, evaporation, and runoff are also random variables. If the forcing (supply and demand) processes are stationary for a sufficiently long period of time, an asymptotic regime should eventually be reached where the probability distribution functions of storage, evaporation, and runoff are stationary and uniquely determined by the distribution functions of the forcing. Under the assumptions that the potential evaporation rate is constant, storm arrivals are Poisson-distributed, rainfall is instantaneous, and storm depth follows an exponential distribution, it is possible to derive the asymptotic distributions of storage, evaporation, and runoff analytically for a simple balance model. A particular result is that the fraction of rainfall converted to runoff is given by (1 - R^(-1)) / (e^(α(1 - R^(-1))) - R^(-1)), in which R is the ratio of mean potential evaporation to mean rainfall and α is the ratio of soil water-holding capacity to mean storm depth. The problem considered here is analogous to the well-known problem of storage in a reservoir behind a dam, for which the present work offers a new solution for reservoirs of finite capacity. A simple application of the results of this analysis suggests that random, intraseasonal fluctuations of precipitation cannot by themselves explain the observed dependence of the annual water balance on annual totals of precipitation and potential evaporation.
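The runoff-fraction result quoted in this abstract can be evaluated directly. A minimal sketch; the parameter values are hypothetical, chosen only to illustrate the behavior:

```python
import math

# Fraction of rainfall converted to runoff, from the analytic result above:
#   f = (1 - 1/R) / (exp(alpha * (1 - 1/R)) - 1/R)
# R     : mean potential evaporation / mean rainfall (R != 1 assumed)
# alpha : soil water-holding capacity / mean storm depth
def runoff_fraction(R, alpha):
    r_inv = 1.0 / R
    return (1.0 - r_inv) / (math.exp(alpha * (1.0 - r_inv)) - r_inv)

# A larger storage capacity relative to storm depth (larger alpha)
# converts less rainfall to runoff.
print(runoff_fraction(2.0, 5.0))
print(runoff_fraction(2.0, 10.0))
```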
Applications of a finite-volume algorithm for incompressible MHD problems
NASA Astrophysics Data System (ADS)
Vantieghem, S.; Sheyko, A.; Jackson, A.
2016-02-01
We present the theory, algorithms and implementation of a parallel finite-volume algorithm for the solution of the incompressible magnetohydrodynamic (MHD) equations using unstructured grids that are applicable for a wide variety of geometries. Our method implements a mixed Adams-Bashforth/Crank-Nicolson scheme for the nonlinear terms in the MHD equations and we prove that it is stable independent of the time step. To ensure that the solenoidal condition is met for the magnetic field, we use a method whereby a pseudo-pressure is introduced into the induction equation; since we are concerned with incompressible flows, the resulting Poisson equation for the pseudo-pressure is solved alongside the equivalent Poisson problem for the velocity field. We validate our code in a variety of geometries including periodic boxes, spheres, spherical shells, spheroids and ellipsoids; for the finite geometries we implement the so-called ferromagnetic or pseudo-vacuum boundary conditions appropriate for a surrounding medium with infinite magnetic permeability. This implies that the magnetic field must be purely perpendicular to the boundary. We present a number of comparisons against previous results and against analytical solutions, which verify the code's accuracy. This documents the code's reliability as a prelude to its use in more difficult problems. We finally present a new simple drifting solution for thermal convection in a spherical shell that successfully sustains a magnetic field of simple geometry. By dint of its rapid stabilization from the given initial conditions, we deem it suitable as a benchmark against which other self-consistent dynamo codes can be tested.
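The mixed Adams-Bashforth/Crank-Nicolson stepping mentioned in this abstract can be illustrated on a scalar model equation rather than the full MHD system. A sketch, with a hypothetical cubic damping as the nonlinear term advanced by second-order Adams-Bashforth and the linear term treated by Crank-Nicolson:

```python
L = -1.0                  # linear coefficient, treated implicitly (CN)

def N(u):
    return -u**3          # nonlinear term, treated explicitly (AB2)

dt, u = 0.01, 1.0
n_prev = N(u)             # bootstrap AB2 with a first-order step
for _ in range(1000):
    n_curr = N(u)
    rhs = u + dt * (1.5 * n_curr - 0.5 * n_prev) + 0.5 * dt * L * u
    u = rhs / (1.0 - 0.5 * dt * L)   # solve the implicit CN part
    n_prev = n_curr

print(u)   # the dissipative model problem decays toward zero
```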
The physical and mathematical aspects of inverse problems in radiation detection and applications.
Hussein, Esam M A
2012-07-01
The inverse problem is the problem of converting detectable measurements into useful quantifiable indications. It is the problem of spectrum unfolding, image reconstruction, identifying a threat material, or devising a radiotherapy plan. The solution of an inverse problem requires a forward model that relates the quantities of interest to measurements. This paper explores the physical issues associated with formulating a radiation-transport forward model best suited for inversion, and the mathematical challenges associated with the solution of the corresponding inverse problem.
NASA Astrophysics Data System (ADS)
2014-11-01
Editors: M. S. Tagirov, V. V. Semashko, A. S. Nizamutdinov. Kazan is the motherland of Electron Paramagnetic Resonance (EPR), which was discovered in Kazan State University in 1944 by Prof. E. K. Zavojskii. Since the Young Scientist School of Magnetic Resonance run by Professor G. V. Skrotskii from MIPT stopped its work, Kazan took up the activity under the initiative of Academician A. S. Borovik-Romanov. Nowadays this school has been rejuvenated, and the International Youth Scientific School "Actual problems of the magnetic resonance and its application" is developing. Traditionally the main subjects of the School meetings are magnetic resonance in solids, chemistry, geology, biology and medicine. The unchallenged organizers of the School are Kazan Federal University and the Kazan E. K. Zavoisky Physical-Technical Institute. The rector of the School is Professor Murat Tagirov; the vice-rector is Professor Valentine Zhikharev. Since 1997 more than 100 famous scientists from Germany, France, Switzerland, the USA, Japan, Russia, Ukraine, Moldavia and Georgia have given plenary lectures. Almost 700 young scientists have had an opportunity to participate in discussions of the latest scientific developments, to make their oral reports and to improve their knowledge and skills. To enhance competition among the young scientists, report sessions take place every year and the Program Committee members name the best reports, whose authors are invited to prepare full-scale scientific papers. Since 2013 the International Youth Scientific School "Actual problems of the magnetic resonance and its application", following the tendency for comprehensive studies of matter properties and its interaction with electromagnetic fields, has expanded its field of interest and opened a new section: Coherent Optics and Optical Spectroscopy. Many young people have submitted interesting reports on photonics, quantum electronics, laser physics, quantum optics, traditional optical and laser spectroscopy, non
Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting
NASA Astrophysics Data System (ADS)
Donovan, J.; Jordan, T. H.
2011-12-01
Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock
NASA Technical Reports Server (NTRS)
Arya, V. K.; Kaufman, A.
1987-01-01
A description of the finite element implementation of Robinson's unified viscoplastic model into the General Purpose Finite Element Program (MARC) is presented. To demonstrate its application, the implementation is applied to some uniaxial and multiaxial problems. A comparison of the results for the multiaxial problem of a thick internally pressurized cylinder, obtained using the finite element implementation and an analytical solution, is also presented. The excellent agreement obtained confirms the correct finite element implementation of Robinson's model.
Gable, C.; Travis, B.J.; O'Connell, R.J.; Stone, H.A.
1995-06-01
Flow in the mantle of terrestrial planets produces stresses and topography on the planet's surface which may allow us to infer the dynamics and evolution of the planet's interior. This project is directed towards understanding the relationship between dynamical processes related to buoyancy-driven flow and the observable expression (e.g. earthquakes, surface topography) of the flow. Problems considered include the ascent of mantle plumes and their interaction with compositional discontinuities, the deformation of subducted slabs, and effects of lateral viscosity variations on post-glacial rebound. We find that plumes rising from the lower mantle into a lower-viscosity upper mantle become extended vertically. As the plume spreads beneath the planet's surface, the dynamic topography changes from a bell shape to a plateau shape. The topography and surface stresses associated with surface features called arachnoids, novae and coronae on Venus are consistent with the surface expression of a rising and spreading buoyant volume of fluid. Short-wavelength viscosity variations, or sharp variations of lithosphere thickness, have a large effect on surface stresses. This study also considers the interaction and deformation of buoyancy-driven drops and bubbles in low-Reynolds-number multiphase systems. Applications include bubbles in magmas, the coalescence of liquid iron drops during core formation, and a wide range of industrial applications. Our methodology involves a combination of numerical boundary integral calculations, experiments and analytical work. For example, we find that for deformable drops the effects of deformation result in the vertical alignment of initially horizontally offset drops, thus enhancing the rate of coalescence.
On some generalization of the area theorem with applications to the problem of rolling balls
NASA Astrophysics Data System (ADS)
Chaplygin, Sergey A.
2012-04-01
This publication contributes to the series of RCD translations of Sergey Alexeevich Chaplygin's scientific heritage. Earlier we published three of his papers on non-holonomic dynamics (vol. 7, no. 2; vol. 13, no. 4) and two papers on hydrodynamics (vol. 12, nos. 1, 2). The present paper deals with mechanical systems that consist of several spheres and discusses generalized conditions for the existence of integrals of motion (linear in velocities) in such systems. First published in 1897 and awarded by the Gold Medal of Russian Academy of Sciences, this work has not lost its scientific significance and relevance. (In particular, its principal ideas are further developed and extended in the recent article "Two Non-holonomic Integrable Problems Tracing Back to Chaplygin", published in this issue, see p. 191). Note that non-holonomic models for rolling motion of spherical shells, including the case where the shells contain intricate mechanisms inside, are currently of particular interest in the context of their application in the design of ball-shaped mobile robots. We hope that this classical work will be estimated at its true worth by the English-speaking world.
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik
1996-01-01
For a space mission to be successful it is vitally important to have a good control strategy. For example, with the Space Shuttle it is necessary to guarantee the success and smoothness of docking, the smoothness and fuel efficiency of trajectory control, etc. For an automated planetary mission it is important to control the spacecraft's trajectory, and after that, to control the planetary rover so that it would be operable for the longest possible period of time. In many complicated control situations, traditional methods of control theory are difficult or even impossible to apply. In general, in uncertain situations, where no routine methods are directly applicable, we must rely on the creativity and skill of the human operators. In order to simulate these experts, an intelligent control methodology must be developed. The research objectives of this project were: to analyze existing control techniques; to find out which of these techniques is the best with respect to the basic optimality criteria (stability, smoothness, robustness); and, if for some problems, none of the existing techniques is satisfactory, to design new, better intelligent control techniques.
NASA Astrophysics Data System (ADS)
Giancotti, Marco; Campagnola, Stefano; Tsuda, Yuichi; Kawaguchi, Jun'ichiro
2014-11-01
This work studies periodic solutions applicable, as an extended phase, to the JAXA asteroid rendezvous mission Hayabusa 2 when it is close to target asteroid 1999 JU3. The motion of a spacecraft close to a small asteroid can be approximated with the equations of Hill's problem modified to account for the strong solar radiation pressure. The identification of families of periodic solutions in such systems is just starting and the field is largely unexplored. We find several periodic orbits using a grid search, then apply numerical continuation and bifurcation theory to a subset of these to explore the changes in the orbit families when the orbital energy is varied. This analysis gives information on their stability and bifurcations. We then compare the various families on the basis of the restrictions and requirements of the specific mission considered, such as the pointing of the solar panels and instruments. We also use information about their resilience against parameter errors and their ground tracks to identify one particularly promising type of solution.
How we "breathed life" into problem-based learning cases using a mobile application.
McLean, Michelle; Brazil, Victoria; Johnson, Patricia
2014-10-01
Problem-based learning (PBL) has been widely adopted in medical education, but learners become bored with paper-based cases as they progress through their studies. Our aim was to breathe life into paper-based PBL cases, that is, to develop virtual patients. The "patients" in the paper-based PBL cases of one Year 2 course were transformed into virtual patients by simulated patients role-playing the cases, and the videos and associated patient data were uploaded to Bond's Virtual Hospital, a mobile application. In unsupervised "clinical teams", second-year students undertook "ward rounds" twice a week, prompted by a virtual consultant and registered nurse. Immediately following the "ward rounds", they met with a clinician facilitator to discuss their "patients". Apart from some minor technical issues, the experience was rated positively by students and clinical facilitators, who claimed that it provided students with a sense of what happens in the real world of medicine. The group-work skills students had developed during PBL stood them in good stead to self-manage their "clinical teams". This more authentic PBL experience will be extended to earlier semesters as well as later in the curriculum, as the virtual hospital can be used to expose learners to a profile of patients that may not be guaranteed during hospital rounds.
NASA Astrophysics Data System (ADS)
Tsuji, Takuya; Yokomine, Takehiko; Shimizu, Akihiko
2002-11-01
We have been engaged in the development of a multi-scale adaptive simulation technique for incompressible turbulent flow, designed so that important scale components in the flow field are detected automatically by the lifting wavelet transform and solved selectively. In conventional incompressible schemes, it is common to solve a Poisson equation for the pressure to satisfy the divergence-free constraint of incompressible flow. Solving the Poisson equation adaptively may not be impossible, but it is troublesome because it requires regeneration of control volumes at each time step. We therefore turned to the weakly compressible model proposed by Bao (2001). This model was derived from a zero-Mach-limit asymptotic analysis of the compressible Navier-Stokes equations and does not require solving the Poisson equation at all. However, it is relatively new and requires demonstration studies before being combined with wavelet-based adaptation. In the present study, 2-D and 3-D backstep flows were selected as test problems, and the model's applicability to turbulent flow is verified in detail. The combination of wavelet-based adaptation with the weakly compressible model toward adaptive turbulence simulation is also discussed.
NASA Astrophysics Data System (ADS)
Templeton, Jeremy A.; Jones, Reese E.; Wagner, Gregory J.
2010-12-01
This paper derives a methodology to enable spatial and temporal control of thermally inhomogeneous molecular dynamics (MD) simulations. The primary goal is to perform non-equilibrium MD of thermal transport analogous to continuum solutions of heat flow which have complex initial and boundary conditions, moving MD beyond quasi-equilibrium simulations using periodic boundary conditions. In our paradigm, the entire spatial domain is filled with atoms and overlaid with a finite element (FE) mesh. The representation of continuous variables on this mesh allows fixed temperature and fixed heat flux boundary conditions to be applied, non-equilibrium initial conditions to be imposed and source terms to be added to the atomistic system. In effect, the FE mesh defines a large length scale over which atomic quantities can be locally averaged to derive continuous fields. Unlike coupling methods which require a surrogate model of thermal transport like Fourier's law, in this work the FE grid is only employed for its projection, averaging and interpolation properties. Inherent in this approach is the assumption that MD observables of interest, e.g. temperature, can be mapped to a continuous representation in a non-equilibrium setting. This assumption is taken advantage of to derive a single, unified set of control forces based on Gaussian isokinetic thermostats to regulate the temperature and heat flux locally in the MD. Example problems are used to illustrate potential applications. In addition to the physical results, data relevant to understanding the numerical effects of the method on these systems are also presented.
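The local temperature control described in this abstract builds on Gaussian isokinetic thermostats; the simplest global member of that family is a velocity rescaling that pins the kinetic temperature exactly. A reduced-units sketch (kB = 1, unit masses), not the paper's FE-coupled control forces:

```python
import numpy as np

rng = np.random.default_rng(1)
n_atoms, dim = 100, 3
v = rng.standard_normal((n_atoms, dim))   # atomic velocities

def kinetic_temperature(v):
    # T = 2*KE / (dof * kB), with unit masses and kB = 1.
    return (v**2).sum() / (n_atoms * dim)

T_target = 0.5
v *= np.sqrt(T_target / kinetic_temperature(v))   # isokinetic rescaling
print(kinetic_temperature(v))
```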
Hybrid modeling of spatial continuity for application to numerical inverse problems
Friedel, Michael J.; Iwashita, Fabio
2013-01-01
A novel two-step modeling approach is presented to obtain optimal starting values and geostatistical constraints for numerical inverse problems otherwise characterized by spatially-limited field data. First, a type of unsupervised neural network, called the self-organizing map (SOM), is trained to recognize nonlinear relations among environmental variables (covariates) occurring at various scales. The values of these variables are then estimated at random locations across the model domain by iterative minimization of SOM topographic error vectors. Cross-validation is used to ensure unbiasedness and compute prediction uncertainty for select subsets of the data. Second, analytical functions are fit to experimental variograms derived from original plus resampled SOM estimates producing model variograms. Sequential Gaussian simulation is used to evaluate spatial uncertainty associated with the analytical functions and probable range for constraining variables. The hybrid modeling of spatial continuity is demonstrated using spatially-limited hydrologic measurements at different scales in Brazil: (1) physical soil properties (sand, silt, clay, hydraulic conductivity) in the 42 km2 Vargem de Caldas basin; (2) well yield and electrical conductivity of groundwater in the 132 km2 fractured crystalline aquifer; and (3) specific capacity, hydraulic head, and major ions in a 100,000 km2 transboundary fractured-basalt aquifer. These results illustrate the benefits of exploiting nonlinear relations among sparse and disparate data sets for modeling spatial continuity, but the actual application of these spatial data to improve numerical inverse modeling requires testing.
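The experimental variograms of the second step above have a standard estimator, gamma(h) = (1/2N(h)) * sum of (z_i - z_j)^2 over pairs separated by approximately lag h. A 1-D sketch on synthetic data; locations, lags and tolerance are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(100.0)                        # sample locations
z = np.cumsum(rng.standard_normal(100))     # a spatially correlated field

def experimental_variogram(x, z, lags, tol=0.5):
    d = np.abs(x[:, None] - x[None, :])       # pairwise separations
    sq = 0.5 * (z[:, None] - z[None, :])**2   # half squared differences
    return np.array([sq[(np.abs(d - h) <= tol) & (d > 0)].mean()
                     for h in lags])

gamma = experimental_variogram(x, z, np.array([1.0, 5.0, 10.0]))
print(gamma)   # tends to grow with lag for a spatially correlated field
```

Analytical model variograms (spherical, exponential, etc.) would then be fit to these empirical points before simulation.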
NASA Astrophysics Data System (ADS)
Bayley, T. W.; Ferré, T. P. A.
2014-12-01
There is growing recognition in the hydrologic community that deterministic hydrologic models are imperfect tools for decision support. Despite this insight, the state of practice for a hydrologic investigation follows this sequence: data collection, conceptual model development, numerical model development, and finally decision making based on model projections. This approach, based on relatively unconsidered design of data collection, may result in uninformative data; as a result, it is commonly repeated several times to resolve critical uncertainties. We present a novel two-step multi-model approach to optimizing data collection to aid decision making through risk analysis. Here, we describe the application of this approach (Discrimination Inference to Reduce Expected Cost Technique, DIRECT) to a contaminant transport problem. DIRECT has seven steps. First, outcomes of concern were defined explicitly. Next, a probabilistic analysis of the outcomes was conducted that incorporated multiple conceptual and parametric realizations. The likelihood of each model was assessed based on goodness of fit to existing data. A cost function was developed and used to define the projected costs based on the model-predicted outcomes of concern. Data collection was then optimized to identify the data that could test the models of greatest concern (cost) against the other models in the ensemble. Finally, a field program was conducted that included gathering lithologic, hydrologic, and chemical data from 22 new wells drilled in projected high-value locations. The additional data reduced the expected cost of model projections to an acceptable level for defining new site compliance conditions.
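The core of the expected-cost calculation in a DIRECT-style workflow can be sketched in a few lines: each model in the ensemble gets a likelihood-based weight from its misfit to existing data, and the weights average the costs of the model-predicted outcomes. All numbers below are hypothetical:

```python
import numpy as np

sse = np.array([4.0, 9.0, 25.0])     # misfit (sum of squared errors) per model
cost = np.array([1e6, 5e4, 2e3])     # cost if that model's outcome occurs

likelihood = np.exp(-0.5 * sse)      # Gaussian-style likelihood from misfit
weights = likelihood / likelihood.sum()
expected_cost = float(weights @ cost)
print(expected_cost)
```

Data collection is then targeted at measurements expected to discriminate the high-cost models from the rest of the ensemble.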
Zhu, Jian; Wu, Qing-Ding; Wang, Ping; Li, Ke-Lin; Lei, Ming-Jing; Zhang, Wei-Li
2013-11-01
In order to fully understand the adsorption of Cu2+, Zn2+, Pb2+, Cd2+, Mn2+ and Fe3+ onto natural diatomite, and to identify problems with the application of classical isothermal adsorption models in liquid/solid systems, a series of isothermal adsorption tests was conducted. As the results indicate, the most suitable isotherm models for describing the adsorption of Pb2+, Cd2+, Cu2+, Zn2+, Mn2+ and Fe3+ onto natural diatomite are Temkin, Temkin, Langmuir, Temkin, Freundlich and Freundlich, respectively; the adsorption of each ion onto natural diatomite is mainly a physical process, and the adsorption reaction is favorable. It is also found that, when the classical isothermal adsorption models are used to fit experimental data in a liquid/solid system, the equilibrium adsorption amount q(e) is not a single-valued function of the equilibrium ion concentration c(e) but a function of two variables, c(e) and the adsorbent concentration W(0); in fact, q(e) depends only on c(e)/W(0). The results also show that the classical isothermal adsorption models exhibit a significant adsorbent effect and that their parameter values are unstable: the fitted parameter values differ greatly from the measured values, which is unhelpful for practical use. The tests prove that the four-adsorption-component model can describe the adsorption behavior of a single ion in a natural diatomite-liquid system; its parameters k and q(m) have constant values, which is favorable for practical quantitative calculation in a given system.
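The three isotherm families named in this abstract have standard closed forms; a sketch with hypothetical parameter values (q_e in mg/g, c_e in mg/L), for orientation only:

```python
import math

def langmuir(c, q_m=10.0, k_l=0.5):      # saturates toward q_m
    return q_m * k_l * c / (1.0 + k_l * c)

def freundlich(c, k_f=2.0, n=2.0):       # power law, no saturation
    return k_f * c ** (1.0 / n)

def temkin(c, b=1.5, k_t=4.0):           # logarithmic in concentration
    return b * math.log(k_t * c)

for c in (0.5, 5.0, 50.0):
    print(langmuir(c), freundlich(c), temkin(c))
```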
Sushko, Iurii; Novotarskyi, Sergii; Körner, Robert; Pandey, Anil Kumar; Cherkasov, Artem; Li, Jiazhong; Gramatica, Paola; Hansen, Katja; Schroeter, Timon; Müller, Klaus-Robert; Xi, Lili; Liu, Huanxiang; Yao, Xiaojun; Öberg, Tomas; Hormozdiari, Farhad; Dao, Phuong; Sahinalp, Cenk; Todeschini, Roberto; Polishchuk, Pavel; Artemenko, Anatoliy; Kuz'min, Victor; Martin, Todd M; Young, Douglas M; Fourches, Denis; Muratov, Eugene; Tropsha, Alexander; Baskin, Igor; Horvath, Dragos; Marcou, Gilles; Muller, Christophe; Varnek, Alexander; Prokopenko, Volodymyr V; Tetko, Igor V
2010-12-27
The estimation of accuracy and applicability of QSAR and QSPR models for biological and physicochemical properties represents a critical problem. The developed parameter of "distance to model" (DM) is defined as a metric of similarity between the training and test set compounds that have been subjected to QSAR/QSPR modeling. In our previous work, we demonstrated the utility and optimal performance of DM metrics that have been based on the standard deviation within an ensemble of QSAR models. The current study applies such analysis to 30 QSAR models for the Ames mutagenicity data set that were previously reported within the 2009 QSAR challenge. We demonstrate that the DMs based on an ensemble (consensus) model provide systematically better performance than other DMs. The presented approach identifies 30-60% of compounds having an accuracy of prediction similar to the interlaboratory accuracy of the Ames test, which is estimated to be 90%. Thus, the in silico predictions can be used to halve the cost of experimental measurements by providing a similar prediction accuracy. The developed model has been made publicly available at http://ochem.eu/models/1 .
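A distance-to-model based on the standard deviation within an ensemble of models, as described in this abstract, is simple to compute once the per-model predictions exist. A sketch on synthetic predictions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_models, n_compounds = 30, 5
preds = rng.standard_normal((n_models, n_compounds))   # synthetic predictions
preds[:, 0] = 1.0             # all models agree on compound 0

dm = preds.std(axis=0)        # DM: ensemble standard deviation per compound
consensus = preds.mean(axis=0)   # consensus (ensemble-mean) prediction

order = np.argsort(dm)        # rank compounds by expected reliability
print(order[0])               # compound 0 (perfect agreement) ranks first
```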
NASA Astrophysics Data System (ADS)
Kuznetsova, Y. S.; Vorobyev, N. A.; Trufanov, N. A.
2017-02-01
The fundamentals of the geometric immersion method in terms of stresses for axisymmetric problems of elasticity theory are presented. The geometric immersion method reduces the initial problem for a free-form Clapeyron elastic body to an iterative sequence of elasticity problems on a canonical domain. The iteration procedure for the variational equation of the geometric immersion method is stated, and a technique for constructing its discrete analog with the finite-element method in stresses is suggested for the axisymmetric elasticity problem in cylindrical coordinates. A practical application of the method is demonstrated on a test problem. Reasonably good agreement between the resulting stress fields and a numerical solution by the traditional finite-element displacement method is obtained.
A novel transport based model for wire media and its application to scattering problems
NASA Astrophysics Data System (ADS)
Forati, Ebrahim
Artificially engineered materials, known as metamaterials, have attracted the interest of researchers because of their potential for novel applications. Effective modeling of metamaterials is a crucial step in analyzing and synthesizing devices. In this thesis, we focus on wire media (both isotropic and uniaxial) and validate a novel transport-based model for them. Scattering problems involving wire media are computationally intensive due to the spatially dispersive nature of homogenized wire media. However, it will be shown that using the new model to solve scattering problems can simplify the calculations a great deal. For scattering problems, an integro-differential equation based on a transport formulation is proposed instead of the convolution-form integral equation that comes directly from spatial dispersion. The integro-differential equation is much faster to solve than the convolution form, and its effectiveness is confirmed by solving several examples in one, two, and three dimensions. Both the integro-differential equation formulation and the homogenized wire-medium parameters are experimentally confirmed. To do so, several isotropic connected wire-medium spheres have been fabricated using a rapid-prototyping machine, and their measured extinction cross sections are compared with simulation results. Wire parameters (period and diameter) are varied to the point where homogenization theory breaks down, which is observed in the measurements. The same process is done for three-dimensional cubical objects made of a uniaxial wire medium, and their measured results are compared with numerical results based on the new model. The new method is extremely fast compared to brute-force numerical methods such as FDTD, and provides more physical insight (within the limits of homogenization), including the idea of a Debye length for wire media. The limits of homogenization are examined by comparing homogenization results and measurement. Then, a novel
Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.
2001-07-01
This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper.
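The paper's specific bound-constrained reformulation is not reproduced here; as a generic illustration of computing an LCP solution iteratively, the sketch below applies projected Gauss-Seidel to a small symmetric positive-definite LCP(M, q). The matrix, vector, and iteration count are assumptions made up for the example.

```python
# Projected Gauss-Seidel for the LCP: find x >= 0 with Mx + q >= 0
# and x'(Mx + q) = 0.  Converges for symmetric positive-definite M.
def lcp_pgs(M, q, iters=200):
    n = len(q)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # row-i residual excluding the diagonal term
            s = q[i] + sum(M[i][j] * x[j] for j in range(n) if j != i)
            x[i] = max(0.0, -s / M[i][i])
    return x

M = [[2.0, 1.0], [1.0, 2.0]]
q = [-1.0, -1.0]
x = lcp_pgs(M, q)            # exact solution is x = [1/3, 1/3]
w = [sum(M[i][j] * x[j] for j in range(2)) + q[i] for i in range(2)]
```

Here each sweep shrinks the error by a constant factor, so 200 sweeps are far more than enough for this 2x2 instance.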
1974-08-20
are employed at relatively low altitudes, usually below 20,000 ft (6.1 km). Although France recently flew a stratospheric tethered balloon at 55...the start those which do not appear suitable for the high-altitude, multi-mode communications application. 2.1 Tethered Balloon Systems The first...free balloon systems and their applicability to the high altitude communications relay problem. 6. Corbin, C. D. (1974) Portable Tethered Balloon
NASA Astrophysics Data System (ADS)
José Villalaín, Juan; Casas, Antonio; Calvín, Pablo; Soto-Marín, Ruth; Torres, Sara; Moussaid, Bennacer
2017-04-01
work we discuss the methodological problems observed when using SC analysis, such as the effect of the degree of coaxiality of different tectonic events on the uncertainty of the SCI solution and tectonic corrections, the presence of vertical-axis rotations, etc. In addition, we analyze different examples of the application of SC techniques to solve different tectonic problems in areas affected by widespread remagnetizations, such as palinspastic reconstructions of inverted sedimentary basins, distinction of overlapped deformation events, identification of intra-Mesozoic stages in alpine chains, etc.
An application of a linear programing technique to nonlinear minimax problems
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1973-01-01
A differential correction technique for solving nonlinear minimax problems is presented. The basis of the technique is a linear programming algorithm which solves the linear minimax problem. By linearizing the original nonlinear equations about a nominal solution, both nonlinear approximation and estimation problems using the minimax norm may be solved iteratively. Some consideration is also given to improving convergence and to the treatment of problems with more than one measured quantity. A sample problem is treated with this technique and with the least-squares differential correction method to illustrate the properties of the minimax solution. The results indicate that for the sample approximation problem, the minimax technique provides better estimates than the least-squares method if a sufficient amount of data is used. For the sample estimation problem, the minimax estimates are better if the mathematical model is incomplete.
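As a minimal, hedged illustration of the minimax (Chebyshev) norm that the linear programming step minimizes: for the special case of fitting a single constant to scalar data, the minimax estimate has a closed form, the midrange, while least squares gives the mean. The data values below are invented for the example.

```python
# Fitting a constant c to data under two norms:
#   minimax:       minimize max_i |y_i - c|   ->  c = (min + max) / 2
#   least squares: minimize sum_i (y_i - c)^2 ->  c = mean
def minimax_const(y):
    return (min(y) + max(y)) / 2.0

def lsq_const(y):
    return sum(y) / len(y)

y = [1.0, 2.0, 10.0]
c_mm, c_ls = minimax_const(y), lsq_const(y)
err_mm = max(abs(v - c_mm) for v in y)   # worst-case residual of minimax fit
err_ls = max(abs(v - c_ls) for v in y)   # worst-case residual of least squares
```

The minimax fit always has the smaller worst-case residual, which is the sense in which the paper's LP-based estimates can outperform least squares.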
On an iterative ensemble smoother and its application to a reservoir facies estimation problem
NASA Astrophysics Data System (ADS)
Luo, Xiaodong; Chen, Yan; Valestrand, Randi; Stordal, Andreas; Lorentzen, Rolf; Nævdal, Geir
2014-05-01
For data assimilation problems there are different ways of utilizing the available observations. While certain data assimilation algorithms, for instance the ensemble Kalman filter (EnKF, see, for example, Aanonsen et al., 2009; Evensen, 2006), assimilate the observations sequentially in time, other data assimilation algorithms may instead collect the observations at different time instants and assimilate them simultaneously. In general such algorithms can be classified as smoothers. In this respect, the ensemble smoother (ES, see, for example, Evensen and van Leeuwen, 2000) can be considered the smoother counterpart of the EnKF. The EnKF has been widely used for reservoir data assimilation (history matching) problems since its introduction to the community of petroleum engineering (Nævdal et al., 2002). Applications of the ES to reservoir data assimilation problems have also been investigated recently (see, for example, Skjervheim and Evensen, 2011). Compared to the EnKF, the ES has certain technical advantages, including, for instance, avoiding the restarts associated with each update step in the EnKF and having fewer variables to update, which may result in a significant reduction in simulation time while providing similar assimilation results to those obtained by the EnKF (Skjervheim and Evensen, 2011). To further improve the performance of the ES, some iterative ensemble smoothers have been suggested in the literature, in which the iterations are carried out in the form of certain iterative optimization algorithms, e.g., the Gauss-Newton (Chen and Oliver, 2012) or the Levenberg-Marquardt method (Chen and Oliver, 2013; Emerick and Reynolds, 2012), or in the context of adaptive Gaussian mixtures (AGM, see Stordal and Lorentzen, 2013). In Emerick and Reynolds (2012) the iteration formula is derived based on the idea that, for linear observations, the final results of the iterative ES should be equal to the estimate of the EnKF. In Chen and Oliver (2013), the
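As a hedged sketch of the plain (non-iterative) ensemble update this passage builds on: for a scalar state observed directly with Gaussian noise, each ensemble member is shifted by a Kalman-type gain applied to its innovation against a perturbed observation. The ensemble size, prior, and observation values are illustrative assumptions, not from the paper.

```python
import random

random.seed(0)
N, R, d_obs = 2000, 0.5, 3.0          # ensemble size, obs-error variance, observation
prior = [random.gauss(1.0, 1.0) for _ in range(N)]   # prior: mean 1, variance 1

mean_f = sum(prior) / N
var_f = sum((m - mean_f) ** 2 for m in prior) / (N - 1)
K = var_f / (var_f + R)               # Kalman-type gain (observation operator = identity)

# perturbed-observation update: m_a = m_f + K * (d_obs + eps - m_f)
post = [m + K * (d_obs + random.gauss(0.0, R ** 0.5) - m) for m in prior]
mean_a = sum(post) / N
var_a = sum((m - mean_a) ** 2 for m in post) / (N - 1)
```

The analysis mean moves from the prior mean toward the observation, and the ensemble spread contracts; the iterative schemes cited above repeat a damped version of such an update.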
ERIC Educational Resources Information Center
Donohue, Brad; Azrin, Nathan; Allen, Daniel N.; Romero, Valerie; Hill, Heather H.; Tracy, Kendra; Lapota, Holly; Gorney, Suzanne; Abdel-al, Ruweida; Caldas, Diana; Herdzik, Karen; Bradshaw, Kelsey; Valdez, Robby; Van Hasselt, Vincent B.
2009-01-01
A comprehensive evidence-based treatment for substance abuse and other associated problems (Family Behavior Therapy) is described, including its application to both adolescents and adults across a wide range of clinical contexts (i.e., criminal justice, child welfare). Relevant to practitioners and applied clinical researchers, topic areas include…
ERIC Educational Resources Information Center
Blum, Werner; Niss, Mogens
1991-01-01
This paper reviews the present state, recent trends, and prospective lines of development concerning applied problem solving, modeling, and their respective applications. Four major trends are scrutinized with respect to curriculum inclusion: a widened spectrum of arguments, an increased universality, an increased consolidation, and an extended…
followed with the introduction of Bayes Theorem as a model for intelligence analysis. The conjecture is made that Bayes Theorem can also serve as the...nucleus of a formal methodology. The application of Bayes Theorem to several types of problems is demonstrated. However, the implementation of such a
Radiative transport in plant canopies: Forward and inverse problem for UAV applications
NASA Astrophysics Data System (ADS)
Furfaro, Roberto
This dissertation deals with modeling the radiative regime in vegetation canopies and the possible remote sensing applications derived by solving the forward and inverse canopy transport equation. The aim of the research is to develop a methodology (called the "end-to-end problem solution") that, starting from first principles describing the interaction between light and vegetation, constructs, as the final product, a tool that analyzes remote sensing data for precision agriculture (ripeness prediction). The procedure begins by defining the equations that describe the transport of photons inside the leaf and within the canopy. The resulting integro-differential equations are numerically integrated by adapting conventional discrete-ordinate methods to compute the reflectance at the top of the canopy. The canopy transport equation is also analyzed to explore its spectral properties. The goal here is to apply Case's method to determine eigenvalues and eigenfunctions and to prove completeness. A model inversion is attempted using neural network algorithms. Using input-output pairs generated by running the forward model, a neural network is trained to learn the inverse map. The model-based neural network represents the end product of the overall procedure. In October 2002, an Unmanned Aerial Vehicle (UAV) equipped with a camera system flew over Kauai to take images of coffee field plantations. Our goal is to predict the amount of ripe coffee cherries for optimal harvesting. The Leaf-Canopy model was modified to include cherries as absorbing and scattering elements, and two classes of neural networks were trained on the model to learn the relationship between reflectance and the percentages of ripe, over-ripe and under-ripe cherries. The neural networks are interfaced with images coming from Kauai to predict ripeness percentage. Both ground and airborne images are considered. The latter were taken from the on-board Helios UAV camera system flying over the Kauai coffee field
NASA Astrophysics Data System (ADS)
Salakhov, M. Kh; Tagirov, M. S.; Dooglav, A. V.
2013-12-01
In 1997, A S Borovik-Romanov, Academician of the RAS, and A V Aganov, the head of the Physics Department of Kazan State University, suggested that the 'School of Magnetic Resonance', well known in the Soviet Union, should recommence and be regularly held in Kazan. This school was created in 1968 by G V Scrotskii, a prominent scientist in the field of magnetic resonance and the editor of many famous books on magnetic resonance (authored by A Abragam, B Bleaney, C Slichter, and many others) translated and edited in the Soviet Union. In 1991 the last, 12th School was held under the supervision of G V Scrotskii. Since 1997, more than 600 young scientists, 'schoolboys', have taken part in the School meetings, made their oral reports and participated in heated discussions. Every year a competition among the young scientists takes place and the Program Committee members name the best reports, whose authors are invited to prepare full-scale scientific papers. The XVI International Youth Scientific School 'Actual problems of the magnetic resonance and its application' differs slightly in its themes from previous ones. A new section has been opened this year: Coherent Optics and Optical Spectroscopy. Many young people have submitted interesting reports on optical research, and many of the reports are devoted to the implementation of nanotechnology in optical studies. The XVI International Youth Scientific School has been supported by the Program of Development of Kazan Federal University. It is a pleasure to thank the sponsors (BRUKER Ltd, Moscow; the Russian Academy of Sciences; the Dynasty Foundation of Dmitrii Zimin, Russia; the Russian Foundation for Basic Research) and all the participants and contributors for making the International School meeting possible and interesting. A V Dooglav, M Kh Salakhov and M S Tagirov, The Editors
Application of unstructured grid methods to steady and unsteady aerodynamic problems
NASA Technical Reports Server (NTRS)
Batina, John T.
1989-01-01
The purpose is to describe the development of unstructured grid methods, which have several advantages over methods that use structured grids. Unstructured grids, for example, easily allow the treatment of complex geometries, allow general mesh movement for realistic motions and structural deformations of complete aircraft configurations, which is important for aeroelastic analysis, and enable adaptive mesh refinement to more accurately resolve the physics of the flow. Steady Euler calculations for a supersonic fighter configuration demonstrate the complex-geometry capability; unsteady Euler calculations for the supersonic fighter undergoing harmonic oscillations in a complete-vehicle bending mode demonstrate the general mesh movement capability; and vortex-dominated conical-flow calculations for highly swept delta wings demonstrate the adaptive mesh refinement capability. The basic solution algorithm is a multi-stage Runge-Kutta time-stepping scheme with a finite-volume spatial discretization based on an unstructured grid of triangles in 2D or tetrahedra in 3D. The moving mesh capability is a general procedure which models each edge of each triangle (2D) or tetrahedron (3D) with a spring. The static equilibrium equations that result from a summation of forces are then used to move the mesh so that it continuously conforms to the instantaneous position or shape of the aircraft. The adaptive mesh refinement procedure enriches the unstructured mesh locally to more accurately resolve the vortical flow features. These capabilities are described in detail along with representative results which demonstrate several advantages of unstructured grid methods. The applicability of the unstructured grid methodology to steady and unsteady aerodynamic problems and directions for future work are discussed.
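Batina's full scheme puts a spring on every edge of a tetrahedral mesh and solves the force balance at each node to redistribute interior nodes when the boundary moves. A minimal 1-D analogue, with unit-stiffness springs on a node chain (an assumption made purely for illustration), shows the mechanism:

```python
# 1-D spring analogy: interior nodes settle where spring forces balance,
# i.e. each node relaxes toward the average of its two neighbours (Jacobi sweeps).
def relax_chain(x, sweeps=500):
    x = list(x)
    for _ in range(sweeps):
        x = [x[0]] + [(x[i - 1] + x[i + 1]) / 2.0
                      for i in range(1, len(x) - 1)] + [x[-1]]
    return x

x = [0.0, 0.25, 0.5, 0.75, 1.0]   # initial uniform mesh on [0, 1]
x[-1] = 1.2                        # boundary (the moving surface) is displaced
x = relax_chain(x)                 # interior nodes re-equilibrate to uniform spacing
```

The interior nodes converge to 0.3, 0.6, 0.9, i.e. the mesh stretches smoothly instead of tangling near the moved boundary, which is exactly what the edge springs buy in 3D.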
2007-01-01
viscous flows, compressible or incompressible flows. The SPH option in LS-DYNA was used to simulate the Poiseuille flow and Couette flow. The SPH...at a certain constant velocity ( ). The simulations of Poiseuille and Couette flow show that this approach can be furthered to understand the scour...simulating fluid dynamic problems. The SPH method with various formulations can simulate different dynamic fluid flow problems, such as inviscid or
NASA Technical Reports Server (NTRS)
Straeter, T. A.
1972-01-01
The Davidon-Broyden class of rank one, quasi-Newton minimization methods is extended from Euclidean spaces to infinite-dimensional, real Hilbert spaces. For several techniques of choosing the step size, conditions are found which assure convergence of the associated iterates to the location of the minimum of a positive definite quadratic functional. For those techniques, convergence is achieved without the problem of the computation of a one-dimensional minimum at each iteration. The application of this class of minimization methods for the direct computation of the solution of an optimal control problem is outlined. The performance of various members of the class are compared by solving a sample optimal control problem. Finally, the sample problem is solved by other known gradient methods, and the results are compared with those obtained with the rank one quasi-Newton methods.
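The rank-one (symmetric rank-one, SR1) update behind this class has the finite-termination property the abstract exploits: on a positive-definite quadratic, n updates along independent steps recover the exact inverse Hessian, so convergence is achieved without a 1-D minimization at each iteration. A small finite-dimensional sketch (the quadratic and the fixed step size are assumptions for illustration, not the paper's Hilbert-space setting):

```python
# Minimize f(x) = 0.5 x'Ax - b'x with the rank-one (SR1) inverse-Hessian update:
#   H <- H + (s - Hy)(s - Hy)' / ((s - Hy)'y),  s = step, y = gradient change.
A, b = [[2.0, 0.0], [0.0, 4.0]], [2.0, 4.0]   # minimizer is x* = (1, 1)

def grad(x):
    return [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]

def matvec(H, v):
    return [sum(H[i][j] * v[j] for j in range(2)) for i in range(2)]

x, H, alpha = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], 0.1
g = grad(x)
for _ in range(2):                            # n = 2 rank-one updates
    s = [-alpha * gi for gi in g]             # short fixed step, no line search
    x = [x[i] + s[i] for i in range(2)]
    g_new = grad(x)
    y = [g_new[i] - g[i] for i in range(2)]   # gradient change
    Hy = matvec(H, y)
    v = [s[i] - Hy[i] for i in range(2)]      # s - Hy
    denom = v[0] * y[0] + v[1] * y[1]
    H = [[H[i][j] + v[i] * v[j] / denom for j in range(2)] for i in range(2)]
    g = g_new
x = [x[i] - matvec(H, g)[i] for i in range(2)]   # H now equals A^{-1}; one step hits x*
```

After the two updates H equals A^{-1} exactly, so the final quasi-Newton step lands on the minimizer even though every step size was an arbitrary constant.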
On the application of deterministic optimization methods to stochastic control problems
NASA Technical Reports Server (NTRS)
Kramer, L. C.; Athans, M.
1974-01-01
A technique is presented by which deterministic optimization techniques, for example, the maximum principle of Pontryagin, can be applied to stochastic optimal control problems formulated around linear systems with Gaussian noises and general cost criteria. Using this technique, the stochastic nature of the problem is suppressed except for two expectation operations, the optimization being deterministic. The use of the technique in treating problems with quadratic and nonquadratic costs is illustrated.
Application of Mixed-Norm Optimal Control to a Multi-Objective Active Suspension Problem
1995-12-01
is to develop a controller for an active suspension system on-board a tractor-semitrailer vehicle. The problem is first approached by using H2 and H...problem for the control community [dJ95]. The problem is based on designing an active suspension system for a tractor-semitrailer vehicle. Both single...tractor-semitrailer vehicle must be controllable by the active suspension system. If not fully controllable, at a minimum the tire and suspension
NASA Astrophysics Data System (ADS)
Roth, Bradley J.; Hobbie, Russell K.
2014-05-01
This article contains a collection of homework problems to help students learn how concepts from electricity and magnetism can be applied to topics in medicine and biology. The problems are at a level typical of an undergraduate electricity and magnetism class, covering topics such as nerve electrophysiology, transcranial magnetic stimulation, and magnetic resonance imaging. The goal of these problems is to train biology and medical students to use quantitative methods, and also to introduce physics and engineering students to biological phenomena.
Understanding the public's health problems: applications of symbolic interaction to public health.
Maycock, Bruce
2015-01-01
Public health has typically investigated health issues using methods from the positivistic paradigm. Yet these approaches, although they are able to quantify the problem, may not be able to explain the social reasons why the problem exists or the impact on those affected. This article will provide a brief overview of a sociological theory that provides methods and a theoretical framework that has proven useful in understanding public health problems and developing interventions.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.
1989-01-01
A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
Application of Viral Systems for Single-Machine Total Weighted Tardiness Problem
NASA Astrophysics Data System (ADS)
Santosa, Budi; Affandi, Umar
2013-06-01
In this paper, a relatively new algorithm inspired by the viral replication system, called Viral Systems, is used to solve the Single-Machine Total Weighted Tardiness Problem (SMTWTP). The SMTWTP is a job scheduling problem and a classical NP-hard combinatorial problem. The algorithm searches for solutions through neighborhood and mutation mechanisms. An experiment was conducted to evaluate its performance; seven parameters must be tuned to find the best solution. The experiment was run on data sets of 40, 50, and 100 jobs. The results show that the algorithm solves 235 of the 275 problems optimally.
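The objective being minimized can be stated compactly. A hedged sketch (job data invented for illustration) of the total weighted tardiness of a sequence, the quantity over which Viral Systems searches:

```python
# Total weighted tardiness of a job sequence on a single machine:
#   TWT = sum_j w_j * max(0, C_j - d_j), with C_j the completion time of job j.
def twt(seq, p, w, d):
    t, total = 0, 0
    for j in seq:
        t += p[j]                      # machine finishes job j at time t
        total += w[j] * max(0, t - d[j])
    return total

p, w, d = [3, 2, 1], [1, 2, 3], [2, 4, 3]   # processing times, weights, due dates
cost_a = twt([0, 1, 2], p, w, d)   # schedule jobs in index order
cost_b = twt([2, 1, 0], p, w, d)   # re-sequencing cuts the weighted tardiness
```

Neighborhood moves in such metaheuristics are typically swaps or insertions in `seq`, each re-evaluated with exactly this objective.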
NASA Astrophysics Data System (ADS)
Konno, Hiroshi; Gotoh, Jun-Ya; Uno, Takeaki; Yuki, Atsushi
2002-09-01
We will propose a new cutting plane algorithm for solving a class of semi-definite programming problems (SDP) with a small number of variables and a large number of constraints. Problems of this type appear when we try to classify a large number of multi-dimensional data into two groups by a hyper-ellipsoidal surface. Examples include cancer diagnosis and failure discrimination of enterprises. Also, a certain class of option pricing problems can be formulated as this type of problem. We will show that the cutting plane algorithm is much more efficient than the standard interior point algorithms for solving SDP.
A Perturbation Theory for Hamilton's Principal Function: Applications to Boundary Value Problems
NASA Astrophysics Data System (ADS)
Munoa, Oier Penagaricano
This thesis introduces an analytical perturbation theory for Hamilton's principal function and Hamilton's characteristic function. Based on Hamilton's principle and the research carried out by Sir William Rowan Hamilton, a perturbation theory is developed to analytically solve two-point boundary value problems. The principal function is shown to solve the two-point boundary value problem through simple differentiation and elimination. The characteristic function is related to the principal function through a Legendre transformation, and can also be used to solve two-point boundary value problems. In order to obtain the solution to the perturbed two-point boundary value problem, knowledge of the nominal solution is sufficient. The perturbation theory is applied to the two-body problem to study the perturbed dynamics in the vicinity of the Hohmann transfer. It is found that the perturbation can actually offer a lower cost two-impulse transfer to the target orbit than the Hohmann transfer. The numerical error analysis of the perturbation theory is shown for different orders of calculation. Coupling Hamilton's principal and characteristic functions yields an analytical perturbation theory for the initial value problem, where the state of the perturbed system can be accurately obtained. The perturbation theory is applied to the restricted three-body problem, where the system is viewed as a two-body problem perturbed by the presence of a third body. It is shown that the first order theory can be sufficient to solve the problem, which is expressed in terms of Delaunay elements. The solution to the initial value problem is applied to derive a Keplerian periapsis map that can be used for low-energy space mission design problems.
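The "simple differentiation and elimination" step rests on the classical endpoint relations for Hamilton's principal function S(q_0, q_1, t); these are standard results, not the thesis's perturbation expansion:

```latex
% Endpoint derivatives of the principal function give the boundary momenta:
p_1 = \frac{\partial S}{\partial q_1}, \qquad
p_0 = -\frac{\partial S}{\partial q_0},
% so, given the endpoints q_0 and q_1, differentiating S yields the missing
% momenta and thereby solves the two-point boundary value problem.
% The characteristic function W follows by a Legendre transformation in time
% (conservative system with energy E):
W(q_0, q_1, E) = S(q_0, q_1, t) + E\,t, \qquad
\frac{\partial S}{\partial t} = -E .
```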
NASA Astrophysics Data System (ADS)
Bazhenov, V. G.; Zhestkov, M. N.
2017-05-01
The applicability of a structurally orthotropic model to the calculation of perforated plates and cylindrical shells subjected to tension and bending is studied by the finite-element method. The parameters of the orthotropic material are used in the form of coefficients of stiffness reduction. They are determined from the solution to the problem on deformation of a cyclically repeating structural element, with a varying degree of perforation (porosity), in tension, shear, and bending. The structural element is investigated by the methods of continuum mechanics and the theory of Timoshenko-type shells, and the limit of applicability of the theory of shells to such problems is found. The numerical results obtained are compared with the analytical estimates given by E. I. Grigolyuk and L. A. Filshtinskii. Verification of the numerically obtained orthotropic parameters is carried out based on the solution to the problem of bending of one quarter of a cylindrical strip and a plate perforated with one row of holes. It is shown that the approach chosen is applicable to perforated plates and shells in bending problems with waves whose length exceeds the characteristic size of their structural element. The stability of a perforated elastic cylindrical shell under external pressure, with two variants of boundary conditions, is investigated. Values of the critical pressure and the corresponding buckling modes in relation to the length of the shell and the degree of perforation are obtained.
Application of Graph Theory in an Intelligent Tutoring System for Solving Mathematical Word Problems
ERIC Educational Resources Information Center
Nabiyev, Vasif V.; Çakiroglu, Ünal; Karal, Hasan; Erümit, Ali K.; Çebi, Ayça
2016-01-01
This study aimed to construct a model to transform word "motion problems" into an algorithmic form so that they can be processed by an intelligent tutoring system (ITS). First, the characteristics of motion problems were categorized; second, a model for the categories was suggested. In order to solve all categories of the…
ERIC Educational Resources Information Center
Reese, Simon R.
2015-01-01
This paper reflects upon a three-step process to expand the problem definition in the early stages of an action learning project. The process created a community-powered problem-solving approach within the action learning context. The simple three steps expanded upon in the paper create independence, dependence, and inter-dependence to aid the…
Application of Choice-Making Intervention for a Student with Multiply Maintained Problem Behavior.
ERIC Educational Resources Information Center
Peterson, Stephanie M. Peck; Caniglia, Cyndi; Royster, Amy Jo
2001-01-01
A functional behavioral assessment for a 10-year-old boy with autism found both teacher attention and escape from task demands maintained his problem behavior. A choice-making intervention involving either completing work alone followed by a break with teacher attention versus working with teacher assistance was found to decrease problem behavior…
ERIC Educational Resources Information Center
Ceberio, Mikel; Almudí, José Manuel; Franco, Ángel
2016-01-01
In recent years, interactive computer simulations have been progressively integrated in the teaching of the sciences and have contributed significant improvements in the teaching-learning process. Practicing problem-solving is a key factor in science and engineering education. The aim of this study was to design simulation-based problem-solving…
Preservice Teachers' Application of a Problem-Solving Approach on Multimedia Case
ERIC Educational Resources Information Center
Kilbane, Clare R.
2008-01-01
This study explored the use of case-based pedagogy to promote preservice teachers' problem-solving proficiency. Students in a web-supported course called CaseNEX learned to use a problem-solving approach when analyzing multimedia case studies. Their performance was compared with students in two groups who had no exposure to case methods--other…
A New Large-Scale Global Optimization Method and Its Application to Lennard-Jones Problems
1992-11-01
stochastic methods. Computational results on Lennard-Jones problems show that the new method is considerably more successful than any other method that...our method does not find as good a solution as has been found by the best special-purpose methods for Lennard-Jones problems. This illustrates the inherent difficulty of large-scale global optimization.
ERIC Educational Resources Information Center
Piersel, Wayne C.; Kratochwill, Thomas R.
1979-01-01
Self-observation as a behavior change technique was implemented through behavioral consultation in a public elementary school system. The self-observation procedures were introduced to two subjects with academic problems (assignment completion) and two subjects with behavioral problems (disruptive talk and interruptions, respectively).…
Application of a Mixed Consequential Ethical Model to a Problem Regarding Test Standards.
ERIC Educational Resources Information Center
Busch, John Christian
The work of the ethicist Charles Curran and the problem-solving strategy of the mixed consequentialist ethical model are applied to a traditional social science measurement problem--that of how to adjust a recommended standard in order to be fair to the test-taker and society. The focus is on criterion-referenced teacher certification tests.…
A Study on the Application of Creative Problem Solving Teaching to Statistics Teaching
ERIC Educational Resources Information Center
Hu, Ridong; Xiaohui, Su; Shieh, Chich-Jen
2017-01-01
Everyone encounters complicated problems generated by economic behaviors in the course of making a living. Many such life problems can be generalized by economic statistics. In other words, a lot of important events in daily life are related to economic statistics. For this reason,…
Students' Understanding and Application of the Area under the Curve Concept in Physics Problems
ERIC Educational Resources Information Center
Nguyen, Dong-Hai; Rebello, N. Sanjay
2011-01-01
This study investigates how students understand and apply the area under the curve concept and the integral-area relation in solving introductory physics problems. We interviewed 20 students in the first semester and 15 students from the same cohort in the second semester of a calculus-based physics course sequence on several problems involving…
NASA Astrophysics Data System (ADS)
Sudakov, Ivan; Vakulenko, Sergey
2015-11-01
The original Rayleigh-Benard convection is a standard example of a system in which critical transitions occur as a control parameter changes. We will discuss the modified Rayleigh-Benard convection problem, which includes radiative effects as well as specific gas sources on a surface. This formulation of the problem leads to the identification of a new kind of nonlinear phenomenon, besides the well-known Benard cells. Modeling of methane emissions from permafrost into the atmosphere leads to difficult problems involving the Navier-Stokes equations. Taking into account the modified Rayleigh-Benard convection problem, we will discuss a new approach which makes the problem of a climate catastrophe as a result of a greenhouse effect more tractable and allows us to describe catastrophic transitions in the atmosphere induced by permafrost greenhouse gas sources.
Coorbital Restricted Problem and its Application in the Design of the Orbits of the LISA Spacecraft
NASA Astrophysics Data System (ADS)
Yi, Zhaohua; Li, Guangyu; Heinzel, Gerhard; Rüdiger, Albrecht; Jennrich, Oliver; Wang, Li; Xia, Yan; Zeng, Fei; Zhao, Haibin
On the basis of many coorbital phenomena in astronomy and spacecraft motion, a dynamics model is proposed in this paper, treating the coorbital restricted problem together with a method for obtaining a general approximate solution. The design of the LISA spacecraft orbits is a special 2+3 coorbital restricted problem. The problem is analyzed in two steps. First, the motion of the barycenter of the three spacecraft is analyzed, which is a planar coorbital restricted three-body problem, and an approximate analytical solution for the radius and the argument of the center is obtained. Secondly, the configuration of the three spacecraft with minimum arm-length variation is analyzed. The motion of a single spacecraft is a near-planar coorbital restricted three-body problem, allowing approximate analytical solutions for the orbit radius and the argument of a spacecraft. Thus approximate expressions for the arm-length are given.
NASA Astrophysics Data System (ADS)
Ceberio, Mikel; Almudí, José Manuel; Franco, Ángel
2016-08-01
In recent years, interactive computer simulations have been progressively integrated into the teaching of the sciences and have contributed significant improvements to the teaching-learning process. Practicing problem-solving is a key factor in science and engineering education. The aim of this study was to design simulation-based problem-solving teaching materials and assess their effectiveness in improving students' ability to solve problems in university-level physics. Firstly, we analyze the effect of using simulation-based materials on the development of students' skills in employing procedures typical of the scientific method of problem-solving. We found that a significant percentage of the experimental students used expert-type scientific procedures such as qualitative analysis of the problem, making hypotheses, and analysis of results. At the end of the course, only a minority of the students persisted with habits based solely on mathematical equations. Secondly, we compare the problem-solving effectiveness of the experimental group with that of conventionally taught students. We found that the implementation of the problem-solving strategy improved the experimental students' results in obtaining academically correct solutions to standard textbook problems. Thirdly, we explore students' satisfaction with the simulation-based problem-solving teaching materials and found that the majority appear satisfied with the proposed methodology and adopted a favorable attitude toward learning problem-solving. The research was carried out among first-year Engineering Degree students.
NASA Astrophysics Data System (ADS)
Hetmaniuk, Ulrich Ladislas
Fast solvers are often designed for problems posed on simple domains. Unfortunately, engineering applications deal with arbitrary domains. To allow the use of fast solvers, fictitious domain methods have been developed. They usually define an auxiliary problem on a rectangle or a parallelepiped. In aerospace and military applications, many scatterers are composed of one major axisymmetric component and a few features. Therefore, the aim of this thesis is to define, for the scattering of acoustic waves, fictitious domain methods which exploit such local axisymmetry. The original exterior problem is first approximated by introducing an absorbing boundary condition on an artificial boundary. A family of absorbing conditions is reviewed. For some simple scatterers, numerical experiments on the position of the artificial boundary reveal that the error induced by the absorbing condition is bounded, as the wave number increases, when the artificial boundary is fixed. Then, for a class of partially axisymmetric scatterers, the truncated computational domain is embedded into an axisymmetric domain. Helmholtz problems are formulated inside this axisymmetric domain and inside each feature. Lagrange multipliers are introduced at the interfaces between the features and the axisymmetric domain to enforce a set of carefully constructed constraints. This formulation is analyzed at the continuous level and is shown to be equivalent to the original one. For the Helmholtz equation defined over the axisymmetric domain, the solution is approximated by truncated Fourier series and finite elements. Properties of this discretization method for the Helmholtz equation are also analyzed on a two-dimensional model problem. Numerical experiments are performed to illustrate the analytical results. For the auxiliary problem inside each feature, classical finite elements are used to approximate the solution. The constraints are enforced pointwise. The resulting algebraic system is solved either
NASA Astrophysics Data System (ADS)
Li, C.; Nowack, R. L.; Pyrak-Nolte, L.
2003-12-01
Seismic tomographic experiments in soil and rock are strongly affected by limited and non-uniform ray coverage. We propose a new method to extrapolate data used for seismic tomography to full coverage. The proposed two-stage autoregressive extrapolation technique can be used to extend the available data and provide better tomographic images. The algorithm is based on the principle that the extrapolated data adds minimal information to the existing data. A two-stage autoregressive (AR) extrapolation scheme is then applied to the seismic tomography problem. The first stage of the extrapolation is to find the optimal prediction-error filter (PE filter). For the second stage, we use the PE filter to find the values for the missing data so that the power out of the PE filter is minimized. At the second stage, we are able to estimate missing data values with the same spectrum as the known data. This is similar to maximizing an entropy criterion. Synthetic tomographic experiments have been conducted and demonstrate that the two-stage AR extrapolation technique is a powerful tool for data extrapolation and can improve the quality of tomographic inversions of experimental and field data. Moreover, the two-stage AR extrapolation technique is tolerant to noise in the data and can still extrapolate the data to obtain overall patterns, which is very important for real data applications. In this study, we have applied AR extrapolation to a series of datasets from laboratory tomographic experiments on synthetic sediments with known structure. In these tomographic experiments, glass beads saturated with de-ionized water were used as the synthetic water-saturated background sediments. The synthetic sediments were packed in plastic cylindrical containers with a diameter of 220 mm. Tomographic experiments were then set up to measure transmitted acoustic waves through the sediment samples from multiple directions. We recorded data for sources and receivers with varying angular
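The autoregressive idea underlying the abstract can be sketched in a minimal single-stage form: fit prediction coefficients to the known samples by least squares, then extrapolate recursively. The model order and test signal below are illustrative assumptions; the paper's two-stage PE-filter scheme additionally estimates interior missing values by minimizing the power out of the filter.

```python
import numpy as np

def fit_ar(x, p):
    # Least-squares fit of coefficients a with x[n] ~ a[0]*x[n-1] + ... + a[p-1]*x[n-p]
    A = np.array([x[n - p : n][::-1] for n in range(p, len(x))])
    a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return a

def extrapolate(x, a, m):
    # Recursively predict m samples beyond the end of the known data
    out = list(x)
    p = len(a)
    for _ in range(m):
        out.append(float(np.dot(a, out[: -p - 1 : -1])))
    return np.array(out)
```

For a pure sinusoid an AR(2) model is exact, so the fitted coefficients reproduce the identity cos(wn) = 2cos(w)cos(w(n-1)) - cos(w(n-2)) and the extrapolation continues the signal without error.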
Shioiri, Toshiki
2015-01-01
of fears from two or more agoraphobia-related situations is now required, because this is a robust means for distinguishing agoraphobia from specific phobias. Also, the criteria for agoraphobia are now extended to be consistent with criteria sets for other anxiety disorders (e.g., a clinician's judgment of the fears as being out of proportion to the actual danger in the situation, with a typical duration of 6 months or more). From the above, these changes from DSM-IV-TR to DSM-5 in anxiety disorders make our judgments faster and more efficient in clinical practice, and DSM-5 is more useful to elucidate the pathology. In this manuscript, we discuss the application and problems based on clinical and research viewpoints regarding anxiety disorders in DSM-5.
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1978-01-01
The formulation of the classical Linear-Quadratic-Gaussian stochastic control problem as employed in low thrust navigation analysis is reviewed. A reformulation is then presented which eliminates a potentially unreliable matrix subtraction in the control calculations, improves the computational efficiency, and provides for a cleaner computational interface between the estimation and control processes. Lastly, the application of the U-D factorization method to the reformulated equations is examined with the objective of achieving a complete set of factored equations for the joint estimation and control problem.
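For reference, the control half of the LQG problem discussed above reduces to the standard backward Riccati recursion. The sketch below shows that recursion in its textbook form, not the reformulated or U-D factored equations of the report; the scalar system in the usage note is a toy assumption.

```python
import numpy as np

def lqr_backward(A, B, Q, R, QN, N):
    """Finite-horizon discrete LQR: backward Riccati recursion returning
    the time-ordered gain sequence K_k for u_k = -K_k x_k."""
    P = QN
    gains = []
    for _ in range(N):
        S = R + B.T @ P @ B                       # innovation-like term to invert
        K = np.linalg.solve(S, B.T @ P @ A)       # optimal gain at this stage
        P = Q + A.T @ P @ (A - B @ K)             # Riccati update of the cost-to-go
        gains.append(K)
    return gains[::-1], P
```

For the scalar system A = B = Q = R = 1, the steady-state Riccati equation p = 1 + p - p^2/(1+p) gives p = (1+sqrt(5))/2, the golden ratio, and the recursion converges to it rapidly.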
Code verification for unsteady 3-D fluid-solid interaction problems
NASA Astrophysics Data System (ADS)
Yu, Kintak Raymond; Étienne, Stéphane; Hay, Alexander; Pelletier, Dominique
2015-12-01
This paper describes a procedure to synthesize Manufactured Solutions for Code Verification of an important class of Fluid-Structure Interaction (FSI) problems whose behaviors can be modeled as rigid body vibrations in incompressible fluids. We refer to this class of FSI problems as Fluid-Solid Interaction problems, which can be found in many practical engineering applications. The methodology can be utilized to develop Manufactured Solutions for both 2-D and 3-D cases. We demonstrate the procedure with our numerical code, present details of the formulation and methodology, and explain the reasoning behind our proposed approach. Results from grid and time step refinement studies confirm the verification of our solver and demonstrate the versatility of the simple synthesis procedure. In addition, the results also demonstrate that the modified decoupled approach to verify flow problems with high-order time-stepping schemes can be employed equally well to verify code for multi-physics problems (here, those of the Fluid-Solid Interaction) when the numerical discretization is based on the Method of Lines.
Consensus properties and their large-scale applications for the gene duplication problem.
Moon, Jucheol; Lin, Harris T; Eulenstein, Oliver
2016-06-01
Solving the gene duplication problem is a classical approach for species tree inference from gene trees that are confounded by gene duplications. This problem takes a collection of gene trees and seeks a species tree that implies the minimum number of gene duplications. Wilkinson et al. posed the conjecture that the gene duplication problem satisfies the desirable Pareto property for clusters. That is, for every instance of the problem, all clusters that are commonly present in the input gene trees of this instance, called strict consensus, will also be found in every solution to this instance. We prove that this conjecture does not generally hold. Despite this negative result we show that the gene duplication problem satisfies a weaker version of the Pareto property where the strict consensus is found in at least one solution (rather than all solutions). This weaker property contributes to our design of an efficient scalable algorithm for the gene duplication problem. We demonstrate the performance of our algorithm in analyzing large-scale empirical datasets. Finally, we utilize the algorithm to evaluate the accuracy of standard heuristics for the gene duplication problem using simulated datasets.
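The minimum-duplication criterion can be illustrated with the classical LCA mapping: a gene-tree node implies a duplication whenever its species-tree mapping coincides with the mapping of one of its children. The sketch below is a minimal illustration under assumed toy encodings (nested tuples for trees, dict-based parent/depth maps), not the authors' scalable algorithm.

```python
def lca(parent, depth, a, b):
    # Walk two species-tree nodes up to their lowest common ancestor.
    while a != b:
        if depth[a] < depth[b]:
            b = parent[b]
        elif depth[b] < depth[a]:
            a = parent[a]
        else:
            a, b = parent[a], parent[b]
    return a

def duplications(gene_tree, leaf_species, parent, depth):
    """Count gene duplications implied by the LCA mapping of a gene tree
    (nested 2-tuples; leaves are gene names) onto a species tree."""
    count = 0
    def m(node):
        nonlocal count
        if isinstance(node, str):
            return leaf_species[node]
        l, r = m(node[0]), m(node[1])
        s = lca(parent, depth, l, r)
        if s == l or s == r:   # a child maps to the same species node: duplication
            count += 1
        return s
    m(gene_tree)
    return count
```

On the species tree ((A,B),C), the gene tree ((a1,b1),(a2,b2)) contains two copies of the A/B clade and therefore implies one duplication, while ((a1,b1),c1) is congruent with the species tree and implies none.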
Haber, Eldad
2014-03-17
The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, its results were also applied to the problem of image registration.
Line Spring Model and Its Applications to Part-Through Crack Problems in Plates and Shells
NASA Technical Reports Server (NTRS)
Erdogan, F.; Aksel, B.
1986-01-01
The line spring model is described and extended to cover the problem of interaction of multiple internal and surface cracks in plates and shells. The shape functions for various related crack geometries obtained from the plane strain solution and the results of some multiple crack problems are presented. The problems considered include coplanar surface cracks on the same or opposite sides of a plate, nonsymmetrically located coplanar internal elliptic cracks, and in a very limited way the surface and corner cracks in a plate of finite width and a surface crack in a cylindrical shell with fixed end.
Application of Dynamic Programming to Solving K Postmen Chinese Postmen Problem
NASA Astrophysics Data System (ADS)
Fei, Rong; Cui, Duwu; Zhang, Yikun; Wang, Chaoxue
In this paper, Dynamic Programming is used to solve the K postmen Chinese postman problem (KPCPP) for the first time. A novel model for decision-making in the KPCPP and computational models for solving the whole problem are proposed. The arcs of G are converted into the points of G' by the CAPA procedure, and the model is transformed by MDPMCA into one that admits a multistep decision process. On the basis of these two procedures, the Dynamic Programming algorithm KMPDPA can solve the NP-complete KPCPP. An illustrative example is given to clarify the concepts and methods, and the correctness of the algorithms and the related theory is verified mathematically.
NASA Astrophysics Data System (ADS)
Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.
2014-12-01
The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org), is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
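The core JFNK idea mentioned above is that Krylov solvers never need the assembled Jacobian, only Jacobian-vector products, which can be approximated by a finite difference of the nonlinear residual. A minimal sketch of that matvec follows; the toy residual function and test vectors are illustrative assumptions (MOOSE's actual implementation rests on PETSc).

```python
import numpy as np

def jfnk_matvec(F, u, v, eps=1e-7):
    # Matrix-free Jacobian-vector product: J(u) v ~ (F(u + eps*v) - F(u)) / eps
    return (F(u + eps * v) - F(u)) / eps

# Toy nonlinear residual (an illustrative assumption, not a MOOSE system)
def F(u):
    return np.array([u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 3 - 5.0])

u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])
J = np.array([[2 * u[0], 1.0], [1.0, 3 * u[1] ** 2]])  # analytic Jacobian, for comparison
```

Because each Krylov iteration only calls this matvec, the coupled multiphysics Jacobian never has to be formed or stored, which is what makes the approach attractive for tightly coupled systems.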
NASA Astrophysics Data System (ADS)
Saif, Ullah; Guan, Zailin; Wang, Baoxi; Mirza, Jahanzeb
2014-09-01
Robustness in most of the literature is associated with min-max or min-max regret criteria. These criteria are conservative, however, and a new criterion, the lexicographic α-robust method, has recently been introduced; it defines a robust solution as a set of solutions whose jth largest cost is no worse than the best possible jth largest cost across all scenarios. This criterion is well suited to single-objective optimization problems. In real optimization problems, however, two or more conflicting objectives must be optimized concurrently, and the solution of a multi-objective problem is a set of Pareto solutions, from which it may be difficult to decide which solution satisfies the min-max, min-max regret, or lexicographic α-robust criteria when multiple objectives are considered simultaneously. The lexicographic α-robust method is therefore extended in the current research to Pareto solutions. The proposed Pareto lexicographic α-robust approach defines Pareto lexicographic α-robust solutions across scenarios while considering multiple objectives simultaneously. A simple example and an application of the proposed method to multi-objective optimization of a simple assembly line balancing problem with task-time uncertainty are presented to obtain robust solutions. The presented method can be applied to other multi-objective robust optimization problems involving uncertainty.
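The single-objective criteria contrasted in the abstract can be made concrete on a toy scenario cost matrix: min-max picks the solution with the best worst-case cost, min-max regret the one with the best worst-case regret, and a lexicographic comparison orders the cost vectors sorted from worst to best. The matrix below is an invented example, not data from the paper.

```python
import numpy as np

# cost[i, s]: cost of candidate solution i under scenario s (invented data)
cost = np.array([[4.0, 10.0],
                 [6.0,  7.0],
                 [9.0,  5.0]])
minmax = int(np.argmin(cost.max(axis=1)))             # best worst-case cost
regret = cost - cost.min(axis=0)                      # regret per scenario
minmax_regret = int(np.argmin(regret.max(axis=1)))    # best worst-case regret
# Lexicographic comparison of costs sorted from worst to best
lex_best = min(range(len(cost)), key=lambda i: tuple(sorted(cost[i], reverse=True)))
```

On this particular matrix all three criteria select the balanced middle solution; in general they can disagree, which is what motivates studying them separately.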
NASA Astrophysics Data System (ADS)
Hébert, Alain
2014-06-01
We present the computer science techniques involved in the integration of the codes DRAGON5 and DONJON5 in the SALOME platform. This integration brings new capabilities in designing multi-physics computational schemes, with the possibility to couple our reactor physics codes with thermal-hydraulics or thermo-mechanics codes from other organizations. A demonstration is presented where two code components are coupled using the YACS module of SALOME, based on the CORBA protocol. The first component is a full-core 3D steady-state neutronic calculation of a PWR performed using DONJON5. The second component implements a set of 1D thermal-hydraulics calculations, each performed over a single assembly.
Problem gambling of Chinese college students: application of the theory of planned behavior.
Wu, Anise M S; Tang, Catherine So-kum
2012-06-01
The present study, using the theory of planned behavior (TPB), investigated psychological correlates of intention to gamble and problem gambling among Chinese college students. Nine hundred and thirty-two Chinese college students (aged 18 to 25 years) in Hong Kong and Macao were surveyed. The findings generally support the efficacy of the TPB in explaining gambling intention and problems among Chinese college students. Specifically, the results of the path analysis indicate gambling intention and perceived control over gambling as the most proximal predictors of problem gambling, whereas attitudes, subjective norms, and perceived control, the TPB components, influence gambling intention. Thus, these three TPB components should make up the core contents of prevention and intervention efforts against problem gambling for Chinese college students.
Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem
Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi
2013-01-01
Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system as the objective for a given life cycle time. To address the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem more effectively than the standard PSO algorithm. Moreover, the results also show the potential to provide useful information for decisions in the practical planning process. Therefore, if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
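As a rough illustration of the baseline algorithm the paper improves upon, a minimal global-best PSO can be sketched as follows. The inertia and acceleration coefficients, box bounds, and the quadratic test function in the usage note are conventional assumptions, not the paper's HSP model or its IPSO variant.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, seed=0, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    # Minimal global-best particle swarm optimizer on the box [-bound, bound]^dim
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, (n, dim))   # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest = x.copy()                           # personal best positions
    pcost = np.array([f(xi) for xi in x])
    g = pbest[np.argmin(pcost)].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -bound, bound)
        cost = np.array([f(xi) for xi in x])
        improved = cost < pcost
        pbest[improved], pcost[improved] = x[improved], cost[improved]
        g = pbest[np.argmin(pcost)].copy()
    return g, float(pcost.min())
```

On a simple convex test function such as the 2-D sphere, this swarm collapses onto the minimum within a few hundred iterations.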
On the application of deterministic optimization methods to stochastic control problems.
NASA Technical Reports Server (NTRS)
Kramer, L. C.; Athans, M.
1972-01-01
A technique is presented by which one can apply the Minimum Principle of Pontryagin to stochastic optimal control problems formulated around linear systems with Gaussian noises and general cost criteria. Using this technique, the stochastic nature of the problem is suppressed but for two expectation operations, the optimization being essentially deterministic. The technique is applied to systems with quadratic and non-quadratic costs to illustrate its use.
Applications of remote sensing to estuarine problems. [estuaries of Chesapeake Bay
NASA Technical Reports Server (NTRS)
Munday, J. C., Jr.
1975-01-01
A variety of siting problems for the estuaries of the lower Chesapeake Bay have been solved with cost beneficial remote sensing techniques. Principal techniques used were repetitive 1:30,000 color photography of dye emitting buoys to map circulation patterns, and investigation of water color boundaries via color and color infrared imagery to scales of 1:120,000. Problems solved included sewage outfall siting, shoreline preservation and enhancement, oil pollution risk assessment, and protection of shellfish beds from dredge operations.
Extreme values and the level-crossing problem: An application to the Feller process
NASA Astrophysics Data System (ADS)
Masoliver, Jaume
2014-04-01
We review the question of the extreme values attained by a random process. We relate it to level crossings to one boundary (first-passage problems) as well as to two boundaries (escape problems). The extremes studied are the maximum, the minimum, the maximum absolute value, and the range or span. We specialize in diffusion processes and present detailed results for the Wiener and Feller processes.
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)
2001-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The scalability and portability of the approach are demonstrated on several parallel computers.
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)
2002-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The scalability and portability of the approach are demonstrated on several parallel computers.
Problems with numerical techniques: Application to mid-loop operation transients
Bryce, W.M.; Lillington, J.N.
1997-07-01
There has been an increasing need to consider accidents at shutdown, which some PSAs have shown to contribute significantly to overall risk. In the UK, experience has been gained at three levels: (1) assessment of codes against experiments; (2) plant studies specifically for Sizewell B; and (3) detailed review of modelling to support the plant studies for Sizewell B. The work has largely been carried out using various versions of RELAP5 and SCDAP/RELAP5. The paper details some of the problems that have needed to be addressed. The authors believe that these kinds of problems are probably generic to most of the present generation of system thermal-hydraulic codes under the conditions present in mid-loop transients, so as far as possible the problems and their solutions are described in generic terms. The areas addressed include condensables at low pressure, poor time-step calculation detection, water packing, inadequate physical modelling, numerical heat transfer, and mass errors. In general, single code modifications have been proposed to solve the problems; these have been concerned with improving existing models rather than formulating a completely new approach, and they were produced after a particular problem had arisen. The danger, borne out in practice, is that when new transients are attempted, new problems arise which then also require patching.
Zhou Kaiyi; Sheate, William R.
2011-11-15
Since the Law of the People's Republic of China on Environmental Impact Assessment was enacted in 2003 and Huanfa 2004 No. 98 was released in 2004, Strategic Environmental Assessment (SEA) has been officially implemented in the expressway infrastructure planning field in China. Through scrutiny of two SEA application cases for China's provincial-level expressway infrastructure (PLEI) network plans, it is found that current SEA practice in the expressway infrastructure planning field has a number of problems: SEA practitioners do not fully understand the objective of SEA; its potential contribution to strategic planning and decision-making is extremely limited; the application procedure and the prediction and assessment techniques employed are too simple to produce objective, unbiased, and scientific results; and no alternative options are considered. All of these problems directly lead to poor-quality SEA and consequently weaken SEA's effectiveness.
Factorizing monolithic applications
Hall, J.H.; Ankeny, L.A.; Clancy, S.P.
1998-12-31
The Blanca project is part of the US Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI), which focuses on Science-Based Stockpile Stewardship through the large-scale simulation of multi-physics, multi-dimensional problems. Blanca is the only Los Alamos National Laboratory (LANL)-based ASCI project that is written entirely in C++. Tecolote, a new framework used in developing Blanca physics codes, provides an infrastructure for gluing together any number of components; this framework is then used to create applications that encompass a wide variety of physics models, numerical solution options, and underlying data storage schemes. The advantage of this approach is that only the essential components for the given model need be activated at runtime. Tecolote has been designed for code re-use and to isolate the computer science mechanics from the physics aspects as much as possible -- allowing physics model developers to write algorithms in a style quite similar to the underlying physics equations that govern the computational physics. This paper describes the advantages of component architectures and contrasts the Tecolote framework with Microsoft's OLE and Apple's OpenDoc. An actual factorization of a traditional monolithic application into its basic components is also described.
2009-01-01
the Poiseuille flow and Couette flow. The results of these simulations showed that this approach can be furthered to understand the scour around a ... method with a turbulent stress model of the large-eddy simulation (LES) to compute incompressible viscous multi-phase flows. STM is used to compute ... with various formulations can simulate different dynamic fluid flow problems, such as inviscid or viscous flows, compressible or incompressible flows
NASA Astrophysics Data System (ADS)
Nguyen, Dong-Hai
This research project investigates the difficulties students encounter when solving physics problems involving the integral and area-under-the-curve concepts, and strategies to facilitate students learning to solve those types of problems. The research contexts of this project are calculus-based physics courses covering mechanics and electromagnetism. In phase I of the project, individual teaching/learning interviews were conducted with 20 students in mechanics and 15 students from the same cohort in electromagnetism. The students were asked to solve problems on several topics of mechanics and electromagnetism. These problems involved calculating physical quantities (e.g. velocity, acceleration, work, electric field, electric resistance, electric current) by integrating or finding the area under the curve of functions of related quantities (e.g. position, velocity, force, charge density, resistivity, current density). Verbal hints were provided when students made an error or were unable to proceed. A total of 140 one-hour interviews were conducted in this phase, providing insights into students' difficulties when solving problems involving the integral and area-under-the-curve concepts and into the hints that help students overcome those difficulties. In phase II of the project, tutorials were created to facilitate students' learning to solve physics problems involving these concepts. Each tutorial consisted of a set of exercises and a protocol that incorporated the helpful hints targeting the difficulties identified in phase I of the project. Focus group learning interviews were conducted to test the effectiveness of the tutorials in comparison with standard learning materials (i.e. textbook problems and solutions). Overall results indicated that students learning with our tutorials outperformed students learning with standard materials in applying the integral and the area under the curve
Heydari, M.H.; Hooshmandasl, M.R.; Cattani, C.; Maalek Ghaini, F.M.
2015-02-15
Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and also the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of the stochastic population growth models and stochastic pendulum problem.
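For the stochastic population growth application mentioned above, a plain Euler-Maruyama simulation gives a simple point of comparison; this is a baseline sketch, not the authors' hat-function operational-matrix method, and the logistic SDE parameters below are illustrative assumptions.

```python
import numpy as np

# Euler-Maruyama for logistic growth with multiplicative noise:
# dX = r*X*(1 - X/K) dt + sigma*X dW   (parameters assumed for illustration)
rng = np.random.default_rng(42)
r, K, sigma = 1.0, 1.0, 0.1
T_end, steps, paths = 10.0, 1000, 2000
dt = T_end / steps
X = np.full(paths, 0.5)               # initial population, in units of K
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), paths)
    X = X + r * X * (1.0 - X / K) * dt + sigma * X * dW
```

With weak noise the ensemble settles near the carrying capacity K, which is the qualitative behavior a numerical scheme for such equations should reproduce.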
NASA Astrophysics Data System (ADS)
Wang, Dan; Qin, Zhongfeng
2016-04-01
Uncertainty is inherent in the newsvendor problem. Most of the existing literature is devoted to characterizing the uncertainty either by randomness or by fuzziness. However, in many cases, randomness and fuzziness simultaneously appear in the same problem. Motivated by this observation, we investigate the multi-product newsvendor problem by considering the demands as hybrid variables which are proposed to describe quantities with double uncertainties. According to the expected value criterion, we formulate an expected profit maximization model and convert it to a deterministic form when the chance distributions are given. We discuss two special cases of hybrid variable demands and give their chance distributions. Then we design hybrid simulation to estimate the chance distribution and use genetic algorithm to solve the proposed models. Finally, we proceed to present numerical examples of purchasing pharmaceutical reference standard materials to illustrate the applicability of our methodology and the effectiveness of genetic algorithm.
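For orientation, the classical single-product special case of the newsvendor model has a closed-form optimum at the critical fractile, which a brute-force search over order quantities reproduces. The prices and the uniform demand law below are illustrative assumptions, not the paper's hybrid-variable model or its genetic algorithm.

```python
import numpy as np

price, cost, salvage = 10.0, 6.0, 2.0   # unit revenue, cost, salvage (assumed)
lo, hi = 50.0, 150.0                    # uniform demand bounds (assumed)
d = np.linspace(lo, hi, 20001)          # dense grid standing in for the demand law

def expected_profit(q):
    sales = np.minimum(d, q)            # sell at most the demand
    leftover = np.maximum(q - d, 0.0)   # salvage unsold stock
    return float(np.mean(price * sales + salvage * leftover - cost * q))

qs = np.linspace(lo, hi, 201)
q_best = qs[np.argmax([expected_profit(q) for q in qs])]
# Critical fractile for uniform demand: F(q*) = (price - cost) / (price - salvage)
q_star = lo + (hi - lo) * (price - cost) / (price - salvage)
```

With these numbers the critical fractile is 0.5, so the optimal order quantity is the median demand of 100 units, and the grid search lands on the same value.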
An Application of Fuzzy Logic Control to a Classical Military Tracking Problem
1994-05-19
Zadeh, L.A., "Probability Measures of Fuzzy Events", Journal of Mathematical Analysis and Applications, vol. 23, 1968, p. 421; Kosko, Bart, "Fuzziness Versus ..."; January 1973, pp. 28-44.
NASA Astrophysics Data System (ADS)
Beker, B.
1992-12-01
Numerical modeling of electromagnetic (EM) interaction is normally performed by using either differential or integral equation methods. Both techniques can be implemented to solve problems in the frequency or time domain. The method of moments (MOM) approach to solving integral equations has matured to the point where it can be used to solve complex problems. In the past, MOM has only been applied to scattering and radiation problems involving perfectly conducting or isotropic penetrable, lossy or lossless objects. However, many materials used in practical applications (e.g., composites used on the Navy's surface ships) exhibit anisotropic properties. To account for these new effects, several integral equation formulations for scattering and radiation by anisotropic objects have been developed recently. The differential equation approach to EM interaction studies has seen the emergence of the finite-difference time-domain (FD-TD) method as the method of choice in many of today's scattering and radiation applications. This approach has been applied to study transient as well as steady-state scattering from many complex structures, radiation from wire antennas, and coupling into wires through narrow apertures in conducting cavities. It is important to determine whether, and how effectively, the FD-TD can be used to solve EM interaction problems of interest to the Navy, such as investigating potential EM interference in shipboard communication systems. Consequently, this report partly addresses this issue by dealing exclusively with FD-TD modeling of time-domain EM scattering and radiation.
Kushniruk, Andre W; Triola, Marc M; Borycki, Elizabeth M; Stein, Ben; Kannry, Joseph L
2005-08-01
This paper describes an innovative approach to the evaluation of a handheld prescription writing application. Participants (10 physicians) were asked to perform a series of tasks involving entering prescriptions into the application from a medication list. The study procedure involved the collection of data consisting of transcripts of the subjects who were asked to "think aloud" while interacting with the prescription writing program to enter medications. All user interactions with the device were video and audio recorded. Analysis of the protocols was conducted in two phases: (1) usability problems were identified from coding of the transcripts and video data, (2) actual errors in entering prescription data were also identified. The results indicated that there were a variety of usability problems, with most related to interface design issues. In examining the relationship between usability problems and errors, it was found that certain types of usability problems were closely associated with the occurrence of specific types of errors in prescription of medications. Implications for identifying and predicting technology-induced error are discussed in the context of improving the safety of health care information systems.
Leung, Y.-F.; Marion, J.
1999-01-01
The degradation of trail resources associated with expanding recreation and tourism visitation is a growing management problem in protected areas worldwide. In order to make judicious trail and visitor management decisions, protected area managers need objective and timely information on trail resource conditions. This paper introduces a trail survey method that efficiently characterizes the lineal extent of common trail problems. The method was applied to a large sample of trails within Great Smoky Mountains National Park, a high-use protected area in the USA. The Trail Problem Assessment Method (TPAM) employs a continuous search for multiple indicators of predefined tread problems, yielding census data documenting the location, occurrence and extent of each problem. The present application employed 23 different indicators in three categories to gather inventory, resource condition, and design and maintenance data of each surveyed trail. Seventy-two backcountry hiking trails (528 km), or 35% of the Park's total trail length, were surveyed. Soil erosion and wet soil were found to be the two most common impacts on a lineal extent basis. Trails with serious tread problems were well distributed throughout the Park, although wet muddy treads tended to be concentrated in areas where horse use was high. The effectiveness of maintenance features installed to divert water from trail treads was also evaluated. Water bars were found to be more effective than drainage dips. The TPAM was able to provide Park managers with objective and quantitative information for use in trail planning, management and maintenance decisions, and is applicable to other protected areas elsewhere with different environmental and impact characteristics.
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
2000-01-01
This project investigates the development of discontinuous Galerkin finite element methods, for general geometry and triangulations, for solving convection-dominated problems, with applications to aeroacoustics. On the analysis side, we have studied an efficient and stable discontinuous Galerkin framework for small second-derivative terms, for example in the Navier-Stokes equations, and also for related equations such as the Hamilton-Jacobi equations. This is a truly local discontinuous formulation in which derivatives are treated as new variables. On the applied side, we have implemented and tested the efficiency of different approaches numerically. Related issues in high order ENO and WENO finite difference methods and spectral methods have also been investigated. Jointly with Hu, we have presented a discontinuous Galerkin finite element method for solving the nonlinear Hamilton-Jacobi equations. This method is based on the Runge-Kutta discontinuous Galerkin finite element method for solving conservation laws. The method has the flexibility of treating complicated geometry using arbitrary triangulations, can achieve high order accuracy with a local, compact stencil, and is suited for efficient parallel implementation. One- and two-dimensional numerical examples are given to illustrate the capability of the method. Jointly with Hu, we have constructed third and fourth order WENO schemes on two-dimensional unstructured meshes (triangles) in the finite volume formulation. The third order schemes are based on a combination of linear polynomials with nonlinear weights, and the fourth order schemes are based on a combination of quadratic polynomials with nonlinear weights. We have addressed several difficult issues associated with high order WENO schemes on unstructured meshes, including the choice of linear and nonlinear weights, what to do with negative weights, etc. Numerical examples are shown to demonstrate the accuracy and robustness of the schemes.
Application of robust Generalised Cross-Validation to the inverse problem of electrocardiology.
Barnes, Josef P; Johnston, Peter R
2016-02-01
Robust Generalised Cross-Validation was proposed recently as a method for determining near optimal regularisation parameters in inverse problems. It was introduced to overcome a problem with the regular Generalised Cross-Validation method in which the function that is minimised to obtain the regularisation parameter often has a broad, flat minimum, resulting in a poor estimate for the parameter. The robust method defines a new function to be minimised which has a narrower minimum, but at the expense of introducing a new parameter called the robustness parameter. In this study, the Robust Generalised Cross-Validation method is applied to the inverse problem of electrocardiology. It is demonstrated that, for realistic situations, the robustness parameter can be set to zero. With this choice of robustness parameter, it is shown that the robust method is able to obtain estimates of the regularisation parameter in the inverse problem of electrocardiology that are comparable to, or better than, many of the standard methods that are applied to this inverse problem.
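As a hedged sketch of the ordinary Generalised Cross-Validation score that the robust variant modifies (the robustness parameter itself is not reproduced here), the GCV function for Tikhonov regularisation can be written via the SVD of the system matrix. The ill-conditioned test matrix and noise level below are illustrative assumptions.

```python
import numpy as np

def gcv(A, b, lam):
    """Ordinary GCV score for Tikhonov regularisation with parameter lam."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam)                       # Tikhonov filter factors
    beta = U.T @ b
    # Residual norm squared, including the component of b outside range(U).
    resid = np.sum(((1 - f) * beta) ** 2) + np.sum(b**2) - np.sum(beta**2)
    trace = len(b) - np.sum(f)                    # effective residual dof
    return resid / trace**2

# Choose lam by minimising the GCV score over a logarithmic grid.
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10)) @ np.diag(10.0 ** -np.arange(10))  # ill-conditioned
x = np.ones(10)
b = A @ x + 1e-3 * rng.normal(size=30)
lams = 10.0 ** np.linspace(-12, 2, 60)
lam_best = lams[np.argmin([gcv(A, b, l) for l in lams])]
```

The robust method replaces this score with one whose minimum is narrower, at the cost of an extra parameter.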
Application of the Software Complex Turbo Problem Solver to Rayleigh-Taylor Instability Modeling
NASA Astrophysics Data System (ADS)
Fortova, S. V.; Utkin, P. S.; Shepelev, V. V.
2016-10-01
The dynamic processes which take place during the high-speed impact of two metal plates with different densities are investigated using three-dimensional numerical simulations. It is shown that, as a result of the impact, a Rayleigh-Taylor instability forms which leads to the formation of three-dimensional ring-shaped structures on the surface of the metal with the smaller density. A comparative analysis of the metal-interface deformation process with the use of different equations of state is performed. The numerical study is carried out by means of the special software complex Turbo Problem Solver developed by the authors. Turbo Problem Solver implements a generalized approach to the construction of hydrodynamic codes for various computational fluid dynamics problems and provides several numerical schemes and software blocks for setting initial conditions, boundary conditions and mass forces. The solution of a test problem on Rayleigh-Taylor instability growth and development for the case of very rapid density growth is also presented.
Problems of optimal transportation on the circle and their mechanical applications
NASA Astrophysics Data System (ADS)
Plakhov, Alexander; Tchemisova, Tatiana
2017-02-01
We consider a mechanical problem concerning a 2D axisymmetric body moving forward on the plane and making slow turns of fixed magnitude about its axis of symmetry. The body moves through a medium of non-interacting particles at rest, and collisions of particles with the body's boundary are perfectly elastic (billiard-like). The body has a blunt nose: a line segment orthogonal to the symmetry axis. It is required to make small cavities of a special shape on the nose so as to minimize its aerodynamic resistance. This problem of optimizing the shape of the cavities amounts to a special case of the optimal mass transportation problem on the circle, with the transportation cost being the squared Euclidean distance. We find the explicit solution for this problem when the amplitude of rotation is smaller than a fixed critical value, and give a numerical solution otherwise. As a by-product, we obtain an explicit description of the solution for a class of optimal transportation problems on the circle.
An application of robust ridge regression model in the presence of outliers to real data problem
NASA Astrophysics Data System (ADS)
Shariff, N. S. Md.; Ferdaos, N. A.
2017-09-01
Multicollinearity and outliers often lead to inconsistent and unreliable parameter estimates in regression analysis. A well-known procedure that is robust to the multicollinearity problem is the ridge regression method. This method, however, is believed to be affected by the presence of outliers. The combination of GM-estimation and the ridge parameter, which is robust to both problems, is of interest in this study. As such, both techniques are employed to investigate the relationship between stock market prices and macroeconomic variables in Malaysia, a data set suspected of involving both the multicollinearity and the outlier problem. Four macroeconomic factors are selected for this study: the Consumer Price Index (CPI), Gross Domestic Product (GDP), Base Lending Rate (BLR) and Money Supply (M1). The results demonstrate that the proposed procedure is able to produce reliable results in the presence of multicollinearity and outliers in the real data.
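As a minimal sketch of the ridge step that underlies this procedure (the GM-estimation part, which downweights outliers, is not reproduced), assuming synthetic collinear data rather than the Malaysian macroeconomic series:

```python
import numpy as np

def ridge_estimate(X, y, k):
    """Ridge regression estimate: beta = (X'X + kI)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Illustrative data: two highly correlated predictors (multicollinearity).
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)

beta_ols = ridge_estimate(X, y, 0.0)    # ordinary least squares, unstable here
beta_ridge = ridge_estimate(X, y, 1.0)  # shrunken, more stable estimate
```

The ridge parameter k trades a small bias for a large reduction in variance when X'X is near-singular.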
Application of Modified Flower Pollination Algorithm on Mechanical Engineering Design Problem
NASA Astrophysics Data System (ADS)
Kok Meng, Ong; Pauline, Ong; Chee Kiong, Sia; Wahab, Hanani Abdul; Jafferi, Noormaziah
2017-01-01
The aim of optimization is to obtain the best solution among all candidate solutions without evaluating every possible solution. In this study, an improved flower pollination algorithm, namely the Modified Flower Pollination Algorithm (MFPA), is developed. Incorporating elements of chaos theory, frog-leaping local search and an adaptive inertia weight, the performance of MFPA is evaluated on five benchmark mechanical engineering design problems: tubular column design, speed reducer, gear train, tension/compression spring design and pressure vessel. The obtained results are listed and compared with those of other state-of-the-art algorithms. The assessment shows that MFPA gives promising results in finding the optimal design for all considered mechanical engineering problems.
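A baseline flower pollination algorithm (without the chaotic maps, frog-leaping local search or adaptive inertia weight that define the MFPA variant) can be sketched as follows. The population size, iteration count, switch probability and sphere test function are illustrative assumptions, not the paper's benchmark settings.

```python
import numpy as np

def flower_pollination(f, dim, n=20, iters=200, p=0.8, seed=0):
    """Basic flower pollination algorithm, minimising f on [-5, 5]^dim.

    Global pollination moves a flower toward the current best with a Levy
    flight; local pollination mixes two random flowers. Greedy acceptance.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(n, dim))
    fit = np.array([f(x) for x in pop])
    best = pop[np.argmin(fit)].copy()

    def levy(size, lam=1.5):
        # Mantegna's algorithm for Levy-stable step lengths
        from math import gamma, sin, pi
        sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
                 (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
        u = rng.normal(0, sigma, size)
        v = rng.normal(0, 1, size)
        return u / np.abs(v) ** (1 / lam)

    for _ in range(iters):
        for i in range(n):
            if rng.random() < p:                       # global pollination
                cand = pop[i] + levy(dim) * (best - pop[i])
            else:                                      # local pollination
                j, k = rng.choice(n, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            fc = f(cand)
            if fc < fit[i]:                            # greedy replacement
                pop[i], fit[i] = cand, fc
                if fc < f(best):
                    best = cand.copy()
    return best, f(best)

best, val = flower_pollination(lambda x: np.sum(x ** 2), dim=5)
```

Constrained design problems such as the pressure vessel are usually handled by adding a penalty term to f.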
The linearized characteristics method and its application to practical nonlinear supersonic problems
NASA Technical Reports Server (NTRS)
Ferri, Antonio
1952-01-01
The method of characteristics has been linearized by assuming that the flow field can be represented as a basic flow field determined by nonlinearized methods plus a linearized superposed flow field that accounts for small changes of boundary conditions. The method has been applied to two-dimensional rotational flow, where the basic flow is potential flow, and to axially symmetric problems, where conical flows have been used as the basic flows. In both cases the method allows the determination of the flow field to be simplified and the numerical work to be reduced to a few calculations. The calculation of axially symmetric flow can be simplified further if tabulated values of some coefficients of the conical flow are obtained. The method has also been applied to slender bodies without symmetry and to some three-dimensional wing problems where two-dimensional flow can be used as the basic flow. Both problems were previously unsolved in the nonlinear-flow approximation.
Application of program generation technology in solving heat and flow problems
NASA Astrophysics Data System (ADS)
Wan, Shui; Wu, Bangxian; Chen, Ningning
2007-05-01
Based on a new DIY concept for software development, an automatic program-generation technology embodied in a software system called the Finite Element Program Generator (FEPG) provides a platform for developing programs, through which a scientific researcher can submit a specific physico-mathematical problem to the system in a direct and convenient way for solution. For solving flow and heat problems with the finite element method, stabilization techniques and fractional-step methods are adopted to overcome the numerical difficulties caused mainly by dominant convection. Several benchmark problems are given in this paper as examples to illustrate the usage and the advantages of the automatic program-generation technique, including flow in a lid-driven cavity, starting flow in a circular pipe, natural convection in a square cavity, and flow past a circular cylinder. They also serve as verification of the algorithms.
Predictive models based on sensitivity theory and their application to practical shielding problems
Bhuiyan, S.I.; Roussin, R.W.; Lucius, J.L.; Bartine, D.E.
1983-01-01
Two new calculational models based on the use of cross-section sensitivity coefficients have been devised for calculating radiation transport in relatively simple shields. The two models, one an exponential model and the other a power model, have been applied, together with the traditional linear model, to 1- and 2-m-thick concrete-slab problems in which the water content, reinforcing-steel content, or composition of the concrete was varied. Comparing the results obtained with the three models with those obtained from exact one-dimensional discrete-ordinates transport calculations indicates that the exponential model, named the BEST model (for basic exponential shielding trend), is a particularly promising predictive tool for shielding problems dominated by exponential attenuation. When applied to a deep-penetration sodium problem, the BEST model also yields better results than do calculations based on second-order sensitivity theory.
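The report's exact functional forms are not quoted in the abstract; under a common reading of linear, power and exponential sensitivity-based predictors (an assumption, flagged as such below), the three models can be sketched from first-order relative sensitivity coefficients. All numbers are illustrative.

```python
import numpy as np

def predict(R0, S, x, x0, model="exponential"):
    """Predict response R from baseline R0 and relative sensitivity
    coefficients S_i = (x_i / R)(dR / dx_i) evaluated at baseline x0.

    The functional forms below are plausible readings of the linear,
    power and exponential (BEST-like) models, not taken verbatim from
    the report.
    """
    x, x0, S = np.asarray(x), np.asarray(x0), np.asarray(S)
    d = (x - x0) / x0                              # relative parameter changes
    if model == "linear":
        return R0 * (1.0 + np.sum(S * d))
    if model == "power":
        return R0 * np.prod((x / x0) ** S)
    return R0 * np.exp(np.sum(S * d))              # exponential trend

# All three agree to first order in the perturbation:
R0, S, x0 = 2.0, [0.5, -1.2], [1.0, 3.0]
x = [1.01, 3.03]                                   # 1% perturbations
r = predict(R0, S, x, x0)                          # close to 2*exp(-0.007)
```

For attenuation-dominated shielding, the exponential form extrapolates large parameter changes far better than the linear one.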
NASA Astrophysics Data System (ADS)
Kutsenko, Anton A.
2017-06-01
We show that spectral problems for periodic operators on lattices with embedded defects of lower dimensions can be solved with the help of matrix-valued integral continued fractions. While these continued fractions are usual in the approximation theory, they are less known in the context of spectral problems. We show that the spectral points can be expressed as zeros of determinants of the continued fractions. They are also useful in the analysis of inverse problems (one-to-one correspondence between spectral data and defects). Finally, the explicit formula for the resolvent in terms of the continued fractions is provided. We apply some of the results to the Schrödinger operator acting on graphene with line and point defects.
History-Dependent Problems with Applications to Contact Models for Elastic Beams
Bartosz, Krzysztof; Kalita, Piotr; Migórski, Stanisław; Ochal, Anna; Sofonea, Mircea
2016-02-15
We prove an existence and uniqueness result for a class of subdifferential inclusions which involve a history-dependent operator. Then we specialize this result in the study of a class of history-dependent hemivariational inequalities. Problems of such kind arise in a large number of mathematical models which describe quasistatic processes of contact. To provide an example we consider an elastic beam in contact with a reactive obstacle. The contact is modeled with a new and nonstandard condition which involves both the subdifferential of a nonconvex and nonsmooth function and a Volterra-type integral term. We derive a variational formulation of the problem which is in the form of a history-dependent hemivariational inequality for the displacement field. Then, we use our abstract result to prove its unique weak solvability. Finally, we consider a numerical approximation of the model, solve effectively the approximate problems and provide numerical simulations.
The Backus-Gilbert method and its application to the electrical conductivity problem
NASA Technical Reports Server (NTRS)
Parker, R. L.
1972-01-01
The theory of Backus and Gilbert gives a technique for solving the general linear inverse problem. Observational error and lack of data are shown to reduce the reliability of the solution in different ways: the former introduces statistical uncertainties in the model, while the latter smooths out the detail. Precision can be improved by sacrificing resolving power, and vice versa, so that some compromise may be made between the two in choosing the best model. Nonlinear inverse problems can be brought into the domain of the theory by linearizing about a typical solution. The inverse problem of electrical conductivity in the mantle is used to illustrate the Backus-Gilbert technique; an example of the tradeoff diagram is given.
Application of the complex scaling method in solving three-body Coulomb scattering problem
NASA Astrophysics Data System (ADS)
Lazauskas, R.
2017-03-01
The three-body scattering problem in Coulombic systems is a widespread yet unresolved problem for mathematically rigorous methods. In this work this long-standing challenge has been undertaken by combining the distorted-wave and Faddeev-Merkuriev equation formalisms in conjunction with the complex scaling technique, which overcomes the difficulties related to the boundary conditions. Contrary to common belief, it is demonstrated that the smooth complex scaling method can be applied to solve the three-body Coulomb scattering problem in a wide energy region, including the fully elastic domain and extending to energies well beyond the atom ionization threshold. The newly developed method is used to study electron scattering on the ground states of hydrogen and positronium atoms as well as the e+ + H(n=1) ⇌ p + Ps(n=1) reaction. Where available, the obtained results are compared with experimental data and theoretical predictions, proving the accuracy and efficiency of the newly developed method.
Application of the pseudostate theory to the three-body Lambert problem
NASA Technical Reports Server (NTRS)
Byrnes, Dennis V.
1989-01-01
The pseudostate theory, which approximates three-body trajectories by overlapping the conic effects of both massive bodies on the third body, has been used to solve boundary-value problems. Frequently, the approach to the secondary is quite close, as in interplanetary gravity-assist or satellite-tour trajectories. In this case, the orbit with respect to the primary is radically changed so that perturbation techniques are time consuming, yet higher accuracy than point-to-point conics is necessary. This method reduces the solution of the three-body Lambert problem to solving two conic Lambert problems and inverting a 7 x 7 matrix, the components of which are all found analytically. Typically 90-95 percent of the point-to-point conic error, with respect to an integrated trajectory, is eliminated.
Boundary value problem for the solution of magnetic cutoff rigidities and some special applications
NASA Technical Reports Server (NTRS)
Edmonds, Larry
1987-01-01
Since a planet's magnetic field can sometimes provide a spacecraft with some protection against cosmic ray and solar flare particles, it is important to be able to quantify this protection. This is done by calculating cutoff rigidities. An alternative to the conventional method (particle trajectory tracing) is introduced, which is to treat the problem as a boundary value problem. In this approach trajectory tracing is only needed to supply boundary conditions. In some special cases, trajectory tracing is not needed at all because the problem can be solved analytically. A differential equation governing cutoff rigidities is derived for static magnetic fields. The presence of solid objects, which can block a trajectory, and other force fields are not included. A few qualitative comments on the existence and uniqueness of solutions are made, which may be useful when deciding how the boundary conditions should be set up. Also included are topics on axially symmetric fields.
NASA Technical Reports Server (NTRS)
Britcher, Colin P.
1997-01-01
This paper will briefly review previous work in wind tunnel Magnetic Suspension and Balance Systems (MSBS) and will examine the handful of systems around the world currently known to be in operational condition or undergoing recommissioning. Technical developments emerging from research programs at NASA and elsewhere will be reviewed briefly, where there is potential impact on large-scale MSBSs. The likely aerodynamic applications for large MSBSs will be addressed, since these applications should properly drive system designs. A recently proposed application to ultra-high Reynolds number testing will then be addressed in some detail. Finally, some opinions on the technical feasibility and usefulness of a large MSBS will be given.
2012-02-09
This project deals with quadratic assignment problems. For this, we consider the splitting B = B1 - B2, where both B1 and B2 are positive semi-definite. We then introduced a new notion called non-redundant... Reference: J. Peng, T. Zhu, H. Zh. Luo and K. Ch. Toh, "Semi-definite Relaxation of Quadratic Assignment Problems Based on...", vol. 20(6), pp. 3408-3426, 2010.
Application of a novel finite difference method to dynamic crack problems
NASA Technical Reports Server (NTRS)
Chen, Y. M.; Wilkins, M. L.
1976-01-01
A versatile finite difference method (the HEMP and HEMP 3D computer programs) was developed originally for solving dynamic problems in continuum mechanics. It was extended, with success, to analyze the stress field around cracks in a solid with finite geometry subjected to dynamic loads and to simulate dynamic fracture phenomena numerically. This method is an explicit finite difference method applied to the Lagrangian formulation of the equations of continuum mechanics in two and three space dimensions and time. The calculational grid moves with the material, and in this way it gives a more detailed description of the physics of the problem than the Eulerian formulation.
Chebyshev polynomials in the spectral Tau method and applications to eigenvalue problems
NASA Technical Reports Server (NTRS)
Johnson, Duane
1996-01-01
Chebyshev spectral methods have received much attention recently as a technique for the rapid solution of ordinary differential equations. The technique also works well for solving linear eigenvalue problems. Specific detail is given to the properties and algebra of Chebyshev polynomials; the use of Chebyshev polynomials in spectral methods; and the recurrence relationships that are developed. These formulas and equations are then applied to several examples which are worked out in detail. The appendix contains an example FORTRAN program used in solving an eigenvalue problem.
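The recurrence mentioned above is concrete enough to sketch. A minimal evaluation of T_n(x) via T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x), written in Python rather than the report's FORTRAN:

```python
import numpy as np

def chebyshev_T(n, x):
    """Evaluate the Chebyshev polynomial T_n(x) via the three-term
    recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), with T_0 = 1, T_1 = x."""
    x = np.asarray(x, dtype=float)
    if n == 0:
        return np.ones_like(x)
    t_prev, t_curr = np.ones_like(x), x.copy()
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

x = np.linspace(-1, 1, 5)
# The recurrence gives, e.g., T_2(x) = 2x^2 - 1 and T_3(x) = 4x^3 - 3x.
```

In the Tau method, expanding the unknown in this basis turns the differential eigenvalue problem into an algebraic one via such recurrences.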
NASA Astrophysics Data System (ADS)
Donatelli, Marco; Hanke, Martin
2013-09-01
We introduce a new iterative scheme for solving linear ill-posed problems, similar to nonstationary iterated Tikhonov regularization, but with an approximation of the underlying operator to be used for the Tikhonov equations. For image deblurring problems, such an approximation can be a discrete deconvolution that operates entirely in the Fourier domain. We provide a theoretical analysis of the new scheme, using regularization parameters that are chosen by a certain adaptive strategy. The numerical performance of this method turns out to be superior to state-of-the-art iterative methods, including the conjugate gradient iteration for the normal equation, with and without additional preconditioning.
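As a hedged sketch of the scheme described above, assuming a 1-D periodic deblurring problem and replacing the paper's adaptive parameter-choice strategy with a fixed geometric sequence of regularization parameters, the nonstationary iterated Tikhonov step diagonalises in the Fourier domain:

```python
import numpy as np

def iterated_tikhonov_fft(b, h, alphas):
    """Nonstationary iterated Tikhonov for periodic (circular) deconvolution.

    b: blurred signal; h: point-spread function (same length, periodic);
    alphas: sequence of regularization parameters. Each step solves a
    Tikhonov equation on the current residual, entirely in Fourier space.
    """
    H = np.fft.fft(h)
    B = np.fft.fft(b)
    X = np.zeros_like(B)
    for a in alphas:
        # X += conj(H) (B - H X) / (|H|^2 + a): Tikhonov step on the residual
        X = X + np.conj(H) * (B - H * X) / (np.abs(H) ** 2 + a)
    return np.real(np.fft.ifft(X))

# Illustrative use: blur a box signal with a periodic Gaussian kernel, recover it.
n = 64
x_true = np.zeros(n); x_true[20:30] = 1.0
h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
h = np.roll(h, -n // 2); h /= h.sum()
b = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)))
x_rec = iterated_tikhonov_fft(b, h, alphas=[1e-1 * 0.5 ** k for k in range(20)])
```

With noisy data the iteration would be stopped early (or the alphas chosen adaptively, as in the paper) rather than driven toward the exact inverse.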