A Global Sensitivity Analysis Methodology for Multi-physics Applications
Tong, C H; Graziani, F R
2007-02-02
Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be applied recursively to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large-scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
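The screen-then-quantify loop in steps (2) and (3) can be sketched in a few lines. The following is a minimal, illustrative Python sketch, not PSUADE's API: the function names, the one-at-a-time screening rule, and the crude grid-based first-order index are all simplifying assumptions. Step 2 discards parameters whose one-at-a-time effect is small relative to the largest effect; step 3 estimates a first-order sensitivity index Var(E[Y|X_i])/Var(Y) for a surviving parameter by Monte Carlo.

```python
import random

def screen_parameters(model, nominal, bounds, threshold=0.05):
    """Step 2 sketch: one-at-a-time screening -- rank each parameter by
    the output change when it alone is moved across its credible range."""
    base = model(nominal)
    effects = {}
    for name, (lo, hi) in bounds.items():
        perturbed = dict(nominal)
        perturbed[name] = hi
        effects[name] = abs(model(perturbed) - base)
    top = max(effects.values()) or 1.0
    return [n for n, e in effects.items() if e / top >= threshold]

def first_order_index(model, nominal, bounds, name, n=2000, seed=0):
    """Step 3 sketch: crude first-order index Var(E[Y|X_i]) / Var(Y),
    with the conditional means taken over a coarse grid of X_i values."""
    rng = random.Random(seed)

    def sample():
        return {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}

    ys = [model(sample()) for _ in range(n)]
    mean = sum(ys) / n
    var_y = sum((y - mean) ** 2 for y in ys) / n
    cond = []
    for frac in (0.1, 0.3, 0.5, 0.7, 0.9):
        lo, hi = bounds[name]
        xi = lo + frac * (hi - lo)
        ys_c = []
        for _ in range(n // 10):
            s = sample()
            s[name] = xi           # freeze X_i, average over the rest
            ys_c.append(model(s))
        cond.append(sum(ys_c) / len(ys_c))
    mc = sum(cond) / len(cond)
    var_cond = sum((c - mc) ** 2 for c in cond) / len(cond)
    return var_cond / var_y
```

For a toy model y = 10a + 0.1b on the unit square, screening keeps only a, and its first-order index comes out near one, since a explains almost all of the output variance.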
Partitioned coupling strategies for multi-physically coupled radiative heat transfer problems
Wendt, Gunnar; Erbts, Patrick; Düster, Alexander
2015-11-01
This article aims to propose new aspects concerning a partitioned solution strategy for multi-physically coupled fields including the physics of thermal radiation. In particular, we focus on the partitioned treatment of electro–thermo-mechanical problems with an additional fourth thermal radiation field. One of the main goals is to take advantage of the flexibility of the partitioned approach to enable combinations of different simulation software and solvers. Within the frame of this article, we limit ourselves to the case of nonlinear thermoelasticity at finite strains, using temperature-dependent material parameters. For the thermal radiation field, diffuse radiating surfaces and gray participating media are assumed. Moreover, we present a robust and fast partitioned coupling strategy for the four-field problem. Stability and efficiency of the implicit coupling algorithm are improved by drawing on several methods to stabilize and to accelerate the convergence. To conclude, and to review the effectiveness and the advantages of the additional thermal radiation field, several numerical examples are considered to study the proposed algorithm. In particular we focus on an industrial application, namely the electro–thermo-mechanical modeling of the field-assisted sintering technology.
Specification of the Advanced Burner Test Reactor Multi-Physics Coupling Demonstration Problem
Shemon, E. R.; Grudzinski, J. J.; Lee, C. H.; Thomas, J. W.; Yu, Y. Q.
2015-12-21
This document specifies the multi-physics nuclear reactor demonstration problem using the SHARP software package developed by NEAMS. The SHARP toolset simulates the key coupled physics phenomena inside a nuclear reactor. The PROTEUS neutronics code models the neutron transport within the system, the Nek5000 computational fluid dynamics code models the fluid flow and heat transfer, and the DIABLO structural mechanics code models structural and mechanical deformation. The three codes are coupled to the MOAB mesh framework which allows feedback from neutronics, fluid mechanics, and mechanical deformation in a compatible format.
Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit
Merzari, E.; Shemon, E. R.; Yu, Y. Q.; Thomas, J. W.; Obabko, A.; Jain, Rajeev; Mahadevan, Vijay; Tautges, Timothy; Solberg, Jerome; Ferencz, Robert Mark; Whitesides, R.
2015-12-21
This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully-integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, demonstrating the feasibility of the fully-integrated simulation.
DAG Software Architectures for Multi-Scale Multi-Physics Problems at Petascale and Beyond
NASA Astrophysics Data System (ADS)
Berzins, Martin
2015-03-01
The challenge of computation at petascale and beyond is to make efficient calculations possible on hundreds of thousands of cores, or on large numbers of GPUs or Intel Xeon Phis. An important methodology for achieving this is at present thought to be that of asynchronous task-based parallelism. The success of this approach will be demonstrated using the Uintah software framework for the solution of coupled fluid-structure interaction problems with chemical reactions. The layered approach of this software makes it possible for the user to specify the physical problems without parallel code, and for that specification to be translated into a parallel set of tasks. These tasks are executed by a runtime system, asynchronously and sometimes out of order. The scalability and portability of this approach will be demonstrated using examples from large-scale combustion problems, industrial detonations and multi-scale, multi-physics models. The challenges of scaling such calculations to the next generations of leadership-class computers (with more than a hundred petaflops) will be discussed. Thanks to NSF, XSEDE, DOE NNSA, DOE NETL, DOE ALCC and DOE INCITE.
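The task-based execution model described above can be illustrated with a toy scheduler. This is a hypothetical sketch, not Uintah's API: tasks declare their input dependencies, and a tiny runtime executes each task as soon as those dependencies are satisfied, independent of the order in which tasks were declared.

```python
from collections import deque

class TaskGraph:
    """Minimal sketch of dependency-driven (DAG) task execution: a task
    runs once all of its declared inputs have completed, regardless of
    declaration order (hypothetical API, not Uintah's)."""

    def __init__(self):
        self.tasks = {}                       # name -> (deps, fn)

    def add(self, name, deps, fn):
        self.tasks[name] = (list(deps), fn)

    def run(self):
        done, order = set(), []
        ready = deque(n for n, (d, _) in self.tasks.items() if not d)
        while ready:
            name = ready.popleft()
            self.tasks[name][1]()             # execute the task body
            done.add(name)
            order.append(name)
            for n, (d, _) in self.tasks.items():
                if n not in done and n not in ready and all(x in done for x in d):
                    ready.append(n)           # task just became runnable
        if len(done) != len(self.tasks):
            raise RuntimeError("cycle or unsatisfiable dependency")
        return order
```

A real runtime would hand ready tasks to worker threads or GPUs concurrently; the queue here only captures the dependency-driven ordering that makes out-of-order, asynchronous execution legal.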
Development of High-Order Method for Multi-Physics Problems Governed by Hyperbolic Equations
2012-08-01
(Abstract damaged in extraction; recoverable fragments follow.) The work concerns implicit time marching with large time steps; the one-equation Spalart-Allmaras (SA) turbulence model in conservative form; hybrid RANS/LES approaches in the sense of Spalart et al., "Comments on the feasibility of LES for wings, and on a hybrid RANS/LES approach"; and the significant advantages high-order methods offer for the simulation of complex flows and turbulence in non-trivial geometries of interest to practical applications.
Module-based Hybrid Uncertainty Quantification for Multi-physics Applications: Theory and Software
Tong, Charles; Chen, Xiao; Iaccarino, Gianluca; Mittal, Akshay
2013-10-08
In this project we proposed to develop an innovative uncertainty quantification methodology that captures the best of the two competing approaches in UQ, namely, intrusive and non-intrusive approaches. The idea is to develop the mathematics and the associated computational framework and algorithms to facilitate the use of intrusive or non-intrusive UQ methods in different modules of a multi-physics multi-module simulation model, in a way that physics code developers for different modules are shielded (as much as possible) from the chores of accounting for the uncertainties introduced by the other modules. As a result of our research and development, we have produced a number of publications, conference presentations, and a software product.
A theory manual for multi-physics code coupling in LIME.
Belcourt, Noel; Bartlett, Roscoe Ainsworth; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren
2011-03-01
The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is when computer codes are already available to solve different parts of a multi-physics problem and now need to be coupled with other such codes. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with the multi-physics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time-dependent coupled systems. Example couplings are also demonstrated.
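A common baseline among the coupling algorithms such a theory manual covers is relaxed fixed-point (Picard) iteration between single-physics solves. The sketch below is illustrative only; the function names and relaxation scheme are assumptions, not LIME's interface. Each solver treats the other field as frozen data, and under-relaxation stabilizes the exchange.

```python
def picard_couple(solve_a, solve_b, x0, y0, relax=0.7, tol=1e-10, max_it=200):
    """Sketch of relaxed fixed-point (Picard) coupling between two
    single-physics solvers; each solve holds the other field fixed."""
    x, y = x0, y0
    for it in range(max_it):
        x_new = solve_a(y)              # physics A with B's field frozen
        y_new = solve_b(x_new)          # physics B with A's updated field
        dx, dy = abs(x_new - x), abs(y_new - y)
        x = x + relax * (x_new - x)     # under-relaxation for stability
        y = y + relax * (y_new - y)
        if max(dx, dy) < tol:
            return x, y, it + 1
    raise RuntimeError("coupling did not converge")
```

For the toy coupled pair x = 0.5y + 1 and y = 0.5x + 1 the loop converges to the joint fixed point x = y = 2; stiffer couplings would call for the acceleration techniques (e.g. Newton-based schemes) that the report assesses.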
NASA Astrophysics Data System (ADS)
Poulet, Thomas; Paesold, Martin; Veveakis, Manolis
2017-03-01
Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.
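The periodic instabilities that such a tightly coupled solver must capture can be illustrated with a generic relaxation oscillator. The sketch below integrates the van der Pol equation with forward Euler; it is a stand-in analogue, not the Alevizos et al. rate-and-temperature fault model, chosen only because it shows the same qualitative behavior: past an instability threshold the trajectory settles into sustained periodic oscillations.

```python
def simulate(mu=1.0, dt=0.01, steps=5000, x0=0.1, v0=0.0):
    """Forward-Euler integration of the van der Pol oscillator
    x'' - mu * (1 - x^2) * x' + x = 0, a generic analogue of a
    periodically unstable coupled system (not the fault model itself)."""
    x, v = x0, v0
    xs = []
    for _ in range(steps):
        # update both state variables from the same time level
        x, v = x + dt * v, v + dt * (mu * (1 - x * x) * v - x)
        xs.append(x)
    return xs
```

Started near the unstable equilibrium, the solution grows onto a bounded limit cycle and crosses zero repeatedly, which is the kind of periodic regime a tightly coupled 3D solver like REDBACK must resolve without numerical damping or blow-up.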
Modeling and simulation of multi-physics multi-scale transport phenomenain bio-medical applications
NASA Astrophysics Data System (ADS)
Kenjereš, Saša
2014-08-01
We present a short overview of some of our most recent work that combines the mathematical modeling, advanced computer simulations and state-of-the-art experimental techniques of physical transport phenomena in various bio-medical applications. In the first example, we tackle predictions of complex blood flow patterns in the patient-specific vascular system (carotid artery bifurcation) and transfer of the so-called "bad" cholesterol (low-density lipoprotein, LDL) within the multi-layered artery wall. This two-way coupling between the blood flow and corresponding mass transfer of LDL within the artery wall is essential for predictions of regions where atherosclerosis can develop. It is demonstrated that a recently developed mathematical model, which takes into account the complex multi-layer arterial-wall structure, produced LDL profiles within the artery wall in good agreement with in-vivo experiments in rabbits, and it can be used for predictions of locations where the initial stage of development of atherosclerosis may take place. The second example includes a combination of pulsating blood flow and medical drug delivery and deposition controlled by external magnetic field gradients in the patient-specific carotid artery bifurcation. The results of numerical simulations are compared with our own PIV (Particle Image Velocimetry) and MRI (Magnetic Resonance Imaging) in the PDMS (silicon-based organic polymer) phantom. A very good agreement between simulations and experiments is obtained for different stages of the pulsating cycle. Application of the magnetic drug targeting resulted in an up to tenfold increase in the efficiency of local deposition of the medical drug at desired locations. Finally, the LES (Large Eddy Simulation) of the aerosol distribution within the human respiratory system that includes up to eight bronchial generations is performed. A very good agreement between simulations and MRV (Magnetic Resonance Velocimetry) measurements is obtained.
Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.
2014-11-23
This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis.
NASA Astrophysics Data System (ADS)
Cocheteau, N.; Maurel-Pantel, A.; Lebon, F.; Rosu, I.; Ait-Zaid, S.; Savin de Larclause, I.; Salaun, Y.
2014-06-01
Direct bonding is a well-known process. However, in order to use this process in space-instrument fabrication, the mechanical resistance needs to be quantified precisely. In order to improve bond strength, the optimal process parameters are found by studying the influence of annealing time, temperature and roughness, using three experimental methods: double-shear, cleavage and wedge tests. These parameters are chosen owing to the appearance of a time-temperature equivalence. All results informed the implementation of a multi-physics model to predict the mechanical behavior of the direct-bonding interface.
NASA Astrophysics Data System (ADS)
Jayanthi, Aditya; Coker, Christopher
2016-11-01
In the last decade, CFD simulations have transitioned from being used only to validate final designs to driving mainstream product development. However, there are still niche application areas, such as oiling simulations, where traditional CFD simulation times are prohibitive for use in product development, forcing reliance on expensive experimental methods. In this paper a unique example of a sprocket-chain simulation will be presented using nanoFluidX, a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The gridless nature of the SPH method has inherent advantages in application areas with complex geometry, which pose severe challenges to classical finite-volume CFD methods due to complex moving geometries, moving meshes and high resolution requirements leading to long simulation times. Simulation times using nanoFluidX can be reduced from weeks to days, allowing the flexibility to run more simulations, so the method can be used in mainstream product development. The example problem under consideration is a classical multiphysics problem, and a sequentially coupled solution using MotionSolve and nanoFluidX will be presented. This abstract replaces DFD16-2016-000045.
Salko, Robert K; Schmidt, Rodney; Avramova, Maria N
2014-01-01
This paper describes major improvements to the computational infrastructure of the CTF sub-channel code so that full-core sub-channel-resolved simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy (DOE) Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations--including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices--are first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a Single Program Multiple Data (SPMD) parallelization strategy targeting distributed memory Multiple Instruction Multiple Data (MIMD) platforms and utilizing domain decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard MPI calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pre-processor utility that takes a greatly reduced set of user input over the traditional CTF input file. To run CTF in parallel, two additional libraries are currently needed: MPI, for inter-processor message passing, and the Parallel Extensible Toolkit for Scientific Computation (PETSc), which is leveraged to solve the global pressure matrix in parallel. Results presented include a set of testing and verification calculations and performance tests assessing parallel scaling characteristics up to a full core, sub-channel-resolved model of Watts Bar Unit 1 under hot full-power conditions (193 17x17
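The assembly-per-process decomposition described above is easy to sketch without actual MPI calls. The helper below is an illustration, not CTF code: it assigns one rank per assembly position in a rectangular core map and lists the neighbor rank pairs that would need to exchange lateral-crossflow halo data via MPI.

```python
def decompose(core_rows, core_cols):
    """Assign one MPI rank per fuel assembly on a rectangular core map
    and list the (lower, higher) rank pairs sharing a lateral face,
    i.e. the pairs that must exchange crossflow data each iteration."""
    rank = {(r, c): r * core_cols + c
            for r in range(core_rows) for c in range(core_cols)}
    halo = set()
    for (r, c), p in rank.items():
        for dr, dc in ((0, 1), (1, 0)):       # east and south neighbors
            q = rank.get((r + dr, c + dc))
            if q is not None:
                halo.add((p, q))              # each shared face recorded once
    return rank, sorted(halo)
```

For a 2x2 mini-core this yields ranks 0-3 and the four adjacent pairs (0, 1), (0, 2), (1, 3), (2, 3); in the real code each rank would also own its per-assembly CTF input file, with PETSc handling the global pressure solve across ranks.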
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g., neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL
Multi-physics/scale simulations using particles
NASA Astrophysics Data System (ADS)
Koumoutsakos, Petros
2006-03-01
Particle simulations of continuum and discrete phenomena can be formulated by following the motion of interacting particles that carry the physical properties of the system that is being approximated (continuum) or modeled (discrete) by the particles. We identify the common computational characteristics of particle methods and emphasize the key properties that enable the formulation of a novel, systematic framework for multiscale simulations, applicable to the simulation of diverse physical problems. We present novel multiresolution particle methods for continuum (fluid/solid) simulations, using adaptive mesh refinement and wavelets, by relaxing the grid-free character of particle methods, and discuss the coupling of scales in continuum-atomistic flow simulations.
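"Relaxing the grid-free character" of particle methods refers to periodically remeshing distorted particle sets onto a regular grid. A minimal 1D sketch of that step (illustrative only, not the authors' code) with a tent (area-weighting) kernel, which conserves the total particle strength:

```python
def remesh(particles, h):
    """Redistribute particle strengths onto a uniform mesh of spacing h
    using a tent (linear area-weighting) kernel: each particle splits its
    strength between its two bracketing grid nodes. This is the periodic
    'remeshing' step that controls particle distortion."""
    grid = {}
    for x, w in particles:
        i = int(x // h)                     # left grid node index
        frac = x / h - i                    # fractional position in cell
        grid[i] = grid.get(i, 0.0) + w * (1.0 - frac)
        grid[i + 1] = grid.get(i + 1, 0.0) + w * frac
    return [(i * h, w) for i, w in sorted(grid.items()) if w != 0.0]
```

After an advection step the distorted particles are replaced by this regularized set; higher-order kernels (e.g. M4') would additionally conserve higher moments, and the same idea extends dimension-by-dimension to 2D and 3D.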
A self-taught artificial agent for multi-physics computational model personalization.
Neumann, Dominik; Mansi, Tommaso; Itu, Lucian; Georgescu, Bogdan; Kayvanpour, Elham; Sedaghat-Hamedani, Farbod; Amr, Ali; Haas, Jan; Katus, Hugo; Meder, Benjamin; Steidl, Stefan; Hornegger, Joachim; Comaniciu, Dorin
2016-12-01
Personalization is the process of fitting a model to patient data, a critical step towards application of multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts manually perform it. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision process model through exploration of the computational model: it learns how the model behaves under change of parameters. The agent then automatically learns an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting a few hyper-parameters of the agent and defining the observations to match. The full knowledge of the model itself is not required. Vito was tested in a synthetic scenario, showing that it could learn how to optimize cost functions generically. Then Vito was applied to the inverse problem of cardiac electrophysiology and the personalization of a whole-body circulation model. The obtained results suggested that Vito could achieve equivalent, if not better, goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and with a faster (up to seven times) convergence rate. Our artificial intelligence approach could thus make personalization algorithms generalizable and self-adaptable to any patient and any model.
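The off-line/on-line split can be caricatured with tabular Q-learning on a one-parameter forward model. Everything below is a toy sketch, not Vito: the observation is a coarse bucket of the data misfit, the actions are fixed parameter steps, the reward is the reduction in misfit, and the greedy policy learned off-line is then reused to personalize on-line.

```python
import random

def personalize(forward, target, steps=(-0.5, -0.05, 0.05, 0.5),
                episodes=300, seed=0):
    """Toy RL personalization: off-line, a tabular Q-learner explores how
    the forward model responds to parameter steps; on-line, the greedy
    policy drives the parameter toward the measured target."""
    rng = random.Random(seed)

    def bucket(err):                      # coarse observation of the misfit
        return max(-3, min(3, int(err)))

    q = {}
    for _ in range(episodes):             # off-line exploration phase
        k = rng.uniform(-5.0, 5.0)
        for _ in range(30):
            s = bucket(forward(k) - target)
            if rng.random() < 0.3:        # epsilon-greedy exploration
                a = rng.randrange(len(steps))
            else:
                a = max(range(len(steps)), key=lambda i: q.get((s, i), 0.0))
            k2 = k + steps[a]
            r = abs(forward(k) - target) - abs(forward(k2) - target)
            s2 = bucket(forward(k2) - target)
            best = max(q.get((s2, i), 0.0) for i in range(len(steps)))
            q[(s, a)] = q.get((s, a), 0.0) + 0.2 * (r + 0.9 * best - q.get((s, a), 0.0))
            k = k2

    def fit(k0, iters=200):               # on-line personalization phase
        k = k0
        for _ in range(iters):
            s = bucket(forward(k) - target)
            a = max(range(len(steps)), key=lambda i: q.get((s, i), 0.0))
            k += steps[a]
        return k

    return fit
```

On a linear toy forward model the greedy policy drives the parameter until the misfit is within the resolution of its coarse observation and smallest step; the real agent instead learns a decision-process model over the high-dimensional parameter space of a cardiac model.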
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
Cetiner, Mustafa Sacit; none,; Flanagan, George F.; Poore III, Willis P.; Muhlheim, Michael David
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster-than-real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C++, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
A multi-physical model of actuation response in dielectric gels
NASA Astrophysics Data System (ADS)
Li, Bo; Chang, LongFei; Asaka, Kinji; Chen, Hualing; Li, Dichen
2016-12-01
Actuation deformation of a dielectric gel is attributed to solvent diffusion, electrical polarization and material hyperelasticity. A multi-physical model coupling electrical and mechanical quantities is established based on thermodynamics. A set of constitutive relations is derived as an equation of state for characterization. The model is applied to specific cases as validation. Physical and chemical parameters affect the performance of the gel, which shows nonlinear deformation and instability. This model offers guidance for engineering applications.
Modelling transport phenomena in a multi-physics context
NASA Astrophysics Data System (ADS)
Marra, Francesco
2015-01-01
Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on areas which include the assessment of processing time, evaluation of heating uniformity, studying the impact on quality attributes of the final product, as well as considering the energy efficiency of these heating processes. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwave - MW and ohmic - OH) gained wide interest in industrial food processing, and many applications using the above-mentioned technologies have been developed with the aim of reducing processing time, improving process efficiency and, in many cases, the heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications, and can be subdivided into either direct electro-heating (as in the case of OH heating), where electrical current is applied directly to the food, or indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation which subsequently generates heat within a product. New software packages, which ease the solution of PDE-based mathematical models, and new computers, with larger RAM and more efficient CPU performance, have allowed an increasing interest in modelling transport phenomena in systems and processes - such as those encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in the case of electro-heating-assisted applications - in terms of interaction with other physical phenomena, such as the displacement of an electric or magnetic field. This paper describes the approaches used in modelling transport phenomena in a multi-physics context such as RF-, MW- and OH-assisted heating.
Multi-Physics Analysis of the Fermilab Booster RF Cavity
Awida, M.; Reid, J.; Yakovlev, V.; Lebedev, V.; Khabiboulline, T.; Champion, M.; /Fermilab
2012-05-14
After about 40 years of operation, the RF accelerating cavities in the Fermilab Booster need an upgrade to improve their reliability and to increase the repetition rate in order to support a future experimental program. An increase in the repetition rate from 7 to 15 Hz entails increasing the power dissipation in the RF cavities, their ferrite-loaded tuners, and HOM dampers. The increased duty factor requires careful modelling of the RF heating effects in the cavity. A multi-physics analysis investigating both the RF and thermal properties of the Booster cavity under various operating conditions is presented in this paper.
IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.
Ha, Vi Q; Lykotrafitis, George
2016-12-08
We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations, an object oriented, easy-to-use, high performance, C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially in cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space while a network facilitates long-range particle interactions. Message Passing Interface is used for inter-processor communication for all simulations.
Solid Oxide Fuel Cell - Multi-Physics and GUI
2013-10-10
SOFC-MP is a simulation tool developed at PNNL to evaluate the tightly coupled multi-physical phenomena in SOFCs. The purpose of the tool is to allow SOFC manufacturers to numerically test changes in planar stack design to meet DOE technical targets. The SOFC-MP 2D module is designed for computational efficiency to enable rapid engineering evaluations for the operation of tall symmetric stacks. It can quickly compute distributions of the current density, voltage, temperature, and species composition in tall stacks with co-flow or counter-flow orientations. The 3D module computes distributions over the entire 3D domain and handles all planar configurations: co-flow, counter-flow, and cross-flow. The detailed data from a 3D simulation can be used as input for structural analysis. The SOFC-MP GUI integrates both the 2D and 3D modules, and it provides user-friendly pre-processing and post-processing capabilities.
NASA Astrophysics Data System (ADS)
Morsali, Seyedreza; Daryadel, Soheil; Zhou, Zhong; Behroozfar, Ali; Qian, Dong; Minary-Jolandan, Majid
2017-01-01
The capability to print metals at the micro/nanoscale in arbitrary 3D patterns at local points of interest will have applications in nano-electronics and sensors. Meniscus-confined electrodeposition (MCED) is a manufacturing process that enables depositing metals from an electrolyte-containing nozzle (pipette) in arbitrary 3D patterns. In this process, a meniscus (liquid bridge or capillary) between the pipette tip and the substrate governs the localized electrodeposition process. Fabrication of metallic microstructures using this process is a multi-physics process in which electrodeposition, fluid dynamics, and mass and heat transfer physics are simultaneously involved. We utilized multi-physics finite element simulation, guided by experimental data, to understand the effect of water evaporation from the liquid meniscus at the tip of the nozzle for deposition of free-standing copper microwires in the MCED process.
Lithium-Ion Battery Safety Study Using Multi-Physics Internal Short-Circuit Model (Presentation)
Kim, G-.H.; Smith, K.; Pesaran, A.
2009-06-01
This presentation outlines NREL's multi-physics simulation study to characterize an internal short by linking and integrating electrochemical cell, electro-thermal, and abuse reaction kinetics models.
Problems of applicability of statistical methods in cosmology
Levin, S. F.
2015-12-15
The problems arising from the incorrect formulation of measuring problems of identification for cosmological models and violations of conditions of applicability of statistical methods are considered.
Two-Step Multi-Physics Analysis of an Annular Linear Induction Pump for Fission Power Systems
NASA Technical Reports Server (NTRS)
Geng, Steven M.; Reid, Terry V.
2016-01-01
One of the key technologies associated with fission power systems (FPS) is the annular linear induction pump (ALIP). ALIPs are used to circulate liquid-metal fluid for transporting thermal energy from the nuclear reactor to the power conversion device. ALIPs designed and built to date for FPS project applications have not performed up to expectations. A unique, two-step approach was taken toward the multi-physics examination of an ALIP using ANSYS Maxwell 3D and Fluent. This multi-physics approach was developed so that engineers could investigate design variations that might improve pump performance. Of interest was to determine if simple geometric modifications could be made to the ALIP components with the goal of increasing the Lorentz forces acting on the liquid-metal fluid, which in turn would increase pumping capacity. The multi-physics model first calculates the Lorentz forces acting on the liquid metal fluid in the ALIP annulus. These forces are then used in a computational fluid dynamics simulation as (a) internal boundary conditions and (b) source functions in the momentum equations within the Navier-Stokes equations. The end result of the two-step analysis is a predicted pump pressure rise that can be compared with experimental data.
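The first step of the two-step analysis supplies Lorentz forces to the fluid solver; as a hedged sketch (the field values below are illustrative, not taken from the ALIP model), the volumetric body force is the cross product J × B:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Illustrative fields, not the ALIP model's: an azimuthal current density J (A/m^2)
# crossed with an axial flux density B (T) gives a radial body force (N/m^3).
J = (0.0, 1.0e5, 0.0)
B = (0.0, 0.0, 0.5)
F = cross(J, B)   # (50000.0, 0.0, 0.0)
```

In the two-step scheme described above, forces of this kind would be passed to the CFD stage as momentum source terms.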
A novel medical image data-based multi-physics simulation platform for computational life sciences.
Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels
2013-04-06
Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models.
A novel medical image data-based multi-physics simulation platform for computational life sciences
Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels
2013-01-01
Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models. PMID:24427518
Application of boundary integral equations to elastoplastic problems
NASA Technical Reports Server (NTRS)
Mendelson, A.; Albers, L. U.
1975-01-01
The application of boundary integral equations to elastoplastic problems is reviewed. Details of the analysis as applied to torsion problems and to plane problems is discussed. Results are presented for the elastoplastic torsion of a square cross section bar and for the plane problem of notched beams. A comparison of different formulations as well as comparisons with experimental results are presented.
Applications of artificial intelligence to engineering problems
Adey, R.A.; Sriram, D.
1987-01-01
The conference covered general sessions on AI techniques suitable for engineering applications, e.g. knowledge representation, natural language, probability, design methodologies and constraints. This was followed by sessions covering application in mechanical engineering, civil engineering, electrical engineering, and general engineering. Further sessions covered robotics and tools and techniques for building knowledge based systems.
Design and Application of Learning Environments Based on Integrative Problems
ERIC Educational Resources Information Center
Sanchez, Ivan; Neriz, Liliana; Ramis, Francisco
2008-01-01
This work reports on the results obtained from the application of learning environments on the basis of one integrative problem and a series of other smaller problems that limit the contents to be investigated and learned by the students. This methodology, which is a variation to traditional problem-based learning approaches, is here illustrated…
Applications of NASTRAN to nuclear problems
NASA Technical Reports Server (NTRS)
Spreeuw, E.
1972-01-01
The extent to which suitable solutions may be obtained for one physics problem and two engineering type problems is traced. NASTRAN appears to be a practical tool to solve one-group steady-state neutron diffusion equations. Transient diffusion analysis may be performed after new levels that allow time-dependent temperature calculations are developed. NASTRAN piecewise linear analysis may be applied to solve those plasticity problems for which a smooth stress-strain curve can be used to describe the nonlinear material behavior. The accuracy decreases when sharp transitions in the stress-strain relations are involved. Improved NASTRAN usefulness will be obtained when nonlinear material capabilities are extended to axisymmetric elements and to include provisions for time-dependent material properties and creep analysis. Rigid formats 3 and 5 proved to be very convenient for the buckling and normal-mode analysis of a nuclear fuel element.
Data-driven prognosis: a multi-physics approach verified via balloon burst experiment
Chandra, Abhijit; Kar, Oliva
2015-01-01
A multi-physics formulation for data-driven prognosis (DDP) is developed. Unlike traditional predictive strategies that require controlled offline measurements or ‘training’ for determination of constitutive parameters to derive the transitional statistics, the proposed DDP algorithm relies solely on in situ measurements. It uses a deterministic mechanics framework, but the stochastic nature of the solution arises naturally from the underlying assumptions regarding the order of the conservation potential as well as the number of dimensions involved. The proposed DDP scheme is capable of predicting onset of instabilities. Because the need for offline testing (or training) is obviated, it can be easily implemented for systems where such a priori testing is difficult or even impossible to conduct. The prognosis capability is demonstrated here via a balloon burst experiment where the instability is predicted using only online visual observations. The DDP scheme never failed to predict the incipient failure, and no false-positives were issued. The DDP algorithm is applicable to other types of datasets. Time horizons of DDP predictions can be adjusted by using memory over different time windows. Thus, a big dataset can be parsed in time to make a range of predictions over varying time horizons. PMID:27547071
Data-driven prognosis: a multi-physics approach verified via balloon burst experiment.
Chandra, Abhijit; Kar, Oliva
2015-04-08
A multi-physics formulation for data-driven prognosis (DDP) is developed. Unlike traditional predictive strategies that require controlled offline measurements or 'training' for determination of constitutive parameters to derive the transitional statistics, the proposed DDP algorithm relies solely on in situ measurements. It uses a deterministic mechanics framework, but the stochastic nature of the solution arises naturally from the underlying assumptions regarding the order of the conservation potential as well as the number of dimensions involved. The proposed DDP scheme is capable of predicting onset of instabilities. Because the need for offline testing (or training) is obviated, it can be easily implemented for systems where such a priori testing is difficult or even impossible to conduct. The prognosis capability is demonstrated here via a balloon burst experiment where the instability is predicted using only online visual observations. The DDP scheme never failed to predict the incipient failure, and no false-positives were issued. The DDP algorithm is applicable to other types of datasets. Time horizons of DDP predictions can be adjusted by using memory over different time windows. Thus, a big dataset can be parsed in time to make a range of predictions over varying time horizons.
The Atmospheric Sciences: Problems and Applications.
ERIC Educational Resources Information Center
National Academy of Sciences - National Research Council, Washington, DC. Committee on Atmospheric Sciences.
Over the years, the Committee on Atmospheric Sciences of the National Research Council has published a number of scientific and technical reports dealing with many aspects of the atmospheric sciences. This publication is an attempt to present to a broad audience this information about problems and research in the atmospheric sciences. Chapters…
Linear Programming and Its Application to Pattern Recognition Problems
NASA Technical Reports Server (NTRS)
Omalley, M. J.
1973-01-01
Linear programming and linear programming like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.
Quantum game application to spectrum scarcity problems
NASA Astrophysics Data System (ADS)
Zabaleta, O. G.; Barrangú, J. P.; Arizmendi, C. M.
2017-01-01
Recent spectrum-sharing research has produced a strategy to address spectrum scarcity problems. This novel idea, named cognitive radio, considers that secondary users can opportunistically exploit spectrum holes left temporarily unused by primary users. This presents a competitive scenario among cognitive users, making it suitable for game theory treatment. In this work, we show that the spectrum-sharing benefits of cognitive radio can be increased by designing a medium access control based on quantum game theory. In this context, we propose a model to manage spectrum fairly and effectively, based on a multiple-users multiple-choice quantum minority game. By taking advantage of quantum entanglement and quantum interference, it is possible to reduce the probability of collision problems commonly associated with classic algorithms. Collision avoidance is an essential property for classic and quantum communications systems. In our model, two different scenarios are considered, to meet the requirements of different user strategies. The first considers sensor networks where the rational use of energy is a cornerstone; the second focuses on installations where the quality of service of the entire network is a priority.
Fractal applications to complex crustal problems
NASA Technical Reports Server (NTRS)
Turcotte, Donald L.
1989-01-01
Complex scale-invariant problems obey fractal statistics. The basic definition of a fractal distribution is that the number of objects with a characteristic linear dimension greater than r satisfies the relation N ∼ r^(−D), where D is the fractal dimension. Fragmentation often satisfies this relation. The distribution of earthquakes satisfies this relation. The classic relationship between the length of a rocky coast line and the step length can be derived from this relation. Power law relations for spectra can also be related to fractal dimensions. Topography and gravity are examples. Spectral techniques can be used to obtain maps of fractal dimension and roughness amplitude. These provide a quantitative measure of texture analysis. It is argued that the distribution of stress and strength in a complex crustal region, such as the Alps, is fractal. Based on this assumption, the observed frequency-magnitude relation for the seismicity in the region can be derived.
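As an illustrative sketch (the synthetic data and function name are ours, not the report's), the fractal dimension D can be recovered from a cumulative size distribution by a log-log least-squares fit:

```python
import math

def estimate_fractal_dimension(radii, counts):
    """Least-squares slope of log N(>r) vs. log r; D is the negated slope."""
    xs = [math.log(r) for r in radii]
    ys = [math.log(n) for n in counts]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return -slope

# Synthetic cumulative distribution obeying N(>r) = 1000 * r^(-1.8)
radii = [1.0, 2.0, 4.0, 8.0, 16.0]
counts = [1000.0 * r ** -1.8 for r in radii]
D = estimate_fractal_dimension(radii, counts)  # recovers D ≈ 1.8
```

With noisy field data (earthquake catalogs, fragment counts) the same fit is applied, and the scatter about the line indicates how well the fractal model holds.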
Application of energy stability theory to problems in crystal growth
NASA Technical Reports Server (NTRS)
Neitzel, G. P.; Jankowski, D. F.
1990-01-01
The use of energy stability theory to study problems in crystal growth is outlined and justified in terms of convection mechanisms. An application to the float zone process of crystal growth is given as an illustration.
Application of the Discontinuous Galerkin Method to Acoustic Scatter Problems
NASA Technical Reports Server (NTRS)
Atkins, H. L.
1997-01-01
The application of the quadrature-free form of the discontinuous Galerkin method to two problems from Category 1 of the Second Computational Aeroacoustics Workshop on Benchmark problems is presented. The method and boundary conditions relevant to this work are described followed by two test problems, both of which involve the scattering of an acoustic wave off a cylinder. The numerical test performed to evaluate mesh-resolution requirements and boundary-condition effectiveness are also described.
Applications of Genetic Methods to NASA Design and Operations Problems
NASA Technical Reports Server (NTRS)
Laird, Philip D.
1996-01-01
We review four recent NASA-funded applications in which evolutionary/genetic methods are important. In the process we survey: the kinds of problems being solved today with these methods; techniques and tools used; problems encountered; and areas where research is needed. The presentation slides are annotated briefly at the top of each page.
Analytic semigroups: Applications to inverse problems for flexible structures
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rebnord, D. A.
1990-01-01
Convergence and stability results for least squares inverse problems involving systems described by analytic semigroups are presented. The practical importance of these results is demonstrated by application to several examples from problems of estimation of material parameters in flexible structures using accelerometer data.
AI techniques for a space application scheduling problem
NASA Technical Reports Server (NTRS)
Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.
1991-01-01
Scheduling is a very complex optimization problem which can be categorized as an NP-complete problem. NP-complete problems are quite diverse, as are the algorithms used in searching for an optimal solution. In most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal solutions. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. Some of the factors which make space application scheduling problems difficult are examined, and a fairly new AI-based technique called tabu search is presented as applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment which produces minimum impact on the other instruments and maximizes target observation times. The SOLSTICE instrument will fly on board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the Earth Observing System (Eos).
Harrison, Cyrus; Larsen, Matt; Brugger, Eric
2016-12-05
Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.
Application of computational aero-acoustics to real world problems
NASA Technical Reports Server (NTRS)
Hardin, Jay C.
1996-01-01
The application of computational aeroacoustics (CAA) to real problems is discussed in relation to the analysis performed with the aim of assessing the application of the various techniques. It is considered that the applications are limited by the inability of the computational resources to resolve the large range of scales involved in high Reynolds number flows. Possible simplifications are discussed. It is considered that problems remain to be solved in relation to the efficient use of the power of parallel computers and in the development of turbulent modeling schemes. The goal of CAA is stated as being the implementation of acoustic design studies on a computer terminal with reasonable run times.
A coupled multi-physics modeling framework for induced seismicity
NASA Astrophysics Data System (ADS)
Karra, S.; Dempsey, D. E.
2015-12-01
There is compelling evidence that moderate-magnitude seismicity in the central and eastern US is on the rise. Many of these earthquakes are attributable to anthropogenic injection of fluids into deep formations, resulting in incidents where state regulators have even intervened. Earthquakes occur when a high-pressure fluid (water or CO2) enters a fault, reducing its resistance to shear failure and causing runaway sliding. However, induced seismicity does not manifest as a solitary event, but rather as a sequence of earthquakes evolving in time and space. Additionally, one needs to consider the changes in permeability due to slip within a fault and the subsequent effects on fluid transport and pressure build-up. A modeling framework that addresses the complex two-way coupling between seismicity and fluid flow is thus needed. In this work, a new parallel physics-based coupled framework for induced seismicity that couples slip in faults and fluid flow is presented. The framework couples the highly parallel subsurface flow code PFLOTRAN (www.pflotran.org) and a fast Fourier transform based earthquake simulator QK3. Stresses in the fault are evaluated using Biot's formulation in PFLOTRAN and are used to calculate slip in QK3. Permeability is updated based on the slip in the fault, which in turn influences flow. Application of the framework to synthetic examples and datasets from Colorado and Oklahoma will also be discussed.
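A minimal sketch of the two-way coupling logic described above (the Coulomb-friction criterion and the slip-permeability multiplier are our own illustrative simplifications, not the PFLOTRAN/QK3 implementation):

```python
def coupled_step(pressure, normal_stress, shear_stress, friction, perm,
                 slip_perm_factor=2.0):
    """One explicit coupling step: fluid pressure lowers the effective normal
    stress (Terzaghi); if shear stress exceeds the Coulomb frictional strength
    the fault slips, and slip multiplies the permeability, feeding back on flow."""
    effective_normal = normal_stress - pressure
    strength = friction * effective_normal
    slipped = shear_stress > strength
    if slipped:
        perm *= slip_perm_factor   # slip-enhanced permeability
    return slipped, perm

# Illustrative values (stresses in MPa, permeability in m^2):
# injection raises pore pressure enough to trigger slip.
slipped, new_perm = coupled_step(pressure=5.0, normal_stress=20.0,
                                 shear_stress=10.0, friction=0.6, perm=1e-15)
```

In a full simulator each such step would alternate with a flow solve, so the updated permeability changes the next pressure field.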
Development of High-Order Methods for Multi-Physics Problems Governed by Hyperbolic Equations
2010-10-01
The governing equations are written in terms of the conservative variable state vector $U = (\rho,\; \rho u,\; \rho v,\; \rho E)^{T}$, where $F(U)$ is the inviscid flux tensor with vector components $f = \left(\rho u,\; \rho u^{2} + p,\; \rho u v,\; (\rho E + p)u\right)^{T}$ and $g = \left(\rho v,\; \rho u v,\; \rho v^{2} + p,\; (\rho E + p)v\right)^{T}$. The specific energy $E$ is the sum of the specific internal energy $e$ and the kinetic energy, with the constitutive relations $e = C_{V} T$ and $p = (\gamma - 1)\left[\rho E - \tfrac{\rho}{2}\left(u^{2} + v^{2}\right)\right]$. 0.3 Discretization method: the governing equations of fluid motion, given
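As a hedged numerical sketch of these conservative-variable relations (γ = 1.4 is our own choice of specific-heat ratio, not stated in the excerpt):

```python
def pressure(rho, rho_u, rho_v, rho_E, gamma=1.4):
    """p = (gamma - 1) * [rho*E - (rho/2)*(u^2 + v^2)]."""
    u, v = rho_u / rho, rho_v / rho
    return (gamma - 1.0) * (rho_E - 0.5 * rho * (u * u + v * v))

def inviscid_fluxes(rho, rho_u, rho_v, rho_E, gamma=1.4):
    """x- and y-direction inviscid flux vectors f(U) and g(U) of the 2D Euler equations."""
    u, v = rho_u / rho, rho_v / rho
    p = pressure(rho, rho_u, rho_v, rho_E, gamma)
    f = (rho_u, rho_u * u + p, rho_u * v, (rho_E + p) * u)
    g = (rho_v, rho_u * v, rho_v * v + p, (rho_E + p) * v)
    return f, g

# Fluid at rest with rho = 1 and rho*E = 2.5 has p = 0.4 * 2.5 = 1.0,
# and the only nonzero flux entries are the pressure terms.
p0 = pressure(1.0, 0.0, 0.0, 2.5)
```

A high-order scheme would evaluate these fluxes at quadrature points of each element; the algebra above is the same regardless of the discretization.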
Research on TRIZ and CAIs Application Problems for Technology Innovation
NASA Astrophysics Data System (ADS)
Li, Xiangdong; Li, Qinghai; Bai, Zhonghang; Geng, Lixiao
In order to apply the theory of inventive problem solving (TRIZ) and computer-aided innovation software (CAIs), some key problems need to be solved, such as the choice of technology innovation mode, the establishment of a technology innovation organization network (TION), and the realization of an innovation process based on TRIZ and CAIs. This paper presents the demands for TRIZ and CAIs according to the characteristics and existing problems of manufacturing enterprises. It explains that manufacturing enterprises need to set up an open, enterprise-led TION and pursue cooperative innovation with institutions of higher learning. A process of technology innovation based on TRIZ and CAIs is set up from a research and development point of view. The application of TRIZ and CAIs in FY Company is summarized, and its effect is illustrated by the technology innovation of the company's close goggle valve product.
An application of the matching law to severe problem behavior.
Borrero, John C; Vollmer, Timothy R
2002-01-01
We evaluated problem behavior and appropriate behavior using the matching equation with 4 individuals with developmental disabilities. Descriptive observations were conducted during interactions between the participants and their primary care providers in either a clinical laboratory environment (3 participants) or the participant's home (1 participant). Data were recorded on potential reinforcers, problem behavior, and appropriate behavior. After identifying the reinforcers that maintained each participant's problem behavior by way of functional analysis, the descriptive data were analyzed retrospectively, based on the matching equation. Results showed that the proportional rate of problem behavior relative to appropriate behavior approximately matched the proportional rate of reinforcement for problem behavior for all participants. The results extend prior research because a functional analysis was conducted and because multiple sources of reinforcement (other than attention) were evaluated. Methodological constraints were identified, which may limit the application of the matching law on both practical and conceptual levels. PMID:11936543
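The matching equation applied above, in its proportional form B₁/(B₁+B₂) = R₁/(R₁+R₂), can be sketched as follows (the rates below are illustrative, not the participants' data):

```python
def proportion(a, b):
    """Proportional rate a / (a + b)."""
    return a / (a + b)

# Illustrative session totals: responses and reinforcers delivered
# for problem behavior vs. appropriate behavior.
problem_rate, appropriate_rate = 12.0, 4.0
reinforced_problem, reinforced_appropriate = 9.0, 3.0

behavior_share = proportion(problem_rate, appropriate_rate)                    # 0.75
reinforcement_share = proportion(reinforced_problem, reinforced_appropriate)   # 0.75
# Matching: the proportional rate of problem behavior approximately
# tracks the proportional rate of reinforcement for problem behavior.
```

In the retrospective analysis, close agreement between the two proportions is what the matching law predicts.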
Problem solving in magnetic field: Animation in mobile application
NASA Astrophysics Data System (ADS)
Najib, A. S. M.; Othman, A. P.; Ibarahim, Z.
2014-09-01
This paper is focused on the development of a mobile application for smart phones and tablets (Android, iPhone, and iPad) as a problem-solving tool in magnetic field topics. The mobile application design consists of animations created with the Flash 8 software, which were imported and compiled into a prezi.com slide. The Prezi slide was then duplicated in PowerPoint format, and in addition a question bank with a complete answer scheme was generated as a menu in the application. The published mobile application can be viewed and downloaded at the Infinite Monkey website or from the Google Play Store. Statistics from the Google Play Developer Console show the high impact of the application's usage all over the world.
Innovative applications of genetic algorithms to problems in accelerator physics
NASA Astrophysics Data System (ADS)
Hofler, Alicia; Terzić, Balša; Kramer, Matthew; Zvezdin, Anton; Morozov, Vasiliy; Roblin, Yves; Lin, Fanglei; Jarvis, Colin
2013-01-01
The genetic algorithm (GA) is a powerful technique that implements the principles nature uses in biological evolution to optimize a multidimensional nonlinear problem. The GA works especially well for problems with a large number of local extrema, where traditional methods (such as conjugate gradient, steepest descent, and others) fail or, at best, underperform. The field of accelerator physics, among others, abounds with problems which lend themselves to optimization via GAs. In this paper, we report on the successful application of GAs in several problems related to the existing Continuous Electron Beam Accelerator Facility nuclear physics machine, the proposed Medium-energy Electron-Ion Collider at Jefferson Lab, and a radio frequency gun-based injector. These encouraging results are a step forward in optimizing accelerator design and provide an impetus for application of GAs to other problems in the field. To that end, we discuss the details of the GAs used, include a newly devised enhancement which leads to improved convergence to the optimum, and make recommendations for future GA developments and accelerator applications.
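A minimal sketch of the kind of GA loop the abstract describes (the test function, operators, and parameters are our own illustrative choices, not those used in the accelerator studies):

```python
import math
import random

def ga_minimize(f, lo, hi, pop_size=40, generations=120, mut_sigma=0.1, seed=1):
    """Toy real-valued GA: tournament selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if f(a) < f(b) else b

    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            w = rng.random()
            child = w * p1 + (1.0 - w) * p2           # blend crossover
            child += rng.gauss(0.0, mut_sigma)        # Gaussian mutation
            children.append(min(max(child, lo), hi))  # clip to bounds
        pop = children
    return min(pop, key=f)

# Multimodal test function with many local minima; global minimum at x = 0.
best = ga_minimize(lambda x: x * x + 2.0 * (1.0 - math.cos(5.0 * x)), -10.0, 10.0)
```

Because selection acts only on function values, the same loop applies whether the objective is a simple formula or an expensive beam-dynamics simulation.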
The Application of Acceptance and Commitment Therapy to Problem Anger
ERIC Educational Resources Information Center
Eifert, Georg H.; Forsyth, John P.
2011-01-01
The goal of this paper is to familiarize clinicians with the use of Acceptance and Commitment Therapy (ACT) for problem anger by describing the application of ACT to a case of a 45-year-old man struggling with anger. ACT is an approach and set of intervention technologies that support acceptance and mindfulness processes linked with commitment and…
An Application of Calculus: Optimum Parabolic Path Problem
ERIC Educational Resources Information Center
Atasever, Merve; Pakdemirli, Mehmet; Yurtsever, Hasan Ali
2009-01-01
A practical and technological application of calculus problem is posed to motivate freshman students or junior high school students. A variable coefficient of friction is used in modelling air friction. The case in which the coefficient of friction is a decreasing function of altitude is considered. The optimum parabolic path for a flying object…
Conceptions of Efficiency: Applications in Learning and Problem Solving
ERIC Educational Resources Information Center
Hoffman, Bobby; Schraw, Gregory
2010-01-01
The purpose of this article is to clarify conceptions, definitions, and applications of learning and problem-solving efficiency. Conceptions of efficiency vary within the field of educational psychology, and there is little consensus as to how to define, measure, and interpret the efficiency construct. We compare three diverse models that differ…
NASA Astrophysics Data System (ADS)
Docktor, Jennifer L.; Dornfeld, Jay; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Jackson, Koblar Alan; Mason, Andrew; Ryan, Qing X.; Yang, Jie
2016-06-01
Problem solving is a complex process valuable in everyday life and crucial for learning in the STEM fields. To support the development of problem-solving skills it is important for researchers and curriculum developers to have practical tools that can measure the difference between novice and expert problem-solving performance in authentic classroom work. It is also useful if such tools can be employed by instructors to guide their pedagogy. We describe the design, development, and testing of a simple rubric to assess written solutions to problems given in undergraduate introductory physics courses. In particular, we present evidence for the validity, reliability, and utility of the instrument. The rubric identifies five general problem-solving processes and defines the criteria to attain a score in each: organizing problem information into a Useful Description, selecting appropriate principles (Physics Approach), applying those principles to the specific conditions in the problem (Specific Application of Physics), using Mathematical Procedures appropriately, and displaying evidence of an organized reasoning pattern (Logical Progression).
Fifth international conference on hyperbolic problems -- theory, numerics, applications: Abstracts
1994-12-31
The conference demonstrated that hyperbolic problems and conservation laws play an important role in many areas, including industrial applications and the study of elasto-plastic materials. Among the various topics covered in the conference, the authors mention: the big bang theory, general relativity, critical phenomena, deformation and fracture of solids, shock wave interactions, numerical simulation in three dimensions, the level set method, the multidimensional Riemann problem, application of front tracking in petroleum reservoir simulations, global solution of the Navier-Stokes equations in high dimensions, recent progress in granular flow, and the study of elastic-plastic materials. The authors believe that the new ideas, tools, methods, problems, theoretical results, numerical solutions and computational algorithms presented or discussed at the conference will benefit the participants in their current and future research.
Application of tabu search to deterministic and stochastic optimization problems
NASA Astrophysics Data System (ADS)
Gurtuna, Ozgur
During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made on the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these two fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, the Tabu Search Monte Carlo (TSMC) method, is developed.
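A minimal sketch of the tabu search heuristic described above, applied to a toy 0/1 knapsack instance (the instance, tenure, and aspiration rule are our own illustrative choices):

```python
def tabu_knapsack(values, weights, capacity, iters=200, tenure=5):
    """Tabu search with a flip-one-item neighborhood and an aspiration rule:
    a tabu move is allowed only if it improves on the best solution found."""
    n = len(values)
    x = [0] * n                            # start from the empty knapsack

    def score(sol):
        w = sum(wi for wi, xi in zip(weights, sol) if xi)
        return -1 if w > capacity else sum(vi for vi, xi in zip(values, sol) if xi)

    best, best_score = x[:], score(x)
    tabu = {}                              # item index -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] = 1 - y[i]
            s = score(y)
            if tabu.get(i, -1) >= it and s <= best_score:
                continue                   # tabu and not aspirational: skip
            candidates.append((s, i, y))
        if not candidates:
            continue                       # every move is tabu this iteration
        s, i, x = max(candidates)          # best admissible neighbor, even if worse
        tabu[i] = it + tenure
        if s > best_score:
            best, best_score = x[:], s
    return best, best_score

# Toy instance: the optimum picks items 0 and 1 (weight 9 <= 10, value 17).
solution, value = tabu_knapsack([10, 7, 4, 9], [5, 4, 3, 6], capacity=10)
```

Accepting the best admissible neighbor even when it is worse than the current solution is what lets tabu search climb out of local optima, and the short-term memory (the tabu list) stops it from immediately undoing that move.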
[Problems and countermeasures in the application of constructed wetlands].
Huang, Jin-Lou; Chen, Qin; Xu, Lian-Huang
2013-01-01
Constructed wetlands as a wastewater eco-treatment technology have developed in recent decades. They combine sewage treatment with the eco-environment in an efficient way: they treat sewage effectively while beautifying the environment, creating ecological landscapes, and bringing environmental and economic benefits. The unique advantages of constructed wetlands have attracted intensive attention since their development. Constructed wetlands are widely used in the treatment of domestic sewage, industrial wastewater, and wastewater from mining and petroleum production. However, many problems are found in the practical application of constructed wetlands, e.g. they are vulnerable to changes in climatic conditions and temperature, their substrates are easily saturated and plugged, they are readily affected by plant species, they often occupy large areas, and there are other problems including irrational management, non-standard design, and a single ecological-service function. These problems to a certain extent reduce the efficiency of constructed wetlands in wastewater treatment, shorten their service life, and hinder their application. The review presents a correlation analysis and countermeasures for these problems, in order to improve the efficiency of constructed wetlands in wastewater treatment and to provide a reference for their application and promotion.
Application of boundary integral equations to elastoplastic problems
NASA Technical Reports Server (NTRS)
Mendelson, A.; Albers, L. U.
1975-01-01
The application of the boundary integral equation method (BIE) to the elastoplastic torsion problem is considered. It is found that the BIE is very suitable for the elastoplastic analysis of the torsion of prismatic bars. A comparison of the BIE with the finite difference method shows savings for the BIE concerning the number of unknowns which have to be determined and also a much faster convergence rate. Attention is given to the problem of an edge-notched beam in pure bending, taking into account a biharmonic formulation and a displacement formulation.
Application of essentially nonoscillatory methods to aeroacoustic flow problems
NASA Technical Reports Server (NTRS)
Atkins, Harold L.
1995-01-01
A finite-difference essentially nonoscillatory (ENO) method has been applied to several of the problems prescribed for the workshop sponsored jointly by the Institute for Computer Applications in Science and Engineering and by NASA Langley Research Center entitled 'Benchmark Problems in Computational Aeroacoustics'. The workshop focused on computational challenges specific to aeroacoustics. Among these are long-distance propagation of a short-wavelength disturbance, propagation of small-amplitude disturbances, and nonreflective boundary conditions. The shock-capturing capability inherent in the ENO method effectively eliminates oscillations near shock waves without the need to add and tune dissipation or filter terms. The method-of-lines approach allows the temporal and spatial operators to be chosen separately in accordance with the demands of a particular problem. The ENO method was robust and accurate for all problems in which the propagating wave was resolved with 8 or more points per wavelength. The finite-wave-model boundary condition, a local nonlinear acoustic boundary condition, performed well for the one-dimensional problems. The buffer-domain approach performed well for the two-dimensional test problem. The amplitudes of nonphysical reflections were less than 1 percent of the exiting wave's amplitude.
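The adaptive-stencil idea at the heart of ENO schemes can be illustrated in a few lines: at each interface the scheme picks the candidate stencil with the smaller divided difference, so the reconstruction never reaches across a discontinuity. This is an illustrative point-value sketch on invented step data, not the workshop's finite-difference ENO code.

```python
def eno2_interface(u, i):
    """Second-order ENO value at interface i+1/2: choose between the
    candidate stencils {i-1, i} and {i, i+1} by picking the smoother
    (smaller-magnitude) difference."""
    d_left = u[i] - u[i - 1]
    d_right = u[i + 1] - u[i]
    d = d_left if abs(d_left) <= abs(d_right) else d_right
    return u[i] + 0.5 * d

def fixed2_interface(u, i):
    """Fixed one-sided second-order stencil (no adaptivity), for contrast."""
    return u[i] + 0.5 * (u[i] - u[i - 1])

u = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # discrete step (shock-like data)
eno_vals = [eno2_interface(u, i) for i in range(1, 5)]
fixed_vals = [fixed2_interface(u, i) for i in range(1, 5)]
```

The ENO values stay inside the data range [0, 1], while the fixed stencil overshoots to 1.5 just downstream of the jump — the oscillation the adaptive choice is designed to suppress.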
Potentials and problems in space applications of smart structures technology
NASA Astrophysics Data System (ADS)
Eaton, D. C.; Bashford, D. P.
1994-09-01
The well known adage 'don't run before you can walk' applies to emerging materials. It typically takes ten years before a material is sufficiently well characterized for commercial aerospace application. Much has to be learnt not only about the material properties and their susceptibility to the effects of the working environment, but also about the manufacturing process and the most effective configuration for each application. No project will accept a product without proven reliability and attractive cost effectiveness in its application. The writers firmly believe that smart structures and their related technologies must follow a similar development pattern. Indeed, faced with a range of interdisciplinary problems, it seems likely that 'partially smart' techniques may well be the first applications. These will place emphasis on the more readily realizable features for any structural application. Prior use may well have been achieved in other engineering sectors. Because ground based applications are more readily accessible to check and maintain, these are generally the front runners of smart technology usage. Nevertheless, there is a strong potential for the use of smart techniques in space applications if their capabilities can be advantageously introduced when compared with traditional solutions. This paper endeavors to give a critical appraisal of the possibilities and the accompanying problems. A sample overview of related developing space technology is included. The reader is also referred to chapters 90 to 94 in ESA's Structural Materials Handbook (ESA PSS 03 203, issue 1). It is envisaged that future space applications may include the realization and maintenance of large deployable reflector profiles, the dimensional stability of optical payloads, active noise and vibration control, and in-orbit health monitoring and control for largely unmanned spacecraft. The possibility of monitoring the health of items such as large cryogenic fuel tanks is a typical longer
Progress on PRONGHORN Application to NGNP Related Problems
Dana A. Knoll
2009-08-01
We are developing a multiphysics simulation tool for Very High-Temperature Gas-cooled Reactors (VHTR). The simulation tool, PRONGHORN, takes advantage of the Multiphysics Object-Oriented Simulation library, and is capable of solving multidimensional thermal-fluid and neutronics problems implicitly in parallel. Expensive Jacobian matrix formation is alleviated by the Jacobian-free Newton-Krylov method, and physics-based preconditioning is applied to improve the convergence. The initial development of PRONGHORN has been focused on the pebble bed core concept. However, extensions required to simulate prismatic cores are underway. In this progress report we highlight progress on application of PRONGHORN to PBMR400 benchmark problems, extension and application of PRONGHORN to prismatic core reactors, and progress on simulations of 3-D transients.
Application of the boundary integral method to immiscible displacement problems
Masukawa, J.; Horne, R.N.
1988-08-01
This paper presents an application of the boundary integral method (BIM) to fluid displacement problems to demonstrate its usefulness in reservoir simulation. A method for solving two-dimensional (2D), piston-like displacement for incompressible fluids with good accuracy has been developed. Several typical example problems with repeated five-spot patterns were solved for various mobility ratios. The solutions were compared with the analytical solutions to demonstrate accuracy. Singularity programming was found to be a major advantage in handling flow in the vicinity of wells. The BIM was found to be an excellent way to solve immiscible displacement problems. Unlike analytic methods, it can accommodate complex boundary shapes and does not suffer from numerical dispersion at the front.
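The singularity-programming idea mentioned above — representing each well analytically by a logarithmic line-source potential rather than resolving it on a grid — can be sketched as follows. This is an illustrative sketch for a single isolated injector/producer pair with unit strengths and unit mobility; the paper's repeated five-spot pattern would add image wells.

```python
import math

def well_potential(x, y, wells):
    """Superpose logarithmic line-source potentials: phi = sum q/(2*pi) * ln r.
    Wells are (xw, yw, q) with q > 0 for injection, q < 0 for production."""
    phi = 0.0
    for xw, yw, q in wells:
        r = math.hypot(x - xw, y - yw)
        phi += q / (2.0 * math.pi) * math.log(r)
    return phi

# One injector/producer pair (an isolated element of a pattern, no image wells).
wells = [(0.0, 0.0, +1.0), (1.0, 0.0, -1.0)]

# Potential sampled along the line joining the wells (avoiding the singular points).
xs = [0.05 * k for k in range(1, 20)]
phis = [well_potential(x, 0.0, wells) for x in xs]
```

Because the near-well behavior is carried by the analytic `ln r` terms, the field is exact arbitrarily close to each well — the advantage the paper reports over grid-based treatments of well singularities.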
SIAM conference on inverse problems: Geophysical applications. Final technical report
1995-12-31
This conference was the second in a series devoted to a particular area of inverse problems. The theme of this series is to discuss problems of major scientific importance in a specific area from a mathematical perspective. The theme of this symposium was geophysical applications. In putting together the program we tried to include a wide range of mathematical scientists and to interpret geophysics in as broad a sense as possible. Our speakers came from industry, government laboratories, and diverse departments in academia. We managed to attract a geographically diverse audience with participation from five continents. There were talks devoted to seismology, hydrology, and determination of the earth's interior on a global scale, as well as oceanographic and atmospheric inverse problems.
Application of simulated annealing to some seismic problems
NASA Astrophysics Data System (ADS)
Velis, Danilo Ruben
Wavelet estimation, ray tracing, and traveltime inversion are fundamental problems in seismic exploration. They can ultimately be reduced to minimizing a highly nonlinear cost function with respect to a certain set of unknown parameters. I use simulated annealing (SA) to avoid the local minima and inaccurate solutions that often arise from the use of linearized methods. I illustrate all applications using numerical and/or real data examples. The first application concerns the 4th-order cumulant matching (CM) method for wavelet estimation. Here the reliability of the derived wavelets depends strongly on the amount of data. Tapering the trace cumulant estimate significantly reduces this dependency and allows for a trace-by-trace implementation. For this purpose, a hybrid strategy that combines SA and gradient-based techniques provides efficiency and accuracy. In the second application I present SART (SA ray tracing), a novel method for solving the two-point ray tracing problem. SART overcomes some well known difficulties in standard methods, such as the selection of new take-off angles and the multipathing problem. SA finds the take-off angles so that the total traveltime between the endpoints is a global minimum. SART is suitable for tracing direct, reflected, and head waves through complex 2-D and 3-D media. I also develop a versatile model representation in terms of a number of regions delimited by curved interfaces. Traveltime tomography is the third SA application. I parameterize the subsurface geology by using adaptive-grid bicubic B-splines for smooth models, or parametric 2-D functions for anomaly bodies. The second approach may find application in archaeological and other near-surface studies. The nonlinear inversion process attempts to minimize the rms error between observed and predicted traveltimes.
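The core SA loop common to these applications — random perturbations, a Metropolis acceptance test that occasionally accepts uphill moves, and a cooling schedule — can be sketched on a toy multimodal misfit. The double-well cost function, step size, and cooling rate below are illustrative stand-ins, not the dissertation's seismic cost functions.

```python
import math, random

def cost(x):
    """Toy multimodal 1-D misfit: a tilted double well whose global
    minimum lies near x = -2 (a stand-in for a seismic cost function)."""
    return (x * x - 4.0) ** 2 + x

def simulated_annealing(x0, seed=3, iters=5000, t0=10.0, cooling=0.999):
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best_x, best_f = x, fx
    temp = t0
    for _ in range(iters):
        y = x + rng.gauss(0.0, 0.5)          # random perturbation
        fy = cost(y)
        # Metropolis rule: always accept improvements; accept uphill moves
        # with probability exp(-(fy - fx)/T) so the search can escape
        # local minima while T is high.
        if fy < fx or rng.random() < math.exp((fx - fy) / temp):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        temp *= cooling                      # geometric cooling schedule
    return best_x, best_f

best_x, best_f = simulated_annealing(x0=3.0)
```

The hybrid strategy mentioned in the abstract would follow this global search with a local gradient-based polish of `best_x`.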
Space Life Support Technology Applications to Terrestrial Environmental Problems
NASA Technical Reports Server (NTRS)
Schwartzkopf, Steven H.; Sleeper, Howard L.
1993-01-01
Many of the problems now facing the human race on Earth are, in fact, life support issues. Decline of air quality as a result of industrial and automotive emissions, pollution of ground water by organic pesticides or solvents, and the disposal of solid wastes are all examples of environmental problems that we must solve to sustain human life. The technologies currently under development to solve the problems of supporting human life for advanced space missions are extraordinarily synergistic with these environmental problems. The development of these technologies (including both physicochemical and bioregenerative types) is increasingly focused on closing the life support loop by removing and recycling contaminants and wastes to produce the materials necessary to sustain human life. By so doing, this technology development effort also focuses automatically on reducing resupply logistics requirements and increasing crew safety through increased self-sufficiency. This paper describes several technologies that have been developed to support human life in space and illustrates the applicability of these technologies to environmental problems including environmental remediation and pollution prevention.
Advanced computations of multi-physics, multi-scale effects in beam dynamics
Amundson, J.F.; Macridin, A.; Spentzouris, P.; Stern, E.G.; /Fermilab
2009-01-01
Current state-of-the-art beam dynamics simulations include multiple physical effects and multiple physical length and/or time scales. We present recent developments in Synergia2, an accelerator modeling framework designed for multi-physics, multi-scale simulations. We summarize several recent results in multi-physics beam dynamics, including simulations of three Fermilab accelerators: the Tevatron, the Main Injector and the Debuncher. Early accelerator simulations focused on single-particle dynamics. To a first approximation, the forces on the particles in an accelerator beam are dominated by the external fields due to magnets, RF cavities, etc., so the single-particle dynamics are the leading physical effects. Detailed simulations of accelerators must include collective effects such as the space-charge repulsion of the beam particles, the effects of wake fields in the beam pipe walls and beam-beam interactions in colliders. These simulations require the sort of massively parallel computers that have only become available in recent times. We give an overview of the accelerator framework Synergia2, which was designed to take advantage of the capabilities of modern computational resources and enable simulations of multiple physical effects. We also summarize some recent results utilizing Synergia2 and BeamBeam3d, a tool specialized for beam-beam simulations.
A multi-physics study of Li-ion battery material Li1+xTi2O4
NASA Astrophysics Data System (ADS)
Jiang, Tonghu; Falk, Michael; Siva Shankar Rudraraju, Krishna; Garikipati, Krishna; van der Ven, Anton
2013-03-01
Recently, lithium ion batteries have been subject to intense scientific study due to growing demand arising from their utilization in portable electronics, electric vehicles and other applications. Most cathode materials in lithium ion batteries involve a two-phase process during charging and discharging, and the rate of these processes is typically limited by the slow interface mobility. We have modeled how lithium diffusion in the interface region affects the motion of the phase boundary. We have developed a multi-physics computational method suitable for predicting time evolution of the driven interface. In this method, we calculate formation energies and migration energy barriers by ab initio methods, which are then approximated by cluster expansions. Monte Carlo calculation is further employed to obtain thermodynamic and kinetic information, e.g., anisotropic interfacial energies, and mobilities, which are used to parameterize continuum modeling of the charging and discharging processes. We test this methodology on spinel Li1+xTi2O4. Elastic effects are incorporated into the calculations to determine the effect of variations in modulus and strain on stress concentrations and failure modes within the material. We acknowledge support by the National Science Foundation Cyber Discovery and Innovation Program under Award No. 1027765.
Overview: Applications of numerical optimization methods to helicopter design problems
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
There are a number of helicopter design problems that are well suited to applications of numerical design optimization techniques. Adequate implementation of this technology will provide high pay-offs. There are a number of numerical optimization programs available, and there are many excellent response/performance analysis programs developed or being developed. But integration of these programs in a form that is usable in the design phase should be recognized as important. It is also necessary to attract the attention of engineers engaged in the development of analysis capabilities and to make them aware that analysis capabilities are much more powerful if integrated into design oriented codes. Frequently, the shortcomings of analysis capabilities are revealed by coupling them with an optimization code. Most of the published work has addressed problems in preliminary system design, rotor system/blade design or airframe design. Very few published results were found in acoustics, aerodynamics and control system design. Currently major efforts are focused on vibration reduction, and aerodynamics/acoustics applications appear to be growing fast. The development of a computer program system to integrate the multiple disciplines required in helicopter design with numerical optimization technique is needed. Activities in Britain, Germany and Poland are identified, but no published results from France, Italy, the USSR or Japan were found.
On the Application of the Energy Method to Stability Problems
NASA Technical Reports Server (NTRS)
Marguerre, Karl
1947-01-01
Since stability problems have come into the field of vision of engineers, energy methods have proved to be one of the most powerful aids in mastering them. For finding the especially interesting critical loads, special procedures have evolved that depart somewhat from those customary in the usual elasticity theory. A clarification of the connections seemed desirable, especially with regard to the post-critical region, for the treatment of which these special methods are not suited as they stand. The present investigation discusses this complex of questions (made important by shell construction in aircraft) in the classical example of the Euler strut, because in this case, since the basic features are not hidden by difficulties of a mathematical nature, the problem is especially clear. The present treatment differs from that appearing in the Z.f.a.M.M. (1938) under the title "Über die Behandlung von Stabilitätsproblemen mit Hilfe der energetischen Methode" in that, in order to work out the basic ideas still more clearly, it dispenses with the investigation of behavior at large deflections and of the elastic foundation; in its place the present version gives an elaboration of the 6th section and (in its 7th and 8th sections) a new example that shows the applicability of the general criterion to a stability problem that differs from that of Euler in many respects.
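As a reminder of the classical result the report revisits, the energy criterion for the pinned-pinned Euler strut can be sketched with a one-term deflection ansatz (standard textbook derivation):

```latex
% One-term Rayleigh ansatz for the pinned-pinned Euler strut
w(x) = a \sin\frac{\pi x}{L}

% Strain energy of bending and work done by the end load P
U = \frac{EI}{2}\int_0^L (w'')^2\,dx = \frac{EI\,\pi^4 a^2}{4L^3},
\qquad
W = \frac{P}{2}\int_0^L (w')^2\,dx = \frac{P\,\pi^2 a^2}{4L}

% Neutral equilibrium (U = W) yields the critical load
P_{cr} = \frac{\pi^2 EI}{L^2}
```

Below $P_{cr}$ the straight configuration stores more bending energy than the load can supply ($U > W$ for any $a \neq 0$); at $P_{cr}$ the energy balance becomes indifferent, which is exactly the neutral-equilibrium criterion the report examines.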
Application of invariant integrals to elastostatic inverse problems
NASA Astrophysics Data System (ADS)
Goldstein, Robert; Shifrin, Efim; Shushpannikov, Pavel
2008-01-01
A problem of parameters identification for embedded defects in a linear elastic body using results of static tests is considered. A method, based on the use of invariant integrals is developed for solving this problem. A problem for the spherical inclusion parameters identification is considered as an example of the proposed approach application. It is shown that a radius, elastic moduli and coordinates of a spherical inclusion center are determined from one uniaxial tension (compression) test. The explicit formulae, expressing the spherical inclusion parameters by means of the values of corresponding invariant integrals are obtained. The values of the integrals can be calculated from the experimental data if both applied loads and displacements are measured on the surface of the body in the static test. A numerical analysis of the obtained explicit formulae is fulfilled. It is shown that the formulae give a good approximation of the spherical inclusion parameters even in the case when the inclusion is located close enough to the surface of the body. To cite this article: R. Goldstein et al., C. R. Mecanique 336 (2008).
The application of artificial intelligence to astronomical scheduling problems
NASA Technical Reports Server (NTRS)
Johnston, Mark D.
1992-01-01
Efficient utilization of expensive space- and ground-based observatories is an important goal for the astronomical community; the cost of modern observing facilities is enormous, and the available observing time is much less than the demand from astronomers around the world. The complexity and variety of scheduling constraints and goals has led several groups to investigate how artificial intelligence (AI) techniques might help solve these kinds of problems. The earliest and most successful of these projects was started at Space Telescope Science Institute in 1987 and has led to the development of the Spike scheduling system to support the scheduling of the Hubble Space Telescope (HST). The aim of Spike at STScI is to allocate observations on timescales of days to a week, observing all scheduling constraints and maximizing preferences that help ensure that observations are made at optimal times. Spike has been in use operationally for HST since shortly after the observatory was launched in April 1990. Although developed specifically for HST scheduling, Spike was carefully designed to provide a general framework for similar (activity-based) scheduling problems. In particular, the tasks to be scheduled are defined in the system in general terms, and no assumptions about the scheduling timescale are built in. The mechanisms for describing, combining, and propagating temporal and other constraints and preferences are quite general. The success of this approach has been demonstrated by the application of Spike to the scheduling of other satellite observatories: changes to the system are required only in the specific constraints that apply, and not in the framework itself. In particular, the Spike framework is sufficiently flexible to handle both long-term and short-term scheduling, on timescales of years down to minutes or less. This talk will discuss recent progress made in scheduling search techniques, the lessons learned from early HST operations, the application of Spike
Advanced Mesh-Enabled Monte Carlo Capability for Multi-Physics Reactor Analysis
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24
This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well
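The statistical-accuracy trade-off described above is easy to demonstrate on the simplest possible transport problem: uncollided transmission through a purely absorbing slab, where the tally's binomial standard error shrinks only as 1/sqrt(N). This is a toy sketch, unrelated to the project's toolkit; the cross section and slab thickness are invented.

```python
import math, random

def transmission(sigma_t, thickness, n, seed=1):
    """Estimate uncollided transmission through a purely absorbing slab by
    sampling exponential free paths s = -ln(xi)/sigma_t and counting the
    histories whose first flight exceeds the slab thickness."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if -math.log(rng.random()) / sigma_t > thickness)
    p = hits / n
    stderr = math.sqrt(p * (1.0 - p) / n)   # binomial statistical uncertainty
    return p, stderr

est, err = transmission(sigma_t=2.0, thickness=1.0, n=100_000)
exact = math.exp(-2.0)                       # analytic answer exp(-sigma_t * t)
```

Halving `err` requires quadrupling `n`, which is the computer-time wall the abstract refers to when high-fidelity, well-converged tallies are needed everywhere in a whole core.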
Multi-Physics Markov Chain Monte Carlo Methods for Subsurface Flows
NASA Astrophysics Data System (ADS)
Rigelo, J.; Ginting, V.; Rahunanthan, A.; Pereira, F.
2014-12-01
For CO2 sequestration in deep saline aquifers, contaminant transport in subsurface, and oil or gas recovery, we often need to forecast flow patterns. Subsurface characterization is a critical and challenging step in flow forecasting. To characterize subsurface properties we establish a statistical description of the subsurface properties that are conditioned to existing dynamic and static data. A Markov Chain Monte Carlo (MCMC) algorithm is used in a Bayesian statistical description to reconstruct the spatial distribution of rock permeability and porosity. The MCMC algorithm requires repeatedly solving a set of nonlinear partial differential equations describing displacement of fluids in porous media for different values of permeability and porosity. The time needed for the generation of a reliable MCMC chain using the algorithm can be too long to be practical for flow forecasting. In this work we develop fast and effective computational methods for generating MCMC chains in the Bayesian framework for the subsurface characterization. Our strategy consists of constructing a family of computationally inexpensive preconditioners based on simpler physics as well as on surrogate models such that the number of fine-grid simulations is drastically reduced in the generated MCMC chains. In particular, we introduce a huff-puff technique as a screening step in a three-stage multi-physics MCMC algorithm to reduce the number of expensive final stage simulations. The huff-puff technique in the algorithm enables a better characterization of subsurface near wells. We assess the quality of the proposed multi-physics MCMC methods by considering Monte Carlo simulations for forecasting oil production in an oil reservoir.
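The screening idea — letting a cheap surrogate filter proposals so that only promising ones reach the expensive fine-grid model — is the delayed-acceptance MCMC pattern. Here is a minimal two-stage sketch with analytic stand-ins for the coarse and fine posteriors; the huff-puff screening step and the multi-physics preconditioners themselves are not reproduced.

```python
import math, random

def log_fine(x):    # stand-in for the "expensive" fine-grid posterior (N(0,1))
    return -0.5 * x * x

def log_coarse(x):  # cheap surrogate with deliberately wrong variance (N(0,1.2^2))
    return -0.5 * x * x / 1.44

def two_stage_mcmc(n, step=1.0, seed=7):
    rng = random.Random(seed)
    x, fx_c, fx_f = 0.0, log_coarse(0.0), log_fine(0.0)
    fine_evals = 0
    samples = []
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        fy_c = log_coarse(y)
        # Stage 1: screen the proposal with the cheap surrogate only.
        if math.log(rng.random()) < fy_c - fx_c:
            fy_f = log_fine(y)
            fine_evals += 1
            # Stage 2: correct with the fine model; the acceptance ratio
            # divides out the surrogate so the chain targets the fine posterior.
            if math.log(rng.random()) < (fy_f - fx_f) - (fy_c - fx_c):
                x, fx_c, fx_f = y, fy_c, fy_f
        samples.append(x)
    return samples, fine_evals

samples, fine_evals = two_stage_mcmc(20_000)
```

Proposals rejected at stage 1 never trigger a fine evaluation, so `fine_evals` is well below the chain length while the stationary distribution remains the fine posterior.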
Nonlinear Multidimensional Assignment Problems: Efficient Conic Optimization Methods and Applications
2015-06-24
problems. The size 16 three-dimensional quadratic assignment problem (Q3AP) from wireless communications was solved using a sophisticated approach... combinatorial optimization problem, the Directional Sensor Problem, was solved in two ways. First, heuristically in an engineering fashion and second, exactly... the sensor problem was solved as a nonlinear MINLP problem. Specifically, the information gain obtained was maximized in order to determine the optimal
NASA Astrophysics Data System (ADS)
Formosa, F.; Fréchette, L. G.
2015-12-01
An electrical circuit equivalent (ECE) approach has been set up allowing elementary oscillatory microengine components to be modelled. They cover gas channel/chamber thermodynamics, viscosity and thermal effects, mechanical structure and electromechanical transducers. The proposed tool has been validated on a centimeter-scale free-piston membrane Stirling engine [1]. We propose here new developments taking into account scaling effects to establish models suitable for any microengine. They are based on simplifications derived from the comparison of the hydraulic radius with the viscous and thermal penetration depths, respectively.
NASA Astrophysics Data System (ADS)
Yang, Z.; Niu, G.; Jiang, X.
2009-12-01
In this paper we develop a single integrated mesoscale meteorological modeling framework that intimately couples a land surface model and an atmospheric model, both of which are equipped with multi-parameterization options. This framework should enable process-based ensemble weather predictions, identification of optimal combinations of process parameterization schemes, identification of critical processes controlling the coupling strength between the land surface and the atmosphere, and quantification of uncertainties in using regional meteorological models to extract high-resolution climate information for policy decision making. We build on the existing Weather Research and Forecasting (WRF) atmospheric model, which already has multiple parameterization options for atmospheric processes such as convection, radiation, planetary boundary layer, and microphysics. One of our key model development efforts is to couple the WRF model with our newly developed, ensemble representation of the land surface, i.e., the Noah land surface model that was first enhanced with biophysical and hydrological realism and then equipped with multi-parameterization options (Noah-MP) for a wide spectrum of physical and ecological processes. The Noah-MP LSM is capable of generating thousands of process-based combinations of land surface parameterization schemes as opposed to the traditional approach (e.g. BATS, SiB or VIC) that utilizes only a single combination. Offline Noah-MP tests show a great potential of the model in ensemble hydrological predictions. We perform an analysis of the sensitivity to different parameterizations over the conterminous United States using the single integrated mesoscale modeling framework described above. To prove the concept, we present an ensemble of multi-day integrations using the model at 30-km resolution with varying physical representations for both the land surface (runoff process) and the atmosphere (convective process).
The lateral boundary conditions are from North American Regional Reanalysis data. Specifically, we focus on understanding of the interactions and feedbacks between groundwater, soil moisture, vegetation, surface energy and water fluxes, atmospheric boundary layer, convection, mesoscale circulation, and precipitation.
Matrix iteration method for nonlinear eigenvalue problems with applications
NASA Astrophysics Data System (ADS)
Ram, Y. M.
2016-12-01
A simple and intuitive matrix iteration method for solving nonlinear eigenvalue problems is described and demonstrated in detail by two problems: (i) the boundary value problem associated with large deflection of a flexible rod, and (ii) the initial value problem associated with normal mode motion of a double pendulum. The two problems are solved by two approaches, the finite difference approach and a continuous realization approach which is similar in spirit to the Rayleigh-Ritz method.
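The flavor of such a matrix iteration can be conveyed on a 2-DOF system whose stiffness depends on the mode shape itself: repeatedly solve the linear eigenproblem frozen at the current iterate until the pair (lambda, x) is self-consistent. The amplitude-dependent stiffness below is a toy stand-in, not the paper's rod or double-pendulum problems.

```python
import math

def lowest_eig_sym2(a, b, c):
    """Lowest eigenpair of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    disc = math.sqrt((a - c) ** 2 + 4.0 * b * b)
    lam = 0.5 * (a + c - disc)
    # Eigenvector (b, lam - a); valid when b != 0.
    v1, v2 = b, lam - a
    norm = math.hypot(v1, v2)
    return lam, (v1 / norm, v2 / norm)

def nonlinear_mode(alpha=0.5, iters=200):
    """Matrix iteration for the nonlinear eigenproblem (K + N(x)) x = lam x,
    with base stiffness K = [[2, -1], [-1, 2]] and amplitude-dependent
    diagonal stiffness N(x) = alpha * diag(x1^2, x2^2)."""
    x = (1.0, 0.0)                      # initial mode-shape guess
    for _ in range(iters):
        a = 2.0 + alpha * x[0] ** 2     # freeze the nonlinearity at x,
        c = 2.0 + alpha * x[1] ** 2     # then solve the linear eigenproblem
        lam, x = lowest_eig_sym2(a, -1.0, c)
    return lam, x

lam, x = nonlinear_mode()
```

By symmetry the iteration settles on the mode x proportional to (1, 1)/sqrt(2) with lambda = 1.25: the lowest eigenvalue of K (which is 1) stiffened by the amplitude term 0.5 * 0.5.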
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via direct acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied
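The factorization described above — a posterior built from a prior times independent per-survey likelihoods — can be sketched on a one-parameter grid. All densities here are illustrative Gaussians, not geophysical models.

```python
import math

# 1-D grid Bayesian inversion sketch: posterior over a single parameter theta,
# combining two independent "surveys" so their likelihoods factorize.
thetas = [0.01 * k for k in range(-300, 301)]   # parameter grid, spacing 0.01

def gauss(x, mu, sig):
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))

prior      = [gauss(t, 0.0, 2.0) for t in thetas]    # prior information PDF
like_surv1 = [gauss(1.0, t, 0.5) for t in thetas]    # survey 1 data likelihood
like_surv2 = [gauss(0.8, t, 0.7) for t in thetas]    # survey 2 data likelihood

# Independence of survey uncertainties => posterior ~ prior * L1 * L2.
post = [p * l1 * l2 for p, l1, l2 in zip(prior, like_surv1, like_surv2)]
z = sum(post) * 0.01                                 # normalizing constant
post = [p / z for p in post]

# Posterior mean is pulled toward the data; uncertainty shrinks below the prior's.
mean = sum(t * p for t, p in zip(thetas, post)) * 0.01
```

The hierarchical layering in the abstract corresponds to further factoring `post` with conditional densities (e.g. lithology conditioning physical properties conditioning the data), which this flat one-layer sketch omits.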
ERIC Educational Resources Information Center
Docktor, Jennifer L.; Dornfeld, Jay; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Jackson, Koblar Alan; Mason, Andrew; Ryan, Qing X.; Yang, Jie
2016-01-01
Problem solving is a complex process valuable in everyday life and crucial for learning in the STEM fields. To support the development of problem-solving skills it is important for researchers and curriculum developers to have practical tools that can measure the difference between novice and expert problem-solving performance in authentic…
Jacobi elliptic functions: A review of nonlinear oscillatory application problems
NASA Astrophysics Data System (ADS)
Kovacic, Ivana; Cveticanin, Livija; Zukovic, Miodrag; Rakaric, Zvonko
2016-10-01
This review paper is concerned with the applications of Jacobi elliptic functions to nonlinear oscillators whose restoring force has a monomial or binomial form that involves cubic and/or quadratic nonlinearity. First, geometric interpretations of three basic Jacobi elliptic functions are given and their characteristics are discussed. It is shown then how their different forms can be utilized to express exact solutions for the response of certain free conservative oscillators. These forms are subsequently used as a starting point for a presentation of different quantitative techniques for obtaining an approximate response for free perturbed nonlinear oscillators. An illustrative example is provided. Further, two types of externally forced nonlinear oscillators are reviewed: (i) those that are excited by elliptic-type excitations with different exact and approximate solutions; (ii) those that are damped and excited by harmonic excitations, but their approximate response is expressed in terms of Jacobi elliptic functions. Characteristics of the steady-state response are discussed and certain qualitative differences with respect to the classical Duffing oscillator excited harmonically are pointed out. Parametric oscillations of the oscillators excited by an elliptic-type forcing are considered as well, and the differences with respect to the stability chart of the classical Mathieu equation are emphasized. The adjustment of the Melnikov method to derive the general condition for the onset of homoclinic bifurcations in a system parametrically excited by an elliptic-type forcing is provided and compared with those corresponding to harmonic excitations. Advantages and disadvantages of the use of Jacobi elliptic functions in nonlinear oscillatory application problems are discussed and some suggestions for future work are given.
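As a concrete instance of the exact cn-type solutions this review discusses, the free conservative Duffing oscillator admits a closed form in Jacobi elliptic functions; the sketch below (parameter values are illustrative) checks it against direct numerical integration.

```python
import numpy as np
from scipy.special import ellipj
from scipy.integrate import solve_ivp

# Free conservative Duffing oscillator x'' + a*x + b*x**3 = 0 with x(0)=A,
# x'(0)=0 has the exact solution x(t) = A*cn(w*t | m), where
#   w**2 = a + b*A**2   and   m = b*A**2 / (2*(a + b*A**2)).
a, b, A = 1.0, 0.5, 1.2                  # illustrative coefficients
w2 = a + b * A**2
m = b * A**2 / (2.0 * w2)
w = np.sqrt(w2)

t = np.linspace(0.0, 10.0, 2001)
sn, cn, dn, ph = ellipj(w * t, m)        # SciPy's ellipj takes the parameter m
x_exact = A * cn

# cross-check by direct numerical integration of the same ODE
sol = solve_ivp(lambda t, y: [y[1], -a * y[0] - b * y[0]**3],
                (0.0, 10.0), [A, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)
err = np.max(np.abs(sol.y[0] - x_exact))
```

Note the parameter convention: `scipy.special.ellipj(u, m)` expects the parameter m = k², not the modulus k.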
Decompositions of information divergences: Recent development, open problems and applications
NASA Astrophysics Data System (ADS)
Stehlík, M.
2012-11-01
What is the optimal statistical decision, and how is it related to statistical information theory? In trying to answer these difficult questions, we will illustrate the necessity of understanding the structure of information divergences. This may be understood in particular through deconvolutions, leading to optimal statistical inference. We will illustrate the deconvolution of an information divergence in the exponential family, which yields optimal tests (optimal in the sense of Bahadur; see [3, 4]). We discuss results on the exact density of the I-divergence in the exponential family with gamma-distributed observations (see [28]). Since the considered I-divergence is related to the likelihood ratio (LR) statistic, we deal with the exact distribution of likelihood ratio tests and discuss the optimality of such exact tests. Both tests, the exact LR test of homogeneity and the exact LR test of the scale parameter, are asymptotically optimal in the Bahadur sense when the observations are exponentially distributed. We also discuss decompositions from a broader perspective: we recall the relationship between the f-divergence and statistical information in the sense of DeGroot, which was shown in [17], and formulate an open problem concerning its generalization. Applications in reliability testing and hydrological prediction are mentioned.
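The link between the LR statistic and an I-divergence-type term can be made concrete for the exponential scale-parameter test mentioned above. This is a generic textbook computation, not the exact-distribution result of the cited papers: with i.i.d. exponential observations and MLE rate 1/x̄, the LR statistic for H0: rate = λ0 reduces to 2n(λ0 x̄ − 1 − log λ0 x̄).

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Likelihood-ratio statistic for H0: rate = lam0 versus a free rate, for i.i.d.
# exponential data. With lam_hat = 1/xbar, the statistic collapses to
#   2*n*(lam0*xbar - 1 - log(lam0*xbar)),
# i.e. 2n times a Bregman/I-divergence-type distance between lam0*xbar and 1.
def lr_statistic(x, lam0):
    n, xbar = len(x), np.mean(x)
    u = lam0 * xbar
    return 2.0 * n * (u - 1.0 - np.log(u))

x = rng.exponential(scale=1.0, size=200)   # true rate = 1
T0 = lr_statistic(x, lam0=1.0)             # under H0: small
T1 = lr_statistic(x, lam0=3.0)             # misspecified rate: large

p0 = chi2.sf(T0, df=1)                     # asymptotic chi-square(1) p-value
```

The term u − 1 − log u is nonnegative and vanishes only at u = 1, which is what makes the statistic a divergence.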
Problem Based Learning: Application to Technology Education in Three Countries
ERIC Educational Resources Information Center
Williams, P. John; Iglesias, Juan; Barak, Moshe
2008-01-01
An increasing variety of professional educational and training disciplines are now problem based (e.g., medicine, nursing, engineering, community health), and they may have a corresponding variety of educational objectives. However, they all have in common the use of problems in the instructional sequence. The problems may be as diverse as a…
Application of Problem Based Learning through Research Investigation
ERIC Educational Resources Information Center
Beringer, Jason
2007-01-01
Problem-based learning (PBL) is a teaching technique that uses problem-solving as the basis for student learning. The technique is student-centred with teachers taking the role of a facilitator. Its general aims are to construct a knowledge base, develop problem-solving skills, teach effective collaboration and provide the skills necessary to be a…
Application of the method of maximum entropy in the mean to classification problems
NASA Astrophysics Data System (ADS)
Gzyl, Henryk; ter Horst, Enrique; Molina, German
2015-11-01
In this note we propose an application of the method of maximum entropy in the mean to solve a class of inverse problems comprising classification problems and feasibility problems appearing in optimization. Such problems may be thought of as linear inverse problems with convex constraints imposed on the solution as well as on the data. The method of maximum entropy in the mean proves to be a very useful tool for dealing with problems of this type.
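A much-simplified stand-in for the idea can be sketched numerically: pick, among all box-constrained solutions of an underdetermined linear system A x = b, the one maximizing an entropy over the box. This toy (random A, generic solver) only illustrates the "linear inverse problem with convex constraints" setting, not the authors' actual method.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear feasibility problem A @ x = b with the box constraint x in (0, 1),
# resolved by maximizing a binary-entropy-type objective over the box
# (a simplified stand-in for the method of maximum entropy in the mean).
rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((3, n))            # underdetermined: 3 equations, 8 unknowns
x_true = rng.uniform(0.2, 0.8, n)
b = A @ x_true                             # consistent data

def neg_entropy(x):                        # minimize the negative entropy
    return np.sum(x * np.log(x) + (1 - x) * np.log(1 - x))

res = minimize(neg_entropy, x0=np.full(n, 0.5),
               constraints={"type": "eq", "fun": lambda x: A @ x - b},
               bounds=[(1e-9, 1 - 1e-9)] * n, method="SLSQP")
x_me = res.x
residual = np.linalg.norm(A @ x_me - b)
```

The entropy objective is strictly convex on the box, so the constrained maximizer is unique, which is the practical appeal of this regularization.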
A multi-physical model for charge and mass transport in a flexible ionic polymer sensor
NASA Astrophysics Data System (ADS)
Zhu, Zicai; Asaka, Kinji; Takagi, Kentaro; Aabloo, Alvo; Horiuchi, Tetsuya
2016-04-01
An ionic polymer material can generate an electrical potential and function as a bio-sensor under non-uniform deformation. Ionic polymer-metal composite (IPMC) is a typical flexible ionic polymer sensor material. A multi-physical sensing model is first presented, based on the same physical equations as the IPMC actuator model we obtained previously. Under an applied bending deformation, water and cations immediately migrate toward the outer electrode. The redistribution of cations causes an electrical potential difference between the two electrodes. The cation migration is strongly restrained by the generated electrical potential, and the migrated cations then move back toward the inner electrode under the concentration diffusion effect, leading to a relaxation of the electrical potential. Over the whole sensing process, the transport and redistribution of charge and mass along the thickness direction are revealed by numerical analysis. The sensing process is the reverse of the actuation process; however, its transport properties are quite different from those of the latter. Moreover, the effective dielectric constant of IPMC, which is related to the morphology of the electrode-ionic polymer interface, is shown to have little effect on the sensing amplitude. These conclusions are significant for the design of ionic polymer sensing materials.
NASA Astrophysics Data System (ADS)
Ma, Z.; Hou, Z.; Zang, X.
2015-09-01
As a large-scale flexible inflatable structure with an inner lifting-gas volume of several hundred thousand cubic meters, a stratospheric airship has structural performance in which the thermal characteristics of the inner gas play an important role. During floating flight, the day-night variation of the combined thermal conditions leads to fluctuations of the flow field inside the airship, which remarkably affect the pressure acting on the skin and the structural safety of the airship. Based on this multi-physics coupling mechanism, a numerical procedure for the structural safety analysis of stratospheric airships is developed, integrating a thermal model, a CFD model, a finite element code, and a structural strength criterion. Using these computational models, the distributions of the deformations and stresses of the skin are calculated over the day-night cycle. The effects of load conditions and structural configurations on the structural safety of stratospheric airships in the floating condition are evaluated. The numerical results can serve as a reference for the structural design of stratospheric airships.
Periodically specified satisfiability problems with applications: An alternative to domino problems
Marathe, M.V.; Hunt, H.B., III; Rosenkrantz, D.J.; Stearns, R.E.; Radhakrishnann, V.
1995-12-31
We characterize the complexities of several basic generalized CNF satisfiability problems SAT(S) when instances are specified using various kinds of 1- and 2-dimensional periodic specifications. We outline how this characterization can be used to prove a number of new hardness results for the complexity classes DSPACE(n), NSPACE(n), DEXPTIME, NEXPTIME, EXPSPACE, etc. The hardness results presented significantly extend the known hardness results for periodically specified problems. Several advantages of using periodically specified satisfiability problems, rather than domino problems, in proving both hardness and easiness results are outlined. As one corollary, we show that a number of basic NP-hard problems become EXPSPACE-hard when inputs are represented using 1-dimensional infinite periodic wide specifications. This answers a long-standing open question posed by Orlin.
Bhardwaj, M.; Day, D.; Farhat, C.; Lesoinne, M; Pierson, K.; Rixen, D.
1999-04-01
We report on the application of the one-level FETI method to the solution of a class of substructural problems associated with the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). We focus on numerical and parallel scalability issues, and on preliminary performance results obtained on the ASCI Option Red supercomputer configured with as many as one thousand processors, for problems with as many as 5 million degrees of freedom.
An Application of Wedelin's Method to Railway Crew Scheduling Problem
NASA Astrophysics Data System (ADS)
Miura, Rei; Imaizumi, Jun; Fukumura, Naoto; Morito, Susumu
Many scheduling problems arise in the railway industry, crew scheduling being one of the most typical. The problem has received much attention from researchers, but few studies have addressed the railway industry in Japan. In this paper, we consider a railway crew scheduling problem in Japan. The problem can be formulated as a Set Covering Problem (SCP), in which a row corresponds to a trip, representing a minimal task, and a column corresponds to a pairing, representing a sequence of trips performed by a single crew. Many algorithms have been developed and proposed for the SCP. In practical use, however, it is important to investigate how these algorithms behave on a particular problem. We therefore focus on Wedelin's algorithm, which is based on Lagrangian relaxation and is known as one of the high-performance algorithms for the SCP, and examine its basic idea. Furthermore, we show the effectiveness of this procedure through computational experiments on instances from a Japanese railway.
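The trip/pairing structure of the SCP formulation can be sketched with a simple greedy heuristic. Wedelin's Lagrangian-based algorithm itself is considerably more involved; this sketch, with made-up trips, pairings and costs, only illustrates the rows-as-trips, columns-as-pairings structure.

```python
# Minimal greedy heuristic for the Set Covering Problem: rows are trips,
# columns are candidate pairings with costs. Repeatedly pick the pairing with
# the lowest cost per newly covered trip until every trip is covered.
def greedy_scp(trips, pairings, costs):
    """trips: set of trip ids; pairings: list of sets of trip ids; costs: list."""
    uncovered = set(trips)
    chosen = []
    while uncovered:
        best = min((j for j in range(len(pairings)) if pairings[j] & uncovered),
                   key=lambda j: costs[j] / len(pairings[j] & uncovered))
        chosen.append(best)
        uncovered -= pairings[best]
    return chosen

# Hypothetical instance: 5 trips, 4 candidate pairings.
trips = {1, 2, 3, 4, 5}
pairings = [{1, 2}, {2, 3, 4}, {4, 5}, {1, 3, 5}]
costs = [2.0, 3.0, 2.0, 3.5]
sol = greedy_scp(trips, pairings, costs)
```

Greedy gives only a logarithmic approximation guarantee; Lagrangian approaches such as Wedelin's typically find much tighter solutions on crew scheduling instances.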
Application of TRIZ approach to machine vibration condition monitoring problems
NASA Astrophysics Data System (ADS)
Cempel, Czesław
2013-12-01
Until now, machine condition monitoring has not been seriously approached by users of TRIZ (the Russian acronym for the Theory of Inventive Problem Solving, created by G. Altshuller some 50 years ago), and TRIZ methodology has not been applied intensively in this field. There are some introductory papers by the present author, posted at the Diagnostic Congress in Cracow (Cempel, in press [11]) and in the Diagnostyka Journal. However, there remains a need to approach the subject from different sides in order to see whether new knowledge and technology will emerge. In doing this we need first to define the ideal final result (IFR) of our innovation problem. Next, we need a set of parameters to describe the problems of system condition monitoring (CM) in terms of the TRIZ language, and a set of inventive principles that can be applied on the way to the IFR. This means we should present the machine CM problem by means of contradictions and the contradiction matrix. When specifying the problem parameters and inventive principles, one should use analogy and metaphorical thinking, which by definition is not exact but fuzzy, and sometimes leads to unexpected results and outcomes. The paper takes up this important problem again and brings some new insight into system and machine CM problems; this may mean, for example, the minimal dimensionality of the TRIZ engineering parameter set for the description of machine CM problems, and the set of most useful inventive principles applied to given engineering parameters and contradictions of TRIZ.
Applications of decision analysis and related techniques to industrial engineering problems at KSC
NASA Technical Reports Server (NTRS)
Evans, Gerald W.
1995-01-01
This report provides: (1) a discussion of the origination of decision analysis problems (well-structured problems) from ill-structured problems; (2) a review of the various methodologies and software packages for decision analysis and related problem areas; (3) a discussion of how the characteristics of a decision analysis problem affect the choice of modeling methodologies, thus providing a guide as to when to choose a particular methodology; and (4) examples of applications of decision analysis to particular problems encountered by the IE Group at KSC. With respect to the specific applications at KSC, particular emphasis is placed on the use of the Demos software package (Lumina Decision Systems, 1993).
Harmony search algorithm: application to the redundancy optimization problem
NASA Astrophysics Data System (ADS)
Nahas, Nabil; Thien-My, Dao
2010-09-01
The redundancy optimization problem is a well known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a new nature-inspired algorithm which mimics the improvization process of music players. Two kinds of problems are considered in testing the proposed algorithm, with the first limited to the binary series-parallel system, where the problem consists of a selection of elements and redundancy levels used to maximize the system reliability given various system-level constraints; the second problem for its part concerns the multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, and in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results of HSA showed that this algorithm could provide very good solutions when compared to those obtained through other approaches.
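The improvisation mechanism of the HSA (memory consideration, pitch adjustment, random selection) can be sketched in a few lines. This is a generic continuous-variable sketch on a toy objective, not the binary/multi-state redundancy formulation of the article; parameter names follow common harmony search usage.

```python
import numpy as np

# Minimal continuous harmony search minimizing a toy objective, to illustrate
# the improvisation mechanism: memory consideration (hmcr), pitch adjustment
# (par, bandwidth bw), and random selection.
def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    hm = rng.uniform(lo, hi, (hms, len(lo)))          # harmony memory
    vals = np.array([f(x) for x in hm])
    for _ in range(iters):
        new = np.empty(len(lo))
        for d in range(len(lo)):
            if rng.random() < hmcr:                   # memory consideration
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:                # pitch adjustment
                    new[d] += bw * (hi[d] - lo[d]) * rng.uniform(-1, 1)
            else:                                     # random selection
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        fv = f(new)
        worst = np.argmax(vals)
        if fv < vals[worst]:                          # replace worst harmony
            hm[worst], vals[worst] = new, fv
    return hm[np.argmin(vals)], vals.min()

x_best, f_best = harmony_search(lambda x: np.sum(x**2), bounds=[(-5, 5)] * 3)
```

For the redundancy problem proper, each decision variable would instead be a discrete element choice or redundancy level, with constraint handling added to the acceptance step.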
Towards a multi-physics modelling framework for thrombolysis under the influence of blood flow.
Piebalgs, Andris; Xu, X Yun
2015-12-06
Thrombolytic therapy is an effective means of treating thromboembolic diseases but can also give rise to life-threatening side effects. The infusion of a high drug concentration can provoke internal bleeding while an insufficient dose can lead to artery reocclusion. It is hoped that mathematical modelling of the process of clot lysis can lead to a better understanding and improvement of thrombolytic therapy. To this end, a multi-physics continuum model has been developed to simulate the dissolution of clot over time upon the addition of tissue plasminogen activator (tPA). The transport of tPA and other lytic proteins is modelled by a set of reaction-diffusion-convection equations, while blood flow is described by volume-averaged continuity and momentum equations. The clot is modelled as a fibrous porous medium with its properties being determined as a function of the fibrin fibre radius and voidage of the clot. A unique feature of the model is that it is capable of simulating the entire lytic process from the initial phase of lysis of an occlusive thrombus (diffusion-limited transport), the process of recanalization, to post-canalization thrombolysis under the influence of convective blood flow. The model has been used to examine the dissolution of a fully occluding clot in a simplified artery at different pressure drops. Our predicted lytic front velocities during the initial stage of lysis agree well with experimental and computational results reported by others. Following canalization, clot lysis patterns are strongly influenced by local flow patterns, which are symmetric at low pressure drops, but asymmetric at higher pressure drops, which give rise to larger recirculation regions and extended areas of intense drug accumulation.
Application of nonlinear Krylov acceleration to radiative transfer problems
Till, A. T.; Adams, M. L.; Morel, J. E.
2013-07-01
The iterative solution technique used for radiative transfer is normally nested, with outer thermal iterations and inner transport iterations. We implement a nonlinear Krylov acceleration (NKA) method in the PDT code for radiative transfer problems that breaks nesting, resulting in more thermal iterations but significantly fewer total inner transport iterations. Using the metric of total inner transport iterations, we investigate a crooked-pipe-like problem and a pseudo-shock-tube problem. Using only sweep preconditioning, we compare NKA against a typical inner / outer method employing GMRES / Newton and find NKA to be comparable or superior. Finally, we demonstrate the efficacy of applying diffusion-based preconditioning to grey problems in conjunction with NKA. (authors)
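NKA is closely related to Anderson mixing: both accelerate a fixed-point iteration by combining recent residuals through a small least-squares problem. The sketch below is an illustrative Anderson-type accelerator on a toy contraction, not the PDT implementation.

```python
import numpy as np

# Anderson-type mixing (closely related to the NKA method discussed in this
# abstract): accelerate the fixed-point iteration x = g(x) by combining the
# last m residual differences via least squares.
def anderson(g, x0, m=3, iters=50, tol=1e-10):
    x = np.asarray(x0, float)
    X, F = [], []                       # histories of iterates and residuals
    for _ in range(iters):
        fx = g(x) - x                   # current residual
        if np.linalg.norm(fx) < tol:
            break
        X.append(x.copy()); F.append(fx.copy())
        X, F = X[-(m + 1):], F[-(m + 1):]
        if len(F) > 1:
            dF = np.array([F[i + 1] - F[i] for i in range(len(F) - 1)]).T
            dX = np.array([X[i + 1] - X[i] for i in range(len(X) - 1)]).T
            gamma, *_ = np.linalg.lstsq(dF, fx, rcond=None)
            x = x + fx - (dX + dF) @ gamma      # Anderson update
        else:
            x = x + fx                  # plain Picard step to start the history
    return x

# toy contractive map: component-wise cosine, fixed point near 0.739085
x_star = anderson(lambda x: np.cos(x), np.zeros(4))
```

The payoff mirrors the abstract's metric: the accelerated iteration reaches the fixed point in far fewer total inner iterations than plain Picard iteration would.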
Application of remote sensing to state and regional problems
NASA Technical Reports Server (NTRS)
Bouchillon, C. W.; Miller, W. F.; Landphair, H.; Zitta, V. L.
1974-01-01
The use of remote sensing techniques to help the state of Mississippi recognize and solve its environmental, resource, and socio-economic problems through inventory, analysis, and monitoring is suggested.
Application of genetics knowledge to the solution of pedigree problems
NASA Astrophysics Data System (ADS)
Hackling, Mark W.
1994-12-01
This paper reports on a study of undergraduate genetics students' conceptual and procedural knowledge and how that knowledge influences students' success in pedigree problem solving. Findings indicate that many students lack the knowledge needed to test hypotheses relating to X-linked modes of inheritance using either patterns of inheritance or genotypes. Case study data illustrate how these knowledge deficiencies acted as an impediment to correct and conclusive solutions of pedigree problems.
Application of remote sensing to hydrological problems and floods
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.
1983-01-01
The main applications of remote sensors to hydrology are identified as well as the principal spectral bands and their advantages and disadvantages. Some examples of LANDSAT data applications to flooding-risk evaluation are cited. Because hydrology studies the amount of moisture and water involved in each phase of hydrological cycle, remote sensing must be emphasized as a technique for hydrological data acquisition.
Publication misrepresentation among neurosurgery residency applicants: an increasing problem.
Kistka, Heather M; Nayeri, Arash; Wang, Li; Dow, Jamie; Chandrasekhar, Rameela; Chambless, Lola B
2016-01-01
OBJECT Misrepresentation of scholarly achievements is a recognized phenomenon, well documented in numerous fields, yet the accuracy of reporting remains dependent on the honor principle. Therefore, honest self-reporting is of paramount importance to maintain scientific integrity in neurosurgery. The authors had observed a trend toward increasing numbers of publications among applicants for neurosurgery residency at Vanderbilt University and undertook this study to determine whether this change was a result of increased academic productivity, inflated reporting, or both. They also aimed to identify application variables associated with inaccurate citations. METHODS The authors retrospectively reviewed the residency applications submitted to their neurosurgery department in 2006 (n = 148) and 2012 (n = 194). The applications from 2006 were made via SF Match and those from 2012 were made using the Electronic Residency Application Service. Publications reported as "accepted" or "in press" were verified via online search of Google Scholar, PubMed, journal websites, and direct journal contact. Works were considered misrepresented if they did not exist, incorrectly listed the applicant as first author, or were incorrectly listed as peer reviewed or published in a printed journal rather than an online only or non-peer-reviewed publication. Demographic data were collected, including applicant sex, medical school ranking and country, advanced degrees, Alpha Omega Alpha membership, and USMLE Step 1 score. Zero-inflated negative binomial regression was used to identify predictors of misrepresentation. RESULTS Using univariate analysis, between 2006 and 2012 the percentage of applicants reporting published works increased significantly (47% vs 97%, p < 0.001). However, the percentage of applicants with misrepresentations (33% vs 45%) also increased. In 2012, applicants with a greater total of reported works (p < 0.001) and applicants from unranked US medical schools (those not
Application of decentralized cooperative problem solving in dynamic flexible scheduling
NASA Astrophysics Data System (ADS)
Guan, Zai-Lin; Lei, Ming; Wu, Bo; Wu, Ya; Yang, Shuzi
1995-08-01
The object of this study is to discuss an intelligent solution to the problem of task allocation in shop floor scheduling. For this purpose, the technique of distributed artificial intelligence (DAI) is applied. Intelligent agents (IAs) are used to realize decentralized cooperation, and negotiation is realized by message passing based on the contract net model. Multiple agents, such as manager agents, workcell agents, and workstation agents, make game-like decisions based on multiple-criteria evaluations. This procedure of decentralized cooperative problem solving makes local scheduling possible, and by integrating such multiple local schedules, dynamic flexible scheduling for the whole shop floor can be realized.
Tangent bundle geometry from dynamics: Application to the Kepler problem
NASA Astrophysics Data System (ADS)
Cariñena, J. F.; Clemente-Gallardo, J.; Jover-Galtier, J. A.; Marmo, G.
In this paper, we consider a manifold with a dynamical vector field and enquire about the possible tangent bundle structures which would turn the starting vector field into a second-order one. The analysis is restricted to manifolds which are diffeomorphic with affine spaces. In particular, we consider the problem in connection with conformal vector fields of second-order and apply the procedure to vector fields conformally related with the harmonic oscillator (f-oscillators). We select one which covers the vector field describing the Kepler problem.
NASA Astrophysics Data System (ADS)
Corrado, Cesare; Gerbeau, Jean-Frédéric; Moireau, Philippe
2015-02-01
This work addresses the inverse problem of electrocardiography from a new perspective, by combining electrical and mechanical measurements. Our strategy relies on the definition of a model of the electromechanical contraction which is registered on ECG data but also on measured mechanical displacements of the heart tissue, typically extracted from medical images. In this respect, we establish in this work the convergence of a sequential estimator which, for such coupled problems, combines various state-of-the-art sequential data assimilation methods in a unified, consistent and efficient framework. Specifically, we aggregate a Luenberger observer for the mechanical state, a reduced-order unscented Kalman filter applied to the parameters to be identified, and a POD projection of the electrical state. Then, using synthetic data, we show the benefits of our approach for the estimation of the electrical state of the ventricles along the heart beat, compared with more classical strategies which only consider an electrophysiological model with ECG measurements. Our numerical results show that the mechanical measurements improve the identifiability of the electrical problem, allowing the electrical state of the coupled system to be reconstructed more precisely. This work is therefore intended as a first proof of concept, with theoretical justifications and numerical investigations, of the advantage of using available multi-modal observations for the estimation and identification of an electromechanical model of the heart.
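The mechanical-state estimator in the framework above is of Luenberger type; its core feedback structure can be shown on a minimal discrete-time linear toy system (all matrices below are illustrative, not the cardiac model): the observer copies the dynamics and corrects with the measurement innovation.

```python
import numpy as np

# Minimal discrete-time Luenberger observer:
#   xhat_{k+1} = A xhat_k + B u_k + L (y_k - C xhat_k)
# The estimation error obeys e_{k+1} = (A - L C) e_k, so any gain L that makes
# A - L C stable drives the estimate to the true state.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double integrator, dt = 0.1
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])               # only position is measured
L = np.array([[0.6], [0.8]])             # gain: eigenvalues of A - L C are 0.8, 0.6

x = np.array([[1.0], [-0.5]])            # true state (unknown to the observer)
xhat = np.zeros((2, 1))                  # observer state
for _ in range(200):
    u = np.array([[0.1]])
    y = C @ x                            # measurement of the true system
    xhat = A @ xhat + B @ u + L @ (y - C @ xhat)
    x = A @ x + B @ u
err = np.linalg.norm(x - xhat)
```

The same innovation-feedback pattern, with the Kalman-type gains of the paper, underlies the coupled electromechanical estimator.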
The Application of Physical Organic Chemistry to Biochemical Problems.
ERIC Educational Resources Information Center
Westheimer, Frank
1986-01-01
Presents the synthesis of the science of enzymology from application of the concepts of physical organic chemistry from a historical perspective. Summarizes enzyme and coenzyme mechanisms elucidated prior to 1963. (JM)
Applications of parallel global optimization to mechanics problems
NASA Astrophysics Data System (ADS)
Schutte, Jaco Francois
Global optimization of complex engineering problems, with a high number of variables and local minima, requires sophisticated algorithms with global search capabilities and high computational efficiency. With the growing availability of parallel processing, it makes sense to address these requirements by increasing the parallelism in optimization strategies. This study proposes three methods of concurrent processing. The first method entails exploiting the structure of population-based global algorithms such as the stochastic Particle Swarm Optimization (PSO) algorithm and the Genetic Algorithm (GA). As a demonstration of how such an algorithm may be adapted for concurrent processing we modify and apply the PSO to several mechanical optimization problems on a parallel processing machine. Desirable PSO algorithm features such as insensitivity to design variable scaling and modest sensitivity to algorithm parameters are demonstrated. A second approach to parallelism and improving algorithm efficiency is by utilizing multiple optimizations. With this method a budget of fitness evaluations is distributed among several independent sub-optimizations in place of a single extended optimization. Under certain conditions this strategy obtains a higher combined probability of converging to the global optimum than a single optimization which utilizes the full budget of fitness evaluations. The third and final method of parallelism addressed in this study is the use of quasiseparable decomposition, which is applied to decompose loosely coupled problems. This yields several sub-problems of lesser dimensionality which may be concurrently optimized with reduced effort.
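The population structure that makes the PSO attractive for concurrency can be seen in a bare-bones serial sketch (toy objective and parameter values are illustrative): the per-particle fitness evaluations inside each iteration are independent, which is exactly what a parallel variant would farm out.

```python
import numpy as np

# Bare-bones (serial) particle swarm optimizer. In a parallel variant, the
# fitness evaluations over the swarm would be distributed across processors.
def pso(f, bounds, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])              # independent evaluations
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

g, fg = pso(lambda x: np.sum((x - 1.0)**2), bounds=[(-5, 5)] * 4)
```

The velocity update also illustrates the scaling insensitivity noted in the abstract: each term is relative to current positions, not to absolute variable magnitudes.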
Application of firefly algorithm to the dynamic model updating problem
NASA Astrophysics Data System (ADS)
Shabbir, Faisal; Omenzetter, Piotr
2015-04-01
Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention in the past decade for solving such complex optimization problems. This study applies the novel firefly algorithm (FA), a global optimization search technique, to a dynamic model updating problem. To the authors' best knowledge, this is the first time the FA has been applied to model updating. The working of the FA is inspired by the flashing characteristics of fireflies: each firefly represents a randomly generated solution which is assigned a brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model using the FA, with the algorithm minimizing the difference between the natural frequencies and mode shapes of the structure. The performance of the algorithm in finding the optimal solution in a multidimensional search space is analyzed. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built structure.
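The brightness-and-attraction mechanism described above can be sketched compactly. This is a generic FA on a toy function with illustrative parameters, not the bridge updating problem: attractiveness decays with distance as β₀ exp(−γ r²), and a small annealed random walk maintains exploration.

```python
import numpy as np

# Compact firefly algorithm sketch (minimization): dimmer fireflies move toward
# brighter ones with attractiveness beta0*exp(-gamma*r^2), plus a random walk
# of scale alpha that is annealed over the iterations.
def firefly(f, bounds, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    light = np.array([f(p) for p in x])      # lower objective = brighter
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:      # firefly j outshines firefly i
                    r2 = np.sum((x[i] - x[j])**2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + \
                        alpha * (rng.random(len(lo)) - 0.5) * (hi - lo)
                    x[i] = np.clip(x[i], lo, hi)
                    light[i] = f(x[i])
        alpha *= 0.97                        # anneal the random walk
    return x[np.argmin(light)], light.min()

xb, fb = firefly(lambda x: np.sum(x**2), bounds=[(-2, 2)] * 2)
```

In the model updating setting, f would be the weighted discrepancy between measured and FE-predicted natural frequencies and mode shapes.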
Application of University Resources to Local Government Problems. Final Report.
ERIC Educational Resources Information Center
Shamblin, James E.; And Others
The report details the results of a unique experimental demonstration of applying university resources to local government problems. Faculty-student teams worked with city and county personnel on projects chosen by mutual agreement, including work in areas of traffic management, law enforcement, waste heat utilization, solid waste conversion, and…
Constructive field theory and applications: Perspectives and open problems
NASA Astrophysics Data System (ADS)
Rivasseau, V.
2000-06-01
In this paper we review many interesting open problems in mathematical physics which may be attacked with the help of tools from constructive field theory. They could give work for future mathematical physicists trained with constructive methods well into the 21st century.
On mean value iterations with application to variational inequality problems
Yao, Jen-Chih.
1989-12-01
In this report, we show that in a Hilbert space, a mean value iterative process generated by a continuous quasi-nonexpansive mapping always converges to a fixed point of the mapping without any precondition. We then employ this result to obtain approximating solutions to the variational inequality and the generalized complementarity problems. 7 refs.
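In concrete terms, the mean value iteration of the report is the averaged scheme x_{n+1} = (x_n + T(x_n))/2 (Krasnoselskii iteration). A minimal numeric sketch, using T = cos as a stand-in mapping (not an example from the report):

```python
import math

def mean_value_iteration(T, x0, tol=1e-12, max_iter=100000):
    """Krasnoselskii mean value iteration: x_{n+1} = (x_n + T(x_n)) / 2.

    For a quasi-nonexpansive T this averaged scheme converges to a fixed
    point in settings where the plain Picard iteration x_{n+1} = T(x_n)
    may fail to converge."""
    x = x0
    for _ in range(max_iter):
        x_next = 0.5 * (x + T(x))
        if abs(x_next - x) < tol:
            break
        x = x_next
    return x

fp = mean_value_iteration(math.cos, x0=0.0)
# fp approximates the unique solution of x = cos(x) (about 0.739)
```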
Application of Group Theory to Some Problems in Atomic Physics.
NASA Astrophysics Data System (ADS)
Suskin, Mark Albert
This work comprises three problems, each of which lends itself to investigation via the theory of groups and group representations. The first problem is to complete a set of operators used in the fitting of atomic energy levels of atoms whose ground configuration is f^3. The role of group theory in the labelling of these operators and in their construction is explained. Values of parameters associated with a subset of the operators are also calculated via their group labels. The second problem is to explain the term inversion that occurs between states of the configuration of two equivalent electrons and certain of the states of the half-filled shell. This leads to generalizations that make it possible to investigate correspondences between matrix elements of effective operators taken between states of other configurations besides the two mentioned. This is made possible through the notion of quasispin. The third problem is the construction of recoupling coefficients for groups other than SO(3). Questions of phase convention and Kronecker-product multiplicities are taken up. Several methods of calculation are given and their relative advantages discussed. Tables of values of the calculated 6-j symbols are provided.
Application of remote sensing to state and regional problems
NASA Technical Reports Server (NTRS)
Miller, W. F.; Clark, J. R.; Solomon, J. L.; Duffy, B.; Minchew, K.; Wright, L. H. (Principal Investigator)
1981-01-01
The objectives, accomplishments, and future plans of several LANDSAT applications projects in Mississippi are discussed. The applications include land use planning in Lowndes County, strip mine inventory and reclamation, white-tailed deer habitat evaluation, data analysis support systems, discrimination of forest habitats in potential lignite areas, changes in gravel operations, and determination of freshwater wetlands for inventory and monitoring. In addition, a conceptual design for a LANDSAT-based information system is discussed.
Application of remote sensing to state and regional problems. [Mississippi
NASA Technical Reports Server (NTRS)
Miller, W. F.; Carter, B. D.; Solomon, J. L.; Williams, S. G.; Powers, J. S.; Clark, J. R. (Principal Investigator)
1980-01-01
Progress is reported in the following areas: remote sensing applications to land use planning in Lowndes County, applications of LANDSAT data to strip mine inventory and reclamation, white-tailed deer habitat evaluation using LANDSAT data, remote sensing data analysis support systems, and discrimination of unique forest habitats in potential lignite areas of Mississippi. Other projects discussed include LANDSAT change discrimination in gravel operations, environmental impact modeling for highway corridors, and discrimination of freshwater wetlands for inventory and monitoring.
Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement
Yang, Bo; Hu, Di; Wu, Lei
2016-01-01
A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for the acceleration or the air flow sense, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation, respectively. All of the above frequencies fall into a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor of the acceleration is 12.35 Hz/g, the scale factor of the angular velocity is 0.404 nm/deg/s and the sensitivity of the air flow is 1.075 Hz/(m/s)², which verifies the multifunction sensitive characteristics of the hair sensor. In addition, structural optimization of the hair post is used to improve the sensitivity to the air flow rate and the acceleration. The analysis results illustrate that a hollow circular hair post can increase the sensitivity to the air flow and a II-shaped hair post can increase the sensitivity to the acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer can effectively eliminate temperature influences on the measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post significantly improves the efficiency of signal transmission. In summary, the structure of the new hair sensor is proved to be feasible by comprehensive
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
scalabilities showing almost linear speedup against the number of processors up to over ten thousand cores. Generally, this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids in a practical time (e.g., less than a second per time step).
Hybrid Ant Algorithm and Applications for Vehicle Routing Problem
NASA Astrophysics Data System (ADS)
Xiao, Zhang; Jiang-qing, Wang
Ant colony optimization (ACO) is a metaheuristic inspired by the behavior of real ant colonies. ACO has been successfully applied to several combinatorial optimization problems, but it has some shortcomings, such as slow computation speed and premature local convergence. For solving the Vehicle Routing Problem (VRP), we propose a Hybrid Ant Algorithm (HAA) in order to improve both the performance of the algorithm and the quality of solutions. The proposed algorithm takes advantage of the Nearest Neighbor (NN) heuristic and ACO for solving the VRP; it also expands the scope of the solution space and improves the global search ability of the algorithm by importing a mutation operation, combining 2-opt heuristics, and adjusting the configuration of parameters dynamically. Computational results indicate that the hybrid ant algorithm can obtain optimal solutions to the VRP effectively.
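The construction-plus-improvement idea the abstract combines (seed a route with the nearest-neighbor heuristic, then refine it with 2-opt) can be illustrated on a plain traveling-salesman instance. This is a simplified sketch: it omits the vehicle capacities, pheromone dynamics, and mutation operation of the paper's HAA, and the instance data are made up.

```python
import math
import random

def tour_length(pts, tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(pts, start=0):
    """Greedy construction: always visit the closest unvisited point next."""
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(pts, tour):
    """Improvement: reverse segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # would just reverse the whole tour
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                        < math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d]) - 1e-12):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
t0 = nearest_neighbor(pts)
t1 = two_opt(pts, t0[:])
```

In the paper's hybrid, NN-style construction and 2-opt exchanges play these same two roles inside the ant colony framework.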
Simulation, Control, and Applications for Flow and Scattering Problems
2015-04-10
and Optimization (08 2011) Kazufumi Ito, Tomoya Takeuchi. CIP immersed interface methods for hyperbolic equations with discontinuous coefficients ... augmented variables along the interface between the fluid flow and the porous media so that the problem can be decoupled as several Poisson equations. The ... computational fluid dynamics and control of incompressible flows modeled by Navier-Stokes equations. Under the support of the current ARO grant, we
The application of bifurcation theory to physical problems
NASA Astrophysics Data System (ADS)
Joseph, D. D.
Reference is made to an observation by Lighthill (Thompson, 1982) of the one great complicating feature that introduces major difficulties into mechanics, physics, chemistry, engineering, astronomy, and biology. This is that an equilibrium can be stable but may become unstable and that a process can take place continuously but may become discontinuous. It is argued here that the complications noted by Lighthill occur even in the simplest problems. It is pointed out that a given physical system may have available many modes of operation and that the mathematical model of this system can have many solutions corresponding to the same prescribed data. In physical problems of even moderate complexity, the selection rules by which the actual realized solutions are determined are elusive. To illustrate this point, consideration is given to a simple scalar ordinary differential equation whose solution set is fully defined. It is shown that even in the simplest of problems, it is possible to have the highest degree of degeneracy with many solutions and many discontinuous changes as the control parameter is varied. Also discussed is the bifurcation of a periodic solution.
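The multiplicity of solutions under a varying control parameter can be made concrete with the classic pitchfork equation ẋ = μx − x³. This is a standard textbook example chosen for illustration, not necessarily the scalar equation Joseph analyzes.

```python
def equilibria(mu):
    """Equilibria of the pitchfork model x' = mu*x - x**3, with stability.

    f(x) = mu*x - x^3, so f'(x) = mu - 3*x^2; an equilibrium is stable
    iff f'(x) < 0."""
    eqs = [0.0]
    if mu > 0:  # two extra equilibria branch off at the bifurcation mu = 0
        r = mu ** 0.5
        eqs += [r, -r]
    return [(x, mu - 3 * x * x < 0) for x in eqs]

# Below the bifurcation (mu < 0): a single stable equilibrium.
# Above it (mu > 0): the origin has lost stability and two stable branches
# coexist, so the same prescribed data admit multiple realized solutions.
```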
NASA Astrophysics Data System (ADS)
Jerez, Sonia; Montavez, Juan P.; Gomez-Navarro, Juan J.; Jimenez-Guerrero, Pedro; Lorente, Raquel; Garcia-Valero, Juan A.; Jimenez, Pedro A.; Gonzalez-Rouco, Jose F.; Zorita, Eduardo
2010-05-01
Regional climate change projections are affected by several sources of uncertainty. Some of them come from Global Circulation Models and scenarios; others come from the downscaling process. In the case of dynamical downscaling, mainly using Regional Climate Models (RCMs), the sources of uncertainty may involve nesting strategies related to the domain position and resolution, soil characterization, internal variability, methods of solving the equations, and the configuration of model physics. Therefore, a probabilistic approach seems recommendable when projecting regional climate change. This problem is usually faced by performing an ensemble of simulations. The aim of this study is to evaluate the range of uncertainty in regional climate projections associated with changing the physical configuration of an RCM (MM5), as well as its capability to reproduce the observed climate. This study is performed over the Iberian Peninsula and focuses on the reproduction of the Probability Density Functions (PDFs) of daily mean temperature. The experiments consist of a multi-physics ensemble of high-resolution climate simulations (30 km over the target region) for the periods 1970-1999 (present) and 2070-2099 (future). Two sets of simulations for the present have been performed using ERA40 (MM5-ERA40) and ECHAM5-3CM run1 (MM5-E5-PR) as boundary conditions. The future experiments are driven by ECHAM5-A2-run1 (MM5-E5-A2). The ensemble has a total of eight members, as the result of combining the schemes for PBL (MRF and ETA), cumulus (Grell and Kain-Fritsch) and microphysics (Simple Ice and Mixed Phase). In a previous work this multi-physics ensemble was analyzed focusing on the seasonal mean values of both temperature and precipitation. The main results indicate that those physics configurations that better reproduce the observed climate project the most dramatic changes for the future (i.e., the largest temperature increase and precipitation decrease). Among the
Multi-physics design and analyses of long life reactors for lunar outposts
NASA Astrophysics Data System (ADS)
Schriener, Timothy M.
event of a launch abort accident. Increasing the amount of fuel in the reactor core, and hence its operational life, would be possible by launching the reactor unfueled and fueling it on the Moon. Such a reactor would thus not be subject to launch criticality safety requirements. However, loading the reactor with fuel on the Moon presents a challenge, requiring special designs of the core and the fuel elements which lend themselves to fueling on the lunar surface. This research investigates examples of both a solid core reactor that would be fueled at launch and an advanced concept which could be fueled on the Moon. Increasing the operational life of a reactor fueled at launch is exercised for the NaK-78 cooled Sectored Compact Reactor (SCoRe). A multi-physics design and analysis methodology is developed which iteratively couples detailed Monte Carlo neutronics simulations with 3-D Computational Fluid Dynamics (CFD) and thermal-hydraulics analyses. Using this methodology, the operational life of this compact, fast spectrum reactor is increased by reconfiguring the core geometry to reduce neutron leakage and parasitic absorption, for the same amount of HEU in the core, while meeting launch safety requirements. The multi-physics analyses determine the impacts of the various design changes on the reactor's neutronics and thermal-hydraulics performance. The option of increasing the operational life of a reactor by loading it on the Moon is exercised for the Pellet Bed Reactor (PeBR). The PeBR uses spherical fuel pellets and is cooled by He-Xe gas, allowing the reactor core to be loaded with fuel pellets and charged with working fluid on the lunar surface. The neutronics analyses performed ensure that the PeBR design achieves a long operational life, and safe launch canister designs are developed to transport the spherical fuel pellets to the lunar surface. The research also investigates loading the PeBR core with fuel pellets on the Moon using a transient Discrete
Li, Xia; Guo, Meifang; Su, Yongfu
2016-01-01
In this article, a new multidirectional monotone hybrid iteration algorithm for finding a solution to the split common fixed point problem is presented for two countable families of quasi-nonexpansive mappings in Banach spaces. Strong convergence theorems are proved. As an application of the result, the split common null point problem of maximal monotone operators in Banach spaces is considered, and strong convergence theorems for finding a solution of the split common null point problem are derived. This iteration algorithm can accelerate the convergence speed of the iterative sequence. The results of this paper improve and extend the recent results of Takahashi and Yao (Fixed Point Theory Appl 2015:87, 2015) and many others.
The Application of Geocoded Data to Educational Problems.
ERIC Educational Resources Information Center
McIsaac, Donald N.; And Others
The papers presented at a symposium on geocoding describe the preparation of a geocoded data file, some basic applications for education planning, and its use in trend analysis to produce contour maps for any desired characteristic. Geocoding data involves locating each entity, such as students or schools, in terms of grid coordinates on a…
Common Problems of Mobile Applications for Foreign Language Testing
ERIC Educational Resources Information Center
Garcia Laborda, Jesus; Magal-Royo, Teresa; Lopez, Jose Luis Gimenez
2011-01-01
As the use of mobile learning educational applications has become more common around the world, new concerns have appeared in the classroom, in human interaction in software engineering, and in ergonomics. New tests of foreign languages for a number of purposes have also become more and more common recently. However, studies interrelating language tests…
Statistical Risk Assessment: Old Problems and New Applications
ERIC Educational Resources Information Center
Gottfredson, Stephen D.; Moriarty, Laura J.
2006-01-01
Statistically based risk assessment devices are widely used in criminal justice settings. Their promise remains largely unfulfilled, however, because assumptions and premises requisite to their development and application are routinely ignored and/or violated. This article provides a brief review of the most salient of these assumptions and…
Numerical Analysis of a Multi-Physics Model for Trace Gas Sensors
NASA Astrophysics Data System (ADS)
Brennan, Brian
Trace gas sensors are currently used in many applications, from leak detection to national security, and may some day help with disease diagnosis. These sensors are modelled by a coupled system of complex elliptic partial differential equations for pressure and temperature. Solutions are approximated using the finite element method, which we show admits a continuous and coercive variational problem with optimal H1 and L2 error estimates. Numerically, the finite element discretization yields a skew-Hermitian dominant matrix for which classical algebraic preconditioners quickly degrade. To handle this, we explore three preconditioners for the resulting linear system. We first analyze the classical block Jacobi and block Gauss-Seidel preconditioners before presenting a custom, physics-based preconditioner that requires scalar Helmholtz solutions to apply but gives a very low outer iteration count. We also present analysis showing that the eigenvalues of the custom preconditioned system are mesh-dependent, but with a small coefficient. Numerical experiments confirm our theoretical discussion.
Application of genetic algorithms in nonlinear heat conduction problems.
Kadri, Muhammad Bilal; Khan, Waqar A
2014-01-01
Genetic algorithms (GAs) are employed to optimize dimensionless temperature in nonlinear heat conduction problems. Three common geometries are selected for the analysis and the concept of minimum entropy generation is used to determine the optimum temperatures under the same constraints. The thermal conductivity is assumed to vary linearly with temperature while internal heat generation is assumed to be uniform. The dimensionless governing equations are obtained for each selected geometry and the dimensionless temperature distributions are obtained using MATLAB. It is observed that the GA gives the minimum dimensionless temperature in each selected geometry.
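A real-coded GA of the general kind the abstract employs might look like the following. This is a generic sketch minimizing a toy quadratic; the paper's actual objective is the entropy-generation-based dimensionless temperature, and the operator choices (tournament selection, blend crossover, Gaussian mutation) and parameter values here are illustrative assumptions.

```python
import random

def ga_minimize(f, bounds, pop_size=30, gens=80, pc=0.9, pm=0.1, seed=0):
    """Minimize f with a simple real-coded genetic algorithm."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(v, lo, hi):
        return max(lo, min(hi, v))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = min(pop, key=f)
        new_pop = [elite[:]]  # elitism: always keep the best individual
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1 = min(rng.sample(pop, 3), key=f)
            p2 = min(rng.sample(pop, 3), key=f)
            child = p1[:]
            if rng.random() < pc:  # blend crossover
                child = [a + rng.random() * (b - a) for a, b in zip(p1, p2)]
            for k in range(dim):   # Gaussian mutation, clipped to the bounds
                if rng.random() < pm:
                    lo, hi = bounds[k]
                    child[k] = clip(child[k] + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
            new_pop.append(child)
        pop = new_pop
    best = min(pop, key=f)
    return best, f(best)

best, val = ga_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                        bounds=[(-5, 5), (-5, 5)])
```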
Application of the particle method to problems in the mechanics of deformable media
NASA Astrophysics Data System (ADS)
Berezhnoi, D. V.; Gabsalikova, N. F.; Miheev, V. V.
2016-11-01
This work implements a particle method for modeling the deformation of soil, which is represented as a collection of mineral grains linked by a system of forces acting on the contact areas between the mineral particles. The two-parameter Lennard-Jones potential and a modified version of it were selected to describe the behavior of the soil. Several model problems of the straining of a soil layer in a gravity field were solved. The calculations were performed on a heterogeneous computing cluster, on each of whose seven nodes three AMD Radeon HD 7970 GPUs were installed.
Application of computational fluid mechanics to atmospheric pollution problems
NASA Technical Reports Server (NTRS)
Hung, R. J.; Liaw, G. S.; Smith, R. E.
1986-01-01
One of the most noticeable effects of air pollution on the properties of the atmosphere is the reduction in visibility. This paper reports the results of investigations of the fluid dynamical and microphysical processes involved in the formation of advection fog on aerosols from combustion-related pollutants acting as condensation nuclei. The effects of a polydisperse aerosol distribution on the condensation/nucleation processes which cause the reduction in visibility are studied. This study demonstrates how computational fluid mechanics and heat transfer modeling can be applied to simulate the life cycle of atmospheric pollution problems.
Stability of charge inversion, Thomson problem, and application to electrophoresis
NASA Astrophysics Data System (ADS)
Patra, Michael; Patriarca, Marco; Karttunen, Mikko
2003-03-01
We analyze charge inversion in colloidal systems at zero temperature using stability concepts, and connect this to the classical Thomson problem of arranging electrons on a sphere. We show that for a finite microion charge, the globally stable, lowest-energy state of the complex formed by the colloid and the oppositely charged microions is always overcharged. This effect disappears in the continuous limit. Additionally, a layer of at least twice as many microions as required for charge neutrality is always locally stable. In an applied external electric field the stability of the microion cloud is reduced. Finally, this approach is applied to a system of two colloids at low but finite temperature.
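The Thomson problem the abstract invokes — minimizing the Coulomb energy of like charges constrained to a sphere — can be sketched with a crude numerical force relaxation. This is illustrative only (the paper's stability analysis is analytical); the step size and iteration count are assumptions.

```python
import math
import random

def thomson_energy(pts):
    """Total Coulomb energy of unit charges on the unit sphere."""
    return sum(1.0 / math.dist(pts[i], pts[j])
               for i in range(len(pts)) for j in range(i + 1, len(pts)))

def normalize(p):
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def thomson_relax(n, iters=2000, step=0.01, seed=0):
    """Crude local relaxation: nudge each charge along the net repulsive
    force, then project it back onto the sphere."""
    rng = random.Random(seed)
    pts = [normalize((rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)))
           for _ in range(n)]
    for _ in range(iters):
        forces = [[0.0, 0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j:
                    d = [a - b for a, b in zip(pts[i], pts[j])]
                    r3 = math.dist(pts[i], pts[j]) ** 3
                    for k in range(3):
                        forces[i][k] += d[k] / r3  # Coulomb repulsion
        pts = [normalize(tuple(p[k] + step * forces[i][k] for k in range(3)))
               for i, p in enumerate(pts)]
    return pts

pts = thomson_relax(4)
# For n = 4 the known minimizer is the regular tetrahedron (energy ≈ 3.674).
```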
Application of clustering global optimization to thin film design problems.
Lemarchand, Fabien
2014-03-10
Refinement techniques usually calculate an optimized local solution, which is strongly dependent on the initial formula used for the thin film design. In the present study, a clustering global optimization method is used which can iteratively change this initial formula, thereby progressing further than in the case of local optimization techniques. A wide panel of local solutions is found using this procedure, resulting in a large range of optical thicknesses. The efficiency of this technique is illustrated by two thin film design problems, in particular an infrared antireflection coating, and a solar-selective absorber coating.
Signature neural networks: definition and application to multidimensional sorting problems.
Latorre, Roberto; de Borja Rodriguez, Francisco; Varona, Pablo
2011-01-01
In this paper we present a self-organizing neural network paradigm that is able to discriminate information locally using a strategy for information coding and processing inspired by recent findings in living neural systems. The proposed neural network uses: 1) neural signatures to identify each unit in the network; 2) local discrimination of input information during the processing; and 3) a multicoding mechanism for information propagation regarding the who and the what of the information. The local discrimination implies a distinct processing as a function of the neural signature recognition and a local transient memory. In the context of artificial neural networks none of these mechanisms has been analyzed in detail, and our goal is to demonstrate that they can be used to efficiently solve some specific problems. To illustrate the proposed paradigm, we apply it to the problem of multidimensional sorting, which can take advantage of the local information discrimination. In particular, we compare the results of this new approach with traditional methods to solve jigsaw puzzles and we analyze the situations where the new paradigm improves the performance.
Application of remote sensing to state and regional problems
NASA Technical Reports Server (NTRS)
Miller, W. F. (Principal Investigator); Tingle, J.; Wright, L. H.; Tebbs, B.
1984-01-01
Progress was made in the hydroclimatology, habitat modeling and inventory, computer analysis, wildlife management, and data comparison programs that utilize LANDSAT and SEASAT data provided to Mississippi researchers through the remote sensing applications program. Specific topics include water runoff in central Mississippi; habitat models for the endangered gopher tortoise, coyote, and turkey; Geographic Information Systems (GIS) development; forest inventory along the Mississippi River; and the merging of LANDSAT and SEASAT data for enhanced forest type discrimination.
Inference of Stochastic Nonlinear Oscillators with Applications to Physiological Problems
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Luchinsky, Dmitry G.
2004-01-01
A new method for the inference of coupled stochastic nonlinear oscillators is described. The technique does not require extensive global optimization, provides optimal compensation for noise-induced errors, and is robust in a broad range of dynamical models. We illustrate the main ideas of the technique by inferring a model of five globally and locally coupled noisy oscillators. Specific modifications of the technique for inferring hidden degrees of freedom of coupled nonlinear oscillators are discussed in the context of physiological applications.
Remote sensing applications to resource problems in South Dakota
NASA Technical Reports Server (NTRS)
Myers, V. I. (Principal Investigator)
1981-01-01
The procedures used as well as the results obtained and conclusions derived are described for the following applications of remote sensing in South Dakota: (1) sage grouse management; (2) censusing Canada geese; (3) monitoring grasshopper infestation in rangeland; (4) detecting Dutch elm disease in an urban environment; (5) determining water usage from the Belle Fourche River; (6) resource management of the Lower James River; and (7) the National Model Implementation Program: Lake Herman watershed.
Application of Papkovich-Neuber potentials to a crack problem.
NASA Technical Reports Server (NTRS)
Kassir, M. K.; Sih, G. C.
1973-01-01
The problem of an elastic solid containing a semi-infinite plane crack subjected to concentrated shears parallel to the edge of the crack is considered in this paper. A closed form solution using four harmonic functions is found to satisfy the finite displacement and inverse square root stress singularity at the edge of the crack. Explicit expressions in terms of elementary functions are given for the distribution of stress and displacement in the solid. These are obtained by employing Fourier and Kontorovich-Lebedev integral transforms and certain singular solutions of Laplace's equation in three dimensions. The variations of the intensity of the local stress field along the crack border are shown graphically.
Applications of vacuum technology to novel accelerator problems
Garwin, E.L.
1983-01-01
Vacuum requirements for electron storage rings are most demanding to fulfill, due to the presence of gas desorption caused by large quantities of synchrotron radiation, the very limited area accessible for pumping ports, the need for 10⁻⁹ torr pressures in the ring, and for pressures a decade lower in the interaction regions. Design features of a wide variety of distributed ion sublimation pumps (DIP) developed at SLAC to meet these requirements are discussed, as well as NEG (non-evaporable getter) pumps tested for use in the Large Electron Positron Collider at CERN. Application of DIP to much higher pressures in electron damping rings for the Stanford Linear Collider is discussed.
Applications of phylogenetics to solve practical problems in insect conservation.
Buckley, Thomas R
2016-12-01
Phylogenetic approaches have much promise for the setting of conservation priorities and resource allocation. There has been significant development of analytical methods for the measurement of phylogenetic diversity within and among ecological communities as a way of setting conservation priorities. Application of these tools to insects has been limited, as has uptake by conservation managers. A critical reason for the lack of uptake is the scarcity of detailed phylogenetic and species distribution data across much of insect diversity. Environmental DNA technologies offer a means for the high-throughput collection of phylogenetic data across landscapes for conservation planning.
Application of wave mechanics theory to fluid dynamics problems: Fundamentals
NASA Technical Reports Server (NTRS)
Krzywoblocki, M. Z. V.
1974-01-01
The application of the basic formalistic elements of wave mechanics theory is discussed. The theory is used to describe the physical phenomena on the microscopic level, the fluid dynamics of gases and liquids, and the analysis of physical phenomena on the macroscopic (visually observable) level. The practical advantages of relating the two fields of wave mechanics and fluid mechanics through the use of the Schroedinger equation constitute the approach to this relationship. Some of the subjects include: (1) fundamental aspects of wave mechanics theory, (2) laminarity of flow, (3) velocity potential, (4) disturbances in fluids, (5) introductory elements of the bifurcation theory, and (6) physiological aspects in fluid dynamics.
Application of remote sensing to state and regional problems. [mississippi
NASA Technical Reports Server (NTRS)
Miller, W. F.; Powers, J. S.; Clark, J. R.; Solomon, J. L.; Williams, S. G. (Principal Investigator)
1981-01-01
The methods and procedures used, accomplishments, current status, and future plans are discussed for each of the following applications of LANDSAT in Mississippi: (1) land use planning in Lowndes County; (2) strip mine inventory and reclamation; (3) white-tailed deer habitat evaluation; (4) remote sensing data analysis support systems; (5) discrimination of unique forest habitats in potential lignite areas; (6) changes in gravel operations; and (7) determining freshwater wetlands for inventory and monitoring. The documentation of all existing software and the integration of the image analysis and data base software into a single package are now considered very high priority items.
Application of the artificial bee colony algorithm for solving the set covering problem.
Crawford, Broderick; Soto, Ricardo; Cuesta, Rodrigo; Paredes, Fernando
2014-01-01
The set covering problem is a formal model for many practical optimization problems. In the set covering problem the goal is to choose a subset of the columns of minimal cost that covers every row. Here, we present a novel application of the artificial bee colony algorithm to solve the non-unicost set covering problem. The artificial bee colony algorithm is a recent swarm metaheuristic technique based on the intelligent foraging behavior of honey bees. Experimental results show that our artificial bee colony algorithm is competitive in terms of solution quality with other recent metaheuristic approaches for the set covering problem.
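For context, the set covering problem itself can be illustrated with the standard greedy heuristic, which is often used as a baseline or a seed for metaheuristics. Note this is the classic greedy method, plainly swapped in for the paper's artificial bee colony algorithm, and the instance data are made up.

```python
def greedy_set_cover(universe, subsets, costs):
    """Greedy heuristic for the (non-unicost) set covering problem:
    repeatedly pick the column with the best cost per newly covered row.
    Assumes the instance is feasible (the subsets jointly cover the universe)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min((i for i in range(len(subsets)) if subsets[i] & uncovered),
                   key=lambda i: costs[i] / len(subsets[i] & uncovered))
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Rows 1..5 must be covered; each subset is a column with a cost.
universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
costs = [5, 10, 3, 1]
picked = greedy_set_cover(universe, subsets, costs)
```

A metaheuristic such as ABC explores many candidate column subsets instead of committing to the single greedy choice at each step.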
[Current problems of information technologies application for forces medical service].
Ivanov, V V; Korneenkov, A A; Bogomolov, V D; Borisov, D N; Rezvantsev, M V
2013-06-01
Modern information technologies are key factors in upgrading the forces medical service. The aim of this article is to analyze prospective applications of information technologies for upgrading the forces medical service. Drawing on data about information technology use in foreign armed forces, analysis of the regulatory background, the prospects of the military medical service, and the accumulated experience of specialists, the authors propose three concepts of information support for Russian military health care: development of a unified telecommunication network for the medical service of the Armed Forces of the Russian Federation; development and implementation of standard medical information systems for medical units and establishments; and monitoring of military personnel health status and military medical service resources. It is noted that, given sufficient centralized financing and industrial implementation of these prospective information technologies, a unified information space of the military medical service will be created by 2020 and the targeted effectiveness of information support will be achieved.
NASA Technical Reports Server (NTRS)
Hidalgo, J. U.
1975-01-01
The applicability of remote sensing to transportation and traffic analysis, urban quality, and land use problems is discussed. Other topics discussed include preliminary user analysis, potential uses, traffic study by remote sensing, and urban condition analysis using ERTS.
On the range of applicability of Baker's approach to the frame problem
Kartha, G.N.
1996-12-31
We investigate the range of applicability of Baker's approach to the frame problem using an action language. We show that for temporal projection and deterministic domains, Baker's approach gives the intuitively expected results.
Application of Lie groups to discretizing nuclear engineering problems
NASA Astrophysics Data System (ADS)
Grove, Travis Justin
A method utilizing groups of point transformations is applied to the three- and four-group time-independent neutron diffusion equations to obtain invariant difference equations for one-region and composite-region domains in one-dimensional Cartesian, cylindrical, and spherical geometries. The theory behind this particular method is also discussed. The invariant difference equations are compared to standard finite difference equations as well as to analytical results. From the analytical results, it is shown that the invariant difference technique gives exact analytical solutions for the grid point values. The construction of invariant difference operators is also applied to the one-dimensional P3 equations from neutron transport theory in Cartesian geometry, using the FLIP formulation, which allows PL equations to be written as sets of coupled ordinary differential equations. The use of finite transforms is examined to reduce multi-dimensional problems to one dimension, where the construction of invariant difference operators can then be used to create difference equations; the solutions can then be transformed back into the multi-dimensional geometries. Finite transforms combined with the construction of invariant difference operators are applied to a simple two-dimensional benchmark problem. In addition, a method using groups of point transformations along with Noether's theorem is shown to generate a conservation law that can be used to create a two-term recurrence relation which calculates numerically exact Green's functions in one dimension for the time-independent neutron diffusion equation in Cartesian, cylindrical, and spherical geometries. This method is expanded to construct two-term recurrence relations for an arbitrary number of spatial regions, as well as detailing starting point values for type 2 and type 3 homogeneous endpoint
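The exact-grid-point property claimed above can be illustrated on a much simpler model than the thesis treats (a hypothetical one-group, source-free slab example; the thesis derives its schemes via Lie point transformations, which this sketch does not reproduce):

```latex
% Model problem: one-group, source-free diffusion in slab geometry
\phi''(x) - \kappa^2\,\phi(x) = 0,
\qquad \phi(x) = A\,e^{\kappa x} + B\,e^{-\kappa x}.

% On a uniform grid x_i = ih, the identity
e^{\pm\kappa(x+h)} + e^{\pm\kappa(x-h)} = 2\cosh(\kappa h)\,e^{\pm\kappa x}

% shows that every analytic solution satisfies, exactly at the grid points,
\phi_{i+1} - 2\cosh(\kappa h)\,\phi_i + \phi_{i-1} = 0,

% whereas the standard central-difference scheme
\phi_{i+1} - \bigl(2 + \kappa^2 h^2\bigr)\phi_i + \phi_{i-1} = 0

% agrees only to O(h^4), since 2\cosh(\kappa h) = 2 + \kappa^2 h^2 + O(h^4).
```

Such "exact" schemes are the simplest instance of the behavior the thesis obtains more generally from invariant difference operators.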
Applications of mineral surface chemistry to environmental problems
NASA Astrophysics Data System (ADS)
White, Art F.
1995-07-01
Environmental surface chemistry involves processes that occur at the interface between the regolith, hydrosphere and atmosphere. The more limited scope of the present review addresses natural and anthropogenically-induced inorganic geochemical reactions between solutes in surface and ground waters and soil and aquifer substrates. Important surficial reactions include sorption, ion exchange, dissolution, precipitation and heterogeneous oxidation/reduction processes occurring at the solid/aqueous interface. Recent research advances in this field have addressed, both directly and indirectly, societal issues related to water quality, pollution, biogeochemical cycling, nutrient budgets and chemical weathering related to long term global climate change. This review will include recent advances in the fundamental and theoretical understanding of these surficial processes, breakthroughs in experimental and instrumental surface characterization, and development of methodologies for field applications.
Topographic mapping of oral structures - problems and applications in prosthodontics
NASA Astrophysics Data System (ADS)
Young, John M.; Altschuler, Bruce R.
1981-10-01
The diagnosis and treatment of malocclusion, and the proper design of restorations and prostheses, require the determination of the surface topography of the teeth and related oral structures. Surface contour measurements involve not only affected teeth, but also the adjacent and opposing surface contours composing a complexly interacting occlusal system. Little a priori knowledge can be assumed, as dental structures are largely asymmetrical, non-repetitive, and non-uniformly curved in 3-D space. Present diagnosis, treatment planning, and fabrication rely entirely on the generation of physical replicas during each stage of treatment. Fabrication is limited to materials that lend themselves to casting or coating, and to hand fitting and finishing. Inspection is primarily by vision and patient perceptual feedback. Production methods are time-consuming. Prostheses are entirely custom designed by manual methods, require costly skilled technical labor, and do not lend themselves to centralization. The potential improvements in diagnostic techniques, patient care, productivity, and savings in material and man-hours that could result from rapid and accurate remote measurement and numerical (automated) fabrication methods would be significant. The unique problems of mapping oral structures, and specific limitations in materials and methods, are reviewed.
COAMPS Application to Global and Homeland Security Threat Problems
Chin, H S; Glascoe, L G
2004-09-14
Atmospheric dispersion problems have received more attention with regard to global and homeland security in the post-9/11 era than in their conventional roles in air pollution and local hazard assessment. Consequently, there is growing interest in characterizing meteorological uncertainty at both low and high altitudes (below and above 30 km, respectively). The 3-D Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS, developed by the Naval Research Laboratory; Hodur, 1997) is used to address LLNL's task. This report focuses on efforts to improve the COAMPS forecast to address the uncertainty issue and to provide new capability for high-altitude forecasting. To assess atmospheric dispersion behavior in a wider range of meteorological conditions and to expand the model's vertical scope for potential threats at high altitudes, several modifications of COAMPS are needed to meet the project goal. These improvements include (1) a long-range forecast capability to show the variability of meteorological conditions at a much larger time scale (say, a year), and (2) enhanced model physics to provide new capability for high-altitude forecasting.
An Exploratory Application of Neural Networks to the Sortie Generation Forecasting Problem
1991-09-01
AD-A246 626. An Exploratory Application of Neural Networks to the Sortie Generation Forecasting Problem. Thesis, James M. Dagg, GS-12, AFIT/GLM/LSM/91S-11. Approved for public release; distribution unlimited. The views expressed in this thesis are those of the author and do not reflect the official
Application Problem of Biomass Combustion in Greenhouses for Crop Production
NASA Astrophysics Data System (ADS)
Kawamura, Atsuhiro; Akisawa, Atsushi; Kashiwagi, Takao
Much energy from fossil fuels is consumed to produce crops in greenhouses in Japan, and flue gas is used as CO2 fertilization for growing crops in modern greenhouses. If biomass, as a renewable energy source, could be used to produce vegetables in greenhouses, more than 800,000 kl of energy a year (in crude oil equivalent) would be saved. In this study, we first built biomass combustion equipment and performed fundamental examinations of various pellet fuels. We then performed an examination that considered application to a real greenhouse. We considered biomass as both a source of energy and of CO2 gas for greenhouses, and obtained the following findings: 1) Based on the standard of CO2 gas fertilization for greenhouses, it is difficult to apply biomass as a CO2 fertilizer, so biomass should be applied to energy use only, at least for the time being. 2) Practical biomass energy machinery for greenhouses that is economical, highly reliable, and easy to maintain is necessary. 3) It is necessary to develop crop varieties and cultivation systems requiring less strict environmental control. 4) Effective practical use must be found for the abundant combustion ash produced.
Application of the INSTANT-HPS PN Transport Code to the C5G7 Benchmark Problem
Y. Wang; H. Zhang; R. H. Szilard; R. C. Martineau
2011-06-01
INSTANT is INL's next-generation neutron transport solver supporting high-fidelity multi-physics reactor simulation. It is in continuous development to extend its capability, is designed to take full advantage of middle-to-large clusters (10-1000 processors), and focuses on method adaptation, while mesh adaptation will also be possible. It utilizes modern computing techniques to provide a neutronics tool for full-core transport calculations for reactor analysis and design. It can perform calculations on unstructured 2D/3D triangular, hexagonal, and Cartesian geometries; calculations can easily be extended to more geometries because of the independent mesh framework coded in modern Fortran. The code has a multigroup solver with thermal rebalance and Chebyshev acceleration. It employs a second-order PN and hybrid finite element method (PN-HFEM) discretization scheme. Three different in-group solvers - the preconditioned Conjugate Gradient (CG) method, the preconditioned Generalized Minimal Residual method (GMRES), and Red-Black iteration - have been implemented and parallelized with spatial domain decomposition. The input is managed in extensible markup language (XML) format. 3D variables, including the flux distributions, are output to VTK files, which can be visualized by tools such as VisIt and ParaView. An extension of the code named INSTANT-HPS provides the capability to perform 3D heterogeneous transport calculations within fuel pins. C5G7 is an OECD/NEA benchmark problem created to test the ability of modern deterministic transport methods and codes to treat reactor core problems without spatial homogenization; it has been widely analyzed with various code packages. In this transaction, results of applying the INSTANT-HPS code to the C5G7 problem are summarized.
Application of fluorescent dyes for some problems of bioelectromagnetics
NASA Astrophysics Data System (ADS)
Babich, Danylo; Kylsky, Alexandr; Pobiedina, Valentina; Yakunov, Andrey
2016-04-01
Solutions of fluorescent organic dyes are used for non-contact measurement of millimeter-wave absorption in liquids simulating biological tissue. Despite the widespread use of microwave radiation in the food industry, biotechnology, and medicine, there is still no settled idea of the physical mechanism describing this process. Creating an adequate physical model requires accurate knowledge of the interaction between millimeter waves and the irradiated object. Three H-bonded liquids with different absorption coefficients in the millimeter range were selected as samples: water (strong absorption), glycerol (medium absorption), and ethylene glycol (light absorption). The measurements showed that the greatest response to the action of microwaves occurs for glycerol solutions: R6G (building-up luminescence) and RC (fading luminescence). For aqueous solutions the signal is lower due to the lower quantum efficiency of luminescence, and for ethylene glycol due to the low absorption of microwaves. A local increase of temperature in the area of exposure was estimated. For aqueous solutions of both dyes, the maximum temperature increase caused by millimeter-wave absorption is about 7 °C, which coincides with direct radiophysical measurements and is confirmed by theoretical calculations. However, for the glycerol solution of R6G the temperature equivalent of the building-up luminescence is around 9 °C, and for the ethylene glycol solution it is about 15 °C. A non-thermal effect of microwaves on different processes and substances is assumed to be possible. This non-contact temperature sensing is a simple and novel method to detect temperature changes in small biological objects.
Application of CHAD hydrodynamics to shock-wave problems
Trease, H.E.; O'Rourke, P.J.; Sahota, M.S.
1997-12-31
CHAD is the latest in a sequence of continually evolving computer codes written to effectively utilize massively parallel computer architectures and the latest grid generators for unstructured meshes. Its applications range from automotive design issues such as in-cylinder and manifold flows of internal combustion engines, vehicle aerodynamics, underhood cooling and passenger compartment heating, ventilation, and air conditioning to shock hydrodynamics and materials modeling. CHAD solves the full unsteady Navier-Stokes equations with the k-epsilon turbulence model in three space dimensions. The code has four major features that distinguish it from the earlier KIVA code, also developed at Los Alamos. First, it is based on a node-centered, finite-volume method in which, like finite element methods, all fluid variables are located at computational nodes. The computational mesh efficiently and accurately handles all element shapes ranging from tetrahedra to hexahedra. Second, it is written in standard Fortran 90 and relies on automatic domain decomposition and a universal communication library written in standard C and MPI for unstructured grids to effectively exploit distributed-memory parallel architectures. Thus the code is fully portable to a variety of computing platforms such as uniprocessor workstations, symmetric multiprocessors, clusters of workstations, and massively parallel platforms. Third, CHAD utilizes a variable explicit/implicit upwind method for convection that improves computational efficiency in flows that have large velocity Courant number variations due to velocity or mesh size variations. Fourth, CHAD is designed to also simulate shock hydrodynamics involving multimaterial anisotropic behavior under high shear. The authors discuss CHAD capabilities and show several sample calculations illustrating the strengths and weaknesses of CHAD.
NASA Astrophysics Data System (ADS)
Zheng, Jiajia; Li, Yancheng; Li, Zhaochun; Wang, Jiong
2015-10-01
This paper presents multi-physics modeling of an MR absorber considering the magnetic hysteresis to capture the nonlinear relationship between the applied current and the generated force under impact loading. The magnetic field, temperature field, and fluid dynamics are represented by the Maxwell equations, conjugate heat transfer equations, and Navier-Stokes equations. These fields are coupled through the apparent viscosity and the magnetic force, both of which in turn depend on the magnetic flux density and the temperature. Based on a parametric study, an inverse Jiles-Atherton hysteresis model is used and implemented for the magnetic field simulation. The temperature rise of the MR fluid in the annular gap caused by core loss (i.e. eddy current loss and hysteresis loss) and fluid motion is computed to investigate the current-force behavior. A group of impulsive tests was performed for the manufactured MR absorber with step exciting currents. The numerical and experimental results showed good agreement, which validates the effectiveness of the proposed multi-physics FEA model.
NASA Technical Reports Server (NTRS)
Rado, B. Q.
1975-01-01
Automatic classification techniques are described in relation to future information and natural resource planning systems, with emphasis on application to Georgia resource management problems. The concept, design, and purpose of Georgia's statewide Resource Assessment Program are reviewed, along with participation in a workshop at the Earth Resources Laboratory. Potential areas of application discussed include agriculture, forestry, water resources, environmental planning, and geology.
Rethinking the lecture: the application of problem based learning methods to atypical contexts.
Rogal, Sonya M M; Snider, Paul D
2008-05-01
Problem based learning is a teaching and learning strategy that uses a problematic stimulus as a means of motivating and directing students to develop and acquire knowledge. Problem based learning is typically used with small groups attending a series of sessions. This article describes the principles of problem based learning and its application in atypical contexts: large groups attending discrete, stand-alone sessions. The principles of problem based learning are based on Socratic teaching, constructivism, and group facilitation. To demonstrate the application of problem based learning in an atypical setting, this article focuses on the graduate nurse intake of a teaching hospital; the groups are relatively large and meet for single-day sessions. The modified applications of problem based learning to meet the needs of atypical groups are described, and the article contains a step-by-step guide to constructing a problem based learning package for large, single-session groups. Nurse educators facing similar groups will find they can modify problem based learning to suit their teaching context.
Application of symbolic and algebraic manipulation software in solving applied mechanics problems
NASA Technical Reports Server (NTRS)
Tsai, Wen-Lang; Kikuchi, Noboru
1993-01-01
As its name implies, symbolic and algebraic manipulation is an operational tool which can not only retain symbols throughout computations but also express results in terms of symbols. This report starts with a history of symbolic and algebraic manipulators and a review of the literature. With the help of selected examples, the capabilities of symbolic and algebraic manipulators are demonstrated. Applications to problems of applied mechanics are then presented: the application of automatic formulation to applied mechanics problems, application to a materially nonlinear problem (rigid-plastic ring compression) by the finite element method (FEM), and application to plate problems by FEM. The advantages and difficulties, contributions, education, and perspectives of symbolic and algebraic manipulation are discussed. It is well known that there exist some fundamental difficulties in symbolic and algebraic manipulation, such as internal swelling and mathematical limitations. A remedy for these difficulties is proposed, and the three applications mentioned are solved successfully. For example, the closed-form solution of the stiffness matrix of the four-node isoparametric quadrilateral element for the 2-D elasticity problem was not available before; the work presented makes its automatic construction feasible. In addition, a newly found advantage of symbolic and algebraic manipulation is believed to be crucial in improving the efficiency of program execution in the future. This will substantially shorten the response time of a system, which is very significant for certain systems, such as missile and high-speed aircraft systems, in which time plays an important role.
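As a small modern illustration of the idea (using the open-source SymPy package rather than the manipulators surveyed in the report, and a 1-D bar element rather than the four-node quadrilateral, for brevity), a closed-form element stiffness matrix can be generated automatically:

```python
import sympy as sp

x, L, E, A = sp.symbols('x L E A', positive=True)

# linear shape functions of a 2-node bar element on [0, L]
N = sp.Matrix([[1 - x/L, x/L]])
B = N.diff(x)                                   # strain-displacement matrix
K = sp.integrate(E * A * B.T * B, (x, 0, L))    # symbolic element stiffness matrix

# K == (E*A/L) * [[1, -1], [-1, 1]], the well-known closed form
```

The symbolic result holds for arbitrary E, A, and L, which is exactly the kind of closed-form construction the report argues improves downstream program efficiency.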
Gradient vs. approximation design optimization techniques in low-dimensional convex problems
NASA Astrophysics Data System (ADS)
Fedorik, Filip
2013-10-01
The application of design optimization methods in structural design is a suitable way to obtain efficient designs for practical problems. The implementation of optimization techniques in multi-physics software permits designers to use them in a wide range of engineering problems. These methods are usually based on modified mathematical programming techniques and/or their combinations to improve universality and robustness for various human and technical problems. The presented paper deals with the analysis of optimization methods and tools within the frame of one- to three-dimensional strictly convex optimization problems, which represent a component of the Design Optimization module in the Ansys program. The First Order method, based on a combination of the steepest descent and conjugate gradient methods, and the Subproblem Approximation method, which uses approximation of the dependent variables' functions, accompanied by the Random, Sweep, Factorial, and Gradient tools, are analyzed, wherein different characteristics of the methods are observed.
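As a sketch of the gradient-based branch only (not Ansys's implementation), the conjugate gradient method with exact line search can be written compactly for a strictly convex quadratic, which it minimizes in at most as many iterations as there are dimensions:

```python
def cg_minimize(Q, b, x0, iters=20, tol=1e-18):
    """Fletcher-Reeves conjugate gradients with exact line search on
    f(x) = 0.5*x'Qx - b'x, for symmetric positive definite Q."""
    mv = lambda M, v: [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]
    dot = lambda u, v: sum(a * c for a, c in zip(u, v))
    x = list(x0)
    g = [gi - bi for gi, bi in zip(mv(Q, x), b)]     # gradient: Qx - b
    d = [-gi for gi in g]                            # first direction: steepest descent
    for _ in range(iters):
        if dot(g, g) < tol:
            break
        Qd = mv(Q, d)
        alpha = -dot(g, d) / dot(d, Qd)              # exact minimizer along d
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi - bi for gi, bi in zip(mv(Q, x), b)]
        beta = dot(g_new, g_new) / dot(g, g)         # Fletcher-Reeves coefficient
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return x

# strictly convex 2-D example: minimum at the solution of Qx = b
Q = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg_minimize(Q, b, [0.0, 0.0])   # -> approximately [1/11, 7/11]
```

On a non-quadratic objective the same update is applied to the true gradient, which is the sense in which a "first order" optimizer combines steepest descent with conjugate directions.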
Applications of numerical optimization methods to helicopter design problems: A survey
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
A survey is presented of applications of mathematical programming methods to improve the design of helicopters and their components. Applications of multivariable search techniques in the finite dimensional space are considered. Five categories of helicopter design problems are considered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.
Applications of numerical optimization methods to helicopter design problems - A survey
NASA Technical Reports Server (NTRS)
Miura, H.
1985-01-01
A survey is presented of applications of mathematical programming methods to improve the design of helicopters and their components. Applications of multivariable search techniques in the finite dimensional space are considered. Five categories of helicopter design problems are considered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.
ERIC Educational Resources Information Center
Dershem, Herbert L.
These modules view aspects of computer use in the problem-solving process, and introduce techniques and ideas that are applicable to other modes of problem solving. The first unit looks at algorithms, flowchart language, and problem-solving steps that apply this knowledge. The second unit describes ways in which computer iteration may be used…
NASA Astrophysics Data System (ADS)
Dorn, O.; Lesselier, D.
2010-07-01
practically relevant inverse problems. The contribution by M Li, A Abubakar and T Habashy, 'Application of a two-and-a-half dimensional model-based algorithm to crosswell electromagnetic data inversion', deals with a model-based inversion technique for electromagnetic imaging which addresses novel challenges such as multi-physics inversion, and incorporation of prior knowledge, such as in hydrocarbon recovery. 10. Non-stationary inverse problems, considered as a special class of Bayesian inverse problems, are framed via an orthogonal decomposition representation in the contribution by A Lipponen, A Seppänen and J P Kaipio, 'Reduced order estimation of nonstationary flows with electrical impedance tomography'. The goal is to simultaneously estimate, from electrical impedance tomography data, certain characteristics of the Navier-Stokes fluid flow model together with time-varying concentration distribution. 11. Non-iterative imaging methods of thin, penetrable cracks, based on asymptotic expansion of the scattering amplitude and analysis of the multi-static response matrix, are discussed in the contribution by W-K Park, 'On the imaging of thin dielectric inclusions buried within a half-space', completing, for a shallow burial case at multiple frequencies, the direct imaging of small obstacles (here, along their transverse dimension), MUSIC and non-MUSIC type indicator functions being used for that purpose. 12. The contribution by R Potthast, 'A study on orthogonality sampling' envisages quick localization and shaping of obstacles from (portions of) far-field scattering patterns collected at one or more time-harmonic frequencies, via the simple calculation (and summation) of scalar products between those patterns and a test function. This is numerically exemplified for Neumann/Dirichlet boundary conditions and homogeneous/heterogeneous embedding media. 13. 
The contribution by J D Shea, P Kosmas, B D Van Veen and S C Hagness, `Contrast-enhanced microwave imaging of breast
Necessary conditions for maximax problems with application to aeroglide of hypervelocity vehicles
NASA Technical Reports Server (NTRS)
Vinh, N. X.; Lu, P.
1986-01-01
This paper presents the necessary conditions for solving Chebyshev minimax (or maximax) problems with bounded control. The jump conditions obtained are applicable to problems with single or multiple maxima. By using the Contensou domain of maneuverability, it is shown that when the maxima are isolated single points, the control is generally continuous at the jump point in minimax problems and discontinuous in maximax problems in which the first time derivative of the maximax function contains the control variable. The theory is applied to the problem of maximizing the flight radius in a closed-circuit glide of a hypervelocity vehicle and to a maximax optimal control problem in which the control appears explicitly in the first time derivative of the maximax function.
Application of the SNoW machine learning paradigm to a set of transportation imaging problems
NASA Astrophysics Data System (ADS)
Paul, Peter; Burry, Aaron M.; Wang, Yuheng; Kozitsky, Vladimir
2012-01-01
Machine learning methods have been successfully applied to image object classification problems where there is clear distinction between classes and where a comprehensive set of training samples and ground truth are readily available. The transportation domain is an area where machine learning methods are particularly applicable, since the classification problems typically have well-defined class boundaries and, due to high traffic volumes in most applications, massive roadway data is available. Though these classes tend to be well defined, the particular image noise and variations can be challenging. Another challenge is the extremely high accuracy typically required in most traffic applications: incorrect assignment of fines or tolls due to imaging mistakes is not acceptable. For the front-seat vehicle occupancy detection problem, classification amounts to determining whether one face (driver only) or two faces (driver + passenger) are detected in the front seat of a vehicle on a roadway. For automatic license plate recognition, the classification problem is a type of optical character recognition problem encompassing multi-class classification. The SNoW machine learning classifier using local SMQT features is shown to be successful in these two transportation imaging applications.
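The SNoW architecture is built from Winnow units; the basic mistake-driven multiplicative-update rule (a sketch of the underlying learner only, not of SNoW itself or of the SMQT features used in the paper) fits in a few lines:

```python
def winnow_train(samples, n_features, alpha=2.0, epochs=10):
    """Basic Winnow: multiplicative promotion/demotion over Boolean features."""
    w = [1.0] * n_features
    theta = n_features / 2.0                     # fixed decision threshold
    predict = lambda x: 1 if sum(wi for wi, xi in zip(w, x) if xi) >= theta else 0
    for _ in range(epochs):
        for x, y in samples:
            if predict(x) != y:                  # update only on mistakes
                factor = alpha if y == 1 else 1.0 / alpha
                w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w, theta

# target concept: y = x0 OR x1 over four Boolean features
samples = [([1, 0, 0, 1], 1), ([0, 1, 1, 0], 1), ([0, 0, 1, 1], 0),
           ([0, 0, 0, 0], 0), ([1, 1, 0, 0], 1), ([0, 0, 1, 0], 0)]
w, theta = winnow_train(samples, 4)
```

Winnow's mistake bound grows only logarithmically with the number of irrelevant features, which is what makes it attractive for the very sparse feature spaces used in classifiers of this kind.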
NASA Astrophysics Data System (ADS)
Zheng, Xu; Hao, Zhiyong; Wang, Xu; Mao, Jie
2016-06-01
High-speed-railway-train interior noise at low, medium, and high frequencies can be simulated by finite element analysis (FEA) or boundary element analysis (BEA), hybrid finite element analysis-statistical energy analysis (FEA-SEA), and statistical energy analysis (SEA), respectively. First, a new method named statistical acoustic energy flow (SAEF) is proposed, which can be applied to full-spectrum HST interior noise simulation (including low, medium, and high frequencies) with only one model. In an SAEF model, the corresponding multi-physical-field coupling excitations are first fully considered and coupled to excite the interior noise. The interior noise attenuated by the sound insulation panels of the carriage is simulated by modeling the inflow of acoustic energy from the exterior excitations into the interior acoustic cavities. Rigid multi-body dynamics, fast multipole BEA, and large-eddy simulation with indirect boundary element analysis are employed to extract the multi-physical-field excitations, which include the wheel-rail interaction forces/secondary suspension forces, the wheel-rail rolling noise, and the aerodynamic noise, respectively. All the peak values and their frequency bands of the simulated acoustic excitations are validated against those from a noise source identification test. Besides, the measured equipment noise inside the equipment compartment is used as one of the excitation sources contributing to the interior noise. Second, a fully trimmed FE carriage model is constructed, and the simulated modal shapes and frequencies agree well with the measured ones, which validates the global FE carriage model as well as the local FE models of the aluminum alloy-trim composite panel; thus, the sound transmission loss model of any composite panel has been indirectly validated. Finally, the SAEF model of the carriage is constructed based on the accurate FE model and stimulated by the multi-physical-field excitations. The results show
Applications of space teleoperator technology to the problems of the handicapped
NASA Technical Reports Server (NTRS)
Malone, T. B.; Deutsch, S.; Rubin, G.; Shenk, S. W.
1973-01-01
The identification of feasible and practical applications of space teleoperator technology to the problems of the handicapped was studied. A teleoperator system is defined by NASA as a remotely controlled, cybernetic, man-machine system designed to extend and augment man's sensory, manipulative, and locomotive capabilities. Based on a consideration of teleoperator systems, the scope of the study was limited to an investigation of those handicapped persons limited in sensory, manipulative, and locomotive capabilities; if the technology being developed for teleoperators has any direct application, it must be in these functional areas. Feasible and practical applications of teleoperator technology to the problems of the handicapped are described, and design criteria are presented with each application. A development plan is established to bring the applications to the point of use.
NASA Astrophysics Data System (ADS)
Yaakob, Shamshul Bahar; Watada, Junzo
In this paper, a hybrid neural network approach to solving mixed integer quadratic bilevel programming problems is proposed. Bilevel programming problems arise when one optimization problem, the upper problem, is constrained by another optimization problem, the lower problem. The mixed integer quadratic bilevel programming problem is transformed into a double-layered neural network. The combination of a genetic algorithm (GA) and a meta-controlled Boltzmann machine (BM) enables us to formulate a hybrid neural network approach to solving bilevel programming problems. The GA is used to generate feasible partial solutions of the upper level and to provide the parameters for the lower level. The meta-controlled BM is employed to solve the lower-level problem, and the lower-level solution is transmitted back to the upper level. This procedure enables us to obtain the complete upper-level solution, and the iterative process converges to an optimal solution of the whole problem. The proposed method leads the mixed integer quadratic bilevel programming problem to a global optimal solution. Finally, a numerical example is used to illustrate the application of the method in a power system environment, which shows that the algorithm is feasible and advantageous.
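As an illustration of the nested structure described above, here is a minimal sketch in Python. The toy objectives, bounds, and GA settings are all hypothetical; in particular, the lower level is solved in closed form here, whereas the paper employs a meta-controlled Boltzmann machine for that level:

```python
import random

# Toy mixed-integer bilevel problem (all numbers hypothetical, for
# illustration only).  Upper level: choose an integer x in {0,...,10}
# minimizing F(x, y*) = (x - 3)**2 + 2*y*, where y* solves the
# lower-level problem min_y (y - x)**2.

def lower_level(x):
    # Closed-form lower-level optimum; the paper solves this level with
    # a meta-controlled Boltzmann machine instead.
    return x  # argmin_y (y - x)**2

def upper_objective(x):
    return (x - 3) ** 2 + 2 * lower_level(x)

def genetic_search(pop_size=20, generations=50, seed=0):
    """GA over the upper-level integer variable: elitist selection
    plus +/-1 mutation of surviving candidates."""
    rng = random.Random(seed)
    pop = [rng.randint(0, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=upper_objective)       # rank by upper-level objective
        survivors = pop[: pop_size // 2]    # keep the best half (elitism)
        pop = survivors + [
            max(0, min(10, rng.choice(survivors) + rng.choice([-1, 0, 1])))
            for _ in range(pop_size - len(survivors))
        ]
    return min(pop, key=upper_objective)

best = genetic_search()  # best upper-level decision found
```

With these toy objectives the true optimum is x = 2 with F = 5; the elitist selection ensures the best candidate found is never lost between generations, mirroring how the upper-level GA retains feasible partial solutions.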
NASA Astrophysics Data System (ADS)
Achiamah-Ampomah, N.; Cheng, Kai
2016-02-01
An investigation was carried out to improve the slow surface finishing times of integrally bladed rotors (IBRs) in the aerospace industry. Traditionally they are finished by hand or, more recently, by abrasive flow machining. The use of a vibratory finishing technique to improve process times has been suggested; however, vibratory finishing remains a largely empirical process, and very few studies have sought to improve and optimize its cycle times, so critical research is still needed in this area. An extensive review of the literature was carried out, and the findings were used to identify the key parameters and model equations that govern the vibratory process. Recommendations were made towards a multi-physics-based simulation model, as well as projections for the future of vibratory finishing and the optimization of surface finishes and cycle times.
Application of the steepest ascent optimization method to a reentry trajectory problem
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
The direct optimization method is presented in detail. Nominal values of the control variables are input parameters. Perturbations are introduced into the control variables, and the resulting first-order predictions of changes in the payoff and constraint functions are then determined. Through a sequence of prescribed cycles, a trajectory is eventually obtained that is reasonably close to the optimum. The method is successfully applied to an Apollo three-dimensional reentry problem. The study of this Apollo application problem has resulted in the development of a highly flexible computer program that can be modified to consider other trajectory optimization problems.
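For orientation, in the classical steepest-ascent formulation of trajectory optimization (the general method, not the report's specific equations), the first-order prediction of the payoff change under a control perturbation, and the resulting update, can be written as:

```latex
% Steepest-ascent (gradient) step in function space: H is the
% Hamiltonian of the adjoint formulation, u the control, K > 0 a gain.
\delta J \;\approx\; \int_{t_0}^{t_f} \frac{\partial H}{\partial u}\,\delta u \,\mathrm{d}t,
\qquad
\delta u(t) \;=\; K\,\frac{\partial H}{\partial u}(t),
```

Each cycle evaluates $\partial H / \partial u$ along the nominal trajectory (via backward integration of the adjoint equations) and moves the control in the direction of steepest payoff increase, with corrections to hold the constraints.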
Boundary-value problems for elliptic functional-differential equations and their applications
NASA Astrophysics Data System (ADS)
Skubachevskii, A. L.
2016-10-01
Boundary-value problems are considered for strongly elliptic functional-differential equations in bounded domains. In contrast to the case of elliptic differential equations, smoothness of generalized solutions of such problems can be violated in the interior of the domain and may be preserved only on some subdomains, and the symbol of a self-adjoint semibounded functional-differential operator can change sign. Both necessary and sufficient conditions are obtained for the validity of a Gårding-type inequality in algebraic form. Spectral properties of strongly elliptic functional-differential operators are studied, and theorems are proved on smoothness of generalized solutions in certain subdomains and on preservation of smoothness on the boundaries of neighbouring subdomains. Applications of these results are found to the theory of non-local elliptic problems, to the Kato square-root problem for an operator, to elasticity theory, and to problems in non-linear optics. Bibliography: 137 titles.
Inverse problems with Poisson data: statistical regularization theory, applications and algorithms
NASA Astrophysics Data System (ADS)
Hohage, Thorsten; Werner, Frank
2016-09-01
Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering, and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results, we establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As the most prominent applications we briefly introduce positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years.
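The penalized maximum likelihood estimators discussed in the review typically take the variational form below; for Poisson data the fidelity term is the negative log-likelihood, a Kullback-Leibler-type functional (notation generic, not taken from the paper):

```latex
% Penalized maximum likelihood (variational) estimator; for Poisson data
% the fidelity S is the negative log-likelihood, of Kullback-Leibler type.
\hat{u}_{\alpha} \in \operatorname*{arg\,min}_{u}
  \Big\{ \mathcal{S}\big(F(u);\, g^{\mathrm{obs}}\big)
         + \alpha\, \mathcal{R}(u) \Big\},
\qquad
\mathcal{S}\big(g;\, g^{\mathrm{obs}}\big)
  = \int \big( g - g^{\mathrm{obs}} \ln g \big)\,\mathrm{d}x,
```

with $F$ the forward operator, $\mathcal{R}$ the penalty functional and $\alpha > 0$ the regularization parameter.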
Preface to foundations of information/decision fusion with applications to engineering problems
Madan, R.N.; Rao, N.S.V.
1996-10-01
In engineering design, it was shown by von Neumann that a reliable system can be built from unreliable components by employing simple majority-rule fusers. If error densities are known for the individual pattern recognizers, then an optimal fuser can be implemented as a threshold function. Many applications have been developed for distributed sensor systems, sensor-based robotics, face recognition, decision fusion, recognition of handwritten characters, and automatic target recognition. Recently, information/decision fusion has been recognized as an independently growing field with its own principles and methods. While some of the fusion problems in engineering systems can be solved by applying existing results from other domains, many others require original approaches and solutions; in turn, these new approaches lead to new applications in other areas. There are two paradigms at the extrema of the spectrum of information/decision methods: (i) Fusion as Problem: in certain applications, fusion is explicitly specified in the problem statement. Particularly in robotics applications, many researchers realized the fundamental limitations of single-sensor systems, thereby motivating the deployment of multiple sensors. In more general engineering applications, similar sensors are employed for fault tolerance, while in several others, different sensor modalities are required to achieve the given task. In these scenarios, fusion methods have to be designed first to solve the problem at hand. (ii) Fusion as Solution: in many instances (e.g., DNA analysis), a number of different solutions to a particular problem already exist. Often these solutions can be combined to obtain solutions that outperform any individual one. The area of forecasting is a good example of such a paradigm. Although fusion is not explicitly specified in these problems, it is used as an ingredient of the solution.
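Von Neumann's observation can be made concrete with a short calculation: for independent components fused by simple majority rule, system reliability exceeds component reliability and grows with redundancy. The function below is a generic sketch, not code from the preface:

```python
from math import comb

def majority_reliability(n, p):
    """Probability that a simple majority of n independent components,
    each correct with probability p, gives the correct output
    (n is assumed odd, so no ties occur)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# Unreliable components (p = 0.7) fused by majority rule: reliability
# grows with the number of components, as von Neumann observed.
r1 = majority_reliability(1, 0.7)
r5 = majority_reliability(5, 0.7)
r25 = majority_reliability(25, 0.7)
```

For p = 0.7, five components already raise reliability to about 0.84, and twenty-five push it close to 1, provided p > 1/2 and the component errors are independent.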
The application of geographical information systems to important public health problems in Africa.
Tanser, Frank C; Le Sueur, David
2002-12-09
Africa is generally held to be in crisis, and the quality of life for the majority of the continent's inhabitants has been declining in both relative and absolute terms. In addition, the majority of the world's disease burden is realised in Africa. Geographical information systems (GIS) technology, therefore, is a tool of great inherent potential for health research and management in Africa. The spatial modelling capacity offered by GIS is directly applicable to understanding the spatial variation of disease and its relationship to environmental factors and the health care system. Whilst there have been numerous critiques of the application of GIS technology to developed-world health problems, it has been less clear whether the technology is both applicable and sustainable in an African setting. If the potential for GIS to contribute to health research and planning in Africa is to be properly evaluated, then the technology must be applicable to the most pressing health problems of the continent. We briefly outline the work undertaken in HIV, malaria and tuberculosis (diseases of significant public health impact and contrasting modes of transmission), outline GIS trends relevant to Africa and describe some of the obstacles to the sustainable implementation of GIS. We discuss types of viable GIS applications and conclude with a discussion of the types of African health problems of particular relevance to the application of GIS.
Complementary single technique and multi-physics modeling tools for NDE challenges
NASA Astrophysics Data System (ADS)
Le Lostec, Nechtan; Budyn, Nicolas; Sartre, Bernard; Glass, S. W.
2014-02-01
The challenges of modeling and simulation for Non-Destructive Examination (NDE) research and development at the AREVA NDE Solutions Technical Center (NETEC) are presented. In particular, the choice of a relevant software suite covering different applications and techniques, and the process/scripting tools required for simulation and modeling, are discussed. The software portfolio currently in use is then presented along with the limitations of the different packages: CIVA for ultrasound (UT) methods, PZFlex for UT probes, Flux for eddy current (ET) probes and methods, plus Abaqus for multiphysics modeling. The finite element code Abaqus is also considered the future direction for many of our NDE modeling and simulation tasks. Application examples are given on the modeling of a piezoelectric acoustic phased-array transducer and preliminary thermography configurations.
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some
Development and application of unified algorithms for problems in computational science
NASA Technical Reports Server (NTRS)
Shankar, Vijaya; Chakravarthy, Sukumar
1987-01-01
A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm is one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.
Applications of dynamic scheduling technique to space related problems: Some case studies
NASA Technical Reports Server (NTRS)
Nakasuka, Shinichi; Ninomiya, Tetsujiro
1994-01-01
The paper discusses applications of the 'Dynamic Scheduling' technique, originally invented for the scheduling of Flexible Manufacturing Systems, to two space-related scheduling problems: operation scheduling of a future space transportation system, and resource allocation in a space system with limited resources, such as a space station or the space shuttle.
Thinking about Applications: Effects on Mental Models and Creative Problem-Solving
ERIC Educational Resources Information Center
Barrett, Jamie D.; Peterson, David R.; Hester, Kimberly S.; Robledo, Issac C.; Day, Eric A.; Hougen, Dean P.; Mumford, Michael D.
2013-01-01
Many techniques have been used to train creative problem-solving skills. Although the available techniques have often proven to be effective, creative training often discounts the value of thinking about applications. In this study, 248 undergraduates were asked to develop advertising campaigns for a new high-energy soft drink. Solutions to this…
The Views of Undergraduates about Problem-Based Learning Applications in a Biochemistry Course
ERIC Educational Resources Information Center
Tarhan, Leman; Ayyildiz, Yildizay
2015-01-01
The effect of problem-based learning (PBL) applications in an undergraduate biochemistry course on students' interest in this course was investigated through four modules during one semester. Students' views about active learning and improvement in social skills were also collected and evaluated. We conducted the study with 36 senior students from…
ERIC Educational Resources Information Center
Yang, Eunice
2016-01-01
This paper discusses the use of a free mobile engineering application (app) called Autodesk® ForceEffect™ to provide students assistance with spatial visualization of forces and more practice in solving/visualizing statics problems compared to the traditional pencil-and-paper method. ForceEffect analyzes static rigid-body systems using free-body…
Application of NASA management approach to solve complex problems on earth
NASA Technical Reports Server (NTRS)
Potate, J. S.
1972-01-01
The application of the NASA management approach to solving complex problems on earth is discussed. The management of the Apollo program is presented as an example of effective management techniques. Four key elements of effective management are analyzed. Photographs of the Cape Kennedy launch sites and supporting equipment are included to support the discussion.
ERIC Educational Resources Information Center
Hamadneh, Iyad M.; Al-Masaeed, Aslan
2015-01-01
This study aimed at finding out mathematics teachers' attitudes towards photo math application in solving mathematical problems using mobile camera; it also aim to identify significant differences in their attitudes according to their stage of teaching, educational qualifications, and teaching experience. The study used judgmental/purposive…
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Hou, Gene J. W.
1994-01-01
A method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented. The approximate analysis method developed involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations to changes in the eigenvalues and the eigenvectors associated with the repeated eigenvalue problem. This work also presents a numerical technique that facilitates the definition of an eigenvector derivative for the case of repeated eigenvalues with repeated eigenvalue derivatives (of all orders). Examples are given which demonstrate the application of such equations for sensitivity and approximate analysis. Emphasis is placed on the application of sensitivity analysis to large-scale structural and controls-structures optimization problems.
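For orientation, the familiar distinct-eigenvalue case of the generalized structural eigenvalue problem gives the first-order sensitivity below; the contribution of the paper is the harder repeated-eigenvalue case, which this simple formula does not cover:

```latex
% Generalized eigenvalue problem and first-order eigenvalue sensitivity
% with respect to a design parameter p (M-normalized eigenvectors).
K\phi_i = \lambda_i M \phi_i, \qquad \phi_i^{\top} M \phi_i = 1,
\qquad
\frac{\partial \lambda_i}{\partial p}
  = \phi_i^{\top}\!\left( \frac{\partial K}{\partial p}
      - \lambda_i \frac{\partial M}{\partial p} \right)\!\phi_i .
```

When eigenvalues repeat, the eigenvectors are no longer uniquely defined and this expression must be replaced by a subspace eigenvalue problem, which is what motivates the reparameterization developed in the paper.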
Application of the Sinc method to a dynamic elasto-plastic problem
NASA Astrophysics Data System (ADS)
Abdella, K.; Yu, X.; Kucuk, I.
2009-01-01
This paper presents the application of Sinc bases to simulate numerically the dynamic behavior of a one-dimensional elasto-plastic problem. The numerical methods traditionally employed to solve elasto-plastic problems include finite difference, finite element and spectral methods. More recently, however, biorthogonal wavelet bases have been used to study the dynamic response of a uniaxial elasto-plastic rod [Giovanni F. Naldi, Karsten Urban, Paolo Venini, A wavelet-Galerkin method for elastoplasticity problems, Report 181, RWTH Aachen IGPM, and Math. Modelling and Scient. Computing, vol. 10, 2000]. In this paper the Sinc-Galerkin method is used to solve the straight elasto-plastic rod problem. Due to their exponential convergence rates and their need for relatively few nodal points, Sinc-based methods can significantly outperform traditional numerical methods [J. Lund, K.L. Bowers, Sinc Methods for Quadrature and Differential Equations, SIAM, Philadelphia, 1992]. However, the potential of Sinc-based methods for solving elasto-plasticity problems has not yet been explored. The aim of this paper is to demonstrate the possible application of Sinc methods through the numerical investigation of the unsteady one-dimensional elasto-plastic rod problem.
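For reference, the Sinc (Whittaker cardinal) basis underlying the Sinc-Galerkin method, and the truncated expansion behind the exponential convergence cited above, take the standard form (as in Lund and Bowers):

```latex
% Whittaker cardinal (sinc) basis on a uniform grid of step h, and the
% truncated cardinal expansion used in Sinc-Galerkin schemes.
S(k,h)(x) \;=\; \frac{\sin\!\big(\pi (x - kh)/h\big)}{\pi (x - kh)/h},
\qquad
f(x) \;\approx\; \sum_{k=-N}^{N} f(kh)\, S(k,h)(x),
```

with, for suitably analytic $f$ and step size $h \propto 1/\sqrt{N}$, an approximation error decaying like $O(e^{-c\sqrt{N}})$, which is the exponential convergence that lets Sinc methods use relatively few nodal points.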
The Fractional Fourier Transform and Its Application to Energy Localization Problems
NASA Astrophysics Data System (ADS)
Oonincx, Patrick J.; ter Morsche, Hennie G.
2003-12-01
Applying the fractional Fourier transform (FRFT) and the Wigner distribution to a signal in a cascade fashion is equivalent to a rotation of the time and frequency parameters of the Wigner distribution. In ter Morsche and Oonincx, 2002, we presented an integral representation formula that, when applied to a signal in cascade with the Wigner distribution as for the FRFT, yields affine transformations on the spatial and frequency parameters of the multidimensional Wigner distribution. In this paper, we show how this representation formula can be used to solve certain energy localization problems in phase space. Examples of such problems are given by means of some classical results. Although the results on localization problems are classical, the application of this generalized Fourier transform enlarges the class of problems that can be solved with traditional techniques.
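The rotation property stated in the first sentence is the classical one: applying the FRFT of angle α rotates the Wigner distribution of the signal by α in the time-frequency plane:

```latex
% FRFT of angle \alpha rotates the Wigner distribution of f by \alpha
% in the time-frequency plane.
W_{\mathcal{F}_{\alpha} f}(t, \omega)
  \;=\; W_{f}\big(t \cos\alpha - \omega \sin\alpha,\;
                  t \sin\alpha + \omega \cos\alpha\big).
```

The paper's representation formula generalizes this rotation to arbitrary affine maps of the phase-space coordinates.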
An application of the Nash-Moser theorem to the vacuum boundary problem of gaseous stars
NASA Astrophysics Data System (ADS)
Makino, Tetu
2017-01-01
We have been studying spherically symmetric motions of gaseous stars with a physical vacuum boundary, governed either by the Euler-Poisson equations in the non-relativistic theory or by the Einstein-Euler equations in the relativistic theory. The problems are to construct solutions whose first approximations are small time-periodic solutions to the linearized problem at an equilibrium, and to construct solutions to the Cauchy problem near an equilibrium. These problems can be solved when 1 / (γ - 1) is an integer, where γ is the adiabatic exponent of the gas near the vacuum, by the formulation by R. Hamilton of the Nash-Moser theorem. We discuss an application of the formulation by J. T. Schwartz of the Nash-Moser theorem to the case in which 1 / (γ - 1) is not an integer but sufficiently large.
NASA Astrophysics Data System (ADS)
Costner, Kelly Mitchell
This study developed and piloted the Problem-Solving Approach to program evaluation, which involves the direct application of the problem-solving process as a metaphor for program evaluation. A rationale for a mathematics-specific approach is presented, and relevant literature in both program evaluation and mathematics education is reviewed. The Problem-Solving Approach was piloted with a high-school level integrated course in mathematics and science that used graphing calculators and data collection devices with the goal of helping students to gain better understanding of relationships between mathematics and science. Twelve students participated in the course, which was co-taught by a mathematics teacher and a science teacher. Data collection for the evaluation included observations, a pre- and posttest, student questionnaires, student interviews, teacher interviews, principal interviews, and a focus group that involved both students and their teachers. Results of the evaluation of the course are presented as an evaluation report. Students showed improvement in their understandings of mathematics-science relationships, but also showed growth in terms of self-confidence, independence, and various social factors that were not expected outcomes. The teachers experienced a unique form of professional development by learning and relearning concepts in each other's respective fields and by gaining insights into each other's teaching strengths. Both the results of the evaluation and the evaluation process itself are discussed in light of the proposed problem-solving approach. The use of problem solving and of specific problem-solving strategies was found to be prevalent among the students and the teachers, as well as in the activities of the evaluator. Specific problem-solving strategies are highlighted for their potential value in program evaluation situations. The resulting Problem-Solving Approach, revised through the pilot application, employs problem solving as a
NASA Astrophysics Data System (ADS)
Di Luca, Alejandro; Flaounas, Emmanouil; Drobinski, Philippe; Brossier, Cindy Lebeaupin
2014-11-01
The use of high resolution atmosphere-ocean coupled regional climate models to study possible future climate changes in the Mediterranean Sea requires an accurate simulation of the atmospheric component of the water budget (i.e., evaporation, precipitation and runoff). A specific configuration of version 3.1 of the weather research and forecasting (WRF) regional climate model was shown to systematically overestimate the Mediterranean Sea water budget, mainly due to an excess of evaporation (~1,450 mm yr-1) compared with observed estimations (~1,150 mm yr-1). In this article, a 70-member multi-physics ensemble is used to try to understand the relative importance of various sub-grid scale processes in the Mediterranean Sea water budget and to evaluate its representation by comparing simulated results with observation-based estimates. The physics ensemble was constructed by performing 70 1-year long simulations using version 3.3 of the WRF model, combining six cumulus, four surface/planetary boundary layer and three radiation schemes. Results show that evaporation variability across the multi-physics ensemble (~10 % of the mean evaporation) is dominated by the choice of the surface layer scheme, which explains more than ~70 % of the total variance, and that the overestimation of evaporation in WRF simulations is generally related to an overestimation of surface exchange coefficients due to too-large values of the surface roughness parameter and/or the simulation of too-unstable surface conditions. Although the influence of radiation schemes on evaporation variability is small (~13 % of the total variance), radiation schemes strongly influence exchange coefficients and vertical humidity gradients near the surface due to modifications of temperature lapse rates. The precipitation variability across the physics ensemble (~35 % of the mean precipitation) is dominated by the choice of both cumulus (~55 % of the total variance) and planetary boundary layer (~32 % of
A two-phase multi-physics model for simulating plasma discharge in liquids
NASA Astrophysics Data System (ADS)
Charchi, Ali; Farouk, Tanvir
2014-10-01
Plasma discharge in liquids has been a topic of interest in recent years, both in terms of fundamental science and practical applications. Even though a large amount of experimental work has been reported in the literature, modeling and simulation studies on plasma discharges in liquids are limited. To obtain a more detailed description of plasma discharge in the liquid phase, a two-phase multiphysics model has been developed. The model resolves both the liquid and gas phases and solves the mass and momentum conservation equations for the averaged species in both phases. The fluid motion equation accounts for surface tension, the electric field force, and the gravitational force. To calculate the electric force, the charge conservation equations for positive and negative ions and for the electrons are solved. Poisson's equation is solved at each time step to obtain a self-consistent electric field. The resulting electric field and charge distribution are used to calculate the electric body force exerted on the fluid. Simulations show that the coupled effect of plasma, surface, and gravitational forces results in a time-evolving bubble shape. The influence of different plasma parameters on the bubble dynamics is studied.
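The electrostatic core of such a model can be sketched generically as follows; the symbols and the interface force term are standard forms for two-phase electrohydrodynamics, not necessarily the authors' exact equations:

```latex
% Electrostatics of the two-phase model: Poisson's equation for the
% potential phi, the field E, and the electric body force on the fluid
% (Coulomb term plus the dielectric/Korteweg-Helmholtz interface term).
\nabla \cdot (\varepsilon \nabla \phi) = -e\,(n_{+} - n_{-} - n_{e}),
\qquad
\mathbf{E} = -\nabla \phi,
\qquad
\mathbf{f}_{E} = \rho_{c}\,\mathbf{E}
  - \tfrac{1}{2}\,|\mathbf{E}|^{2}\,\nabla \varepsilon,
```

where $\rho_c = e\,(n_{+} - n_{-} - n_{e})$ is the net charge density from the positive-ion, negative-ion and electron continuity equations, and the $\nabla\varepsilon$ term acts only at the gas-liquid interface where the permittivity jumps.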
NASA Astrophysics Data System (ADS)
Narwadi, Teguh; Subiyanto
2017-03-01
The Travelling Salesman Problem (TSP) is one of the best-known NP-hard problems, meaning that no exact algorithm is known that solves it in polynomial time. This paper presents a new application of a genetic algorithm combined with a local search technique, developed to solve the TSP. For the local search technique, an iterative hill climbing method is used. The system is implemented on the Android OS, because Android is now widely used around the world and is a mobile platform. It is also integrated with the Google API to obtain the geographical locations and distances of the cities, and to display the route. We then conducted experiments to test the behavior of the application. To test its effectiveness, the hybrid genetic algorithm (HGA) is compared with a simple GA on 5 samples of cities in Central Java, Indonesia, with different numbers of cities. The experimental results show that, in terms of average solution quality, the HGA is better than the simple GA in 5 tests out of 5 (100%). The results show that the hybrid genetic algorithm outperforms the genetic algorithm, especially on problems of higher complexity.
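A minimal sketch of such a hybrid GA in Python, with 2-opt hill climbing as the local search, is shown below on a toy six-city instance; the operators and parameters are illustrative assumptions, not the paper's Android implementation:

```python
import math
import random
from itertools import permutations

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """Iterative hill climbing: reverse segments while it helps."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

def hybrid_ga(dist, pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)             # one-point order crossover
            child = a[:cut] + [c for c in b if c not in a[:cut]]
            if rng.random() < 0.2:                # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(two_opt(child, dist)) # hill-climbing step
        pop = parents + children
    return min(pop, key=lambda t: tour_length(t, dist))

# Toy instance: six cities on a 3x2 unit grid; the optimal tour is the
# perimeter, of length 6 (checked by brute force below).
pts = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = hybrid_ga(dist)
opt = min(tour_length([0] + list(p), dist)
          for p in permutations(range(1, len(pts))))
```

Applying 2-opt to every offspring is what makes the algorithm "hybrid": the GA explores tour orderings globally while the hill climber polishes each candidate locally, mirroring the HGA-versus-simple-GA comparison in the abstract.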
On well-partial-order theory and its application to combinatorial problems of VLSI design
NASA Technical Reports Server (NTRS)
Fellows, M.; Langston, M.
1990-01-01
We nonconstructively prove the existence of decision algorithms with low-degree polynomial running times for a number of well-studied graph layout, placement, and routing problems. Some were not previously known to be in P at all; others were only known to be in P by way of brute-force or dynamic programming formulations with unboundedly high-degree polynomial running times. Our methods include the application of the recent Robertson-Seymour theorems on the well-partial-ordering of graphs under both the minor and immersion orders. We also briefly address the complexity of search versions of these problems.
A hierarchical multi-physics model for design of high toughness steels
NASA Astrophysics Data System (ADS)
Hao, Su; Moran, Brian; Kam Liu, Wing; Olson, Gregory B.
2003-05-01
In support of the computational design of high toughness steels as hierarchically structured materials, a multiscale, multiphysics methodology is developed for a `ductile fracture simulator.' At the nanometer scale, the method unites continuum mechanics with quantum physics, using first-principles calculations to predict the force-distance laws for interfacial separation with both normal and plastic sliding components. The predicted adhesion behavior is applied to the description of interfacial decohesion for both micron-scale primary inclusions governing primary void formation and submicron-scale secondary particles governing microvoid-based shear localization that accelerates primary void coalescence. Fine-scale deformation is described by a `Particle Dynamics' method that extends the framework of molecular dynamics to multi-atom aggregates. This is combined with other meshfree and finite-element methods in two-level cell modeling to provide a hierarchical constitutive model for crack advance, combining conventional plasticity, microstructural damage, strain gradient effects and transformation plasticity from dispersed metastable austenite. Detailed results of a parallel experimental study of a commercial steel are used to calibrate the model at multiple scales. An initial application provides a Toughness-Strength-Adhesion diagram defining the relation among alloy strength, inclusion adhesion energy and fracture toughness as an aid to microstructural design. The analysis of this paper introduces an approach to creative steel design that can be stated as the exploration of the effective connections among five key components: element selection, process design, micro/nanostructure optimization, desirable properties, and industrial performance, by virtue of innovations and inventions.
NASA Astrophysics Data System (ADS)
Simons, Neil Richard Samuel
In this thesis the development and application of general-purpose computer simulation techniques for macroscopic electromagnetic phenomena are investigated. These techniques are applicable to a wide variety of practical problems pertaining to electromagnetic compatibility and interference, radar cross-section, and the analysis and design of antennas. The goal of this research is to examine methods that are applicable to a wide variety of problems, rather than specialized approaches that are only useful for specific problems. A brief review of the computational electromagnetics literature indicates that two general types of methods are applicable: numerical approximation of integral-equation formulations and numerical approximation of differential-equation formulations. Because of their relative efficiency for inhomogeneous geometries, the thesis proceeds with numerical approximations to differential-equation-based formulations. The differential-equation-based numerical methods include various finite-difference, finite-element, finite-volume, and transmission line matrix methods. A literature review and overview of these numerical methods is provided; the goal of the overview is to provide a classification of existing and future differential-equation-based numerical methods, identifying their relative advantages and disadvantages. Extensions to the two-dimensional transmission line matrix method are presented. The extensions are intended to provide some of the flexibility traditionally associated with finite-difference and finite-element methods. Three new two-dimensional models are presented. Two of the new models utilize triangular rather than the usual rectangular spatial discretization; the third model introduces the capability of higher-order spatial accuracy. The efficiency and application of the new models are discussed. The development of two general-purpose electromagnetic simulation programs is presented. Both are
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
The potential application of the blackboard model of problem solving to multidisciplinary design is discussed. Multidisciplinary design problems are complex, poorly structured, and lack a predetermined decision path from the initial starting point to the final solution. The final solution is achieved using data from different engineering disciplines. Ideally, for the final solution to be the optimum solution, there must be a significant amount of communication among the different disciplines plus intradisciplinary and interdisciplinary optimization. In reality, this is not what happens in today's sequential approach to multidisciplinary design. Therefore it is highly unlikely that the final solution is the true optimum solution from an interdisciplinary optimization standpoint. A multilevel decomposition approach is suggested as a technique to overcome the problems associated with the sequential approach, but no tool currently exists with which to fully implement this technique. A system based on the blackboard model of problem solving appears to be an ideal tool for implementing this technique because it offers an incremental problem solving approach that requires no a priori determined reasoning path. Thus it has the potential of finding a more optimum solution for the multidisciplinary design problems found in today's aerospace industries.
NASA Astrophysics Data System (ADS)
Soleimani, Meisam; Wriggers, Peter; Rath, Henryke; Stiesch, Meike
2016-10-01
In this paper, a 3D computational model has been developed to investigate biofilms in a multi-physics framework using smoothed particle hydrodynamics (SPH) based on a continuum approach. Biofilm formation is a complex process in the sense that several physical phenomena are coupled and consequently different time-scales are involved. On one hand, biofilm growth is driven by biological reaction and nutrient diffusion and on the other hand, it is influenced by fluid flow causing biofilm deformation and interface erosion in the context of fluid and deformable solid interaction. The geometrical and numerical complexity arising from these phenomena poses serious complications and challenges in grid-based techniques such as the finite element method. Here the solution is based on SPH as one of the powerful meshless methods. SPH-based computational modeling is quite new in the biological community and the method is uniquely robust in capturing the interface-related processes of biofilm formation such as erosion. The obtained results show a good agreement with experimental and published data, which demonstrates that the model is capable of simulating and predicting the overall spatial and temporal evolution of biofilms.
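As an illustrative aside, the kernel-weighted density summation at the heart of any SPH formulation can be sketched in one dimension. This is a minimal toy, not the authors' 3D biofilm code; the cubic-spline kernel, particle spacing, and reference density below are illustrative choices.

```python
import numpy as np

def cubic_spline_w(r, h):
    """1D cubic-spline SPH kernel, normalized so it integrates to 1."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 2.0/3.0 - q**2 + 0.5*q**3,
        np.where(q < 2.0, (2.0 - q)**3 / 6.0, 0.0))
    return w / h

# Uniformly spaced particles on [0, 1) carrying equal mass
rho0, dx = 1000.0, 0.01
x = np.arange(0.0, 1.0, dx)
m = rho0 * dx                  # mass per particle
h = 1.3 * dx                   # smoothing length

# SPH density estimate: rho_i = sum_j m_j * W(x_i - x_j, h)
rho = np.array([np.sum(m * cubic_spline_w(xi - x, h)) for xi in x])

# Away from the truncated domain ends, the summation recovers rho0
interior = rho[10:-10]
rel_err = np.abs(interior - rho0).max() / rho0   # well below 1%
```

The same summation structure, with a 3D kernel and neighbor search, underlies the continuum SPH discretization the paper employs.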
Gasmi, A.; Sprague, M. A.; Jonkman, J. M.; Jones, W. B.
2013-02-01
In this paper we examine the stability and accuracy of numerical algorithms for coupling time-dependent multi-physics modules relevant to computer-aided engineering (CAE) of wind turbines. This work is motivated by an in-progress major revision of FAST, the National Renewable Energy Laboratory's (NREL's) premier aero-elastic CAE simulation tool. We employ two simple examples as test systems, while algorithm descriptions are kept general. Coupled-system governing equations are framed in monolithic and partitioned representations as differential-algebraic equations. Explicit and implicit loose partition coupling is examined. In explicit coupling, partitions are advanced in time from known information. In implicit coupling, there is dependence on other-partition data at the next time step; coupling is accomplished through a predictor-corrector (PC) approach. Numerical time integration of coupled ordinary-differential equations (ODEs) is accomplished with one of three, fourth-order fixed-time-increment methods: Runge-Kutta (RK), Adams-Bashforth (AB), and Adams-Bashforth-Moulton (ABM). Through numerical experiments it is shown that explicit coupling can be dramatically less stable and less accurate than simulations performed with the monolithic system. However, PC implicit coupling restored stability and fourth-order accuracy for ABM; only second-order accuracy was achieved with RK integration. For systems without constraints, explicit time integration with AB and explicit loose coupling exhibited desired accuracy and stability.
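The stability gap between monolithic integration and explicit loose partition coupling described above can be sketched on a toy two-partition system (a harmonic oscillator with each state owned by a different "module"). This is a hypothetical minimal example, not NREL's FAST code.

```python
import numpy as np

# Toy coupled system: u' = -v, v' = u, with u(0)=1, v(0)=0,
# exact solution u = cos(t). Partition 1 owns u, partition 2 owns v.
h, T = 0.01, 1.0
n = int(round(T / h))

def rk4_step(f, y, h):
    k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
    return y + h/6.0 * (k1 + 2*k2 + 2*k3 + k4)

# Monolithic: advance the full state vector with fourth-order RK.
y = np.array([1.0, 0.0])
for _ in range(n):
    y = rk4_step(lambda s: np.array([-s[1], s[0]]), y, h)

# Explicit loose coupling: each partition advances one step using the
# other partition's value frozen at the start of the step (the coupling
# RHS is then constant, so any one-step integrator reduces to an Euler
# update for this toy problem).
u, v = 1.0, 0.0
for _ in range(n):
    u, v = u - h*v, v + h*u

err_mono = abs(y[0] - np.cos(T))   # ~O(h^4)
err_part = abs(u - np.cos(T))      # ~O(h): accuracy lost to loose coupling
```

The dramatic accuracy loss of the partitioned run mirrors the paper's observation that explicit coupling can be far less accurate than the monolithic system, which motivates their predictor-corrector implicit coupling.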
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
NASA Astrophysics Data System (ADS)
Kuroda, Shinjiro; Suzuki, Naoya; Tanigawa, Hiroshi; Suzuki, Kenichiro
2013-06-01
In this paper, we present and demonstrate the principle of variable resonance frequency selection by using a fishbone-shaped microelectromechanical system (MEMS) resonator. To analyze resonator displacement caused by an electrostatic force, a multi-physics simulation, which links the applied voltage load to the mechanical domain, is carried out. The simulation clearly shows that resonators are operated by three kinds of electrostatic force exerted on the beam. A new frequency selection algorithm that selects only one among various resonant modes is also presented. The conversion matrix that transforms the voltages applied to each driving electrode into the resonant beam displacement at each resonant mode is first derived by experimental measurements. Following this, the matrix is used to calculate a set of voltages for maximizing the rejection ratio in each resonant mode. This frequency selection method is applied in a fishbone-shaped MEMS resonator with five driving electrodes and the frequency selection among the 1st resonant mode to the 5th resonant mode is successfully demonstrated. From a fine adjustment of the voltage set, a 42 dB rejection ratio is obtained.
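The conversion-matrix step described above amounts to a linear solve: given a matrix mapping electrode voltages to modal displacements, the voltage set that excites one mode while nulling the others is the solution of M v = e_k. The 3x3 matrix below is hypothetical (the paper's matrix is derived from measurements on five electrodes); this is only a sketch of the algebra.

```python
import numpy as np

# Hypothetical conversion matrix M: column j holds the modal displacements
# produced by a unit voltage on driving electrode j (invented values).
M = np.array([[1.0, 0.6, 0.2],
              [0.5, 1.0, 0.4],
              [0.3, 0.7, 1.0]])

def voltages_for_mode(M, k):
    """Voltage set that excites mode k while nulling the other modes."""
    e = np.zeros(M.shape[0])
    e[k] = 1.0
    return np.linalg.solve(M, e)

v = voltages_for_mode(M, 1)
resp = M @ v                       # modal response, ideally [0, 1, 0]

# Rejection ratio of each unwanted mode relative to the selected one, in dB
rej_db = 20*np.log10(np.abs(resp[1]) / np.maximum(np.abs(resp), 1e-12))
```

In practice the measured matrix carries estimation error, so the nulls are imperfect and the achievable rejection ratio is finite, as in the 42 dB figure reported.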
Powell, Adam; Pati, Soobhankar
2012-03-11
Solid Oxide Membrane (SOM) Electrolysis is a new energy-efficient zero-emissions process for producing high-purity magnesium and high-purity oxygen directly from industrial-grade MgO. SOM Recycling combines SOM electrolysis with electrorefining, continuously and efficiently producing high-purity magnesium from low-purity partially oxidized scrap. In both processes, electrolysis and/or electrorefining take place in the crucible, where raw material is continuously fed into the molten salt electrolyte, producing magnesium vapor at the cathode and oxygen at the inert anode inside the SOM. This paper describes a three-dimensional multi-physics finite-element model of ionic current, fluid flow driven by argon bubbling and thermal buoyancy, and heat and mass transport in the crucible. The model predicts the effects of stirring on the anode boundary layer and its time scale of formation, and the effect of natural convection at the outer wall. MOxST has developed this model as a tool for scale-up design of these closely-related processes.
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.
1999-01-01
In this paper, a method for obtaining nonlinear stiffness coefficients in modal coordinates for geometrically nonlinear finite-element models is developed. The method requires application of a finite-element program with a geometrically nonlinear static capability. The MSC/NASTRAN code is employed for this purpose. The equations of motion of a MDOF system are formulated in modal coordinates. A set of linear eigenvectors is used to approximate the solution of the nonlinear problem. The random vibration problem of the MDOF nonlinear system is then considered. The solutions obtained by application of two different versions of a stochastic linearization technique are compared with linear and exact (analytical) solutions in terms of root-mean-square (RMS) displacements and strains for a beam structure.
On the application of pseudo-spectral FFT technique to non-periodic problems
NASA Technical Reports Server (NTRS)
Biringen, S.; Kao, K. H.
1988-01-01
The reduction-to-periodicity method using the pseudo-spectral Fast Fourier Transform (FFT) technique is applied to the solution of nonperiodic problems including the two-dimensional Navier-Stokes equations. The accuracy of the method is demonstrated by calculating derivatives of given functions, one- and two-dimensional convective-diffusive problems, and by comparing the relative errors due to the FFT method with second-order Finite Difference Methods (FDM). Finally, the two-dimensional Navier-Stokes equations are solved by a fractional step procedure using both the FFT and the FDM methods for the driven cavity flow and the backward facing step problems. Comparisons of these solutions provide a realistic assessment of the FFT method indicating its range of applicability.
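The derivative comparison the abstract mentions can be sketched on a periodic test function: the pseudo-spectral derivative (multiplication by ik in Fourier space) reaches machine precision for smooth periodic data, while a second-order central difference carries an O(h^2) error. The function and grid size here are illustrative.

```python
import numpy as np

N = 64
x = 2*np.pi*np.arange(N)/N
f = np.sin(3*x)
exact = 3*np.cos(3*x)

# Pseudo-spectral derivative: d/dx becomes multiplication by i*k
k = np.fft.fftfreq(N, d=1.0/N)        # integer wavenumbers 0..N/2-1, -N/2..-1
df_fft = np.real(np.fft.ifft(1j*k*np.fft.fft(f)))

# Second-order central finite difference on the same periodic grid
hgrid = 2*np.pi/N
df_fdm = (np.roll(f, -1) - np.roll(f, 1)) / (2*hgrid)

err_fft = np.abs(df_fft - exact).max()   # near machine precision
err_fdm = np.abs(df_fdm - exact).max()   # O(hgrid**2)
```

For genuinely nonperiodic data, the reduction-to-periodicity step must first be applied; otherwise the spectral accuracy above is lost to the Gibbs phenomenon.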
Optimization-based additive decomposition of weakly coercive problems with applications
Bochev, Pavel B.; Ridzal, Denis
2016-01-27
In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.
NASA Astrophysics Data System (ADS)
Al-Zanaidi, M. A.; Grossmann, C.; Noack, A.
2006-04-01
As a rule, parabolic problems with nonsmooth data show rapid changes in their solutions or even possess solutions of reduced smoothness. For smooth data, various time integration methods, e.g. the trapezoidal rule or the backward Euler scheme, work efficiently, but in the case of jumps, effects of high-frequency oscillations are observable over a long time horizon, or steep changes are smeared out. Implicit Taylor methods (ITM), which are mostly applied in specific settings such as interval methods but are not commonly used in the general case, combine high accuracy with strong damping of unwanted oscillations. These properties make them a good choice in the case of nonsmooth data. In the present paper ITM are investigated in detail for semi-discrete linear parabolic problems. In ITM a large-scale linear system has to be solved at each time level, and preconditioned conjugate gradient (PCG) methods can be applied efficiently. Here adapted preconditioners are constructed, and tight spectral bounds are derived which are independent of the discretization parameters of the parabolic problem. As an important application, ITM are considered in the case of boundary heat control. Occurring control constraints are handled by means of penalty functions. To solve the completely discretized problem, gradient-based numerical algorithms are used, where the gradient of the objective is evaluated partially via discrete adjoints and partially by explicitly available terms corresponding to the penalties. Some test examples illustrate the efficiency of the considered algorithms.
Aditya, Satabdi; DasGupta, Bhaskar; Karpinski, Marek
2013-01-01
In this survey paper, we will present a number of core algorithmic questions concerning several transitive reduction problems on networks that have applications in network synthesis and analysis involving cellular processes. Our starting point will be the so-called minimum equivalent digraph problem, a classic computational problem in combinatorial algorithms. We will subsequently consider a few non-trivial extensions or generalizations of this problem motivated by applications in systems biology. We will then discuss the applications of these algorithmic methodologies in the context of three major biological research questions: synthesizing and simplifying signal transduction networks, analyzing disease networks, and measuring redundancy of biological networks. PMID:24833332
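For the acyclic case, the minimum equivalent digraph mentioned above coincides with the classical transitive reduction: drop every edge whose endpoints remain connected by a longer path. A minimal pure-Python sketch (the survey's generalizations for cyclic and biological networks are substantially harder than this):

```python
def transitive_reduction(nodes, edges):
    """Transitive reduction of a DAG given as (u, v) edge pairs."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)

    def reachable(src, dst, skip_edge):
        # DFS from src to dst, ignoring the one direct edge under test
        stack, seen = [src], {src}
        while stack:
            n = stack.pop()
            for m in adj[n]:
                if (n, m) == skip_edge:
                    continue
                if m == dst:
                    return True
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return False

    # Keep only edges that are not implied by an alternative path
    return {(u, v) for u, v in edges if not reachable(u, v, (u, v))}

reduced = transitive_reduction("abc", [("a", "b"), ("b", "c"), ("a", "c")])
# the shortcut ("a", "c") is implied by the path a -> b -> c and is removed
```

On signal transduction networks, edges removed this way correspond to interactions explainable by indirect pathways, which is the simplification the survey discusses.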
NASA Technical Reports Server (NTRS)
Jackson, C. E., Jr.
1977-01-01
A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.
Application of advanced plasma technology to energy materials and environmental problems
NASA Astrophysics Data System (ADS)
Kobayashi, Akira
2015-04-01
An advanced plasma system has been proposed for various energy materials and for application to environmental problems. The gas tunnel type plasma device developed by the author exhibits high energy density and high efficiency. Regarding applications to thermal processing, one example is the plasma spraying of ceramics such as Al2O3 and ZrO2 as thermal barrier coatings (TBCs). The performance of these ceramic coatings is superior to conventional ones; properties such as the mechanical and chemical properties, thermal behavior, and high-temperature oxidation resistance of the alumina/zirconia TBCs have been clarified and discussed. The ZrO2 composite coating offers a possibility for the development of highly functionally graded TBCs. The results showed that the alumina/zirconia composite system exhibited improved mechanical properties and oxidation resistance. Another application of the gas tunnel type plasma to functional materials is the surface modification of metals. TiN films were formed in a short time of 5 s on Ti and its alloys. Also, thick TiN coatings were easily obtained by gas tunnel type plasma reactive spraying on any metal. Regarding applications to environmental problems, the decomposition of CO2 gas by the gas tunnel type plasma system is also introduced.
Solutions to the Inverse LQR Problem with Application to Biological Systems Analysis.
Priess, M Cody; Conway, Richard; Choi, Jongeun; Popovich, John M; Radcliffe, Clark
2015-03-01
In this paper, we present a set of techniques for finding a cost function to the time-invariant Linear Quadratic Regulator (LQR) problem in both continuous- and discrete-time cases. Our methodology is based on the solution to the inverse LQR problem, which can be stated as: does a given controller K describe the solution to a time-invariant LQR problem, and if so, what weights Q and R produce K as the optimal solution? Our motivation for investigating this problem is the analysis of motion goals in biological systems. We first describe an efficient Linear Matrix Inequality (LMI) method for determining a solution to the general case of this inverse LQR problem when both the weighting matrices Q and R are unknown. Our first LMI-based formulation provides a unique solution when it is feasible. Additionally, we propose a gradient-based, least-squares minimization method that can be applied to approximate a solution in cases when the LMIs are infeasible. This new method is very useful in practice since the estimated gain matrix K from the noisy experimental data could be perturbed by the estimation error, which may result in the infeasibility of the LMIs. We also provide an LMI minimization problem to find a good initial point for the minimization using the proposed gradient descent algorithm. We then provide a set of examples to illustrate how to apply our approaches to several different types of problems. An important result is the application of the technique to human subject posture control when seated on a moving robot. Results show that we can recover a cost function which may provide a useful insight on the human motor control goal.
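The forward/inverse relationship the paper exploits can be shown in closed form for a scalar plant, where the Riccati equation is a quadratic and no LMI machinery is needed. This toy illustrates only the problem statement (recovering Q from K with R fixed), not the paper's LMI or gradient-based methods.

```python
import math

# Forward LQR for the scalar plant x' = a*x + b*u with cost
# integral of (q*x^2 + r*u^2). The algebraic Riccati equation
# 2*a*p - (b**2/r)*p**2 + q = 0 yields p, and the gain is K = b*p/r.
a, b, q, r = 1.0, 1.0, 1.0, 1.0
p = (a*r + math.sqrt((a*r)**2 + b*b*r*q)) / (b*b)  # positive root
K = b * p / r                                      # here K = 1 + sqrt(2)

# Inverse LQR: given the observed gain K (and fixing r = 1 to remove the
# usual scale ambiguity), run the same relations backwards to recover q.
p_hat = r * K / b
q_rec = (b*b / r) * p_hat**2 - 2*a*p_hat
```

In the matrix case the backward step is no longer a simple substitution, which is why the paper resorts to LMI feasibility problems and a least-squares fallback for noisy estimated gains.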
Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP
Downar, Thomas; Seker, Volkan
2013-04-30
Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local "hot" spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length that describes the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor and ten to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale, the level of a fuel kernel of a pebble in a PBR and of a fuel compact in a PMR, which needs to be resolved in order to calculate the peak temperature in a fuel kernel.
Applicability of the flow-net program to solution of Space Station fluid dynamics problems
NASA Astrophysics Data System (ADS)
Navickas, J.; Rivard, W. C.
The Space Station design encompasses a variety of fluid systems that require extensive flow and combined flow-thermal analyses. The types of problems encountered range from two-phase cryogenic to high-pressure gaseous systems. Design of such systems requires the most advanced analytical tools. Because Space Station applications are a new area for existing two-phase flow programs, typically developed for nuclear safety applications, a careful evaluation of their capabilities to treat generic Space Station flows is appropriate. The results from an assessment of one particular program, FLOW-NET, developed by Flow Science, Inc., are presented. Three typical problems are analyzed: (1) fill of a hyperbaric module with gaseous nitrogen from a high-pressure supply system, (2) response of a liquid ammonia line to a rapid pressure decrease, and (3) performance of a basic two-phase thermal control network. The three problems were solved successfully. Comparison of the results with those obtained by analytical methods supports the FLOW-NET calculations.
Fujikake, K; Tago, S; Plasson, R; Nakazawa, R; Okano, K; Maezawa, D; Mukawa, T; Kuroda, A; Asakura, K
2014-01-01
To date, no worldwide-standard in vitro method has been established for the determination of the sun protection factor (SPF), since there are many problems in terms of its repeatability and reliability. Here, we have studied the problems in in vitro SPF measurement brought about by the phenomenon called viscous fingering. A spatially periodic stripe pattern usually forms spontaneously when a viscous fluid is applied onto a solid substrate. For in vitro SPF measurements, the recommended amount of sunscreen is applied onto a substrate, and the intensity of the UV light transmitted through the sunscreen layer is evaluated. Our theoretical analysis indicated that nonuniformity of the thickness of the sunscreen layer changes the net UV absorbance. Pseudo-sunscreen composites having no phase-separation structures were prepared and applied on a quartz plate for measurements of the UV absorbance. Two types of applicators, a block applicator and a 4-sided applicator, were used. A flat surface was always obtained when the 4-sided applicator was used, while the spatially periodic stripe pattern was always generated spontaneously when the block applicator was used. The net UV absorbance of the layer on which the stripe pattern formed was found to be lower than that of a flat layer having the same average thickness. Theoretical simulations quantitatively reproduced the variation of the net UV absorbance caused by the change in the geometry of the layer. The results of this study demonstrate the definite necessity of strict regulation of the coating method of sunscreens for the establishment of an in vitro SPF test method.
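Why a striped layer absorbs less than a flat one of the same average thickness follows from Beer-Lambert plus Jensen's inequality: the detector averages transmitted intensity, not absorbance. A minimal numerical sketch with an assumed sinusoidal thickness profile (the paper's profiles come from viscous fingering, not from this idealization):

```python
import numpy as np

# Beer-Lambert: local absorbance A(x) = alpha * d(x). The detector averages
# transmitted intensity, so the net absorbance of a nonuniform layer is
# A_net = -log10(mean(10**(-A(x)))), not the mean of A(x).
alpha_d0 = 1.0                                  # absorbance at mean thickness
x = np.linspace(0.0, 1.0, 1000, endpoint=False)
A_local = alpha_d0 * (1.0 + 0.5*np.sin(2*np.pi*x))  # stripes, same mean thickness

A_flat = alpha_d0                               # flat layer of mean thickness
A_net = -np.log10(np.mean(10.0**(-A_local)))    # striped layer, as measured

# Because 10**(-A) is convex in A, Jensen's inequality forces A_net < A_flat:
# the thin stripes leak disproportionately more UV than the thick stripes block.
```

This is exactly the mechanism by which a block applicator's stripe pattern biases in vitro SPF downward relative to a flat layer of identical average thickness.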
NASA Astrophysics Data System (ADS)
Ivanyshyn Yaman, Olha; Le Louër, Frédérique
2016-09-01
This paper deals with the material derivative analysis of the boundary integral operators arising from the scattering theory of time-harmonic electromagnetic waves and its application to inverse problems. We present new results using the Piola transform of the boundary parametrisation to transport the integral operators on a fixed reference boundary. The transported integral operators are infinitely differentiable with respect to the parametrisations and simplified expressions of the material derivatives are obtained. Using these results, we extend a nonlinear integral equations approach developed for solving acoustic inverse obstacle scattering problems to electromagnetism. The inverse problem is formulated as a pair of nonlinear and ill-posed integral equations for the unknown boundary representing the boundary condition and the measurements, for which the iteratively regularized Gauss-Newton method can be applied. The algorithm has the interesting feature that it avoids the numerous numerical solution of boundary value problems at each iteration step. Numerical experiments are presented in the special case of star-shaped obstacles.
NASA Astrophysics Data System (ADS)
Marwati, Rini; Yulianti, Kartika; Pangestu, Herny Wulandari
2016-02-01
A fuzzy evolutionary algorithm is an integration of an evolutionary algorithm and a fuzzy system. In this paper, we present an application of a genetic algorithm within a fuzzy evolutionary algorithm to detect and resolve chromosome conflicts. A chromosome conflict is identified by the existence of any two genes in a chromosome that have the same values as two genes in another chromosome. Based on this approach, we construct an algorithm to solve a lecture scheduling problem. Time codes, lecture codes, lecturer codes, and room codes are defined as genes, which are collected to form chromosomes. As a result, a conflicted schedule manifests as a chromosome conflict. Implemented in Delphi, the results show that the conflicted lecture scheduling problem is solvable by this algorithm.
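The conflict test as defined above (two genes in one chromosome matching two genes in another) is a simple set intersection. A hedged sketch in Python rather than the paper's Delphi, with invented time/room codes:

```python
# A gene here is a (time code, room code) tuple; a chromosome is one
# candidate schedule. Per the paper's definition, two chromosomes conflict
# if they share two or more identical genes.
def in_conflict(c1, c2):
    return len(set(c1) & set(c2)) >= 2

sched_a = [("mon9", "R101"), ("tue10", "R102"), ("wed9", "R103")]
sched_b = [("mon9", "R101"), ("tue10", "R102"), ("thu9", "R104")]
sched_c = [("mon9", "R101"), ("fri10", "R105"), ("thu11", "R106")]

conflict_ab = in_conflict(sched_a, sched_b)  # two shared genes -> conflict
conflict_ac = in_conflict(sched_a, sched_c)  # one shared gene -> no conflict
```

Inside the genetic algorithm, this predicate would flag offspring that encode clashing lecture assignments so that mutation or repair can be applied.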
Lande subtraction method with finite integration limits and application to strong-field problems.
Jiang, Tsin-Fu; Jheng, Shih-Da; Lee, Yun-Min; Su, Zheng-Yao
2012-12-01
The Lande subtraction method has been widely used in Coulomb problems, but the momentum coordinate p ∈ (0, ∞) is assumed. In past applications, a very large range of p was used for accuracy. We derive the supplementary formulation with p ∈ (0, p_max) at reasonably small p_max for practical calculations. With this recipe, the accuracy of the hydrogenic eigenspectrum is dramatically improved compared to the ordinary Lande formula on the same momentum grids. We apply the present formulation to strong-field atomic above-threshold ionization and high-order harmonic generation. We demonstrate that the proposed momentum-space method can be another practical theoretical tool for atomic strong-field problems in addition to the existing methods.
Applications of Fourier Analysis in Homogenization of the Dirichlet Problem: L p Estimates
NASA Astrophysics Data System (ADS)
Aleksanyan, Hayk; Shahgholian, Henrik; Sjölin, Per
2015-01-01
Let u_ɛ be a solution to the Dirichlet problem div(A_ɛ∇u_ɛ) = 0 in Ω, u_ɛ = g(x, x/ɛ) on ∂Ω, where Ω is a smooth uniformly convex domain, g is 1-periodic in its second variable, and both A_ɛ and g are sufficiently smooth. Our results in this paper are twofold. First we prove L^p convergence results for solutions of the above system and for the non-oscillating operator, with a convergence rate that we prove is (generically) sharp. Here u_0 is the solution to the averaged (homogenized) problem. Second, combining our method with the recent results due to Kenig, Lin and Shen (Commun Pure Appl Math 67(8):1219-1262, 2014), we prove convergence for both oscillating operator and boundary data, for a certain class of operators. For this case, we take A_ɛ = A(x/ɛ), where A is 1-periodic as well. Some further applications of the method to the homogenization of the Neumann problem with oscillating boundary data are also considered.
Bíró, Oszkár; Koczka, Gergely; Preis, Kurt
2014-05-01
An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer.
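The fixed-point linearization described above can be sketched on a scalar analogue: solve a nonlinear magnetization-type relation with a constant "fixed-point reluctivity" so that each step is linear. The reluctivity curve and field value here are made up; the real method applies this idea per harmonic across the finite element system.

```python
# Scalar analogue of the fixed-point method: solve H = nu(B)*B for B,
# with a field-dependent reluctivity nu(B) = 100 + 50*B**2 (made-up curve),
# by iterating with a constant fixed-point reluctivity nu_fp. Each step is
# a *linear* solve in the full FEM setting; here it is a scalar update.
H = 100.0
nu = lambda B: 100.0 + 50.0 * B * B
nu_fp = 300.0       # chosen large enough that the iteration is a contraction

B = 0.0
for _ in range(200):
    # drive the residual H - nu(B)*B to zero through the fixed permeability
    B = B + (H - nu(B) * B) / nu_fp

residual = abs(H - nu(B) * B)   # converges to the nonlinear solution
```

Because nu_fp is time-independent, the harmonics of the linearized problem decouple within each such iteration step, which is precisely what makes the harmonic balance approach tractable in the paper.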
ERIC Educational Resources Information Center
Seyhan, Hatice Güngör
2015-01-01
This study was conducted with 98 prospective science teachers, who were composed of 50 prospective teachers that had participated in problem-solving applications and 48 prospective teachers who were taught within a more researcher-oriented teaching method in science laboratories. The first aim of this study was to determine the levels of…
Application of a substructuring technique to the problem of crack extension and closure
NASA Technical Reports Server (NTRS)
Armen, H., Jr.
1974-01-01
A substructuring technique, originally developed for the efficient reanalysis of structures, is incorporated into the methodology associated with the plastic analysis of structures. An existing finite-element computer program that accounts for elastic-plastic material behavior under cyclic loading was modified to account for changing kinematic constraint conditions - crack growth and intermittent contact of crack surfaces in two-dimensional regions. Application of the analysis is presented for a problem of a center-crack panel to demonstrate the efficiency and accuracy of the technique.
NASA Technical Reports Server (NTRS)
Horton, F. E.
1970-01-01
The utility of remote sensing techniques for urban data acquisition problems in several distinct areas was assessed. This endeavor included a comparison of remote sensing systems for urban data collection, the extraction of housing quality data from aerial photography, the utilization of photographic sensors in urban transportation studies, urban change detection, space photography utilization, and an application of remote sensing techniques to the acquisition of data concerning intra-urban commercial centers. The systematic evaluation of variable extraction for urban modeling and planning at several different scales, and the derivation of models for identifying and predicting economic growth and change within a regional system of cities, are also studied.
Application of remote sensing to state and regional problems. [for Mississippi]
NASA Technical Reports Server (NTRS)
Miller, W. F.; Bouchillon, C. W.; Harris, J. C.; Carter, B.; Whisler, F. D.; Robinette, R.
1974-01-01
The primary purpose of the remote sensing applications program is for various members of the university community to participate in activities that improve the effective communication between the scientific community engaged in remote sensing research and development and the potential users of modern remote sensing technology. Activities of this program are assisting the State of Mississippi in recognizing and solving its environmental, resource and socio-economic problems through inventory, analysis, and monitoring by appropriate remote sensing systems. Objectives, accomplishments, and current status of the following individual projects are reported: (1) bark beetle project; (2) state park location planning; and (3) waste source location and stream channel geometry monitoring.
Application of a hybrid generation/utility assessment heuristic to a class of scheduling problems
NASA Technical Reports Server (NTRS)
Heyward, Ann O.
1989-01-01
A two-stage heuristic solution approach for a class of multiobjective, n-job, 1-machine scheduling problems is described. Minimization of job-to-job interference for n jobs is sought. The first stage generates alternative schedule sequences by interchanging pairs of schedule elements. The set of alternative sequences can represent nodes of a decision tree; each node is reached via decision to interchange job elements. The second stage selects the parent node for the next generation of alternative sequences through automated paired comparison of objective performance for all current nodes. An application of the heuristic approach to communications satellite systems planning is presented.
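The two-stage generate/select scheme this abstract describes can be sketched as a best-improvement pairwise-interchange search. The objective function and stopping rule below are illustrative stand-ins for the paper's job-to-job interference measure, not its actual formulation:

```python
from itertools import combinations

def interchange_heuristic(jobs, objective, max_rounds=50):
    """Stage 1: generate child sequences by interchanging pairs of elements.
    Stage 2: select the best-scoring child as the parent of the next
    generation. `objective` maps a sequence to a cost (lower is better)."""
    parent = list(jobs)
    best = objective(parent)
    for _ in range(max_rounds):
        # Stage 1: all sequences reachable by one pairwise interchange.
        children = []
        for i, j in combinations(range(len(parent)), 2):
            child = parent[:]
            child[i], child[j] = child[j], child[i]
            children.append(child)
        # Stage 2: paired comparison of objective performance.
        champion = min(children, key=objective)
        if objective(champion) >= best:
            return parent, best      # no interchange improves: stop
        parent, best = champion, objective(champion)
    return parent, best
```

With an inversion-count objective, for instance, the search sorts the sequence, since an improving interchange always exists while inversions remain.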
METLIN-PC: An applications-program package for problems of mathematical programming
Pshenichnyi, B.N.; Sobolenko, L.A.; Sosnovskii, A.A.; Aleksandrova, V.M.; Shul'zhenko, Yu.V.
1994-05-01
The METLIN-PC applications-program package (APP) was developed at the V.M. Glushkov Institute of Cybernetics of the Academy of Sciences of Ukraine on IBM PC XT and AT computers. The present version of the package was written in Turbo Pascal and Fortran-77. METLIN-PC is chiefly designed for the solution of smooth problems of mathematical programming and is a further development of the METLIN prototype, which was created earlier on a BESM-6 computer. The principal property of the previous package is retained: the applications modules employ a single approach based on the linearization method of B.N. Pshenichnyi. Hence the name "METLIN."
NASA Astrophysics Data System (ADS)
Mo, Chao-jie; Qin, Li-zi; Zhao, Fei; Yang, Li-jun
2016-12-01
We investigate the application of the dissipative particle dynamics method to the instability problem of a long liquid thread surrounded by another fluid. The dispersion curves obtained from simulations are compared with classic theoretical predictions. The results from standard dissipative particle dynamics (DPD) simulations at first tend to approach Tomotika's Stokes flow prediction as the Reynolds number is decreased, but then abnormally deviate again when the viscosity is very large. The same phenomenon is also confirmed in droplet retraction simulations when compared with theoretical Stokes flow results. On the other hand, when a hard-core DPD model is used, with decreasing Reynolds number the simulation results finally approach Tomotika's predictions when Re ≈ 0.1. A combined presentation of the hard-core DPD results and the standard DPD results, excluding the abnormal ones, demonstrates that they lie approximately on a continuum when labeled with Reynolds number. These results suggest that the standard DPD method is a suitable method for investigation of the instability problem of an immersed liquid thread in the inertioviscous regime (0.1
Non-Linear Problems in NMR: Application of the DFM Variation of Parameters Method
NASA Astrophysics Data System (ADS)
Erker, Jay Charles
This Dissertation introduces, develops, and applies the Dirac-Frenkel-McLachlan (DFM) time dependent variation of parameters approach to Nuclear Magnetic Resonance (NMR) problems. Although never explicitly used in the treatment of time domain NMR problems to date, the DFM approach has successfully predicted the dynamics of optically prepared wave packets on excited state molecular energy surfaces. Unlike the Floquet, average Hamiltonian, and Van Vleck transformation methods, the DFM approach is not restricted by either the size or symmetry of the time domain perturbation. A particularly attractive feature of the DFM method is that measured data can be used to motivate a parameterized trial function choice and that the DFM theory provides the machinery to determine the optimum, minimum-error choices for these parameters. Indeed, a poor parameterized trial function choice will lead to a poor match with real experiments, even with optimized parameters. Although there are many NMR problems available to demonstrate the application of the DFM variation of parameters, several cases that have escaped analytical solution and thus require numerical methods are considered here: molecular diffusion in a magnetic field gradient, radiation damping in the presence of inhomogeneous broadening, multi-site chemical exchange, and the combination of molecular diffusion in a magnetic field gradient with chemical exchange. The application to diffusion in a gradient is used as an example to develop the DFM method for application to NMR. The existence of a known analytical solution and experimental results allows for direct comparison between the theoretical results of the DFM method and Torrey's solution to the Bloch equations corrected for molecular diffusion. The framework of writing classical Bloch equations in matrix notation is then applied to problems without analytical solution. The second example includes the generation of a semi-analytical functional form for the free
Resolving all-order method convergence problems for atomic physics applications
Gharibnejad, H.; Derevianko, A.; Eliav, E.; Safronova, M. S.
2011-05-15
The development of the relativistic all-order method where all single, double, and partial triple excitations of the Dirac-Hartree-Fock wave function are included to all orders of perturbation theory led to many important results for the study of fundamental symmetries, development of atomic clocks, ultracold atom physics, and others, as well as provided recommended values of many atomic properties critically evaluated for their accuracy for a large number of monovalent systems. This approach requires iterative solutions of the linearized coupled-cluster equations leading to convergence issues in some cases where correlation corrections are particularly large or lead to an oscillating pattern. Moreover, these issues also lead to similar problems in the configuration-interaction (CI)+all-order method for many-particle systems. In this work, we have resolved most of the known convergence problems by applying two different convergence stabilizer methods, namely, reduced linear equation and direct inversion of iterative subspace. Examples are presented for B, Al, Zn⁺, and Yb⁺. Solving these convergence problems greatly expands the number of atomic species that can be treated with the all-order methods and is anticipated to facilitate many interesting future applications.
NASA Astrophysics Data System (ADS)
Yang, Eunice
2016-02-01
This paper discusses the use of a free mobile engineering application (app) called Autodesk® ForceEffect™ to assist students with spatial visualization of forces and to provide more practice in solving and visualizing statics problems than the traditional pencil-and-paper method allows. ForceEffect analyzes static rigid-body systems using free-body diagrams (FBDs) and provides solutions in real time. It is cost-free software available for download on the Internet, supported on the iOS™, Android™, and Google Chrome™ platforms. It is easy to use, and the learning curve is approximately two hours using the tutorial provided within the app. ForceEffect can present students with different problem modalities (textbook, real-world, and design) to help them acquire and improve the skills needed to solve force equilibrium problems. Although this paper focuses on the engineering mechanics statics course, the technology discussed is also relevant to the introductory physics course.
A special application of absolute value techniques in authentic problem solving
NASA Astrophysics Data System (ADS)
Stupel, Moshe
2013-06-01
There are at least five different equivalent definitions of the absolute value concept. In instances where the task is an equation or inequality with only one or two absolute value expressions, it is a worthy educational experience for learners to solve the task using each one of the definitions. On the other hand, if more than two absolute value expressions are involved, the definition that is most helpful is the one involving solving by intervals and evaluating critical points. In point of fact, application of this technique is one reason that the topic of absolute value is important in mathematics in general and in mathematics teaching in particular. We present here an authentic practical problem that is solved using absolute values and the 'intervals' method, after which the solution is generalized with surprising results. This authentic problem also lends itself to investigation using educational technological tools such as GeoGebra dynamic geometry software: mathematics teachers can allow their students to initially cope with the problem by working in an inductive environment in which they conduct virtual experiments until a solid conjecture has been reached, after which they should prove the conjecture deductively, using classic theoretical mathematical tools.
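The "intervals" method the article describes can be mechanized: between consecutive critical points the sum of absolute values is linear in x, so each interval contributes at most one root. A minimal numeric sketch, with illustrative search bounds lo/hi (not part of the article):

```python
def solve_abs_sum(points, target, lo=-100.0, hi=100.0):
    """Solve sum_i |x - p_i| = target by intervals: between consecutive
    critical points p_i the left-hand side is linear in x, so each interval
    holds at most one root. `lo`/`hi` are illustrative search bounds."""
    f = lambda x: sum(abs(x - p) for p in points)
    edges = [lo] + sorted(set(points)) + [hi]
    roots = []
    for a, b in zip(edges, edges[1:]):
        fa, fb = f(a) - target, f(b) - target
        if fa == 0.0:
            roots.append(float(a))
        if fa * fb < 0.0:                # sign change on a linear piece
            roots.append(a - fa * (b - a) / (fb - fa))
    if f(hi) == target:
        roots.append(float(hi))
    return sorted(set(roots))
```

For |x - 1| + |x - 3| = 4 this returns the roots x = 0 and x = 4, matching the hand case analysis over the three intervals determined by the critical points 1 and 3.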
NASA Technical Reports Server (NTRS)
Johnson, O. W.
1964-01-01
A modified spray gun, with separate containers for resin and additive components, solves the problems of quick hardening and nozzle clogging. At application, separate atomizers spray the liquids in front of the nozzle face where they blend.
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
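For orientation, the underlying combinatorial problem can be stated in a few lines of brute force: find the string maximizing the minimum Hamming distance to all inputs. This enumeration is exponential in the string length and only feasible for tiny instances; the paper's contribution is compact integer-programming models for the same problem:

```python
from itertools import product

def farthest_string(strings, alphabet="ab"):
    """Brute-force farthest string: among all candidate strings over the
    given alphabet, return one maximizing the minimum Hamming distance to
    every input string. Usable only for tiny instances."""
    m = len(strings[0])
    def min_dist(cand):
        return min(sum(a != b for a, b in zip(cand, s)) for s in strings)
    return max(("".join(c) for c in product(alphabet, repeat=m)),
               key=min_dist)
```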
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Namburu, Raju R.
1990-01-01
The present paper describes recent advances and trends in finite element developments and applications for solidification problems. In particular, in comparison to traditional methods of approach, new enthalpy-based architectures based on a generalized trapezoidal family of representations are presented which provide different perspectives, physical interpretation and solution architectures for effective numerical simulation of phase change processes encountered in solidification problems. Various numerical test models are presented and the results support the proposition for employing such formulations for general phase change applications.
Bhardwaj, M.; Day, D.; Farhat, C.; Lesoinne, M.; Pierson, K.; Rixen, D.
1999-04-01
We report on the application of the one-level FETI method to the solution of a class of structural problems associated with the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). We focus on numerical and parallel scalability issues, and discuss the treatment by FETI of severe structural heterogeneities. We also report on preliminary performance results obtained on the ASCI Option Red supercomputer configured with as many as one thousand processors, for problems with as many as 5 million degrees of freedom.
Parallel satellite orbital situational problems solver for space missions design and control
NASA Astrophysics Data System (ADS)
Atanassov, Atanas Marinov
2016-11-01
Solving different scientific problems for space applications demands implementation of observations, measurements, or active experiments during time intervals in which specific geometric and physical conditions are fulfilled. Solving situational problems to determine the time intervals in which the satellite instruments work optimally is an important part of every stage of the preparation and realization of space missions. A universal, flexible, and robust approach to situation analysis that is easily portable to new satellite missions would significantly reduce mission preparation times and costs. Every situational problem may be based on one or more situational conditions. Simultaneously solving different kinds of situational problems, each based on a different number and type of situational conditions satisfied on different segments of the satellite orbit, requires irregular calculations. Three formal approaches are presented. The first concerns the description of situational problems, which allows flexibility in assembling situational problems and representing them in computer memory. The second concerns a situational-problem solver organized as a processor that executes specific code for every particular situational condition. The third concerns parallelization of the solver using threads and dynamic scheduling based on a "pool of threads" abstraction, which ensures good load balance. The developed situational-problems solver is intended for incorporation into multi-physics, multi-satellite space-mission design and simulation tools.
Nash, Stephen G.
2013-11-11
The research focuses on the modeling and optimization of nanoporous materials. In the systems with hierarchical structure that we consider, the physics changes as the scale of the problem is reduced, and it can be important to account for physics at the fine level to obtain accurate approximations at coarser levels. For example, nanoporous materials hold promise for energy production and storage. A significant issue is the fabrication of channels within these materials to allow rapid diffusion through the material. One goal of our research is to apply optimization methods to the design of nanoporous materials. Such problems are large and challenging, with hierarchical structure that we believe can be exploited, and with a large range of important scales, down to atomistic. This requires research on large-scale optimization for systems that exhibit different physics at different scales, and the development of algorithms applicable to designing nanoporous materials for many important applications in energy production, storage, distribution, and use. Our research has two major thrusts. The first is hierarchical modeling: we develop and study hierarchical optimization models for nanoporous materials. The models have hierarchical structure and attempt to balance the conflicting aims of model fidelity and computational tractability. In addition, we analyze the general hierarchical model, as well as the specific application models, to determine their properties, particularly those relevant to the hierarchical optimization algorithms. The second thrust is to develop, analyze, and implement a class of hierarchical optimization algorithms and apply them to the hierarchical models we have developed. We adapted and extended the optimization-based multigrid algorithms of Lewis and Nash to the optimization models exemplified by the hierarchical optimization model. This class of multigrid algorithms has been shown to be a powerful tool for
Applications of Quantum Theory of Atomic and Molecular Scattering to Problems in Hypersonic Flow
NASA Technical Reports Server (NTRS)
Malik, F. Bary
1995-01-01
The general status of a grant to investigate the applications of quantum theory in atomic and molecular scattering problems in hypersonic flow is summarized. Abstracts of five articles and eleven full-length articles published or submitted for publication are included as attachments. The following topics are addressed in these articles: fragmentation of heavy ions (HZE particles); parameterization of absorption cross sections; light ion transport; emission of light fragments as an indicator of equilibrated populations; quantum mechanical, optical model methods for calculating cross sections for particle fragmentation by hydrogen; evaluation of NUCFRG2, the semi-empirical nuclear fragmentation database; investigation of the single- and double-ionization of He by proton and anti-proton collisions; Bose-Einstein condensation of nuclei; and a liquid drop model in HZE particle fragmentation by hydrogen.
On multidisciplinary research on the application of remote sensing to water resources problems
NASA Technical Reports Server (NTRS)
1972-01-01
This research is directed toward development of a practical, operational remote sensing water quality monitoring system. To accomplish this, five fundamental aspects of the problem have been under investigation during the past three years. These are: (1) development of practical and economical methods of obtaining, handling and analyzing remote sensing data; (2) determination of the correlation between remote sensed imagery and actual water quality parameters; (3) determination of the optimum technique for monitoring specific water pollution parameters and for evaluating the reliability with which this can be accomplished; (4) determination of the extent of masking due to depth of penetration, bottom effects, film development effects, and angle falloff, and development of techniques to eliminate or minimize them; and (5) development of operational procedures which might be employed by a municipal, state or federal agency for the application of remote sensing to water quality monitoring, including space-generated data.
A review of vector convergence acceleration methods, with applications to linear algebra problems
NASA Astrophysics Data System (ADS)
Brezinski, C.; Redivo-Zaglia, M.
In this article, in a few pages, we try to give an idea of convergence acceleration methods and extrapolation procedures for vector sequences, and present some applications to linear algebra problems and to the treatment of the Gibbs phenomenon for Fourier series in order to show their effectiveness. The interested reader is referred to the literature for more details. In the bibliography, due to space limitations, we give only the more recent items; for older ones, we refer to Brezinski and Redivo-Zaglia (Extrapolation Methods: Theory and Practice, North-Holland, 1991). This book also contains, on a magnetic support, a library (in Fortran 77) of convergence acceleration algorithms and extrapolation methods.
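As a one-sequence taste of the extrapolation idea surveyed here, Aitken's delta-squared process (the scalar ancestor of the vector algorithms the article discusses) applied to the slowly convergent partial sums of the alternating series for ln 2:

```python
def aitken(seq):
    """Aitken delta-squared acceleration of a scalar sequence; the vector
    methods surveyed in the article (MMPE, RRE, ...) generalize this idea
    to vector differences."""
    return [s0 - (s1 - s0) ** 2 / (s2 - 2 * s1 + s0)
            for s0, s1, s2 in zip(seq, seq[1:], seq[2:])]

# Partial sums of the alternating series for ln 2: O(1/n) convergence.
sums, s = [], 0.0
for k in range(1, 12):
    s += (-1) ** (k + 1) / k
    sums.append(s)
accelerated = aitken(sums)
```

The accelerated tail is markedly closer to ln 2 than the raw partial sums, which is the effect the surveyed methods deliver for vector iterations.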
NASA Technical Reports Server (NTRS)
Rabitz, Herschel
1987-01-01
The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominantly, strongly coupled dependent variables will result in the overall system sensitivity behavior collapsing into a simple set of scaling and self-similarity relations amongst elements of the entire matrix of sensitivity coefficients. These tools are generic in nature, but herein their application to problems arising in selected areas of physics and chemistry is presented.
The application of the statistical theory of extreme values to gust-load problems
NASA Technical Reports Server (NTRS)
Press, Harry
1950-01-01
An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problems of predicting the frequency of encountering the larger gust loads and gust velocities for both specific test conditions as well as commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data and more reliable than those obtained in previous analyses.
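The extreme-value fit this report applies can be illustrated with a method-of-moments estimate of the Gumbel (type I extreme value) distribution, using the Gumbel mean mu + gamma*beta and variance (pi*beta)^2/6. This is a generic sketch; the report's own fitting and reliability procedures may differ:

```python
import math

def fit_gumbel(maxima):
    """Method-of-moments fit of the Gumbel (type I extreme value) law to a
    sample of block maxima (e.g. the largest gust load per flight)."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi        # scale parameter
    mu = mean - 0.5772156649015329 * beta        # location (Euler-Mascheroni)
    return mu, beta

def exceedance_prob(x, mu, beta):
    """P(max > x) under the fitted Gumbel distribution: the frequency of
    encountering a gust load larger than x in a block."""
    return 1.0 - math.exp(-math.exp(-(x - mu) / beta))
```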
Application of the spectral Lanczos decomposition method to large-scale problems arising in geophysics
Tamarchenko, T.
1996-12-31
This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution of this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large-scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).
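The core of SLDM, approximating the action f(A)v from a Krylov subspace, can be sketched for a symmetric matrix as follows. This is a bare Lanczos projection without the error bounds or problem-specific operators of the paper:

```python
import numpy as np

def lanczos_funm_v(A, v, f, m=25):
    """Approximate f(A) @ v for symmetric A from an m-step Lanczos (Krylov)
    basis: f is applied only to the small tridiagonal projection T, and the
    result is lifted back as ||v|| * V @ f(T) @ e1."""
    n = len(v)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j + 1 < m:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:          # invariant subspace: stop early
                m = j + 1
                break
            V[:, j + 1] = w / beta[j]
    T = (np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1)
         + np.diag(beta[:m - 1], -1))
    evals, evecs = np.linalg.eigh(T)     # f(T) e1 via eigendecomposition
    e1 = np.zeros(m)
    e1[0] = 1.0
    fT_e1 = evecs @ (f(evals) * (evecs.T @ e1))
    return np.linalg.norm(v) * (V[:, :m] @ fT_e1)
```

With f = exp this computes the matrix exponential action needed for diffusion problems; sine/cosine actions for wave problems use the same projection with a different f.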
Applicability extent of 2-D heat equation for numerical analysis of a multiphysics problem
NASA Astrophysics Data System (ADS)
Khawaja, H.
2017-01-01
This work focuses on thermal problems that are solvable using the heat equation. The fundamental question addressed here is: what are the limits of the dimensions that allow a 3-D thermal problem to be accurately modelled using a 2-D heat equation? The presented work solves the 2-D and 3-D heat equations using the Finite Difference Method, also known as the Forward-Time Central-Space (FTCS) method, in MATLAB®. For this study, a cuboidal domain with a square cross-section is assumed. The boundary conditions are set such that there is a constant temperature at the center and outside the boundaries. The 2-D and 3-D heat equations are marched in time to develop a steady-state temperature profile. The method is tested for stability using the Courant-Friedrichs-Lewy (CFL) criterion. The results are compared by varying the thickness of the 3-D domain. The maximum error is calculated, and recommendations are given on the applicability of the 2-D heat equation.
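A minimal Python analogue of the 2-D FTCS solve described above (the paper itself uses MATLAB®), with an illustrative unit-square domain, a fixed-temperature node at the center, and a time step chosen inside the 2-D CFL stability limit alpha*dt/dx^2 <= 1/4:

```python
import numpy as np

def ftcs_heat_2d(n=41, alpha=1.0, t_end=0.01):
    """2-D heat equation on the unit square by FTCS: boundary held at 0,
    center node held at 1. Grid size, diffusivity, and boundary values are
    illustrative choices, not taken from the paper."""
    dx = 1.0 / (n - 1)
    # 2-D FTCS stability (CFL) limit: alpha*dt/dx^2 <= 1/4 for dx == dy.
    dt = 0.2 * dx * dx / alpha           # safely inside the limit
    r = alpha * dt / dx**2
    u = np.zeros((n, n))
    c = n // 2
    for _ in range(int(t_end / dt)):
        u[c, c] = 1.0                    # constant temperature at the center
        u[1:-1, 1:-1] += r * (u[2:, 1:-1] + u[:-2, 1:-1]
                              + u[1:-1, 2:] + u[1:-1, :-2]
                              - 4.0 * u[1:-1, 1:-1])
        u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0  # Dirichlet boundary
    u[c, c] = 1.0
    return u
```

Because r <= 1/4, each update is a convex combination of neighboring values, so the discrete maximum principle holds and the solution stays within the imposed bounds [0, 1].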
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Menon, P. K. A.
1991-01-01
The design of nonlinear controllers has relied on the use of detailed aerodynamic and engine models that must be associated with the control law in the flight system implementation. Many of these controllers were applied to vehicle flight path control problems and have attempted to combine both inner- and outer-loop control functions in a single controller. An approach to the nonlinear trajectory control problem is presented. This approach uses linearizing transformations with measurement feedback to eliminate the need for detailed aircraft models in outer-loop control applications. By applying this approach and separating the inner-loop and outer-loop functions two things were achieved: (1) the need for incorporating detailed aerodynamic models in the controller is obviated; and (2) the controller is more easily incorporated into existing aircraft flight control systems. An implementation of the controller is discussed, and this controller is tested on a six degree-of-freedom F-15 simulation and in flight on an F-15 aircraft. Simulation data are presented which validates this approach over a large portion of the F-15 flight envelope. Proof of this concept is provided by flight-test data that closely matches simulation results. Flight-test data are also presented.
A Bayesian approach to Fourier Synthesis inverse problem with application in SAR imaging
NASA Astrophysics Data System (ADS)
Zhu, Sha; Mohammad-Djafari, Ali
2011-03-01
In this paper we propose a Bayesian approach to the ill-posed inverse problem of Fourier synthesis (FS), which consists in reconstructing a function from partial knowledge of its Fourier transform (FT), with application in SAR (Synthetic Aperture Radar) imaging. The function to be estimated represents an image of the observed scene. Considering that this observed scene is mainly composed of point sources, we propose to use a generalized Gaussian (GG) prior model, and then the maximum a posteriori (MAP) estimator as the desired solution. In particular, we are interested in the bi-static case of spotlight-mode SAR data. In a first step, we consider real-valued reflectivities but account for the complex value of the measured data. The relation between the Fourier transform of the measured data and the unknown scene reflectivity is modeled by a 2D spatial FT. The inverse problem then becomes one of FS and, depending on the geometry of the data acquisition, only the set of locations in Fourier space differs. We give a detailed model of the data acquisition process that we simulated, then apply the proposed method to those synthetic data to measure its performance compared to some other classical methods. Finally, we demonstrate the performance of the method on experimental SAR data obtained in a collaborative work by ONERA.
NASA Astrophysics Data System (ADS)
Lawrence, Chris C.; Febbraro, Michael; Flaska, Marek; Pozzi, Sara A.; Becchetti, F. D.
2016-08-01
Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy, commonly eschewed as an ill-posed inverse problem, may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix conditioning to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as in other treaty-verification challenges.
An extended theory of thin airfoils and its application to the biplane problem
NASA Technical Reports Server (NTRS)
Millikan, Clark B
1931-01-01
The report presents a new treatment, due essentially to von Karman, of the problem of the thin airfoil. The standard formulae for the angle of zero lift and zero moment are first developed and the analysis is then extended to give the effect of disturbing or interference velocities, corresponding to an arbitrary potential flow, which are superimposed on a normal rectilinear flow over the airfoil. An approximate method is presented for obtaining the velocities induced by a 2-dimensional airfoil at a point some distance away. In certain cases this method has considerable advantage over the simple "lifting line" procedure usually adopted. The interference effects for a 2-dimensional biplane are considered in the light of the previous analysis. The results of the earlier sections are then applied to the general problem of the interference effects for a 3-dimensional biplane, and formulae and charts are given which permit the characteristics of the individual wings of an arbitrary biplane without sweepback or dihedral to be calculated. In the final section the conclusions drawn from the application of the theory to a considerable number of special cases are discussed, and curves are given illustrating certain of these conclusions and serving as examples to indicate the nature of the agreement between the theory and experiment.
Cao, Jianping; Du, Zhengjian; Mo, Jinhan; Li, Xinxiao; Xu, Qiujian; Zhang, Yinping
2016-12-20
Passive sampling is an alternative to active sampling for measuring concentrations of gas-phase volatile organic compounds (VOCs). However, the uncertainty or relative error of the measurements has not been minimized due to the limitations of existing design methods. In this paper, we have developed a novel method, the inverse problem optimization method, to address the problems associated with designing accurate passive samplers. The principle is to determine the most appropriate physical properties of the materials, and the optimal geometry of a passive sampler, by minimizing the relative sampling error based on the mass transfer model of VOCs for a passive sampler. As an example application, we used our proposed method to optimize radial passive samplers for the sampling of benzene and formaldehyde in a normal indoor environment. A new passive sampler, which we have called the Tsinghua Passive Diffusive Sampler (THPDS), for indoor benzene measurement was developed according to the optimized results. Silica zeolite was selected as the sorbent for the THPDS. The measured overall uncertainty of THPDS (22% for benzene) is lower than that of most commercially available passive samplers but is quite a bit larger than the modeled uncertainty (4.8% for benzene, the optimized result), suggesting that further research is required.
Design of fiber optic communication systems for IVHS applications: problems and recommendations
NASA Astrophysics Data System (ADS)
Arya, Vivek; Hobeika, Antoine G.; de Vries, Marten J.; Claus, Richard O.
1995-01-01
The objective of this paper is to highlight recommendations made by fifteen experts from industry and State Departments of Transportation (DOT) regarding the design, implementation, operations, and maintenance of fiber optic communication links presently being used in their transportation management systems. This paper also brings forth the problems faced during the deployment of these systems. The procedure followed for this research was to review the specifications and design guidelines for various Federal Highway Administration (FHWA) projects which have implemented, or are in the process of implementing, fiber optic communication links for their traffic management systems. DOT officials and industry design consultants who were directly involved in the implementation of the FHWA projects were then interviewed on issues concerning system design, operations and management, bidding, and other institutional aspects. The result of these interviews is a set of recommendations ranging from increased use of the latest standards to suggestions for more efficient planning of the traffic management center. These problems and recommendations are presented in this paper. This paper thus offers valuable guidelines for the design and implementation of fiber optic communication systems for future IVHS and transportation management applications.
Localized suffix array and its application to genome mapping problems for paired-end short reads.
Kimura, Kouichi; Koike, Asako
2009-10-01
We introduce a new data structure, a localized suffix array, based on which occurrence information is dynamically represented as the combination of global positional information and local lexicographic order information in text search applications. For the search of a pair of words within a given distance, many candidate positions that share a coarse-grained global position can be compactly represented in terms of local lexicographic orders as in the conventional suffix array, and they can be simultaneously examined for violation of the distance constraint at the coarse-grained resolution. The trade-off between the positional and lexicographical information is progressively shifted towards finer positional resolution, and the distance constraint is reexamined accordingly. Thus the paired search can be efficiently performed even if there are a large number of occurrences for each word. The localized suffix array itself is in fact a reordering of bits inside the conventional suffix array, and their memory requirements are essentially the same. We demonstrate an application to genome mapping problems for paired-end short reads generated by new-generation DNA sequencers. When paired reads are highly repetitive, it is time-consuming to naïvely calculate, sort, and compare all of the coordinates. For human genome re-sequencing data with reads of 36 base pairs, speedups of more than 10 times over the naïve method were observed in almost half of the cases where the sums of redundancies (numbers of individual occurrences) of paired reads were greater than 2,000.
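The paired, distance-constrained search the abstract describes can be illustrated with a plain (conventional) suffix array; the localized variant improves on this baseline by pruning candidates at coarse positional resolution first. A minimal sketch, with a toy construction that is adequate for illustration but far too slow for genomes:

```python
# Hedged sketch: distance-constrained search for a pair of words using a
# plain suffix array (the baseline the localized suffix array improves on).
import bisect

def suffix_array(text):
    # O(n^2 log n) construction; fine for a sketch, not for genome-scale text
    return sorted(range(len(text)), key=lambda i: text[i:])

def sa_range(text, sa, word):
    # binary search for the suffix-array interval of suffixes starting with word
    k = len(word)
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + k] < word:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    hi = len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + k] <= word:
            lo = mid + 1
        else:
            hi = mid
    return start, lo

def paired_hits(text, w1, w2, max_dist):
    sa = suffix_array(text)
    s2, e2 = sa_range(text, sa, w2)
    pos2 = sorted(sa[s2:e2])                 # all positions of w2, by coordinate
    s1, e1 = sa_range(text, sa, w1)
    hits = []
    for p1 in sorted(sa[s1:e1]):
        # scan only the positions of w2 lying within max_dist of p1
        j = bisect.bisect_left(pos2, p1 - max_dist)
        while j < len(pos2) and pos2[j] <= p1 + max_dist:
            hits.append((p1, pos2[j]))
            j += 1
    return hits

print(paired_hits("ACGTACGTTACG", "ACG", "TAC", 2))  # [(4, 3), (9, 8)]
```

Note the cost of the final loop grows with the number of occurrences of each word, which is exactly the regime (highly repetitive reads) where the localized suffix array pays off.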
On the spectra of certain integro-differential-delay problems with applications in neurodynamics
NASA Astrophysics Data System (ADS)
Grindrod, P.; Pinotsis, D. A.
2011-01-01
We investigate the spectrum of certain integro-differential-delay equations (IDDEs) which arise naturally within spatially distributed, nonlocal, pattern formation problems. Our approach is based on the reformulation of the relevant dispersion relations with the use of the Lambert function. As a particular application of this approach, we consider the case of the Amari delay neural field equation which describes the local activity of a population of neurons taking into consideration the finite propagation speed of the electric signal. We show that if the kernel appearing in this equation is symmetric around some point a≠0 or consists of a sum of such terms, then the relevant dispersion relation yields spectra with an infinite number of branches, as opposed to the finite sets of eigenvalues considered in previous works. Also, in earlier works the focus has been on the most rightward part of the spectrum and the possibility of instability-driven pattern formation. Here, we numerically survey the structure of the entire spectra and argue that a detailed knowledge of this structure is important within neurodynamical applications. Indeed, the Amari IDDE acts as a filter with the ability to recognise and respond whenever it is excited in such a way as to resonate with one of its rightward modes, thereby amplifying such inputs and dampening others. Finally, we discuss how these results can be generalised to the case of systems of IDDEs.
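The Lambert-function reformulation can be illustrated on the simplest delay equation x'(t) = a·x(t) + b·x(t−τ), whose characteristic equation λ = a + b·e^(−λτ) has the infinitely many branches λ_k = a + W_k(bτe^(−aτ))/τ. The paper's IDDE dispersion relations reduce, mode by mode, to equations of this transcendental form; the values of a, b, τ below are illustrative, not taken from the paper:

```python
# Sketch: spectral branches of x'(t) = a*x(t) + b*x(t - tau) via the
# Lambert W function. Each branch index k of W gives one branch of the
# infinite spectrum described in the abstract.
import numpy as np
from scipy.special import lambertw

def spectrum_branches(a, b, tau, branches):
    # lambda_k = a + W_k(b*tau*exp(-a*tau)) / tau
    return np.array([a + lambertw(b * tau * np.exp(-a * tau), k=k) / tau
                     for k in branches])

a, b, tau = -1.0, 0.5, 1.0
lams = spectrum_branches(a, b, tau, range(-3, 4))
# every branch must satisfy the characteristic equation exactly
resid = lams - (a + b * np.exp(-lams * tau))
print(np.max(np.abs(resid)))
```

The rightmost branch (k = 0) governs stability, but, as the abstract argues, the remaining branches shape the resonant filtering behaviour.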
Trajectory evolution in the multi-body problem with applications in the Saturnian system
NASA Astrophysics Data System (ADS)
Craig Davis, Diane; Howell, Kathleen C.
2011-12-01
Recent discoveries by the Cassini spacecraft have generated interest in future missions to further explore the moons of Saturn as well as other small bodies in the solar system. Incorporating multi-body dynamics into the preliminary design can aid the design process and potentially reduce the cost of maneuvers that are required to achieve certain objectives. The focus in this investigation is the development and application of additional design tools to facilitate preliminary trajectory design in a multi-body environment where the gravitational influence of both primaries is quite significant. Within the context of the circular restricted 3-body problem, then, the evolution of trajectories in the vicinity of the smaller primary (P2) that are strongly influenced by the distant larger primary (P1) is investigated. By parameterizing the orbits in terms of radius and periapse orientation relative to the P1-P2 line, the short- and long-term behaviors of the trajectories are predictable. Initial conditions that yield a trajectory with a particular set of desired characteristics are easily selected from periapsis Poincaré maps for both short- and long-term orbits. Analysis in the Sun-Saturn and Saturn-Titan systems serves as the basis for examples of mission design applications.
NASA Astrophysics Data System (ADS)
Andersen, Anders H.; Rayens, William S.; Li, Ren-Cang; Blonder, Lee X.
2000-10-01
In this paper we describe the enormous potential that multilinear models hold for the analysis of data from neuroimaging experiments that rely on functional magnetic resonance imaging (fMRI) or other imaging modalities. A case is made for why one might fully expect that the successful introduction of these models to the neuroscience community could define the next generation of structure-seeking paradigms in the area. In spite of the potential for immediate application, there is much to do from the perspective of statistical science. That is, although multilinear models have already been particularly successful in chemistry and psychology, relatively little is known about their statistical properties. To that end, our research group at the University of Kentucky has made significant progress. In particular, we are in the process of developing formal influence measures for multilinear methods as well as associated classification models and effective implementations. We believe that these problems will be among the most important and useful to the scientific community. Details are presented herein and an application is given in the context of facial emotion processing experiments.
Multi-fluid problems in magnetohydrodynamics with applications to astrophysical processes
NASA Astrophysics Data System (ADS)
Greenfield, Eric John
2016-01-01
I begin this study by presenting an overview of the theory of magnetohydrodynamics and the necessary conditions to justify the fluid treatment of a plasma. Upon establishing the fluid description of a plasma we move on to a discussion of magnetohydrodynamics in both the ideal and Hall regimes. This framework is then extended to include multiple plasmas in order to consider two problems of interest in the field of theoretical space physics. The first is a study on the evolution of a partially ionized plasma, a topic with many applications in space physics. A multi-fluid approach is necessary in this case to account for the motions of an ion fluid, electron fluid and neutral atom fluid; all of which are coupled to one another by collisions and/or electromagnetic forces. The results of this study have direct application to an open question concerning the cascade of Kolmogorov-like turbulence in the interstellar plasma which we will discuss below. The second application of multi-fluid magnetohydrodynamics that we consider in this thesis concerns the amplification of magnetic field upstream of a collisionless, parallel shock. The relevant fluids here are the ions and electrons comprising the interstellar plasma and the galactic cosmic ray ions. Previous works predict that the streaming of cosmic rays leads to an instability resulting in significant amplification of the interstellar magnetic field at supernova blastwaves. This prediction is routinely invoked to explain the acceleration of galactic cosmic rays up to energies of 10^15 eV. I will examine this phenomenon in detail using the multi-fluid framework outlined below. The purpose of this work is to first confirm the existence of an instability using a purely fluid approach with no additional approximations. If confirmed, I will determine the necessary conditions for it to operate.
The physical and mathematical aspects of inverse problems in radiation detection and applications.
Hussein, Esam M A
2012-07-01
The inverse problem is the problem of converting detectable measurements into useful quantifiable indications. It is the problem of spectrum unfolding, image reconstruction, identifying a threat material, or devising a radiotherapy plan. The solution of an inverse problem requires a forward model that relates the quantities of interest to measurements. This paper explores the physical issues associated with formulating a radiation-transport forward model best suited for inversion, and the mathematical challenges associated with the solution of the corresponding inverse problem.
Applications of a finite-volume algorithm for incompressible MHD problems
NASA Astrophysics Data System (ADS)
Vantieghem, S.; Sheyko, A.; Jackson, A.
2016-02-01
We present the theory, algorithms and implementation of a parallel finite-volume algorithm for the solution of the incompressible magnetohydrodynamic (MHD) equations using unstructured grids that are applicable for a wide variety of geometries. Our method implements a mixed Adams-Bashforth/Crank-Nicolson scheme for the nonlinear terms in the MHD equations and we prove that it is stable independent of the time step. To ensure that the solenoidal condition is met for the magnetic field, we use a method whereby a pseudo-pressure is introduced into the induction equation; since we are concerned with incompressible flows, the resulting Poisson equation for the pseudo-pressure is solved alongside the equivalent Poisson problem for the velocity field. We validate our code in a variety of geometries including periodic boxes, spheres, spherical shells, spheroids and ellipsoids; for the finite geometries we implement the so-called ferromagnetic or pseudo-vacuum boundary conditions appropriate for a surrounding medium with infinite magnetic permeability. This implies that the magnetic field must be purely perpendicular to the boundary. We present a number of comparisons against previous results and against analytical solutions, which verify the code's accuracy. This documents the code's reliability as a prelude to its use in more difficult problems. We finally present a new simple drifting solution for thermal convection in a spherical shell that successfully sustains a magnetic field of simple geometry. By dint of its rapid stabilization from the given initial conditions, we deem it suitable as a benchmark against which other self-consistent dynamo codes can be tested.
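The mixed Adams-Bashforth/Crank-Nicolson time stepping the abstract mentions can be shown on a much simpler problem than the full MHD system. The sketch below (not the paper's unstructured finite-volume code; all grid parameters are invented) applies AB2 to the explicitly treated advective term and Crank-Nicolson to the diffusive term for 1-D periodic advection-diffusion u_t + c·u_x = ν·u_xx:

```python
# AB2 (explicit, advective term) + Crank-Nicolson (implicit, diffusive term)
# on a 1-D periodic grid; a toy analogue of the scheme in the abstract.
import numpy as np

def ab2_cn_solve(c=1.0, nu=0.1, n=64, dt=2e-3, steps=500):
    x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    I = np.eye(n)
    # periodic central first- and second-difference matrices
    D1 = (np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)) / (2 * dx)
    D2 = (np.roll(I, 1, axis=1) - 2 * I + np.roll(I, -1, axis=1)) / dx**2
    A = I - 0.5 * dt * nu * D2              # implicit Crank-Nicolson operator
    u = np.sin(x)
    n_prev = -c * (D1 @ u)                  # first step degenerates to forward Euler
    for _ in range(steps):
        n_cur = -c * (D1 @ u)
        rhs = u + dt * (1.5 * n_cur - 0.5 * n_prev) + 0.5 * dt * nu * (D2 @ u)
        u = np.linalg.solve(A, rhs)
        n_prev = n_cur
    return x, u

x, u = ab2_cn_solve()
exact = np.sin(x - 1.0) * np.exp(-0.1)      # analytic solution at t = 1
print(np.max(np.abs(u - exact)))
```

Treating diffusion implicitly removes the restrictive diffusive time-step limit, which is the usual motivation for this mixed scheme; the paper additionally proves stability of its variant independent of the time step.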
Ultrasonic focusing through inhomogeneous media by application of the inverse scattering problem
Haddadin, Osama S.; Ebbini, Emad S.
2010-01-01
A new approach is introduced for self-focusing phased arrays through inhomogeneous media for therapeutic and imaging applications. This algorithm utilizes solutions to the inverse scattering problem to estimate the impulse response (Green’s function) of the desired focal point(s) at the elements of the array. This approach is a two-stage procedure, where in the first stage the Green’s function is estimated from measurements of the scattered field taken outside the region of interest. In the second stage, these estimates are used in the pseudoinverse method to compute excitation weights satisfying a predefined set of constraints on the structure of the field at the focus points. These scalar, complex valued excitation weights are used to modulate the incident field for retransmission. The pseudoinverse pattern synthesis method requires knowing the Green’s function between the focus points and the array, which is difficult to attain for an unknown inhomogeneous medium. However, the solution to the inverse scattering problem, the scattering function, can be used directly to compute the required inhomogeneous Green’s function. This inverse scattering based self-focusing is noninvasive and does not require a strong point scatterer at or near the desired focus point. It simply requires measurements of the scattered field outside the region of interest. It can be used for high resolution imaging and enhanced therapeutic effects through inhomogeneous media without making any assumptions on the shape, size, or location of the inhomogeneity. This technique is outlined and numerical simulations are shown which validate this technique for single and multiple focusing using a circular array. PMID:9670525
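The second stage, pseudoinverse pattern synthesis, is easy to sketch: given Green's functions G[m, n] from array element n to focal point m, the minimum-norm complex weights meeting the focal constraints f are w = G⁺f. The homogeneous free-space Green's function, line-array geometry, and wavelength below are invented stand-ins (the paper's point is precisely that the inhomogeneous Green's function must come from the inverse scattering solution):

```python
# Sketch of pseudoinverse synthesis of excitation weights for a
# double focus; geometry and medium are illustrative assumptions.
import numpy as np

k = 2 * np.pi / 1.5e-3                      # wavenumber for an assumed 1.5 mm wavelength
elements = np.c_[np.linspace(-0.02, 0.02, 32), np.zeros(32)]  # 32-element line array
foci = np.array([[0.005, 0.04], [-0.005, 0.04]])              # two focal points

# 2-D free-space Green's function approximated as exp(-jkr)/sqrt(r)
r = np.linalg.norm(foci[:, None, :] - elements[None, :, :], axis=2)
G = np.exp(-1j * k * r) / np.sqrt(r)

f = np.array([1.0, 1.0])                    # equal-amplitude constraints at both foci
w = np.linalg.pinv(G) @ f                   # minimum-norm excitation weights
print(np.abs(G @ w))                        # field magnitudes at the two foci
```

Because there are far more elements than constraints, the pseudoinverse picks the weight vector of least energy among all that satisfy the focal constraints exactly.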
NASA Astrophysics Data System (ADS)
2014-11-01
Editors: M.S.Tagirov, V.V.Semashko, A.S.Nizamutdinov Kazan is the motherland of Electronic Paramagnetic Resonance (EPR) which was discovered in Kazan State University in 1944 by prof. E.K.Zavojskii. Since the Young Scientist School of Magnetic Resonance run by professor G.V.Skrotskii from MIPT stopped its work, Kazan took up the activity under the initiative of academician A.S.Borovik-Romanov. Nowadays this school is rejuvenated and the International Youth Scientific School studying "Actual problems of the magnetic resonance and its application" is developing. Traditionally the main subjects of the School meetings are: Magnetic Resonance in Solids, Chemistry, Geology, Biology and Medicine. The unchallenged organizers of that school are Kazan Federal University and Kazan E. K. Zavoisky Physical-Technical Institute. The rector of the School is professor Murat Tagirov, vice-rector - professor Valentine Zhikharev. Since 1997 more than 100 famous scientists from Germany, France, Switzerland, USA, Japan, Russia, Ukraine, Moldavia, Georgia provided plenary lecture presentations. Almost 700 young scientists have had an opportunity to participate in discussions of the latest scientific developments, to make their oral reports and to improve their knowledge and skills. To enhance competition among the young scientists, reports take place every year and the Program Committee members name the best reports, the authors of which are invited to prepare full-scale scientific papers. Since 2013 the International Youth Scientific School "Actual problems of the magnetic resonance and its application", following the tendency for comprehensive studies of matter properties and its interaction with electromagnetic fields, expanded "the field of interest" and opened the new section: Coherent Optics and Optical Spectroscopy. Many young people have submitted interesting reports on photonics, quantum electronics, laser physics, quantum optics, traditional optical and laser spectroscopy, non
Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting
NASA Astrophysics Data System (ADS)
Donovan, J.; Jordan, T. H.
2011-12-01
Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock
2007-01-01
viscous flows, compressible or incompressible flows. The SPH option in LS-DYNA was used to simulate the Poiseuille flow and Couette flow. The SPH ... at a certain constant velocity ( ). The simulations of Poiseuille and Couette flow show that this approach can be furthered to understand the scour ... simulating fluid dynamic problems. The SPH method with various formulations can simulate different dynamic fluid flow problems, such as inviscid or
NASA Astrophysics Data System (ADS)
Van der Auweraer, H.; Steinbichler, H.; Vanlanduit, S.; Haberstok, C.; Freymann, R.; Storer, D.; Linet, V.
2002-04-01
Accurate structural models are key to the optimization of the vibro-acoustic behaviour of panel-like structures. However, at the frequencies of relevance to the acoustic problem, the structural modes are very complex, requiring high-spatial-resolution measurements. The present paper discusses a vibration testing system based on pulsed-laser holographic electronic speckle pattern interferometry (ESPI) measurements. It is a characteristic of the method that time-triggered (and not time-averaged) vibration images are obtained. Its integration into a practicable modal testing and analysis procedure is reviewed. The accumulation of results at multiple excitation frequencies allows one to build up frequency response functions. A novel parameter extraction approach using spline-based data reduction and maximum-likelihood parameter estimation was developed. Specific extensions have been added in view of the industrial application of the approach. These include the integration of geometry and response information, the integration of multiple views into one single model, the integration with finite-element model data and the prior identification of the critical panels and critical modes. A global procedure was hence established. The approach has been applied to several industrial case studies, including car panels, the firewall of a monovolume car, a full vehicle, panels of a light truck and a household product. The research was conducted in the context of the EUREKA project HOLOMODAL and the Brite-Euram project SALOME.
NASA Astrophysics Data System (ADS)
Giancotti, Marco; Campagnola, Stefano; Tsuda, Yuichi; Kawaguchi, Jun'ichiro
2014-11-01
This work studies periodic solutions applicable, as an extended phase, to the JAXA asteroid rendezvous mission Hayabusa 2 when it is close to target asteroid 1999 JU3. The motion of a spacecraft close to a small asteroid can be approximated with the equations of Hill's problem modified to account for the strong solar radiation pressure. The identification of families of periodic solutions in such systems is just starting and the field is largely unexplored. We find several periodic orbits using a grid search, then apply numerical continuation and bifurcation theory to a subset of these to explore the changes in the orbit families when the orbital energy is varied. This analysis gives information on their stability and bifurcations. We then compare the various families on the basis of the restrictions and requirements of the specific mission considered, such as the pointing of the solar panels and instruments. We also use information about their resilience against parameter errors and their ground tracks to identify one particularly promising type of solution.
Sushko, Iurii; Novotarskyi, Sergii; Körner, Robert; Pandey, Anil Kumar; Cherkasov, Artem; Li, Jiazhong; Gramatica, Paola; Hansen, Katja; Schroeter, Timon; Müller, Klaus-Robert; Xi, Lili; Liu, Huanxiang; Yao, Xiaojun; Öberg, Tomas; Hormozdiari, Farhad; Dao, Phuong; Sahinalp, Cenk; Todeschini, Roberto; Polishchuk, Pavel; Artemenko, Anatoliy; Kuz'min, Victor; Martin, Todd M; Young, Douglas M; Fourches, Denis; Muratov, Eugene; Tropsha, Alexander; Baskin, Igor; Horvath, Dragos; Marcou, Gilles; Muller, Christophe; Varnek, Alexander; Prokopenko, Volodymyr V; Tetko, Igor V
2010-12-27
The estimation of accuracy and applicability of QSAR and QSPR models for biological and physicochemical properties represents a critical problem. The developed parameter of "distance to model" (DM) is defined as a metric of similarity between the training and test set compounds that have been subjected to QSAR/QSPR modeling. In our previous work, we demonstrated the utility and optimal performance of DM metrics that have been based on the standard deviation within an ensemble of QSAR models. The current study applies such analysis to 30 QSAR models for the Ames mutagenicity data set that were previously reported within the 2009 QSAR challenge. We demonstrate that the DMs based on an ensemble (consensus) model provide systematically better performance than other DMs. The presented approach identifies 30-60% of compounds having an accuracy of prediction similar to the interlaboratory accuracy of the Ames test, which is estimated to be 90%. Thus, the in silico predictions can be used to halve the cost of experimental measurements by providing a similar prediction accuracy. The developed model has been made publicly available at http://ochem.eu/models/1.
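The ensemble-standard-deviation DM is simple to demonstrate outside QSAR: train several models of differing flexibility, take the spread of their predictions at a query point as its distance to model, and observe that the spread explodes outside the training domain. The synthetic 1-D data and polynomial ensemble below are illustrative stand-ins for the QSAR model ensemble:

```python
# Sketch of the consensus-std "distance to model": disagreement among
# ensemble members flags queries far from the training data.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)                 # training domain is [-2, 2]
y = np.sin(x) + 0.1 * rng.normal(size=200)

# ensemble of polynomial fits of different flexibility
coefs = [np.polyfit(x, y, deg) for deg in range(1, 6)]

x_test = np.array([0.5, 1.0, 4.0])          # last query far outside the domain
preds = np.array([np.polyval(c, x_test) for c in coefs])
dm = preds.std(axis=0)                      # distance to model per query
consensus = preds.mean(axis=0)
print(dm)                                   # small in-domain, large at x = 4
```

Thresholding dm then selects the subset of predictions whose accuracy approaches the in-domain accuracy, mirroring the 30-60% reliable fraction reported in the abstract.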
On some generalization of the area theorem with applications to the problem of rolling balls
NASA Astrophysics Data System (ADS)
Chaplygin, Sergey A.
2012-04-01
This publication contributes to the series of RCD translations of Sergey Alexeevich Chaplygin's scientific heritage. Earlier we published three of his papers on non-holonomic dynamics (vol. 7, no. 2; vol. 13, no. 4) and two papers on hydrodynamics (vol. 12, nos. 1, 2). The present paper deals with mechanical systems that consist of several spheres and discusses generalized conditions for the existence of integrals of motion (linear in velocities) in such systems. First published in 1897 and awarded the Gold Medal of the Russian Academy of Sciences, this work has not lost its scientific significance and relevance. (In particular, its principal ideas are further developed and extended in the recent article "Two Non-holonomic Integrable Problems Tracing Back to Chaplygin", published in this issue, see p. 191). Note that non-holonomic models for rolling motion of spherical shells, including the case where the shells contain intricate mechanisms inside, are currently of particular interest in the context of their application in the design of ball-shaped mobile robots. We hope that this classical work will be estimated at its true worth by the English-speaking world.
NASA Astrophysics Data System (ADS)
Tsuji, Takuya; Yokomine, Takehiko; Shimizu, Akihiko
2002-11-01
We have been developing a multi-scale adaptive simulation technique for incompressible turbulent flow, designed so that the important scale components of the flow field are detected automatically by a lifting wavelet transform and solved selectively. In conventional incompressible schemes, it is common to solve a Poisson equation for the pressure to meet the divergence-free constraint of incompressible flow. Solving the Poisson equation adaptively is not impossible, but it is cumbersome because it requires regeneration of the control volumes at every time step. We therefore turned our attention to the weakly compressible model proposed by Bao (2001). This model was derived from a zero-Mach-limit asymptotic analysis of the compressible Navier-Stokes equations and does not require solving the Poisson equation at all. Because the model is relatively new, however, it requires demonstration studies before being combined with wavelet-based adaptation. In the present study, 2-D and 3-D backstep flows were selected as test problems, and the applicability of the model to turbulent flow is verified in detail. In addition, the combination of wavelet-based adaptation with the weakly compressible model toward adaptive turbulence simulation is discussed.
Hybrid modeling of spatial continuity for application to numerical inverse problems
Friedel, Michael J.; Iwashita, Fabio
2013-01-01
A novel two-step modeling approach is presented to obtain optimal starting values and geostatistical constraints for numerical inverse problems otherwise characterized by spatially-limited field data. First, a type of unsupervised neural network, called the self-organizing map (SOM), is trained to recognize nonlinear relations among environmental variables (covariates) occurring at various scales. The values of these variables are then estimated at random locations across the model domain by iterative minimization of SOM topographic error vectors. Cross-validation is used to ensure unbiasedness and compute prediction uncertainty for select subsets of the data. Second, analytical functions are fit to experimental variograms derived from original plus resampled SOM estimates producing model variograms. Sequential Gaussian simulation is used to evaluate spatial uncertainty associated with the analytical functions and probable range for constraining variables. The hybrid modeling of spatial continuity is demonstrated using spatially-limited hydrologic measurements at different scales in Brazil: (1) physical soil properties (sand, silt, clay, hydraulic conductivity) in the 42 km2 Vargem de Caldas basin; (2) well yield and electrical conductivity of groundwater in the 132 km2 fractured crystalline aquifer; and (3) specific capacity, hydraulic head, and major ions in a 100,000 km2 transboundary fractured-basalt aquifer. These results illustrate the benefits of exploiting nonlinear relations among sparse and disparate data sets for modeling spatial continuity, but the actual application of these spatial data to improve numerical inverse modeling requires testing.
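The second step of the hybrid approach, fitting an analytical model to an experimental variogram, can be sketched generically. The synthetic scattered data below stand in for the SOM-resampled field estimates, and the spherical variogram model is one common choice of analytical function (the paper does not commit to a specific one here):

```python
# Sketch: experimental semivariogram from scattered point data, followed
# by a least-squares fit of a spherical variogram model.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
pts = rng.uniform(0, 10, size=(200, 2))
z = np.sin(pts[:, 0] / 2) + 0.1 * rng.normal(size=200)   # synthetic field values

# pairwise separation distances and semivariances
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
g = 0.5 * (z[:, None] - z[None, :]) ** 2
iu = np.triu_indices(len(z), k=1)                        # each pair once
d, g = d[iu], g[iu]

# bin the pairs into lags to form the experimental variogram
bins = np.linspace(0, 8, 17)
lag = 0.5 * (bins[:-1] + bins[1:])
gamma = np.array([g[(d >= a) & (d < b)].mean()
                  for a, b in zip(bins[:-1], bins[1:])])

def spherical(h, nugget, sill, vrange):
    h = np.minimum(h / vrange, 1.0)
    return nugget + sill * (1.5 * h - 0.5 * h ** 3)

p, _ = curve_fit(spherical, lag, gamma, p0=[0.01, 0.5, 4.0], maxfev=10000)
print(p)                                  # fitted nugget, partial sill, range
```

The fitted model variogram is what then feeds sequential Gaussian simulation to quantify spatial uncertainty, as the abstract describes.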
NASA Astrophysics Data System (ADS)
Bayley, T. W.; Ferré, T. P. A.
2014-12-01
There is growing recognition in the hydrologic community that deterministic hydrologic models are imperfect tools for decision support. Despite this insight, the state of practice for a hydrologic investigation follows this sequence: data collection, conceptual model development, numerical model development, and finally decision making based on model projections. This approach, based on relatively unconsidered design of data collection, may result in uninformative data. As a result, it is commonly repeated several times to resolve critical uncertainties. We present a novel two-step multi-model approach to optimizing data collection to aid decision making and risk analysis. Here, we describe the application of this approach (the Discrimination Inference to Reduce Expected Cost Technique, DIRECT) to a contaminant transport problem. DIRECT has seven steps. First, outcomes of concern were defined explicitly. Next, a probabilistic analysis of the outcomes was conducted that incorporated multiple conceptual and parametric realizations. The likelihood of each model was assessed based on goodness of fit to existing data. A cost function was developed and used to define the projected costs based on the model-predicted outcomes of concern. Data collection was then optimized to identify the data that could test the models of greatest concern (cost) against the other models in the ensemble. Finally, a field program was conducted that included gathering lithologic, hydrologic, and chemical data from 22 new wells drilled in projected high-value locations. The additional data reduced the expected cost of model projections to an acceptable level for defining new site compliance conditions.
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik
1996-01-01
For a space mission to be successful it is vitally important to have a good control strategy. For example, with the Space Shuttle it is necessary to guarantee the success and smoothness of docking, the smoothness and fuel efficiency of trajectory control, etc. For an automated planetary mission it is important to control the spacecraft's trajectory, and after that, to control the planetary rover so that it would be operable for the longest possible period of time. In many complicated control situations, traditional methods of control theory are difficult or even impossible to apply. In general, in uncertain situations, where no routine methods are directly applicable, we must rely on the creativity and skill of the human operators. In order to simulate these experts, an intelligent control methodology must be developed. The research objectives of this project were: to analyze existing control techniques; to find out which of these techniques is the best with respect to the basic optimality criteria (stability, smoothness, robustness); and, if for some problems, none of the existing techniques is satisfactory, to design new, better intelligent control techniques.
A novel transport based model for wire media and its application to scattering problems
NASA Astrophysics Data System (ADS)
Forati, Ebrahim
Artificially engineered materials, known as metamaterials, have attracted the interest of researchers because of the potential for novel applications. Effective modeling of metamaterials is a crucial step for analyzing and synthesizing devices. In this thesis, we focus on wire media (both isotropic and uniaxial) and validate a novel transport based model for them. Scattering problems involving wire media are computationally intensive due to the spatially dispersive nature of homogenized wire media. However, it will be shown that using the new model to solve scattering problems can simplify the calculations a great deal. For scattering problems, an integro-differential equation based on a transport formulation is proposed instead of the convolution-form integral equation that directly comes from spatial dispersion. The integro-differential equation is much faster to solve than the convolution equation form, and its effectiveness is confirmed by solving several examples in one, two, and three dimensions. Both the integro-differential equation formulation and the homogenized wire medium parameters are experimentally confirmed. To do so, several isotropic connected wire medium spheres have been fabricated using a rapid-prototyping machine, and their measured extinction cross sections are compared with simulation results. Wire parameters (period and diameter) are varied to the point where homogenization theory breaks down, which is observed in the measurements. The same process is done for three-dimensional cubical objects made of a uniaxial wire medium, and their measured results are compared with the numerical results based on the new model. The new method is extremely fast compared to brute-force numerical methods such as FDTD, and provides more physical insight (within the limits of homogenization), including the idea of a Debye length for wire media. The limits of homogenization are examined by comparing homogenization results and measurement. Then, a novel
Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.
2001-07-01
This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper.
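The core idea — recovering a complementarity solution as the minimizer of a differentiable function over a simply constrained set — can be illustrated with a small sketch. This is not the paper's exact bound-constrained reformulation of the GLCP; it minimizes the standard merit function z^T(Mz+q) for a toy LCP using SciPy, and the matrix M, vector q, and starting point are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy standard LCP: find z >= 0 with w = M z + q >= 0 and z^T w = 0.
M = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([-2.0, -4.0])

merit = lambda z: z @ (M @ z + q)          # vanishes exactly at an LCP solution
res = minimize(
    merit,
    x0=np.array([2.0, 3.0]),               # a feasible starting point
    bounds=[(0.0, None)] * 2,              # simple bounds z >= 0
    constraints=[{"type": "ineq", "fun": lambda z: M @ z + q}],
    method="SLSQP",
)
z = res.x   # expect z = [1, 2], where w = M z + q = 0
```

For this convex example the merit function attains the value zero at the complementarity solution, so a stationary point of the constrained minimization solves the LCP.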
1974-08-20
are employed at relatively low altitudes, usually below 20,000 ft (6.1 km). Although France recently flew a stratospheric tethered balloon at 55...the start those which do not appear suitable for the high-altitude, multi-mode communications application. 2.1 Tethered Balloon Systems The first...free balloon systems and their applicability to the high-altitude communications relay problem. 6. Corbin, C. D. (1974) Portable Tethered Balloon
An application of a linear programming technique to nonlinear minimax problems
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1973-01-01
A differential correction technique for solving nonlinear minimax problems is presented. The basis of the technique is a linear programming algorithm which solves the linear minimax problem. By linearizing the original nonlinear equations about a nominal solution, both nonlinear approximation and estimation problems using the minimax norm may be solved iteratively. Some consideration is also given to improving convergence and to the treatment of problems with more than one measured quantity. A sample problem is treated with this technique and with the least-squares differential correction method to illustrate the properties of the minimax solution. The results indicate that for the sample approximation problem, the minimax technique provides better estimates than the least-squares method if a sufficient amount of data is used. For the sample estimation problem, the minimax estimates are better if the mathematical model is incomplete.
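The linear minimax subproblem at the heart of such a technique has a standard linear-programming form: minimize a bound t on all absolute residuals. A minimal sketch follows, using SciPy's `linprog`; the data points and the line model a + b*x are invented for illustration and are not from the report.

```python
import numpy as np
from scipy.optimize import linprog

# Data to fit with a line a + b*x under the minimax (Chebyshev) norm.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.0, 1.0, 2.0, 4.0])

# Variables (a, b, t): minimize t subject to |y_i - (a + b x_i)| <= t.
c = [0.0, 0.0, 1.0]
A_ub = np.vstack([
    np.column_stack([-np.ones_like(xs), -xs, -np.ones_like(xs)]),  # y_i - a - b x_i <= t
    np.column_stack([ np.ones_like(xs),  xs, -np.ones_like(xs)]),  # a + b x_i - y_i <= t
])
b_ub = np.concatenate([-ys, ys])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
a, b, t = res.x   # minimax line: a = -1/3, b = 4/3, max |residual| t = 1/3
```

The optimal residuals equioscillate (+1/3, 0, -1/3, +1/3), the characteristic signature of a minimax fit; a least-squares fit of the same data would leave a larger worst-case residual.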
On an iterative ensemble smoother and its application to a reservoir facies estimation problem
NASA Astrophysics Data System (ADS)
Luo, Xiaodong; Chen, Yan; Valestrand, Randi; Stordal, Andreas; Lorentzen, Rolf; Nævdal, Geir
2014-05-01
For data assimilation problems there are different ways of utilizing the available observations. While certain data assimilation algorithms, for instance, the ensemble Kalman filter (EnKF, see, for example, Aanonsen et al., 2009; Evensen, 2006), assimilate the observations sequentially in time, other data assimilation algorithms may instead collect the observations at different time instants and assimilate them simultaneously. In general such algorithms can be classified as smoothers. In this respect, the ensemble smoother (ES, see, for example, Evensen and van Leeuwen, 2000) can be considered as a smoother counterpart of the EnKF. The EnKF has been widely used for reservoir data assimilation (history matching) problems since its introduction to the community of petroleum engineering (Nævdal et al., 2002). The applications of the ES to reservoir data assimilation problems have also been investigated recently (see, for example, Skjervheim and Evensen, 2011). Compared to the EnKF, the ES has certain technical advantages, including, for instance, avoiding the restarts associated with each update step in the EnKF and also having fewer variables to update, which may result in a significant reduction in simulation time, while providing similar assimilation results to those obtained by the EnKF (Skjervheim and Evensen, 2011). To further improve the performance of the ES, some iterative ensemble smoothers are suggested in the literature, in which the iterations are carried out in the forms of certain iterative optimization algorithms, e.g., the Gauss-Newton (Chen and Oliver, 2012) or the Levenberg-Marquardt method (Chen and Oliver, 2013; Emerick and Reynolds, 2012), or in the context of adaptive Gaussian mixture (AGM, see Stordal and Lorentzen, 2013). In Emerick and Reynolds (2012) the iteration formula is derived based on the idea that, for linear observations, the final results of the iterative ES should be equal to the estimate of the EnKF. In Chen and Oliver (2013), the
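The basic (non-iterative) ES update that these iterative schemes build on can be sketched in a few lines: the whole ensemble is shifted toward the data through the cross-covariance between parameters and predicted observations. The toy linear forward model H, the true parameters, and the ensemble size below are invented for illustration; a real history-matching application would replace H with a reservoir simulator run.

```python
import numpy as np

rng = np.random.default_rng(0)
Ne, nx = 500, 3
H = np.array([[1.0, 0.5, 0.0], [0.0, 1.0, 2.0]])  # toy linear forward model
x_true = np.array([1.0, -2.0, 0.5])
R = 0.01 * np.eye(2)                              # observation-error covariance
d = H @ x_true                                    # observed data (noise-free toy)

X = rng.normal(0.0, 1.0, size=(nx, Ne))           # prior parameter ensemble
Y = H @ X                                         # predicted data ensemble
D = d[:, None] + rng.multivariate_normal(np.zeros(2), R, Ne).T  # perturbed obs

Xm, Ym = X.mean(1, keepdims=True), Y.mean(1, keepdims=True)
Cxy = (X - Xm) @ (Y - Ym).T / (Ne - 1)            # parameter-data cross-covariance
Cyy = (Y - Ym) @ (Y - Ym).T / (Ne - 1)            # data covariance
Xa = X + Cxy @ np.linalg.inv(Cyy + R) @ (D - Y)   # one-shot ES update

prior_misfit = np.linalg.norm(H @ X.mean(axis=1) - d)
post_misfit = np.linalg.norm(H @ Xa.mean(axis=1) - d)
```

All observations are assimilated in a single simultaneous update, which is what removes the per-step simulator restarts that the sequential EnKF requires.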
followed with the introduction of Bayes Theorem as a model for intelligence analysis. The conjecture is made that Bayes Theorem can also serve as the...nucleus of a formal methodology. The application of Bayes Theorem to several types of problems is demonstrated. However, the implementation of such a
ERIC Educational Resources Information Center
Blum, Werner; Niss, Mogens
1991-01-01
This paper reviews the present state, recent trends, and prospective lines of development concerning applied problem solving, modeling, and their respective applications. Four major trends are scrutinized with respect to curriculum inclusion: a widened spectrum of arguments, an increased universality, an increased consolidation, and an extended…
ERIC Educational Resources Information Center
Donohue, Brad; Azrin, Nathan; Allen, Daniel N.; Romero, Valerie; Hill, Heather H.; Tracy, Kendra; Lapota, Holly; Gorney, Suzanne; Abdel-al, Ruweida; Caldas, Diana; Herdzik, Karen; Bradshaw, Kelsey; Valdez, Robby; Van Hasselt, Vincent B.
2009-01-01
A comprehensive evidence-based treatment for substance abuse and other associated problems (Family Behavior Therapy) is described, including its application to both adolescents and adults across a wide range of clinical contexts (i.e., criminal justice, child welfare). Relevant to practitioners and applied clinical researchers, topic areas include…
Radiative transport in plant canopies: Forward and inverse problem for UAV applications
NASA Astrophysics Data System (ADS)
Furfaro, Roberto
This dissertation deals with modeling the radiative regime in vegetation canopies and the possible remote sensing applications derived by solving the forward and inverse canopy transport equation. The aim of the research is to develop a methodology (called "end-to-end problem solution") that, starting from first principles describing the interaction between light and vegetation, constructs, as the final product, a tool that analyzes remote sensing data for precision agriculture (ripeness prediction). The procedure begins by defining the equations that describe the transport of photons inside the leaf and within the canopy. The resulting integro-differential equations are numerically integrated by adapting the conventional discrete-ordinate methods to compute the reflectance at the top of the canopy. The canopy transport equation is also analyzed to explore its spectral properties. The goal here is to apply Case's method to determine eigenvalues and eigenfunctions and to prove completeness. A model inversion is attempted by using neural network algorithms. Using input-outputs generated by running the forward model, a neural network is trained to learn the inverse map. The model-based neural network represents the end product of the overall procedure. During October 2002, an Unmanned Aerial Vehicle (UAV) equipped with a camera system flew over Kauai to take images of coffee field plantations. Our goal is to predict the amount of ripe coffee cherries for optimal harvesting. The Leaf-Canopy model was modified to include cherries as absorbing and scattering elements, and two classes of neural networks were trained on the model to learn the relationship between reflectance and the percentage of ripe, over-ripe and under-ripe cherries. The neural networks are interfaced with images coming from Kauai to predict ripeness percentage. Both ground and airborne images are considered. The latter were taken from the on-board Helios UAV camera system flying over the Kauai coffee field
Application of unstructured grid methods to steady and unsteady aerodynamic problems
NASA Technical Reports Server (NTRS)
Batina, John T.
1989-01-01
The purpose is to describe the development of unstructured grid methods, which have several advantages when compared to methods which make use of structured grids. Unstructured grids, for example, easily allow the treatment of complex geometries, allow for general mesh movement for realistic motions and structural deformations of complete aircraft configurations, which is important for aeroelastic analysis, and enable adaptive mesh refinement to more accurately resolve the physics of the flow. Steady Euler calculations for a supersonic fighter configuration to demonstrate the complex geometry capability; unsteady Euler calculations for the supersonic fighter undergoing harmonic oscillations in a complete-vehicle bending mode to demonstrate the general mesh movement capability; and vortex-dominated conical-flow calculations for highly-swept delta wings to demonstrate the adaptive mesh refinement capability are discussed. The basic solution algorithm is a multi-stage Runge-Kutta time-stepping scheme with a finite-volume spatial discretization based on an unstructured grid of triangles in 2D or tetrahedra in 3D. The moving mesh capability is a general procedure which models each edge of each triangle (2D) or tetrahedron (3D) with a spring. The static equilibrium equations that result from a summation of forces are then used to move the mesh to allow it to continuously conform to the instantaneous position or shape of the aircraft. The adaptive mesh refinement procedure enriches the unstructured mesh locally to more accurately resolve the vortical flow features. These capabilities are described in detail along with representative results which demonstrate several advantages of unstructured grid methods. The applicability of the unstructured grid methodology to steady and unsteady aerodynamic problems and directions for future work are discussed.
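The spring-analogy mesh movement described in the abstract can be sketched in one dimension, where it reduces to a chain of nodes joined by springs: moving the boundary nodes and solving the force-balance system redistributes the interior nodes smoothly. The chain length, stiffnesses, and boundary motion below are invented toy values, not from the paper.

```python
import numpy as np

# 1-D chain of mesh nodes joined by unit-stiffness springs. End nodes follow
# the prescribed body motion; interior nodes settle where spring forces
# balance, i.e. each interior node sits at the average of its neighbours.
n = 10                              # number of mesh intervals
left, right = 0.0, 1.3              # new boundary positions (body has moved)

# Equilibrium: x[i-1] - 2 x[i] + x[i+1] = 0 at each interior node.
A = (np.diag(2.0 * np.ones(n - 1))
     - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1))
rhs = np.zeros(n - 1)
rhs[0] += left                      # contribution of the fixed left node
rhs[-1] += right                    # contribution of the moved right node
x = np.empty(n + 1)
x[0], x[-1] = left, right
x[1:-1] = np.linalg.solve(A, rhs)   # moved mesh node positions
```

With equal spring stiffnesses the interior nodes end up uniformly spaced between the moved boundaries; in the 2D/3D triangle or tetrahedron case the same force summation couples all edge springs but the linear system has the same structure.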
NASA Astrophysics Data System (ADS)
Salakhov, M. Kh; Tagirov, M. S.; Dooglav, A. V.
2013-12-01
In 1997, A S Borovik-Romanov, the Academician of RAS, and A V Aganov, the head of the Physics Department of Kazan State University, suggested that the 'School of Magnetic Resonance', well known in the Soviet Union, should recommence and be regularly held in Kazan. This school was created in 1968 by G V Scrotskii, the prominent scientist in the field of magnetic resonance and the editor of many famous books on magnetic resonance (authored by A Abragam, B Bleaney, C Slichter, and many others) translated and edited in the Soviet Union. In 1991 the last, the 12th School, was held under the supervision of G V Scrotskii. Since 1997, more than 600 young scientists, 'schoolboys', have taken part in the School meetings, made their oral reports and participated in heated discussions. Every year a competition among the young scientists takes place and the Program Committee members name the best reports, the authors of which are invited to prepare full-scale scientific papers. The XVI International Youth Scientific School 'Actual problems of the magnetic resonance and its application' is slightly different in its themes from previous ones. A new section has been opened this year: Coherent Optics and Optical Spectroscopy. Many young people have submitted interesting reports on optical research, and many of the reports are devoted to the implementation of nanotechnology in optical studies. The XVI International Youth Scientific School has been supported by the Program of development of Kazan Federal University. It is a pleasure to thank the sponsors (BRUKER Ltd, Moscow, the Russian Academy of Science, the Dynasty foundation of Dmitrii Zimin, Russia, Russian Foundation for Basic Research) and all the participants and contributors for making the International School meeting possible and interesting. A V Dooglav, M Kh Salakhov and M S Tagirov, The Editors
NASA Astrophysics Data System (ADS)
Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.
2014-12-01
The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org), is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
NASA Astrophysics Data System (ADS)
Hébert, Alain
2014-06-01
We are presenting the computer science techniques involved in the integration of codes DRAGON5 and DONJON5 in the SALOME platform. This integration brings new capabilities in designing multi-physics computational schemes, with the possibility to couple our reactor physics codes with thermal-hydraulics or thermo-mechanics codes from other organizations. A demonstration is presented where two code components are coupled using the YACS module of SALOME, based on the CORBA protocol. The first component is a full-core 3D steady-state neutronic calculation in a PWR performed using DONJON5. The second component implements a set of 1D thermal-hydraulics calculations, each performed over a single assembly.
Code verification for unsteady 3-D fluid-solid interaction problems
NASA Astrophysics Data System (ADS)
Yu, Kintak Raymond; Étienne, Stéphane; Hay, Alexander; Pelletier, Dominique
2015-12-01
This paper describes a procedure to synthesize Manufactured Solutions for Code Verification of an important class of Fluid-Structure Interaction (FSI) problems whose behaviors can be modeled as rigid body vibrations in incompressible fluids. We refer to this class of FSI problems as Fluid-Solid Interaction problems, which can be found in many practical engineering applications. The methodology can be utilized to develop Manufactured Solutions for both 2-D and 3-D cases. We demonstrate the procedure with our numerical code. We present details of the formulation and methodology. We also provide the reasoning behind our proposed approach. Results from grid and time step refinement studies confirm the verification of our solver and demonstrate the versatility of the simple synthesis procedure. In addition, the results also demonstrate that the modified decoupled approach to verify flow problems with high-order time-stepping schemes can be employed equally well to verify code for multi-physics problems (here, those of the Fluid-Solid Interaction) when the numerical discretization is based on the Method of Lines.
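The principle behind Code Verification with manufactured solutions can be sketched on a far simpler problem than FSI: pick an exact solution, derive the forcing term it implies, feed that forcing to the solver, and confirm that the observed order of accuracy under grid refinement matches the scheme's formal order. The 1-D Poisson solver below is an invented stand-in for illustration, not the paper's FSI code.

```python
import numpy as np

def solve_poisson(n, f, ua, ub):
    """Second-order finite-difference solve of -u'' = f on (0,1), Dirichlet BCs."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    rhs = f(x[1:-1])
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    u = np.empty(n + 1)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

# Manufactured solution u(x) = sin(pi x)  =>  forcing f = pi^2 sin(pi x).
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

errs = []
for n in (16, 32, 64):
    x, u = solve_poisson(n, f, 0.0, 0.0)
    errs.append(np.max(np.abs(u - u_exact(x))))
orders = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]  # expect ~2.0
```

Halving the mesh spacing roughly quarters the error, confirming second-order accuracy; a coding mistake in the discretization would show up as a degraded observed order.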
NASA Technical Reports Server (NTRS)
Straeter, T. A.
1972-01-01
The Davidon-Broyden class of rank one, quasi-Newton minimization methods is extended from Euclidean spaces to infinite-dimensional, real Hilbert spaces. For several techniques of choosing the step size, conditions are found which assure convergence of the associated iterates to the location of the minimum of a positive definite quadratic functional. For those techniques, convergence is achieved without the problem of the computation of a one-dimensional minimum at each iteration. The application of this class of minimization methods for the direct computation of the solution of an optimal control problem is outlined. The performance of various members of the class are compared by solving a sample optimal control problem. Finally, the sample problem is solved by other known gradient methods, and the results are compared with those obtained with the rank one quasi-Newton methods.
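The finite-dimensional version of a rank-one quasi-Newton scheme can be sketched for a quadratic functional: each symmetric rank-one (SR1) correction teaches the inverse-Hessian approximation one curvature direction, without any one-dimensional minimization, and after n corrections a single Newton-like step lands on the minimizer. The matrix, vector, and fixed step factor below are invented toy values.

```python
import numpy as np

# Quadratic functional f(x) = 0.5 x^T A x - b^T x with gradient g(x) = A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x = np.zeros(2)
H = np.eye(2)                      # inverse-Hessian approximation
for _ in range(2):                 # n rank-one corrections in n dimensions
    s = -0.1 * (H @ grad(x))       # short quasi-Newton step, no line search
    y = grad(x + s) - grad(x)      # equals A s for a quadratic
    u = s - H @ y
    H += np.outer(u, u) / (u @ y)  # symmetric rank-one (SR1) update
    x = x + s
x = x - H @ grad(x)                # Newton step with the learned curvature
```

The SR1 update is hereditary on quadratics (H_k y_j = s_j for all earlier steps j), so after two independent steps H equals the exact inverse Hessian and the final step is exact; this is the property that removes the need for a line minimization at each iteration.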
Understanding the public's health problems: applications of symbolic interaction to public health.
Maycock, Bruce
2015-01-01
Public health has typically investigated health issues using methods from the positivistic paradigm. Yet these approaches, although they are able to quantify the problem, may not be able to explain the social reasons why the problem exists or the impact on those affected. This article will provide a brief overview of a sociological theory that provides methods and a theoretical framework that have proven useful in understanding public health problems and developing interventions.
NASA Astrophysics Data System (ADS)
Roth, Bradley J.; Hobbie, Russell K.
2014-05-01
This article contains a collection of homework problems to help students learn how concepts from electricity and magnetism can be applied to topics in medicine and biology. The problems are at a level typical of an undergraduate electricity and magnetism class, covering topics such as nerve electrophysiology, transcranial magnetic stimulation, and magnetic resonance imaging. The goal of these problems is to train biology and medical students to use quantitative methods, and also to introduce physics and engineering students to biological phenomena.
On the application of deterministic optimization methods to stochastic control problems
NASA Technical Reports Server (NTRS)
Kramer, L. C.; Athans, M.
1974-01-01
A technique is presented by which deterministic optimization techniques, for example, the maximum principle of Pontryagin, can be applied to stochastic optimal control problems formulated around linear systems with Gaussian noises and general cost criteria. Using this technique, the stochastic nature of the problem is suppressed but for two expectation operations, the optimization being deterministic. The use of the technique in treating problems with quadratic and nonquadratic costs is illustrated.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.
1989-01-01
A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
NASA Astrophysics Data System (ADS)
Konno, Hiroshi; Gotoh, Jun-Ya; Uno, Takeaki; Yuki, Atsushi
2002-09-01
We will propose a new cutting plane algorithm for solving a class of semi-definite programming problems (SDP) with a small number of variables and a large number of constraints. Problems of this type appear when we try to classify a large number of multi-dimensional data into two groups by a hyper-ellipsoidal surface. Among such examples are cancer diagnosis and failure discrimination of enterprises. Also, a certain class of option pricing problems can be formulated as this type of problem. We will show that the cutting plane algorithm is much more efficient than the standard interior point algorithms for solving SDP.
2009-01-01
the Poiseuille flow and Couette flow. The results of these simulations showed that this approach can be furthered to understand the scour around a...method with a turbulent stress model of the large-eddy simulation (LES) to compute incompressible viscous multi-phase flows. STM is used to compute...with various formulations can simulate different dynamic fluid flow problems, such as inviscid or viscous flows, compressible or incompressible flows
A Perturbation Theory for Hamilton's Principal Function: Applications to Boundary Value Problems
NASA Astrophysics Data System (ADS)
Munoa, Oier Penagaricano
This thesis introduces an analytical perturbation theory for Hamilton's principal function and Hamilton's characteristic function. Based on Hamilton's principle and the research carried out by Sir William Rowan Hamilton, a perturbation theory is developed to analytically solve two-point boundary value problems. The principal function is shown to solve the two-point boundary value problem through simple differentiation and elimination. The characteristic function is related to the principal function through a Legendre transformation, and can also be used to solve two-point boundary value problems. In order to obtain the solution to the perturbed two-point boundary value problem the knowledge of the nominal solution is sufficient. The perturbation theory is applied to the two body problem to study the perturbed dynamics in the vicinity of the Hohmann transfer. It is found that the perturbation can actually offer a lower cost two-impulse transfer to the target orbit than the Hohmann transfer. The numerical error analysis of the perturbation theory is shown for different orders of calculation. Coupling Hamilton's principal and characteristic functions yields an analytical perturbation theory for the initial value problem, where the state of the perturbed system can be accurately obtained. The perturbation theory is applied to the restricted three-body problem, where the system is viewed as a two-body problem perturbed by the presence of a third body. It is shown that the first order theory can be sufficient to solve the problem, which is expressed in terms of Delaunay elements. The solution to the initial value problem is applied to derive a Keplerian periapsis map that can be used for low-energy space mission design problems.
A New Large-Scale Global Optimization Method and Its Application to Lennard-Jones Problems
1992-11-01
stochastic methods. Computational results on Lennard-Jones problems show that the new method is considerably more successful than any other method that...our method does not find as good a solution as has been found by the best special purpose methods for Lennard-Jones problems. This illustrates the inherent difficulty of large scale global optimization.
Preservice Teachers' Application of a Problem-Solving Approach on Multimedia Case
ERIC Educational Resources Information Center
Kilbane, Clare R.
2008-01-01
This study explored the use of case-based pedagogy to promote preservice teachers' problem-solving proficiency. Students in a web-supported course called CaseNEX learned to use a problem-solving approach when analyzing multimedia case studies. Their performance was compared with students in two groups who had no exposure to case methods--other…
Application of Graph Theory in an Intelligent Tutoring System for Solving Mathematical Word Problems
ERIC Educational Resources Information Center
Nabiyev, Vasif V.; Çakiroglu, Ünal; Karal, Hasan; Erümit, Ali K.; Çebi, Ayça
2016-01-01
This study aimed to construct a model to transform word "motion problems" into an algorithmic form in order to be processed by an intelligent tutoring system (ITS). First, the characteristics of motion problems were categorized; second, a model for the categories was suggested. In order to solve all categories of the…
ERIC Educational Resources Information Center
Ceberio, Mikel; Almudí, José Manuel; Franco, Ángel
2016-01-01
In recent years, interactive computer simulations have been progressively integrated in the teaching of the sciences and have contributed significant improvements in the teaching-learning process. Practicing problem-solving is a key factor in science and engineering education. The aim of this study was to design simulation-based problem-solving…
Students' Understanding and Application of the Area under the Curve Concept in Physics Problems
ERIC Educational Resources Information Center
Nguyen, Dong-Hai; Rebello, N. Sanjay
2011-01-01
This study investigates how students understand and apply the area under the curve concept and the integral-area relation in solving introductory physics problems. We interviewed 20 students in the first semester and 15 students from the same cohort in the second semester of a calculus-based physics course sequence on several problems involving…
ERIC Educational Resources Information Center
Reese, Simon R.
2015-01-01
This paper reflects upon a three-step process to expand the problem definition in the early stages of an action learning project. The process created a community-powered problem-solving approach within the action learning context. The simple three steps expanded upon in the paper create independence, dependence, and inter-dependence to aid the…
Application of Choice-Making Intervention for a Student with Multiply Maintained Problem Behavior.
ERIC Educational Resources Information Center
Peterson, Stephanie M. Peck; Caniglia, Cyndi; Royster, Amy Jo
2001-01-01
A functional behavioral assessment for a 10-year-old boy with autism found both teacher attention and escape from task demands maintained his problem behavior. A choice-making intervention involving either completing work alone followed by a break with teacher attention versus working with teacher assistance was found to decrease problem behavior…
Coorbital Restricted Problem and its Application in the Design of the Orbits of the LISA Spacecraft
NASA Astrophysics Data System (ADS)
Yi, Zhaohua; Li, Guangyu; Heinzel, Gerhard; Rüdiger, Albrecht; Jennrich, Oliver; Wang, Li; Xia, Yan; Zeng, Fei; Zhao, Haibin
On the basis of many coorbital phenomena in astronomy and spacecraft motion, a dynamics model is proposed in this paper, treating the coorbital restricted problem together with a method for obtaining a general approximate solution. The design of the LISA spacecraft orbits is a special 2+3 coorbital restricted problem. The problem is analyzed in two steps. First, the motion of the barycenter of the three spacecraft is analyzed, which is a planar coorbital restricted three-body problem, and an approximate analytical solution of the radius and the argument of the center is obtained consequently. Secondly, the configuration of the three spacecraft with minimum arm-length variation is analyzed. The motion of a single spacecraft is a near-planar coorbital restricted three-body problem, allowing approximate analytical solutions for the orbit radius and the argument of a spacecraft. Thus approximate expressions for the arm-length are given.
NASA Astrophysics Data System (ADS)
Sudakov, Ivan; Vakulenko, Sergey
2015-11-01
The original Rayleigh-Benard convection is a standard example of a system where critical transitions occur with the change of a control parameter. We will discuss the modified Rayleigh-Benard convection problem, which includes radiative effects as well as specific gas sources on a surface. Such a formulation of this problem leads to the identification of a new kind of nonlinear phenomenon, besides the well-known Benard cells. Modeling of methane emissions from permafrost into the atmosphere leads to difficult problems involving the Navier-Stokes equations. Taking into account the modified Rayleigh-Benard convection problem, we will discuss a new approach which makes the problem of a climate catastrophe resulting from a greenhouse effect more tractable and allows us to describe catastrophic transitions in the atmosphere induced by permafrost greenhouse gas sources.
NASA Astrophysics Data System (ADS)
Ceberio, Mikel; Almudí, José Manuel; Franco, Ángel
2016-08-01
In recent years, interactive computer simulations have been progressively integrated in the teaching of the sciences and have contributed significant improvements in the teaching-learning process. Practicing problem-solving is a key factor in science and engineering education. The aim of this study was to design simulation-based problem-solving teaching materials and assess their effectiveness in improving students' ability to solve problems in university-level physics. Firstly, we analyze the effect of using simulation-based materials in the development of students' skills in employing procedures that are typically used in the scientific method of problem-solving. We found that a significant percentage of the experimental students used expert-type scientific procedures such as qualitative analysis of the problem, making hypotheses, and analysis of results. At the end of the course, only a minority of the students persisted with habits based solely on mathematical equations. Secondly, we compare the effectiveness in terms of problem-solving of the experimental group students with the students who are taught conventionally. We found that the implementation of the problem-solving strategy improved experimental students' results regarding obtaining a correct solution from the academic point of view, in standard textbook problems. Thirdly, we explore students' satisfaction with simulation-based problem-solving teaching materials and we found that the majority appear to be satisfied with the methodology proposed and took on a favorable attitude to learning problem-solving. The research was carried out among first-year Engineering Degree students.
NASA Astrophysics Data System (ADS)
Hetmaniuk, Ulrich Ladislas
Fast solvers are often designed for problems posed on simple domains. Unfortunately, engineering applications deal with arbitrary domains. To allow the use of fast solvers, fictitious domain methods have been developed. They usually define an auxiliary problem on a rectangle or a parallelepiped. In aerospace and military applications, many scatterers are composed of one major axisymmetric component and a few features. Therefore, the aim of this thesis is to define, for the scattering of acoustic waves, fictitious domain methods which exploit such local axisymmetry. The original exterior problem is first approximated by introducing an absorbing boundary condition on an artificial boundary. A family of absorbing conditions is reviewed. For some simple scatterers, numerical experiments on the position of the artificial boundary reveal that the error induced by the absorbing condition is bounded, as the wave number increases, when the artificial boundary is fixed. Then, for a class of partially axisymmetric scatterers, the truncated computational domain is embedded into an axisymmetric domain. Helmholtz problems are formulated inside this axisymmetric domain and inside each feature. Lagrange multipliers are introduced at the interfaces between the features and the axisymmetric domain to enforce a set of carefully constructed constraints. This formulation is analyzed at the continuous level and is shown to be equivalent to the original one. For the Helmholtz equation defined over the axisymmetric domain, the solution is approximated by truncated Fourier series and finite elements. Properties of this discretization method for the Helmholtz equation are also analyzed on a two-dimensional model problem. Numerical experiments are performed to illustrate the analytical results. For the auxiliary problem inside each feature, classical finite elements are used to approximate the solution. The constraints are enforced pointwise. The resulting algebraic system is solved either
Shioiri, Toshiki
2015-01-01
of fears from two or more agoraphobia-related situations is now required, because this is a robust means for distinguishing agoraphobia from specific phobias. Also, the criteria for agoraphobia are now extended to be consistent with criteria sets for other anxiety disorders (e.g., a clinician's judgment of the fears as being out of proportion to the actual danger in the situation, with a typical duration of 6 months or more). From the above, these changes from DSM-IV-TR to DSM-5 in anxiety disorders make our judgments faster and more efficient in clinical practice, and DSM-5 is more useful to elucidate the pathology. In this manuscript, we discuss the application and problems based on clinical and research viewpoints regarding anxiety disorders in DSM-5.
NASA Astrophysics Data System (ADS)
Li, C.; Nowack, R. L.; Pyrak-Nolte, L.
2003-12-01
Seismic tomographic experiments in soil and rock are strongly affected by limited and non-uniform ray coverage. We propose a new method to extrapolate data used for seismic tomography to full coverage. The proposed two-stage autoregressive extrapolation technique can be used to extend the available data and provide better tomographic images. The algorithm is based on the principle that the extrapolated data adds minimal information to the existing data. A two-stage autoregressive (AR) extrapolation scheme is then applied to the seismic tomography problem. The first stage of the extrapolation is to find the optimal prediction-error filter (PE filter). For the second stage, we use the PE filter to find the values for the missing data so that the power out of the PE filter is minimized. At the second stage, we are able to estimate missing data values with the same spectrum as the known data. This is similar to maximizing an entropy criterion. Synthetic tomographic experiments have been conducted and demonstrate that the two-stage AR extrapolation technique is a powerful tool for data extrapolation and can improve the quality of tomographic inversions of experimental and field data. Moreover, the two-stage AR extrapolation technique is tolerant to noise in the data and can still extrapolate the data to obtain overall patterns, which is very important for real data applications. In this study, we have applied AR extrapolation to a series of datasets from laboratory tomographic experiments on synthetic sediments with known structure. In these tomographic experiments, glass beads saturated with de-ionized water were used as the synthetic water-saturated background sediments. The synthetic sediments were packed in plastic cylindrical containers with a diameter of 220 mm. Tomographic experiments were then set up to measure transmitted acoustic waves through the sediment samples from multiple directions. We recorded data for sources and receivers with varying angular
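The two-stage idea described above can be sketched in miniature. The following pure-Python sketch is an illustrative reconstruction, not the authors' code: the AR order (2), the test signal, and all function names are assumptions. Stage one fits a prediction-error filter by least squares on intact data; stage two fills a missing sample by minimizing the filter's output power, which is quadratic in the unknown value, so three evaluations locate the minimum exactly.

```python
import math

def fit_pe_filter(x):
    """Stage 1: least-squares AR(2) fit; the PE filter is (1, -a1, -a2)."""
    r11 = r12 = r22 = b1 = b2 = 0.0
    for n in range(2, len(x)):
        r11 += x[n-1]*x[n-1]; r12 += x[n-1]*x[n-2]; r22 += x[n-2]*x[n-2]
        b1 += x[n]*x[n-1]; b2 += x[n]*x[n-2]
    det = r11*r22 - r12*r12
    return (r22*b1 - r12*b2)/det, (r11*b2 - r12*b1)/det

def pe_power(x, a1, a2):
    """Power out of the prediction-error filter."""
    return sum((x[n] - a1*x[n-1] - a2*x[n-2])**2 for n in range(2, len(x)))

def fill_missing(x, k, a1, a2):
    """Stage 2: choose x[k] to minimize PE power (quadratic in x[k])."""
    def e(v):
        y = list(x); y[k] = v
        return pe_power(y, a1, a2)
    h = 1.0
    fm, f0, fp = e(-h), e(0.0), e(h)
    return h*(fm - fp) / (2.0*(fm - 2.0*f0 + fp))  # exact quadratic minimizer

x = [math.sin(0.4*n) for n in range(32)]
true_val, x[10] = x[10], 0.0           # knock out one sample
a1, a2 = fit_pe_filter(x[12:])         # fit the filter on an intact segment
x[10] = fill_missing(x, 10, a1, a2)
print(abs(x[10] - true_val) < 1e-6)
```

For the pure sinusoid the AR(2) recurrence is exact, so the missing sample is recovered to rounding error; a production implementation would use higher filter orders and solve jointly for many missing traces.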
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1978-01-01
The formulation of the classical Linear-Quadratic-Gaussian stochastic control problem as employed in low thrust navigation analysis is reviewed. A reformulation is then presented which eliminates a potentially unreliable matrix subtraction in the control calculations, improves the computational efficiency, and provides for a cleaner computational interface between the estimation and control processes. Lastly, the application of the U-D factorization method to the reformulated equations is examined with the objective of achieving a complete set of factored equations for the joint estimation and control problem.
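The separation structure underlying the LQG problem above can be sketched in the scalar case. The plant numbers below are hypothetical, and a plain fixed-point Riccati iteration stands in for the paper's reformulated, U-D-factorized recursions, which are not reproduced here:

```python
def dare_scalar(a, b, q, r, iters=500):
    """Fixed-point iteration of the scalar discrete algebraic Riccati equation."""
    p = q
    for _ in range(iters):
        p = q + a*a*p - (a*p*b)**2 / (r + b*b*p)
    return p

# Scalar plant x[k+1] = a*x[k] + b*u[k] + w, measurement y[k] = c*x[k] + v.
# By certainty equivalence the control and estimation designs decouple:
a, b, c = 1.1, 0.5, 1.0
q, r = 1.0, 0.1        # state and control weights
w, v = 0.01, 0.04      # process and measurement noise variances

p = dare_scalar(a, b, q, r)
k_lqr = a*p*b / (r + b*b*p)    # feedback gain, u = -k_lqr * xhat
s = dare_scalar(a, c, w, v)    # the estimation Riccati equation is the dual
k_kal = a*s*c / (v + c*c*s)    # steady-state (predictor-form) Kalman gain
print(abs(a - b*k_lqr) < 1.0 and abs(a - k_kal*c) < 1.0)  # both loops stable
```

The duality lets one routine serve both designs; in matrix form the subtraction inside the recursion is exactly the numerically delicate step that factorized (U-D) implementations are designed to avoid.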
Consensus properties and their large-scale applications for the gene duplication problem.
Moon, Jucheol; Lin, Harris T; Eulenstein, Oliver
2016-06-01
Solving the gene duplication problem is a classical approach for species tree inference from gene trees that are confounded by gene duplications. This problem takes a collection of gene trees and seeks a species tree that implies the minimum number of gene duplications. Wilkinson et al. posed the conjecture that the gene duplication problem satisfies the desirable Pareto property for clusters. That is, for every instance of the problem, all clusters that are commonly present in the input gene trees of this instance, called strict consensus, will also be found in every solution to this instance. We prove that this conjecture does not generally hold. Despite this negative result we show that the gene duplication problem satisfies a weaker version of the Pareto property where the strict consensus is found in at least one solution (rather than all solutions). This weaker property contributes to our design of an efficient scalable algorithm for the gene duplication problem. We demonstrate the performance of our algorithm in analyzing large-scale empirical datasets. Finally, we utilize the algorithm to evaluate the accuracy of standard heuristics for the gene duplication problem using simulated datasets.
Line Spring Model and Its Applications to Part-Through Crack Problems in Plates and Shells
NASA Technical Reports Server (NTRS)
Erdogan, F.; Aksel, B.
1986-01-01
The line spring model is described and extended to cover the problem of interaction of multiple internal and surface cracks in plates and shells. The shape functions for various related crack geometries obtained from the plane strain solution and the results of some multiple crack problems are presented. The problems considered include coplanar surface cracks on the same or opposite sides of a plate, nonsymmetrically located coplanar internal elliptic cracks, and in a very limited way the surface and corner cracks in a plate of finite width and a surface crack in a cylindrical shell with fixed end.
Haber, Eldad
2014-03-17
The focus of research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.
Application of Dynamic Programming to Solving K Postmen Chinese Postmen Problem
NASA Astrophysics Data System (ADS)
Fei, Rong; Cui, Duwu; Zhang, Yikun; Wang, Chaoxue
In this paper, Dynamic Programming is used for the first time to solve the K postmen Chinese postman problem (KPCPP). A novel model for decision-making in KPCPP and computational models for solving the whole problem are proposed. The arcs of G are transformed into the points of G' by CAPA, and the model is converted by MDPMCA into one that applies to a multistep decision process. On the basis of these two programs, the Dynamic Programming algorithm KMPDPA can finally solve the NP-complete problem KPCPP. An illustrative example is given to clarify the concepts and methods. The accuracy of these algorithms and the related theory are verified mathematically.
NASA Astrophysics Data System (ADS)
Saif, Ullah; Guan, Zailin; Wang, Baoxi; Mirza, Jahanzeb
2014-09-01
In most of the literature, robustness is associated with min-max or min-max regret criteria. These criteria are conservative, however, so a new criterion, the lexicographic α-robust method, has recently been introduced; it defines a robust solution as a set of solutions whose quality, i.e., jth largest cost, is no worse than the best possible jth largest cost across all scenarios. This criterion is significant for robust optimization of single-objective problems. In real optimization problems, however, two or more conflicting objectives must be optimized concurrently, and the solution of a multi-objective problem is a set of Pareto solutions; from these it may be difficult to decide which Pareto solution satisfies the min-max, min-max regret, or lexicographic α-robust criteria when multiple objectives are considered simultaneously. Therefore, the lexicographic α-robust method is extended in the current research to Pareto solutions. The proposed method, the Pareto lexicographic α-robust approach, can define Pareto lexicographic α-robust solutions across different scenarios by considering multiple objectives simultaneously. A simple example and an application of the proposed method to multi-objective optimization of a simple assembly line balancing problem with task-time uncertainty are presented to obtain their robust solutions. The presented method can be significant for different multi-objective robust optimization problems containing uncertainty.
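The jth-largest-cost comparison at the heart of the α-robust criterion can be illustrated in a deliberately simplified setting: one scalar cost per scenario per solution (the paper's Pareto extension instead handles multiple objectives at once). The selection rule, helper names, slack handling, and numbers below are all assumptions for illustration:

```python
def desc_costs(costs):
    """Scenario costs sorted so position j holds the (j+1)-th largest cost."""
    return sorted(costs, reverse=True)

def lex_alpha_robust(solutions, alpha=0.0):
    """Pick the solution whose j-th largest cost stays within alpha of the
    best attainable j-th largest cost, preferring earlier (worse) positions."""
    cols = zip(*(desc_costs(c) for c in solutions.values()))
    best = [min(col) for col in cols]       # best possible j-th largest cost
    def key(name):
        v = desc_costs(solutions[name])
        excess = [max(0.0, v[j] - best[j] - alpha) for j in range(len(v))]
        return (excess, v)                  # lexicographic comparison
    return min(solutions, key=key)

# Hypothetical costs of three candidate solutions under four scenarios:
sols = {"A": [9, 4, 7, 5], "B": [8, 8, 8, 8], "C": [10, 3, 3, 3]}
print(lex_alpha_robust(sols))            # B: its worst case, 8, is the best attainable
print(lex_alpha_robust(sols, alpha=2.0)) # larger slack lets C's lighter tail win
```

With zero slack the criterion reduces to a min-max-flavoured choice at position j = 0; a positive α trades worst-case quality for better behaviour at the remaining positions.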
Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem
Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi
2013-01-01
Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system over a given life cycle time. To address the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check the approach's feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. The results also demonstrate the potential to provide useful information for decisions in the practical planning process. It is therefore believed that if this approach is applied correctly, and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
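A minimal global-best PSO, of the kind the paper starts from before its improvements, can be sketched as follows. The objective is a hypothetical stand-in for an LCC function, and all parameter values (inertia, acceleration coefficients, swarm size) are conventional defaults, not taken from the paper:

```python
import random

def pso(cost, dim, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer (minimization)."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0]*dim for _ in range(n)]
    pbest, pbest_f = [p[:] for p in pos], [cost(p) for p in pos]
    gi = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[gi][:], pbest_f[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w*vel[i][d]
                             + c1*random.random()*(pbest[i][d] - pos[i][d])
                             + c2*random.random()*(gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))  # clamp to bounds
            f = cost(pos[i])
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, pos[i][:]
                if f < gbest_f:
                    gbest_f, gbest = f, pos[i][:]
    return gbest, gbest_f

# Hypothetical stand-in for a life-cycle-cost objective: cost rises away
# from a design optimum at (2, 3); 10.0 is the irreducible base cost.
lcc = lambda x: (x[0] - 2.0)**2 + (x[1] - 3.0)**2 + 10.0
best, best_f = pso(lcc, dim=2, lo=0.0, hi=10.0)
print(best_f < 10.5)
```

Improvements of the kind the paper describes typically modify the inertia schedule or the velocity update; the skeleton above stays the same.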
Problem gambling of Chinese college students: application of the theory of planned behavior.
Wu, Anise M S; Tang, Catherine So-kum
2012-06-01
The present study, using the theory of planned behavior (TPB), investigated psychological correlates of intention to gamble and problem gambling among Chinese college students. Nine hundred and thirty-two Chinese college students (aged 18 to 25 years) in Hong Kong and Macao were surveyed. The findings generally support the efficacy of the TPB in explaining gambling intention and problems among Chinese college students. Specifically, the results of the path analysis indicate that gambling intention and perceived control over gambling are the most proximal predictors of problem gambling, whereas attitudes, subjective norms, and perceived control, the TPB components, influence gambling intention. Thus, these three TPB components should make up the core contents of prevention and intervention efforts against problem gambling for Chinese college students.
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)
2002-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.
Extreme values and the level-crossing problem: An application to the Feller process
NASA Astrophysics Data System (ADS)
Masoliver, Jaume
2014-04-01
We review the question of the extreme values attained by a random process. We relate it to level crossings to one boundary (first-passage problems) as well as to two boundaries (escape problems). The extremes studied are the maximum, the minimum, the maximum absolute value, and the range or span. We specialize in diffusion processes and present detailed results for the Wiener and Feller processes.
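For the Wiener process, the connection between the maximum and a level crossing admits a quick numerical check: by the reflection principle, P(max over [0,T] W_t > a) = 2 P(W_T > a). The sketch below (illustrative; step counts, path counts, and the seed are arbitrary choices) compares a Monte Carlo first-passage estimate against that closed form:

```python
import random, math
from statistics import NormalDist

def mc_max_exceeds(a, T=1.0, n_steps=1000, n_paths=3000, seed=42):
    """Monte Carlo estimate of P(max over [0,T] of a Wiener path exceeds a)."""
    rng = random.Random(seed)
    s = math.sqrt(T / n_steps)
    hits = 0
    for _ in range(n_paths):
        w = 0.0
        for _ in range(n_steps):
            w += rng.gauss(0.0, s)
            if w > a:          # level crossing: the running maximum exceeds a
                hits += 1
                break
    return hits / n_paths

a = 1.0
exact = 2.0 * (1.0 - NormalDist().cdf(a))   # reflection principle: 2*P(W_T > a)
est = mc_max_exceeds(a)
print(abs(est - exact) < 0.05)
```

Discrete monitoring slightly underestimates the continuous maximum (the path can cross and return between steps), which is why the tolerance is loose; for the Feller process no such elementary closed form is available and the paper's analytic treatment is needed.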
On the application of deterministic optimization methods to stochastic control problems.
NASA Technical Reports Server (NTRS)
Kramer, L. C.; Athans, M.
1972-01-01
A technique is presented by which one can apply the Minimum Principle of Pontryagin to stochastic optimal control problems formulated around linear systems with Gaussian noises and general cost criteria. Using this technique, the stochastic nature of the problem is suppressed but for two expectation operations, the optimization being essentially deterministic. The technique is applied to systems with quadratic and non-quadratic costs to illustrate its use.
Applications of remote sensing to estuarine problems. [estuaries of Chesapeake Bay
NASA Technical Reports Server (NTRS)
Munday, J. C., Jr.
1975-01-01
A variety of siting problems for the estuaries of the lower Chesapeake Bay have been solved with cost beneficial remote sensing techniques. Principal techniques used were repetitive 1:30,000 color photography of dye emitting buoys to map circulation patterns, and investigation of water color boundaries via color and color infrared imagery to scales of 1:120,000. Problems solved included sewage outfall siting, shoreline preservation and enhancement, oil pollution risk assessment, and protection of shellfish beds from dredge operations.
2014-05-01
[Figure-list residue from the extracted source: Figure 3-1 reportedly shows the parallel performance of the Euclid (ILU preconditioner) + GMRES solver in HYPRE for 2780 elements; the solvers compared include diagonal scaling (DS) + BiCGStab, on matrices from high-speed flow and thermal (reflection) problems.]
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)
2001-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.
Problems with numerical techniques: Application to mid-loop operation transients
Bryce, W.M.; Lillington, J.N.
1997-07-01
There has been an increasing need to consider accidents at shutdown, which have been shown in some PSAs to provide a significant contribution to overall risk. In the UK, experience has been gained at three levels: (1) assessment of codes against experiments; (2) plant studies specifically for Sizewell B; and (3) detailed review of modelling to support the plant studies for Sizewell B. The work has largely been carried out using various versions of RELAP5 and SCDAP/RELAP5. The paper details some of the problems that have needed to be addressed. The authors believe that these kinds of problems are probably generic to most of the present generation of system thermal-hydraulic codes for the conditions present in mid-loop transients; thus, as far as possible, the problems and solutions are presented in generic terms. The areas addressed include condensables at low pressure, poor time-step calculation detection, water packing, inadequate physical modelling, numerical heat transfer, and mass errors. In general, single code modifications have been proposed to solve the problems. These have been concerned with improving existing models rather than formulating a completely new approach, and have been produced after a particular problem has arisen. Thus, and this has been borne out in practice, the danger is that when new transients are attempted, new problems arise which then also require patching.
Zhou Kaiyi; Sheate, William R.
2011-11-15
Since the Law of the People's Republic of China on Environmental Impact Assessment was enacted in 2003 and Huanfa 2004 No. 98 was released in 2004, Strategic Environmental Assessment (SEA) has been officially implemented in the expressway infrastructure planning field in China. Scrutiny of two SEA application cases for China's provincial-level expressway infrastructure (PLEI) network plans reveals that current SEA practice in the expressway infrastructure planning field has a number of problems: SEA practitioners do not fully understand the objective of SEA; SEA's potential contributions to strategic planning and decision-making are extremely limited; the application procedure and the prediction and assessment techniques employed are too simple to produce objective, unbiased and scientific results; and no alternative options are considered. These problems directly lead to poor-quality SEA and consequently weaken its effectiveness.
NASA Astrophysics Data System (ADS)
Wang, Dan; Qin, Zhongfeng
2016-04-01
Uncertainty is inherent in the newsvendor problem. Most of the existing literature is devoted to characterizing the uncertainty either by randomness or by fuzziness. However, in many cases, randomness and fuzziness simultaneously appear in the same problem. Motivated by this observation, we investigate the multi-product newsvendor problem by considering the demands as hybrid variables which are proposed to describe quantities with double uncertainties. According to the expected value criterion, we formulate an expected profit maximization model and convert it to a deterministic form when the chance distributions are given. We discuss two special cases of hybrid variable demands and give their chance distributions. Then we design hybrid simulation to estimate the chance distribution and use genetic algorithm to solve the proposed models. Finally, we proceed to present numerical examples of purchasing pharmaceutical reference standard materials to illustrate the applicability of our methodology and the effectiveness of genetic algorithm.
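As a purely classical baseline for the model above, the single-product risk-neutral newsvendor with random (not hybrid) demand has a closed-form solution via the critical fractile, which a sample-average search should reproduce. All economics and demand parameters below are hypothetical; the paper's hybrid-variable demands and genetic algorithm are not reproduced here:

```python
import random
from statistics import NormalDist

price, cost_unit, salvage = 10.0, 7.0, 5.0   # hypothetical unit economics
mu, sigma = 100.0, 20.0                      # normally distributed demand

def expected_profit(q, demands):
    """Sample-average profit for order quantity q."""
    total = 0.0
    for d in demands:
        sold = min(q, d)
        total += price*sold + salvage*max(q - d, 0.0) - cost_unit*q
    return total / len(demands)

rng = random.Random(7)
demands = [rng.gauss(mu, sigma) for _ in range(20000)]
q_best = max((mu + k for k in range(-40, 41)),       # grid search over 60..140
             key=lambda q: expected_profit(q, demands))

# Closed-form benchmark: the critical fractile F(q*) = (price - cost)/(price - salvage)
q_star = NormalDist(mu, sigma).inv_cdf((price - cost_unit)/(price - salvage))
print(abs(q_best - q_star) <= 2.0)
```

When demand carries double (random plus fuzzy) uncertainty, no such fractile formula is available, which is why the paper resorts to hybrid simulation of the chance distribution inside a genetic algorithm.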
Heydari, M.H.; Hooshmandasl, M.R.; Cattani, C.; Maalek Ghaini, F.M.
2015-02-15
Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and also the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of the stochastic population growth models and stochastic pendulum problem.
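For the stochastic population growth application mentioned above, a standard baseline is Euler-Maruyama time stepping; the sketch below uses that method, plainly not the paper's hat-function operational-matrix scheme, and all model parameters are illustrative. For linear multiplicative noise, dX = rX dt + σX dW, the exact mean E[X_T] = X_0 e^{rT} gives a check:

```python
import random, math

def euler_maruyama_growth(x0, r, sigma, T, n, rng):
    """One Euler-Maruyama path of dX = r*X dt + sigma*X dW (population growth)."""
    dt = T / n
    sdt = math.sqrt(dt)
    x = x0
    for _ in range(n):
        x += r*x*dt + sigma*x*rng.gauss(0.0, sdt)   # drift + diffusion increment
    return x

rng = random.Random(1)
x0, r, sigma, T = 100.0, 0.1, 0.2, 1.0
paths = [euler_maruyama_growth(x0, r, sigma, T, 200, rng) for _ in range(5000)]
mean = sum(paths) / len(paths)
print(abs(mean - x0*math.exp(r*T)) < 2.0)   # E[X_T] = X0*exp(r*T)
```

Pathwise methods like this degrade over long intervals, which is precisely the regime (large intervals) the operational-matrix approach in the paper targets.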
Leung, Y.-F.; Marion, J.
1999-01-01
The degradation of trail resources associated with expanding recreation and tourism visitation is a growing management problem in protected areas worldwide. In order to make judicious trail and visitor management decisions, protected area managers need objective and timely information on trail resource conditions. This paper introduces a trail survey method that efficiently characterizes the lineal extent of common trail problems. The method was applied to a large sample of trails within Great Smoky Mountains National Park, a high-use protected area in the USA. The Trail Problem Assessment Method (TPAM) employs a continuous search for multiple indicators of predefined tread problems, yielding census data documenting the location, occurrence and extent of each problem. The present application employed 23 different indicators in three categories to gather inventory, resource condition, and design and maintenance data for each surveyed trail. Seventy-two backcountry hiking trails (528 km), or 35% of the Park's total trail length, were surveyed. Soil erosion and wet soil were found to be the two most common impacts on a lineal-extent basis. Trails with serious tread problems were well distributed throughout the Park, although wet, muddy treads tended to be concentrated in areas of high horse use. The effectiveness of maintenance features installed to divert water from trail treads was also evaluated: water bars were found to be more effective than drainage dips. The TPAM provided Park managers with objective and quantitative information for use in trail planning, management and maintenance decisions, and is applicable to other protected areas with different environmental and impact characteristics.
Kushniruk, Andre W; Triola, Marc M; Borycki, Elizabeth M; Stein, Ben; Kannry, Joseph L
2005-08-01
This paper describes an innovative approach to the evaluation of a handheld prescription writing application. Participants (10 physicians) were asked to perform a series of tasks involving entering prescriptions into the application from a medication list. The study procedure involved the collection of data consisting of transcripts of the subjects who were asked to "think aloud" while interacting with the prescription writing program to enter medications. All user interactions with the device were video and audio recorded. Analysis of the protocols was conducted in two phases: (1) usability problems were identified from coding of the transcripts and video data, (2) actual errors in entering prescription data were also identified. The results indicated that there were a variety of usability problems, with most related to interface design issues. In examining the relationship between usability problems and errors, it was found that certain types of usability problems were closely associated with the occurrence of specific types of errors in prescription of medications. Implications for identifying and predicting technology-induced error are discussed in the context of improving the safety of health care information systems.
NASA Astrophysics Data System (ADS)
Beker, B.
1992-12-01
Numerical modeling of electromagnetic (EM) interaction is normally performed using either differential or integral equation methods. Both techniques can be implemented to solve problems in the frequency or time domain. The method of moments (MOM) approach to solving integral equations has matured to the point where it can be used to solve complex problems. In the past, MOM was applied only to scattering and radiation problems involving perfectly conducting or isotropic penetrable, lossy or lossless objects. However, many materials used in practical applications (e.g., composites on the Navy's surface ships) exhibit anisotropic properties. To account for these effects, several integral equation formulations for scattering and radiation by anisotropic objects have been developed recently. The differential equation approach to EM interaction studies has seen the emergence of the finite-difference time-domain (FD-TD) method as the method of choice in many of today's scattering and radiation applications. This approach has been applied to study transient as well as steady-state scattering from many complex structures, radiation from wire antennas, and coupling into wires through narrow apertures in conducting cavities. It is important to determine whether, and how effectively, FD-TD can be used to solve EM interaction problems of interest to the Navy, such as investigating potential EM interference in shipboard communication systems. Consequently, this report partly addresses this issue by dealing exclusively with FD-TD modeling of time-domain EM scattering and radiation.
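To give the FD-TD discussion above a concrete flavor, here is a minimal one-dimensional Yee leapfrog update in vacuum, in normalized units. The grid size, source shape, and Courant number are illustrative choices, not taken from the report:

```python
import math

def fdtd_1d(n_cells=200, n_steps=300, c=0.5):
    """1-D FDTD (Yee) leapfrog update for Ez/Hy in vacuum, normalized units.
    c is the Courant number dt*c0/dx (stable for c <= 1)."""
    ez = [0.0]*n_cells
    hy = [0.0]*n_cells
    for t in range(n_steps):
        for k in range(1, n_cells):              # update E from the curl of H
            ez[k] += c * (hy[k-1] - hy[k])
        ez[10] += math.exp(-((t - 40)/12.0)**2)  # soft Gaussian source at cell 10
        for k in range(n_cells - 1):             # update H from the curl of E
            hy[k] += c * (ez[k] - ez[k+1])
    return ez

ez = fdtd_1d()
print(all(math.isfinite(v) for v in ez) and max(abs(v) for v in ez) > 0.01)
```

The staggering of E and H in both space and time is what makes the scheme explicit; three-dimensional formulations used for shipboard-scale problems follow the same pattern with six field components and absorbing boundaries.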
An Application of Fuzzy Logic Control to a Classical Military Tracking Problem
1994-05-19
[Reference-list fragments only: Zadeh, L.A., "Probability Measures of Fuzzy Events," Journal of Mathematical Analysis and Applications, vol. 23, 1968, p. 421; Kosko, Bart, "Fuzziness Versus..."; an entry dated January 1973, pp. 28-44.]
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
2000-01-01
This project concerns the development of discontinuous Galerkin finite element methods, for general geometry and triangulations, for solving convection-dominated problems, with applications to aeroacoustics. On the analysis side, we have studied an efficient and stable discontinuous Galerkin framework for small second-derivative terms, for example in the Navier-Stokes equations, and also for related equations such as the Hamilton-Jacobi equations. This is a truly local discontinuous formulation where derivatives are treated as new variables. On the applied side, we have implemented and tested the efficiency of different approaches numerically. Related issues in high-order ENO and WENO finite difference methods and spectral methods have also been investigated. Jointly with Hu, we have presented a discontinuous Galerkin finite element method for solving the nonlinear Hamilton-Jacobi equations. This method is based on the Runge-Kutta discontinuous Galerkin finite element method for solving conservation laws. The method has the flexibility of treating complicated geometry by using arbitrary triangulations, can achieve high-order accuracy with a local, compact stencil, and is suited for efficient parallel implementation. One- and two-dimensional numerical examples are given to illustrate the capability of the method. Jointly with Hu, we have constructed third- and fourth-order WENO schemes on two-dimensional unstructured meshes (triangles) in the finite volume formulation. The third-order schemes are based on a combination of linear polynomials with nonlinear weights, and the fourth-order schemes on a combination of quadratic polynomials with nonlinear weights. We have addressed several difficult issues associated with high-order WENO schemes on unstructured meshes, including the choice of linear and nonlinear weights, what to do with negative weights, etc. Numerical examples are shown to demonstrate the accuracies and robustness of the
Application of robust Generalised Cross-Validation to the inverse problem of electrocardiology.
Barnes, Josef P; Johnston, Peter R
2016-02-01
Robust Generalised Cross-Validation was proposed recently as a method for determining near optimal regularisation parameters in inverse problems. It was introduced to overcome a problem with the regular Generalised Cross-Validation method in which the function that is minimised to obtain the regularisation parameter often has a broad, flat minimum, resulting in a poor estimate for the parameter. The robust method defines a new function to be minimised which has a narrower minimum, but at the expense of introducing a new parameter called the robustness parameter. In this study, the Robust Generalised Cross-Validation method is applied to the inverse problem of electrocardiology. It is demonstrated that, for realistic situations, the robustness parameter can be set to zero. With this choice of robustness parameter, it is shown that the robust method is able to obtain estimates of the regularisation parameter in the inverse problem of electrocardiology that are comparable to, or better than, many of the standard methods that are applied to this inverse problem.
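The plain (non-robust) GCV function that the robust variant modifies can be sketched in the SVD basis, where the Tikhonov filter factors make both the residual and the trace explicit. The synthetic singular values, signal, and noise below are assumptions for illustration; the robustness parameter of the robust method is not reproduced here:

```python
def gcv(lam, s, beta):
    """GCV score for Tikhonov regularization, given singular values s and
    the data in the left singular basis, beta = U^T b (square case)."""
    # residual ||(I - A(lam)) b||^2 with filter factors f_i = s_i^2/(s_i^2+lam^2)
    resid = sum(((lam*lam/(si*si + lam*lam))*bi)**2 for si, bi in zip(s, beta))
    trace = sum(lam*lam/(si*si + lam*lam) for si in s)   # trace(I - A(lam))
    return resid / (trace*trace)

# Synthetic ill-conditioned problem: fast-decaying singular values, a smooth
# underlying signal decaying faster still, plus a small oscillatory "noise".
s = [10.0**(-0.25*i) for i in range(40)]
exact = [si*si for si in s]
beta = [e + 1e-3*((-1)**i) for i, e in enumerate(exact)]

grid = [10.0**(-6 + 0.05*k) for k in range(120)]   # log-spaced lambda grid
lam_opt = min(grid, key=lambda lam: gcv(lam, s, beta))
print(1e-5 < lam_opt < 0.5)
```

On examples like this the GCV curve is indeed rather flat around its minimum, which is the behaviour motivating the robust modification studied in the paper.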
Problems of optimal transportation on the circle and their mechanical applications
NASA Astrophysics Data System (ADS)
Plakhov, Alexander; Tchemisova, Tatiana
2017-02-01
We consider a mechanical problem concerning a 2D axisymmetric body moving forward on the plane and making slow turns of fixed magnitude about its axis of symmetry. The body moves through a medium of non-interacting particles at rest, and collisions of particles with the body's boundary are perfectly elastic (billiard-like). The body has a blunt nose: a line segment orthogonal to the symmetry axis. It is required to make small cavities of special shape on the nose so as to minimize its aerodynamic resistance. This problem of optimizing the shape of the cavities amounts to a special case of the optimal mass transportation problem on the circle with the transportation cost being the squared Euclidean distance. We find the explicit solution for this problem when the amplitude of rotation is smaller than a fixed critical value, and give a numerical solution otherwise. As a by-product, we obtain an explicit description of the solution for a class of optimal transportation problems on the circle.
Application of the pseudostate theory to the three-body Lambert problem
NASA Technical Reports Server (NTRS)
Byrnes, Dennis V.
1989-01-01
The pseudostate theory, which approximates three-body trajectories by overlapping the conic effects of both massive bodies on the third body, has been used to solve boundary-value problems. Frequently, the approach to the secondary is quite close, as in interplanetary gravity-assist or satellite-tour trajectories. In this case, the orbit with respect to the primary is radically changed so that perturbation techniques are time consuming, yet higher accuracy than point-to-point conics is necessary. This method reduces the solution of the three-body Lambert problem to solving two conic Lambert problems and inverting a 7 x 7 matrix, the components of which are all found analytically. Typically 90-95 percent of the point-to-point conic error, with respect to an integrated trajectory, is eliminated.
Application of the complex scaling method in solving three-body Coulomb scattering problem
NASA Astrophysics Data System (ADS)
Lazauskas, R.
2017-03-01
The three-body scattering problem in Coulombic systems is a widespread, yet unresolved, problem for mathematically rigorous methods. In this work this long-standing challenge has been undertaken by combining the distorted-wave and Faddeev–Merkuriev equation formalisms in conjunction with the complex scaling technique to overcome the difficulties related to the boundary conditions. Contrary to common belief, it is demonstrated that the smooth complex scaling method can be applied to solve the three-body Coulomb scattering problem in a wide energy region, including the fully elastic domain and extending to energies well beyond the atom ionization threshold. The newly developed method is used to study electron scattering on the ground states of hydrogen and positronium atoms, as well as the e+ + H(n=1) ⇌ p + Ps(n=1) reaction. Where available, the obtained results are compared with experimental data and theoretical predictions, proving the accuracy and efficiency of the newly developed method.
Predictive models based on sensitivity theory and their application to practical shielding problems
Bhuiyan, S.I.; Roussin, R.W.; Lucius, J.L.; Bartine, D.E.
1983-01-01
Two new calculational models based on the use of cross-section sensitivity coefficients have been devised for calculating radiation transport in relatively simple shields. The two models, one an exponential model and the other a power model, have been applied, together with the traditional linear model, to 1- and 2-m-thick concrete-slab problems in which the water content, reinforcing-steel content, or composition of the concrete was varied. Comparing the results obtained with the three models with those obtained from exact one-dimensional discrete-ordinates transport calculations indicates that the exponential model, named the BEST model (for basic exponential shielding trend), is a particularly promising predictive tool for shielding problems dominated by exponential attenuation. When applied to a deep-penetration sodium problem, the BEST model also yields better results than do calculations based on second-order sensitivity theory.
The linearized characteristics method and its application to practical nonlinear supersonic problems
NASA Technical Reports Server (NTRS)
Ferri, Antonio
1952-01-01
The method of characteristics has been linearized by assuming that the flow field can be represented as a basic flow field, determined by nonlinearized methods, plus a linearized superposed flow field that accounts for small changes of boundary conditions. The method has been applied to two-dimensional rotational flow, where the basic flow is potential flow, and to axially symmetric problems, where conical flows have been used as the basic flows. In both cases the method allows the determination of the flow field to be simplified and the numerical work to be reduced to a few calculations. The calculation of axially symmetric flow can be simplified further if tabulated values of some coefficients of the conical flow are obtained. The method has also been applied to slender bodies without symmetry and to some three-dimensional wing problems where two-dimensional flow can be used as the basic flow. Both problems were previously unsolved in the nonlinear-flow approximation.
Application of Modified Flower Pollination Algorithm on Mechanical Engineering Design Problem
NASA Astrophysics Data System (ADS)
Kok Meng, Ong; Pauline, Ong; Chee Kiong, Sia; Wahab, Hanani Abdul; Jafferi, Noormaziah
2017-01-01
The aim of optimization is to obtain the best solution among all candidate solutions without evaluating every possible one. In this study, an improved flower pollination algorithm, namely the Modified Flower Pollination Algorithm (MFPA), is developed. Incorporating elements of chaos theory, frog-leaping local search, and adaptive inertia weight, the performance of MFPA is evaluated by optimizing five benchmark mechanical engineering design problems: tubular column design, speed reducer, gear train, tension/compression spring design, and pressure vessel. The obtained results are listed and compared with those of other state-of-the-art algorithms. The assessment shows that MFPA gives promising results in finding the optimal design for all considered mechanical engineering problems.
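For readers unfamiliar with the baseline algorithm that MFPA modifies, the following is a minimal sketch of the standard flower pollination algorithm: global pollination moves a flower toward the current best via a Lévy flight, local pollination moves it between two random flowers. It deliberately omits the chaos, frog-leaping, and adaptive-inertia-weight elements of MFPA, and all names and parameter values are illustrative.

```python
import math
import random

def levy_step(lam=1.5):
    """Draw one Levy-distributed step of index lam (Mantegna's algorithm)."""
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / lam)

def fpa(f, lo, hi, dim, n_flowers=20, iters=200, p_switch=0.8, seed=1):
    """Baseline flower pollination algorithm (minimization), greedy acceptance."""
    random.seed(seed)
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_flowers)]
    best = min(pop, key=f)[:]
    for _ in range(iters):
        for i in range(n_flowers):
            if random.random() < p_switch:
                # global pollination: Levy flight toward the current best flower
                step = levy_step()
                cand = [x + step * (g - x) for x, g in zip(pop[i], best)]
            else:
                # local pollination: random walk between two other flowers
                j, k = random.sample(range(n_flowers), 2)
                eps = random.random()
                cand = [x + eps * (a - b) for x, a, b in zip(pop[i], pop[j], pop[k])]
            cand = [min(max(c, lo), hi) for c in cand]  # clamp to the search box
            if f(cand) < f(pop[i]):  # greedy replacement
                pop[i] = cand
                if f(cand) < f(best):
                    best = cand[:]
    return best, f(best)
```

On a smooth test function such as the 2-D sphere, this baseline typically converges close to the optimum within a few thousand evaluations; the MFPA elements in the paper target exactly this balance between exploration and exploitation.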
History-Dependent Problems with Applications to Contact Models for Elastic Beams
Bartosz, Krzysztof; Kalita, Piotr; Migórski, Stanisław; Ochal, Anna; Sofonea, Mircea
2016-02-15
We prove an existence and uniqueness result for a class of subdifferential inclusions which involve a history-dependent operator. Then we specialize this result in the study of a class of history-dependent hemivariational inequalities. Problems of such kind arise in a large number of mathematical models which describe quasistatic processes of contact. To provide an example we consider an elastic beam in contact with a reactive obstacle. The contact is modeled with a new and nonstandard condition which involves both the subdifferential of a nonconvex and nonsmooth function and a Volterra-type integral term. We derive a variational formulation of the problem which is in the form of a history-dependent hemivariational inequality for the displacement field. Then, we use our abstract result to prove its unique weak solvability. Finally, we consider a numerical approximation of the model, solve effectively the approximate problems and provide numerical simulations.
Boundary value problem for the solution of magnetic cutoff rigidities and some special applications
NASA Technical Reports Server (NTRS)
Edmonds, Larry
1987-01-01
Since a planet's magnetic field can sometimes provide a spacecraft with some protection against cosmic ray and solar flare particles, it is important to be able to quantify this protection. This is done by calculating cutoff rigidities. An alternative to the conventional method (particle trajectory tracing) is introduced: the problem is treated as a boundary value problem, in which trajectory tracing is needed only to supply boundary conditions. In some special cases, trajectory tracing is not needed at all because the problem can be solved analytically. A differential equation governing cutoff rigidities is derived for static magnetic fields. The presence of solid objects, which can block a trajectory, and other force fields are not included. A few qualitative comments on existence and uniqueness of solutions are made, which may be useful when deciding how the boundary conditions should be set up. Also included are topics on axially symmetric fields.
NASA Astrophysics Data System (ADS)
Gazzola, Mattia; Chatelain, Philippe; Koumoutsakos, Petros
2010-11-01
We present a vortex particle-mesh method for fluid-structure interaction problems. The proposed methodology combines implicit interface capturing, Brinkmann penalization techniques, and the self-consistent computation of momentum transfer between the fluid and the structure. In addition, our scheme is able to handle immersed bodies characterized by non-solenoidal deformations, allowing the study of arbitrary deforming geometries. This attractively simple algorithm is shown to accurately reproduce reference simulations for rigid and deforming structures. Its suitability for biological locomotion problems is then demonstrated with the simulation of self-propelled anguilliform swimmers.
Ant Colony Optimization with Memory and Its Application to Traveling Salesman Problem
NASA Astrophysics Data System (ADS)
Wang, Rong-Long; Zhao, Li-Qing; Zhou, Xiao-Fan
Ant Colony Optimization (ACO) is one of the most recent techniques for solving combinatorial optimization problems, and it has been unexpectedly successful. Consequently, many improvements have been proposed to enhance the performance of the ACO algorithm. In this paper an ant colony optimization with memory is proposed and applied to the classical traveling salesman problem (TSP). In the proposed algorithm, each ant constructs its solution not only according to the pheromone and heuristic information but also based on a memory derived from the solutions of the previous iteration. A large number of simulation runs are performed, and the results illustrate that the proposed algorithm performs better than the compared algorithms.
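The transition rule described above (pheromone and heuristic information plus a memory term) can be sketched as follows. The specific memory bias used here, which simply up-weights edges that appeared in the previous iteration's best tour, is an illustrative stand-in for the authors' scheme, and all parameter values are assumptions.

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_tour(dist, tau, alpha=1.0, beta=2.0, memory=frozenset(), mem_weight=0.3):
    """Build one tour; edges remembered from the previous best tour get a bonus."""
    n = len(dist)
    start = random.randrange(n)
    tour, visited = [start], {start}
    while len(tour) < n:
        i = tour[-1]
        cands = [j for j in range(n) if j not in visited]
        weights = []
        for j in cands:
            w = (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
            if (i, j) in memory:  # memory bias (illustrative stand-in)
                w *= 1.0 + mem_weight
            weights.append(w)
        j = random.choices(cands, weights=weights)[0]
        tour.append(j)
        visited.add(j)
    return tour

def aco_tsp(dist, n_ants=10, iters=50, rho=0.5, seed=0):
    random.seed(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]
    best_tour, best_len, memory = None, float("inf"), frozenset()
    for _ in range(iters):
        tours = [ant_tour(dist, tau, memory=memory) for _ in range(n_ants)]
        iter_best = min(tours, key=lambda t: tour_length(t, dist))
        L = tour_length(iter_best, dist)
        for i in range(n):  # evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        edges = {(iter_best[k], iter_best[(k + 1) % n]) for k in range(n)}
        for a, b in edges:  # deposit pheromone along the iteration-best tour
            tau[a][b] += 1.0 / L
            tau[b][a] += 1.0 / L
        memory = frozenset(edges | {(b, a) for a, b in edges})
        if L < best_len:
            best_tour, best_len = iter_best, L
    return best_tour, best_len
```

On a small symmetric instance (e.g., cities on a circle) this sketch reliably recovers the optimal tour; the memory set carries the previous iteration's best edges into the next iteration's construction step, as the abstract describes.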
The Application of Imperialist Competitive Algorithm for Fuzzy Random Portfolio Selection Problem
NASA Astrophysics Data System (ADS)
EhsanHesamSadati, Mir; Bagherzadeh Mohasefi, Jamshid
2013-10-01
This paper presents an implementation of the Imperialist Competitive Algorithm (ICA) for solving the fuzzy random portfolio selection problem, where the asset returns are represented by fuzzy random variables. Portfolio optimization is an important research field in modern finance. Using the necessity-based model, the problem with fuzzy random variables is reformulated as a linear program, and ICA is designed to find the optimal solution. To show the efficiency of the proposed method, a numerical example illustrates the whole idea of implementing ICA for the fuzzy random portfolio selection problem.
Chebyshev polynomials in the spectral Tau method and applications to Eigenvalue problems
NASA Technical Reports Server (NTRS)
Johnson, Duane
1996-01-01
Chebyshev spectral methods have received much attention recently as a technique for the rapid solution of ordinary differential equations. The technique also works well for solving linear eigenvalue problems. Specific detail is given to the properties and algebra of Chebyshev polynomials, the use of Chebyshev polynomials in spectral methods, and the recurrence relationships that are developed. These formulas and equations are then applied to several examples, which are worked out in detail. The appendix contains an example FORTRAN program used in solving an eigenvalue problem.
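The central recurrence relationship for Chebyshev polynomials, T_0(x) = 1, T_1(x) = x, T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), can be evaluated directly. The following sketch (in Python rather than the report's FORTRAN) does so; on [-1, 1] it can be checked against the defining identity T_n(x) = cos(n arccos x).

```python
def chebyshev_T(n, x):
    """Evaluate the Chebyshev polynomial T_n(x) by the three-term recurrence
    T_0(x) = 1, T_1(x) = x, T_{k+1}(x) = 2*x*T_k(x) - T_{k-1}(x)."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t
```

The same recurrence underlies both the evaluation of Chebyshev expansions and the assembly of the operator matrices used in the spectral Tau method.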
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, J. L.
1989-01-01
Problems associated with the sequential approach to multidisciplinary design are discussed. A blackboard model is suggested as a potential tool for implementing the multilevel decomposition approach to overcome these problems. The blackboard model serves as a global database for the solution with each discipline acting as a knowledge source for updating the solution. With this approach, it is possible for engineers to improve the coordination, communication, and cooperation in the conceptual design process, allowing them to achieve a more optimal design from an interdisciplinary standpoint.
Application of a novel finite difference method to dynamic crack problems
NASA Technical Reports Server (NTRS)
Chen, Y. M.; Wilkins, M. L.
1976-01-01
A versatile finite difference method (HEMP and HEMP 3D computer programs) was developed originally for solving dynamic problems in continuum mechanics. It was extended to analyze the stress field around cracks in a solid with finite geometry subjected to dynamic loads and to simulate numerically the dynamic fracture phenomena with success. This method is an explicit finite difference method applied to the Lagrangian formulation of the equations of continuum mechanics in two and three space dimensions and time. The calculational grid moves with the material and in this way it gives a more detailed description of the physics of the problem than the Eulerian formulation.
Behavioral gerontology: application of behavioral methods to the problems of older adults.
Burgio, L D; Burgio, K L
1986-01-01
Elderly persons are under-represented in research and clinical applied behavior analysis, in spite of data suggesting that behavior problems are quite prevalent in both community dwelling and institutionalized elderly. Preliminary investigations suggest that behavioral procedures can be used effectively in treating various geriatric behavior problems. We discuss a number of areas within behavioral gerontology that would profit from additional research, including basic field study, self-management, community caregiver training, institutional staff training and management, and geriatric behavioral pharmacology. Special considerations for adapting behavioral procedures are discussed, and suggestions for expanding the role of behavior analysis in geriatric care are offered. PMID:3804865
NASA Technical Reports Server (NTRS)
Britcher, Colin P.
1997-01-01
This paper will briefly review previous work in wind tunnel Magnetic Suspension and Balance Systems (MSBS) and will examine the handful of systems around the world currently known to be in operational condition or undergoing recommissioning. Technical developments emerging from research programs at NASA and elsewhere will be reviewed briefly, where there is potential impact on large-scale MSBSs. The likely aerodynamic applications for large MSBSs will be addressed, since these applications should properly drive system designs. A recently proposed application to ultra-high Reynolds number testing will then be addressed in some detail. Finally, some opinions on the technical feasibility and usefulness of a large MSBS will be given.
NASA Astrophysics Data System (ADS)
Long, Kim Chenming
Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this
An Application of the Patient-Oriented Problem-Solving (POPS) System.
ERIC Educational Resources Information Center
Chiodo, Gary T.; And Others
1991-01-01
The Patient-Oriented Problem-Solving System, a cooperative learning model, was implemented in a second year immunology course at the Oregon Health Sciences University School of Dentistry, to correlate basic and clinical sciences information about Acquired Immune Deficiency Syndrome. Student enthusiasm and learning were substantial. (MSE)
Application of Lead Field Theory and Computerized Thorax Modeling for the ECG Inverse Problem
2007-11-02
Takano, P. Laarne, J. Malmivuo; Ragnar Granit Institute, Tampere University of Technology, Finland. Only fragments of the abstract survive extraction: the ECG inverse problem is addressed via lead field theory and computerized thorax modeling, with attention to the computational load of calculating ECG inverse solutions. The work was supported by the Ragnar Granit Foundation.
Application of fuzzy theories to formulation of multi-objective design problems. [for helicopters
NASA Technical Reports Server (NTRS)
Dhingra, A. K.; Rao, S. S.; Miura, H.
1988-01-01
Much of the decision making in the real world takes place in an environment in which the goals, the constraints, and the consequences of possible actions are not known precisely. To deal with this imprecision quantitatively, the tools of fuzzy set theory can be used. This paper demonstrates the effectiveness of fuzzy theories in the formulation and solution of two types of helicopter design problems involving multiple objectives. The first problem deals with the determination of optimal flight parameters to accomplish a specified mission in the presence of three competing objectives. The second problem addresses the optimal design of the main rotor of a helicopter involving eight objective functions. A method for solving these multi-objective problems using nonlinear programming techniques is presented. Results obtained using the fuzzy formulation are compared with those obtained using crisp optimization techniques. The outlined procedures are expected to be useful in situations where doubt arises about the exactness of permissible values, degree of credibility, and correctness of statements and judgements.
Problem analysis: application in the development of market strategies for health care organizations.
Martin, J
1988-03-01
The problem analysis technique is an approach to understanding salient customer needs that is especially appropriate under complex market conditions. The author demonstrates the use of the approach in segmenting markets and conducting competitive analysis for positioning strategy decisions in health care.
Some Theoretical Aspects of Nonzero Sum Differential Games and Applications to Combat Problems
1971-06-01
Only fragments of the abstract survive extraction: the work draws on Isaacs' Homicidal Chauffeur game and the Game of Two Cars [17]; an equation characterizing the dispersal surface for player i in terms of the Hamiltonian H_i is given [27, 28]; and intermediate arcs of the kind arising in Isaacs' homicidal chauffeur problem [17] seem to occur when the players are initially inside each other's turning ...
2012-02-09
Only fragments of the abstract survive extraction: the work covers several optimization models and algorithm design for problems from computer vision and learning, and research on sparse solutions in quadratic optimization. Results appeared in papers including [9] L. Mukherjee, V. Singh, J. Peng and C. Hinrichs, "Learning kernels for variants of normalized cuts: Convex relaxations and ..."; reported bounds (e.g., for the adjacency matrix, Table 1) leave very small gaps compared to state-of-the-art results in communications.
1995-05-01
Only fragments of the abstract survive extraction: over the years, multigrid has been demonstrated as an efficient technique for solving inviscid flow problems; for viscous flows, however, convergence ... capture the boundary layer near the body; usual techniques for generating a sequence of grids that produce proper convergence rates on isotropic ... formulation on unstructured stretched triangular meshes. A coarsening strategy is proposed and results are discussed.
Learner Perspectives of Online Problem-Based Learning and Applications from Cognitive Load Theory
ERIC Educational Resources Information Center
Chen, Ruth
2016-01-01
Problem-based learning (PBL) courses have historically been situated in physical classrooms involving in-person interactions. As online learning is embraced in higher education, programs that use PBL can integrate online platforms to support curriculum delivery and facilitate student engagement. This report describes student perspectives of the…
Computer-Based Assessment of Complex Problem Solving: Concept, Implementation, and Application
ERIC Educational Resources Information Center
Greiff, Samuel; Wustenberg, Sascha; Holt, Daniel V.; Goldhammer, Frank; Funke, Joachim
2013-01-01
Complex Problem Solving (CPS) skills are essential to successfully deal with environments that change dynamically and involve a large number of interconnected and partially unknown causal influences. The increasing importance of such skills in the 21st century requires appropriate assessment and intervention methods, which in turn rely on adequate…
ERIC Educational Resources Information Center
Lawrence, Virginia
No longer just a user of commercial software, the 21st century teacher is a designer of interactive software based on theories of learning. This software, a comprehensive study of straightline equations, enhances conceptual understanding, sketching, graphic interpretive and word problem solving skills as well as making connections to real-life and…
NASA Technical Reports Server (NTRS)
Eisner, M. (Editor)
1974-01-01
The possible utilization of the zero gravity resource for studies in a variety of fluid dynamics and fluid-dynamic related problems was investigated. A group of experiments are discussed and described in detail; these include experiments in the areas of geophysical fluid models, fluid dynamics, mass transfer processes, electrokinetic separation of large particles, and biophysical and physiological areas.
Solving the Maximum Clique Problem on a Class of Network Graphs, With Application to Social Networks
2008-06-01
Only fragments of the abstract survive extraction, including a Sherlock Holmes epigraph ("Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth") and the note that the pruning algorithm is based on the "clique program" developed by Bell [6].
ERIC Educational Resources Information Center
Borrero, Carrie S. W.; Vollmer, Timothy R.; Borrero, John C.; Bourret, Jason C.; Sloman, Kimberly N.; Samaha, Andrew L.; Dallery, Jesse
2010-01-01
This study evaluated how children who exhibited functionally equivalent problem and appropriate behavior allocate responding to experimentally arranged reinforcer rates. Relative reinforcer rates were arranged on concurrent variable-interval schedules and effects on relative response rates were interpreted using the generalized matching equation.…
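The generalized matching equation referenced above is commonly written log(B1/B2) = a·log(R1/R2) + log b, where B1, B2 are response rates on the two alternatives, R1, R2 the reinforcer rates, a the sensitivity, and b the bias. A minimal least-squares fit of a and b might look like the sketch below; the function and variable names are illustrative, not from the study.

```python
import math

def fit_generalized_matching(resp_pairs, reinf_pairs):
    """Least-squares fit of the generalized matching law
    log10(B1/B2) = a * log10(R1/R2) + log10(b).
    resp_pairs:  [(B1, B2), ...] response rates on the two alternatives
    reinf_pairs: [(R1, R2), ...] matched reinforcer rates
    Returns (a, b): sensitivity a and bias b."""
    xs = [math.log10(r1 / r2) for r1, r2 in reinf_pairs]
    ys = [math.log10(b1 / b2) for b1, b2 in resp_pairs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx  # slope = sensitivity
    b = 10.0 ** (my - a * mx)  # intercept back-transformed = bias
    return a, b
```

With a = 1 and b = 1 the fit reduces to strict matching (relative response rate equals relative reinforcer rate), the baseline against which allocation of problem and appropriate behavior is interpreted.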
Application Of Flash X-Ray Radiography To Problems In The Pulp And Paper Industry
NASA Astrophysics Data System (ADS)
Farrington, Theodore E.
1988-02-01
The use of flash x-ray radiography to investigate high speed multiphase flows is demonstrated. Both fundamental and practical problems of interest to the pulp and paper industry are used as examples. More specifically, studies of concentrated fiber suspension flows, kraft black liquor sprays and impulse drying are discussed.
Application of DOT-MORSE coupling to the analysis of three-dimensional SNAP shielding problems
NASA Technical Reports Server (NTRS)
Straker, E. A.; Childs, R. L.; Emmett, M. B.
1972-01-01
The use of discrete ordinates and Monte Carlo techniques to solve radiation transport problems is discussed. A general discussion of two possible coupling schemes is given for the two methods. The calculation of the reactor radiation scattered from a docked service and command module is used as an example of coupling discrete ordinates (DOT) and Monte Carlo (MORSE) calculations.
Applications of Taylor-Galerkin finite element method to compressible internal flow problems
NASA Technical Reports Server (NTRS)
Sohn, Jeong L.; Kim, Yongmo; Chung, T. J.
1989-01-01
A two-step Taylor-Galerkin finite element method with Lapidus' artificial viscosity scheme is applied to several test cases for internal compressible inviscid flow problems. Investigations for the effect of supersonic/subsonic inlet and outlet boundary conditions on computational results are particularly emphasized.
Allied Health Applications Integrated into Developmental Mathematics Using Problem Based Learning
ERIC Educational Resources Information Center
Shore, Mark; Shore, JoAnna; Boggs, Stacey
2004-01-01
For this FIPSE funded project, mathematics faculty attended allied health classes and allied health faculty attended developmental mathematics courses to incorporate health examples into the developmental mathematics curriculum. Through the course of this grant a 450-page developmental mathematics book was written with many problems from a variety…
For QSAR and QSPR modeling of biological and physicochemical properties, estimating the accuracy of predictions is a critical problem. The “distance to model” (DM) can be defined as a metric that defines the similarity between the training set molecules and the test set compound ...
Large Context Problems and Their Applications to Education: Some Contemporary Examples
ERIC Educational Resources Information Center
Winchester, Ian
2006-01-01
Some 35 years ago, Gerard K. O'Neill used the large context of space travel with his undergraduate physics students. A Canadian physics teacher, Art Stinner, independently arrived at a similar notion in a more limited but, therefore, more generally useful sense, which he referred to as the "large context problem" approach. At a slightly earlier…
Applications of Elliptic Integral and Elliptic Function to Electric Power Cable Problems
NASA Astrophysics Data System (ADS)
Watanabe, Kazuo
The paper proposes an application of elliptic functions to a new method of measuring the electric resistivity of the outer semiconductive layer of XLPE cable. The new method may substitute for the conventional one. The resistivity can be obtained easily by measuring the resistance between two electrodes attached to a circumferential edge on one side of the outer semiconductive layer of a cable core sample. The solution process is applicable to heat conduction as well as hydromechanics.
An Application of the Difference Potentials Method to Solving External Problems in CFD
NASA Technical Reports Server (NTRS)
Ryaben 'Kii, Victor S.; Tsynkov, Semyon V.
1997-01-01
Numerical solution of infinite-domain boundary-value problems requires some special techniques that make the problem tractable on a computer. Indeed, the problem must be discretized in such a way that the computer operates with only a finite amount of information. Therefore, the original infinite-domain formulation must be altered and/or augmented so that, on one hand, the solution is not changed (or changed only slightly) and, on the other hand, the finite discrete formulation becomes available. One widely used approach to constructing such discretizations consists of truncating the unbounded original domain and then setting artificial boundary conditions (ABC's) at the newly formed external boundary. The role of the ABC's is to close the truncated problem and at the same time to ensure that the solution found inside the finite computational domain is maximally close to (in the ideal case, exactly the same as) the corresponding fragment of the original infinite-domain solution. Let us emphasize that the proper treatment of artificial boundaries may have a profound impact on the overall quality and performance of numerical algorithms. The latter statement is corroborated by numerous computational experiments and especially concerns the area of CFD, in which external problems present a wide class of practically important formulations. In this paper, we review some work that has been done over recent years on constructing highly accurate nonlocal ABC's for the calculation of compressible external flows. The approach is based on implementation of the generalized potentials and pseudodifferential boundary projection operators analogous to those first proposed by Calderon. The difference potentials method (DPM) of Ryaben'kii is used for the effective computation of the generalized potentials and projections. The resulting ABC's clearly outperform the existing methods from the standpoints of accuracy and robustness, and in many cases noticeably speed up
ERIC Educational Resources Information Center
Lancaster, F. Wilfrid, Ed.
In planning this ninth annual clinic an attempt was made to include papers on a wide range of library applications of on-line computers, as well as to include libraries of various types and various sizes. Two papers deal with on-line circulation control (the Ohio State University system, described by Hugh C. Atkinson, and the Northwestern…
NASA Technical Reports Server (NTRS)
Arya, V. K.
1990-01-01
The viability of advanced viscoplastic models for nonlinear finite element analyses of structural components is investigated. Several uniaxial and a multiaxial problem are analyzed using the finite element implementation of Freed's viscoplastic model. Good agreement between the experimental and calculated uniaxial results validates the finite element implementation and gives confidence to apply it to more complex multiaxial problems. A comparison of results for a sample structural component (the cowl lip of a hypersonic engine inlet) with the earlier elastic, elastic-plastic, and elastic-plastic-creep analyses available in the literature shows that the elastic-viscoplastic analyses yield more reasonable stress and strain distributions. Finally, the versatility of the finite-element-based solution technology presented herein is demonstrated by applying it to another viscoplastic model.
King, S.R.; Cotney, C.R.
1996-09-01
Paraffin- and asphaltene-related problems, including solid deposits, stabilization of emulsions, and sludge production, continue to plague the oil and gas industry. Condensates and refined aromatic solvents are popular treatments for dissolving and/or controlling paraffin- and asphaltene-related problems. These treating fluids are typically used as a quick fix with little regard for formation damage consequences and long-term effectiveness. Testing, blending, and refining of unique condensate feedstocks has resulted in unique natural multi-component hydrocarbon solvents. These solvents will dissolve a broad carbon-number spectrum of organic deposits and keep them in solution under extreme conditions. They maximize solvency, demulsifying properties, and natural wettability tendencies without the addition of chemical additives, and they offer economic alternatives and enhancements to common treatment practices, including condensate treatments, hot oiling, chemical treatments, stimulation, and production treatments.
Ant Colony Optimization with Genetic Operation and Its Application to Traveling Salesman Problem
NASA Astrophysics Data System (ADS)
Wang, Rong-Long; Zhou, Xiao-Fan; Okazaki, Kozo
Ant colony optimization (ACO) algorithms are a recently developed, population-based approach which has been successfully applied to optimization problems. However, in ACO algorithms it is difficult to adjust the balance between intensification and diversification, so performance is not always good. In this work, we propose an improved ACO algorithm in which some of the ants can evolve by performing a genetic operation, and the balance between intensification and diversification can be adjusted through the number of ants that perform the genetic operation. The proposed algorithm is tested on the Traveling Salesman Problem (TSP). Experimental studies show that the proposed ACO algorithm with genetic operation has superior performance compared to other existing ACO algorithms.
Application of the Lambert W function in mathematical problems of plasma physics
NASA Astrophysics Data System (ADS)
Dubinova, I. D.
2004-10-01
Examples of solutions to transcendental equations that arise in mathematical problems of plasma physics are considered. Earlier, such equations were solved only by approximate methods. The use of a new function—the Lambert W function—has made it possible to obtain explicit exact solutions that can help to refine the existing relevant theories. As examples, the following problems from different branches of plasma physics are considered: the equilibrium charge of a dust grain in a plasma, the structure of the Bohm sheath, the diameter of the separatrix in a Galathea-Belt system, the transverse structure of an electron beam in a plasma, the energy loss rate of a test charged particle in a plasma, and the structure of the Sagdeev pseudopotential for ion acoustic waves.
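As an illustration of how the Lambert W function turns a transcendental equation into an explicit solution, the equation x e^x = a has the solution x = W(a). The following sketch computes the principal branch for a > 0 by Newton iteration and applies it to x = e^(-x), i.e., x = W(1). This is a generic numerical sketch, not a reproduction of the paper's plasma-physics applications.

```python
import math

def lambert_w(a, tol=1e-12, max_iter=100):
    """Principal-branch W(a) for a > 0: solve w*exp(w) = a by Newton's method.
    For a > 0 the principal branch satisfies w > 0, so w + 1 never vanishes."""
    w = math.log1p(a)  # decent starting guess for a > 0
    for _ in range(max_iter):
        ew = math.exp(w)
        w_new = w - (w * ew - a) / (ew * (w + 1.0))
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

# Example: the fixed point of x = exp(-x) satisfies x*exp(x) = 1, so x = W(1).
x_fixed = lambert_w(1.0)
```

In practice a library routine (e.g., SciPy's `scipy.special.lambertw`) would be used; the hand-rolled Newton loop above only makes the underlying definition w e^w = a concrete.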
NASA Astrophysics Data System (ADS)
Liang, Li-Fu; Liu, Zong-Min; Guo, Qing-Yong
2009-03-01
The fluid-solid coupling theory, an interdisciplinary science between hydrodynamics and solid mechanics, is an important tool for response analysis and direct design of structures in naval architecture and ocean engineering. By applying the corresponding relations between generalized forces and generalized displacements, convolutions were performed between the basic equations of elasto-dynamics in the primary space and corresponding virtual quantities. The results were integrated and then added algebraically. In light of the fact that body forces and surface forces are both follower forces, the generalized quasi-complementary energy principle with two kinds of variables for an initial value problem is established in non-conservative systems. Using the generalized quasi-complementary energy principle to deal with the fluid-solid coupling problem and to analyze the dynamic response of structures, a method for using two kinds of variables simultaneously for calculation of force and displacement was derived.
NASA Technical Reports Server (NTRS)
Cunningham, A. M., Jr.
1973-01-01
A study was conducted to investigate the feasibility of using combined subsonic and supersonic linear theory as a means for solving unsteady transonic flow problems in an economical and yet realistic manner. With some modification, existing linear theory methods are combined into a single program and a simple algorithm is derived for determining interference between lifting surface elements of different Mach number. The method is applied to a wide variety of problems for which measured unsteady pressure distributions and Mach number distributions are available. Comparison of theory and experiment shows that the transonic method solutions give a significant improvement over uniform flow solutions. It is concluded that with these refinements the method will provide a means for performing realistic transonic flutter and dynamic response analyses at costs which are compatible with current linear theory based solutions.
He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi
2015-11-01
A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. Two position-updating strategies are designed for the foraging behavior, and the selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results on four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP.
A Vector Study of Linearized Supersonic Flow Applications to Nonplanar Problems
NASA Technical Reports Server (NTRS)
Martin, John C
1953-01-01
A vector study of the partial-differential equation of steady linearized supersonic flow is presented. General expressions which relate the velocity potential in the stream to the conditions on the disturbing surfaces are derived. In connection with these general expressions the concept of the finite part of an integral is discussed. A discussion of problems dealing with planar bodies is given and the conditions for the solution to be unique are investigated. Problems concerning nonplanar systems are investigated, and methods are derived for the solution of some simple nonplanar bodies. The surface pressure distribution and the damping in roll are found for rolling tails consisting of four, six, and eight rectangular fins for the Mach number range where the region of interference between adjacent fins does not affect the fin tips.
Kim, Won Hwa; Chung, Moo K; Singh, Vikas
2013-01-01
The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis to derive Non-Euclidean Wavelet-based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
Application of a Class of Nonstationary Iterative Methods to Flow Problems
NASA Astrophysics Data System (ADS)
Lei, Xiuren; Peng, Hong
Convergence of a certain class of nonstationary iterative methods applied to the numerical solution of algebraic linear systems arising in flow problems is studied. The iteration matrix of these methods can be expressed as a constant matrix plus a variable matrix tending to zero. Convergence conclusions based on the matrix spectrum are given and applied to a class of semi-iterative methods. Keywords: algebraic linear system, iterative method, convergence, matrix spectrum
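The convergence condition described in the abstract above can be checked numerically on a toy system: take a constant iteration matrix M with spectral radius below one, perturb it at each step by a variable matrix that tends to zero, and watch the iterates approach the fixed point of x = Mx + c. The dimensions, decay rate, and seed below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)
rho = np.max(np.abs(np.linalg.eigvals(M)))
M /= max(1.0, rho / 0.6)                      # enforce spectral radius <= 0.6
c = rng.standard_normal(n)
x_true = np.linalg.solve(np.eye(n) - M, c)    # fixed point of x = M x + c

x = np.zeros(n)
for k in range(1, 201):
    E_k = rng.standard_normal((n, n)) * 0.5**k    # variable part tending to zero
    x = (M + E_k) @ x + c                          # nonstationary iteration
print(np.linalg.norm(x - x_true))
```

Because the perturbations E_k vanish, the asymptotic behavior is governed by the spectrum of the constant part M alone, which is the essence of the convergence result the abstract describes.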
Theory of two-index Bessel functions and applications to physical problems
NASA Astrophysics Data System (ADS)
Dattoli, G.; Lorenzutta, S.; Maino, G.; Torre, A.; Voykov, G.; Chiccoli, C.
1994-07-01
In this article the theory of two-index Bessel functions is presented. Their generating function, series expansion, and integral representations are discussed. Their usefulness in physical problems is also discussed in the context of analysis of radiation emitted by relativistic electrons in two-frequency undulators. Finally, the theoretical analysis proving addition and multiplication theorems for two-index Bessel functions is completed and their modified forms are introduced.
Romero, V.J.; Bankston, S.D.
1998-03-01
Optimal response surface construction is being investigated as part of Sandia discretionary (LDRD) research into Analytic Nondeterministic Methods. The goal is to achieve an adequate representation of system behavior over the relevant parameter space of a problem with a minimum of computational and user effort. This is important in global optimization and in estimation of system probabilistic response, which are both made more viable by replacing large complex computer models with fast-running accurate and noiseless approximations. A Finite Element/Lattice Sampling (FE/LS) methodology for constructing progressively refined finite element response surfaces that reuse previous generations of samples is described here. Similar finite element implementations can be extended to N-dimensional problems and/or random fields and applied to other types of structured sampling paradigms, such as classical experimental design and Gauss, Lobatto, and Patterson sampling. Here the FE/LS model is applied in a "decoupled" Monte Carlo analysis of two sets of probability quantification test problems. The analytic test problems, spanning a large range of probabilities and very demanding failure region geometries, constitute a good testbed for comparing the performance of various nondeterministic analysis methods. In results here, FE/LS decoupled Monte Carlo analysis required orders of magnitude less computer time than direct Monte Carlo analysis, with no appreciable loss of accuracy. Thus, when arriving at probabilities or distributions by Monte Carlo, it appears to be more efficient to expend computer-model function evaluations on building a FE/LS response surface than to expend them in direct Monte Carlo sampling.
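The "decoupled" idea in the abstract above (spend the expensive model runs on building a response surface, then do all Monte Carlo sampling on the cheap surrogate) can be sketched in a few lines. The limit-state function, lattice layout, and global quadratic surrogate below are stand-ins chosen for illustration; the report's actual FE/LS surfaces are piecewise finite element fits, not a single polynomial:

```python
import numpy as np

def expensive_model(x, y):
    # stand-in for a costly simulation; "failure" when g(x, y) < 0
    return 3.0 - x**2 - y

# 1) structured lattice sampling of the parameter space (a stand-in for FE/LS nodes)
grid = np.linspace(-4.0, 4.0, 9)
X, Y = np.meshgrid(grid, grid)
pts = np.column_stack([X.ravel(), Y.ravel()])
g = expensive_model(pts[:, 0], pts[:, 1])

# 2) fit a cheap quadratic response surface by least squares
def basis(p):
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

coef, *_ = np.linalg.lstsq(basis(pts), g, rcond=None)
surrogate = lambda p: basis(p) @ coef

# 3) "decoupled" Monte Carlo: sample only the surrogate, never the expensive model
rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=(200_000, 2))
p_fail_surr = np.mean(surrogate(samples) < 0.0)
p_fail_direct = np.mean(expensive_model(samples[:, 0], samples[:, 1]) < 0.0)
print(p_fail_surr, p_fail_direct)
```

Here the toy model happens to lie in the span of the quadratic basis, so the surrogate and direct failure probabilities coincide; in practice the surrogate is refined until the discrepancy is acceptable.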
Applications of FEM and BEM in two-dimensional fracture mechanics problems
NASA Technical Reports Server (NTRS)
Min, J. B.; Steeve, B. E.; Swanson, G. R.
1992-01-01
A comparison of the finite element method (FEM) and boundary element method (BEM) for the solution of two-dimensional plane strain problems in fracture mechanics is presented in this paper. Stress intensity factors (SIFs) were calculated using both methods for elastic plates with either a single-edge crack or an inclined-edge crack. In particular, two currently available programs, ANSYS for finite element analysis and BEASY for boundary element analysis, were used.
Solving Two-Level Optimization Problems with Applications to Robust Design and Energy Markets
2011-01-01
made up of only robust points; otherwise the term is ill-defined. There is also a global counterpart, as defined below. Definition 2.6 (Globally optimal robust): For a robust optimization problem, a globally optimal robust solution x* is a robust point such that x* is optimal, i.e., f(x*) <= f(x) for every robust point x. ... Since this dissertation's approach is based on gradient-based methods, a globally optimal robust solution can never be guaranteed for the complete
The application of cost averaging techniques to robust control of the benchmark problem
NASA Technical Reports Server (NTRS)
Hagood, Nesbitt W.; Crawley, Edward F.
1991-01-01
A method is presented for the synthesis of robust controllers for linear time invariant systems with parameterized uncertainty structures. The method involves minimizing the average quadratic (H2) cost over the parameterized system. Bounded average cost implies stability over the set of systems. The average cost functional is minimized to derive robust fixed-order dynamic compensators. The robustness properties of these controllers are demonstrated on the sample problem.
2014-09-09
computerized tomography, synthetic aperture radar, geophysical prospecting and nondestructive testing. Since the solution of any inverse problem is ... domain in order to handle limited aperture data. The main accomplishments during the period of this report were: 1. The derivation of new methods ... in nondestructive testing using the theory of transmission eigenvalues. 2. The introduction and investigation of a new class of inverse scattering
NASA Astrophysics Data System (ADS)
Franck, I. M.; Koutsourelakis, P. S.
2017-01-01
This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography where the identification of the mechanical properties of biological materials can inform non-invasive, medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.
The incomplete inverse and its applications to the linear least squares problem
NASA Technical Reports Server (NTRS)
Morduch, G. E.
1977-01-01
A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that occurs when the data residuals are too large and the available data are insufficient to justify augmenting the model.
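The claim above, that the augmented normal matrix carries every least-squares quantity, is easy to verify numerically: both the solution and the residual sum of squares can be read off from the blocks of N = [[AᵀA, Aᵀb], [bᵀA, bᵀb]]. The sketch below checks this on random data; it does not reproduce the paper's modified matrix product or the incomplete inverse itself:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)

# augmented normal matrix N = [[A^T A, A^T b], [b^T A, b^T b]]
N = np.block([[A.T @ A, (A.T @ b)[:, None]],
              [(b @ A)[None, :], np.array([[b @ b]])]])

x_hat = np.linalg.solve(N[:3, :3], N[:3, 3])   # normal-equation solution
rss = N[3, 3] - N[3, :3] @ x_hat               # residual sum of squares, from N alone
print(x_hat, rss)
```

The residual identity follows from the normal equations: since AᵀA x̂ = Aᵀb, the cross terms in ||b - Ax̂||² collapse to bᵀb - bᵀA x̂, which is exactly the bottom-row computation above.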
Tradeoffs in Process Strategy Games with Application in the WDM Reconfiguration Problem
NASA Astrophysics Data System (ADS)
Cohen, Nathann; Coudert, David; Mazauric, Dorian; Nepomuceno, Napoleão; Nisse, Nicolas
We consider a variant of the graph searching games that is closely related to the routing reconfiguration problem in WDM networks. In the digraph processing game, a team of agents is aiming at clearing, or processing, the vertices of a digraph D. In this game, two important measures arise: 1) the total number of agents used, and 2) the total number of vertices occupied by an agent during the processing of D. Previous works have studied the problem of minimizing each of these parameters independently. In particular, both of these optimization problems are not in APX. In this paper, we study the tradeoff between both these conflicting objectives. More precisely, we prove that there exist some instances for which minimizing one of these objectives arbitrarily impairs the quality of the solution for the other one. We show that such bad tradeoffs may happen even in the case of basic network topologies. On the other hand, we exhibit classes of instances where good tradeoffs can be achieved. We also show that minimizing one of these parameters while the other is constrained is not in APX.
NASA Astrophysics Data System (ADS)
Rogovtsov, Nikolai N.; Borovik, Felix
2016-11-01
A brief analysis of different properties and principles of invariance to solve a number of classical problems of the radiation transport theory is presented. The main ideas, constructions, and assertions used in the general invariance relations reduction method are described in outline. The most important distinctive features of this general method of solving a wide enough range of problems of the radiation transport theory and mathematical physics are listed. To illustrate the potential of this method, a number of problems of the scalar radiative transfer theory have been solved rigorously in the article. The main stages of rigorous derivations of asymptotical formulas for the smallest-in-modulus elements of the discrete spectrum and the eigenfunctions, corresponding to them, of the characteristic equation for the case of an arbitrary phase function and almost conservative scattering are described. Formulas of the same type for the azimuthal averaged reflection function, the plane and spherical albedos have been obtained rigorously. New analytical representations for the reflection function, the plane and spherical albedos have been obtained, and effective algorithms for calculating these values have been offered for the case of a practically arbitrary phase function satisfying the Hölder condition. A new analytical representation of the "surface" Green function of the scalar radiative transfer equation for a semi-infinite plane-parallel conservatively scattering medium has been found. The deep regime asymptotics of the "volume" Green function has been obtained for the case of a turbid medium of cylindrical form.
Application of polyethylene glycol hydrogel to overcome latex urinary catheter-related problems.
Sankar, Sriram; Rajalakshmi, T
2007-01-01
Urinary catheterization is a routine procedure in an intensive care unit (ICU) for monitoring the urine output of critically ill patients. The catheters which are most often used to help with urinary incontinence and retention also face problems like blockage, leakage and infection. These problems are due to proteins that adhere to the catheter surface and quickly build up on each other forming a protein layer. As the layers build up they can crystallize, providing the major source of blockage and leakage. Current strategies to avoid these problems include coating a catheter with silver alloy to reduce bacteria on the catheter surface. However, silver alloy coatings can lead to increased silver resistance for bacteria. Since silver is already used as an antibacterial agent in many places in a hospital, it is even more possible that resistance can develop. An alternative solution is presented involving coating latex, a common urinary catheter material with a micro layer (5-100 microns) of polyethylene glycol. This hydrogel is applied using an interfacial photopolymerization process with ethyl eosin as the photoinitiator. A 25 ppm concentration of ethyl eosin provided the strongest gel to surface adhesion and significantly lowered protein adhesion when compared to an uncoated latex substrate.
The geometry of discombinations and its applications to semi-inverse problems in anelasticity
Yavari, Arash; Goriely, Alain
2014-01-01
The geometrical formulation of continuum mechanics provides us with a powerful approach to understand and solve problems in anelasticity where an elastic deformation is combined with a non-elastic component arising from defects, thermal stresses, growth effects or other effects leading to residual stresses. The central idea is to assume that the material manifold, prescribing the reference configuration for a body, has an intrinsic, non-Euclidean, geometrical structure. Residual stresses then naturally arise when this configuration is mapped into Euclidean space. Here, we consider the problem of discombinations (a new term that we introduce in this paper), that is, a combined distribution of fields of dislocations, disclinations and point defects. Given a discombination, we compute the geometrical characteristics of the material manifold (curvature, torsion, non-metricity), its Cartan's moving frames and structural equations. This identification provides a powerful algorithm to solve semi-inverse problems with non-elastic components. As an example, we calculate the residual stress field of a cylindrically symmetric distribution of discombinations in an infinite circular cylindrical bar made of an incompressible hyperelastic isotropic elastic solid. PMID:25197257
NASA Astrophysics Data System (ADS)
Mena, Andres; Ferrero, Jose M.; Rodriguez Matas, Jose F.
2015-11-01
Solving the electric activity of the heart poses a big challenge, not only because of the structural complexities inherent to the heart tissue, but also because of the complex electric behaviour of the cardiac cells. The multi-scale nature of the electrophysiology problem makes its numerical solution difficult, requiring temporal and spatial resolutions of 0.1 ms and 0.2 mm respectively for accurate simulations, leading to models with millions of degrees of freedom that need to be solved for thousands of time steps. Solution of this problem requires the use of algorithms with a higher level of parallelism on multi-core platforms. In this regard the newer programmable graphic processing units (GPUs) have become a valid alternative due to their tremendous computational horsepower. This paper presents results obtained with a novel electrophysiology simulation software entirely developed in Compute Unified Device Architecture (CUDA). The software implements fully explicit and semi-implicit solvers for the monodomain model, using operator splitting. Performance is compared with classical multi-core MPI-based solvers operating on dedicated high-performance computer clusters. Results obtained with the GPU-based solver show enormous potential for this technology, with accelerations over 50× for three-dimensional problems.
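Operator splitting for a monodomain-style model, as used in the abstract above, separates the pointwise cell kinetics from the diffusive coupling at each time step. The 1-D sketch below uses FitzHugh-Nagumo kinetics as a stand-in for a cardiac cell model; all constants are illustrative, and the explicit Euler steps are far simpler than the paper's CUDA solvers:

```python
import numpy as np

# 1-D cable: dv/dt = D * d2v/dx2 + f(v, w),  dw/dt = g(v, w)
N, dx, dt, D = 200, 0.2, 0.02, 0.1
v = np.zeros(N)
w = np.zeros(N)
v[:10] = 1.0                          # suprathreshold stimulus at the left end
excited = np.zeros(N, dtype=bool)     # record which nodes the pulse has reached

for step in range(12000):
    # split step 1: pointwise membrane kinetics (explicit Euler)
    dv = v * (v - 0.1) * (1.0 - v) - w
    dw = 0.01 * (0.5 * v - w)
    v += dt * dv
    w += dt * dw
    # split step 2: diffusion along the fibre (explicit Euler, no-flux ends)
    lap = np.empty_like(v)
    lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = 2.0 * (v[1] - v[0]) / dx**2
    lap[-1] = 2.0 * (v[-2] - v[-1]) / dx**2
    v += dt * D * lap
    excited |= v > 0.5
```

The split structure is what maps well onto GPUs: the reaction step is embarrassingly parallel per node, while the diffusion step is a sparse stencil operation.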
NASA Technical Reports Server (NTRS)
Keyes, David E.; Smooke, Mitchell D.
1987-01-01
A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.
Application of CFD Analysis to Design Support and Problem Resolution for ASRM and RSRM
NASA Technical Reports Server (NTRS)
Dill, Richard A.; Whitesides, R. Harold
1993-01-01
The use of Navier-Stokes CFD codes to predict the internal flow field environment in a solid rocket motor is a very important analysis element during the design phase of a motor development program. These computational flow field solutions uncover a variety of potential problems associated with motor performance as well as suggesting solutions to these problems. CFD codes have also proven to be of great benefit in explaining problems associated with operational motors such as in the case of the pressure spike problem with the STS-54B flight motor. This paper presents results from analyses involving both motor design support and problem resolution. The issues discussed include the fluid dynamic/mechanical stress coupling at field joints relative to significant propellant deformations, the prediction of axial and radial pressure gradients in the motor associated with motor performance and propellant mechanical loading, the prediction of transition of the internal flow in the motor associated with erosive burning, the accumulation of slag at the field joints and in the submerged nozzle region, impingement of flow on the nozzle nose, and pressure gradients in the nozzle region of the motor. The analyses presented in this paper have been performed using a two-dimensional axisymmetric model. Fluent/BFC, a three dimensional Navier-Stokes flow field code, has been used to make the numerical calculations. This code utilizes a staggered grid formulation along with the SIMPLER numerical pressure-velocity coupling algorithm. Wall functions are used to represent the character of the viscous sub-layer flow, and an adjusted k-epsilon turbulence model especially configured for mass injection internal flows, is used to model the growth of turbulence in the motor port. Conclusions discussed in this paper consider flow field effects on the forward, center, and aft propellant grains except for the head end star grain region of the forward propellant segment. The field joints and the
Technology Transfer Automated Retrieval System (TEKTRAN)
The Soil and Water Assessment Tool (SWAT) is a basin scale hydrologic model developed by the US Department of Agriculture-Agricultural Research Service. SWAT's broad applicability, user friendly model interfaces, and automatic calibration software have led to a rapid increase in the number of new u...
NASA Technical Reports Server (NTRS)
Goetz, A. F. H.; Billingsley, F. C.
1974-01-01
Enhancements discussed include contrast stretching, multiratio color displays, Fourier plane operations to remove striping and boosting MTF response to enhance high spatial frequency content. The use of each technique in a specific application in the fields of geology, geomorphology and oceanography is demonstrated.
The application of interactive graphics to large time-dependent hydrodynamics problems
NASA Technical Reports Server (NTRS)
Gama-Lobo, F.; Maas, L. D.
1975-01-01
A written companion of a movie entitled "Interactive Graphics at Los Alamos Scientific Laboratory" was presented. While the movie presents the actual graphics terminal and the functions performed on it, the paper attempts to put in perspective the complexity of the application code and the complexity of the interaction that is possible.
Application of kin theory to long-standing problem in nematode production for biocontrol
Technology Transfer Automated Retrieval System (TEKTRAN)
We present a review of Shapiro-Ilan and Raymond (2016. Limiting opportunities for cheating stabilizes virulence in insect parasitic nematodes. Evolutionary Applications 9:462-470. doi: 10.1111/eva.12348) who tested changes in virulence and reproductive output in a serially propagated entomopathogeni...
Limits of applicability of the concept of scattering amplitude in small-angle scattering problems
NASA Astrophysics Data System (ADS)
Dzheparov, F. S.; Lvov, D. V.
2014-01-01
The applicability of the concept of scattering amplitude to the description of small-angle scattering experiments has been considered. An expression has been obtained for a scattered radiation flux on a detector under much milder conditions than the condition of Fraunhofer diffraction. The influence of incoherence of the source on the results has been evaluated.
Herman, Gabor T; Chen, Wei
2008-03-01
The goal of Intensity-Modulated Radiation Therapy (IMRT) is to deliver sufficient doses to tumors to kill them, but without causing irreparable damage to critical organs. This requirement can be formulated as a linear feasibility problem. The sequential (i.e., iteratively treating the constraints one after another in a cyclic fashion) algorithm ART3 is known to find a solution to such problems in a finite number of steps, provided that the feasible region is full dimensional. We present a faster algorithm called ART3+. The idea of ART3+ is to avoid unnecessary checks on constraints that are likely to be satisfied. The superior performance of the new algorithm is demonstrated by mathematical experiments inspired by the IMRT application.
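The feasibility formulation above can be illustrated with a much simpler sequential method: cyclically project onto each half-space a_i·x <= b_i, skipping constraints that are already satisfied. This is a generic POCS-style sketch in the spirit of ART3+'s "avoid unnecessary checks" idea, not the ART3+ algorithm itself (which works with interval constraints and uses reflections):

```python
import numpy as np

def cyclic_projection(A, b, x0, sweeps=100):
    """Sequentially project onto half-spaces a_i . x <= b_i (illustrative sketch)."""
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            viol = a_i @ x - b_i
            if viol > 0.0:                       # skip constraints already satisfied
                x -= viol * a_i / (a_i @ a_i)    # orthogonal projection onto the boundary
    return x

# toy "dose" feasibility region: a box plus one coupling constraint
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([2.0, 0.0, 2.0, 0.0, 3.0])
x = cyclic_projection(A, b, [10.0, -5.0])
print(x)
```

When the feasible region is full dimensional, as the abstract requires for ART3's finite-convergence guarantee, such cyclic schemes settle into the interior quickly; the cost of each sweep is dominated by the constraint checks, which is exactly what ART3+ economizes.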
NASA Astrophysics Data System (ADS)
Büsser, C. A.; Martins, G. B.; Feiguin, A. E.
2013-12-01
We present a completely unbiased and controlled numerical method to solve quantum impurity problems in d-dimensional lattices. This approach is based on a canonical transformation, of the Lanczos form, where the complete lattice Hamiltonian is exactly mapped onto an equivalent one-dimensional system, in the same spirit as Wilson's numerical renormalization, and Haydock's recursion method. We introduce many-body interactions in the form of a Kondo or Anderson impurity and we solve the low-dimensional problem using the density matrix renormalization group. The technique is particularly suited to study systems that are inhomogeneous, and/or have a boundary. The resulting dimensional reduction translates into a reduction of the scaling of the entanglement entropy by a factor Ld-1, where L is the linear dimension of the original d-dimensional lattice. This allows one to calculate the ground state of a magnetic impurity attached to an L×L square lattice and an L×L×L cubic lattice with L up to 140 sites. We also study the localized edge states in graphene nanoribbons by attaching a magnetic impurity to the edge or the center of the system. For armchair metallic nanoribbons we find a slow decay of the spin correlations as a consequence of the delocalized metallic states. In the case of zigzag ribbons, the decay of the spin correlations depends on the position of the impurity. If the impurity is situated in the bulk of the ribbon, the decay is slow as in the metallic case. On the other hand, if the adatom is attached to the edge, the decay is fast, within few sites of the impurity, as a consequence of the localized edge states, and the short correlation length. The mapping can be combined with ab initio band structure calculations to model the system, and to understand correlation effects in quantum impurity problems starting from first principles.
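The core of the mapping described above, tridiagonalizing the lattice Hamiltonian starting from the impurity's contact orbital, can be sketched with a plain Lanczos recursion. The paper works with many-body Hamiltonians and DMRG; the single-particle tight-binding matrix, lattice size, and chain length here are illustrative only:

```python
import numpy as np

def lattice_to_chain(H, seed, n_chain):
    """Lanczos recursion mapping a lattice onto an equivalent 1-D chain (sketch)."""
    vs = [seed / np.linalg.norm(seed)]
    a, b = [], []
    while len(a) < n_chain:
        w = H @ vs[-1]
        a.append(vs[-1] @ w)                 # on-site energy of the chain
        if len(a) == n_chain:
            break
        w = w - a[-1] * vs[-1]
        if b:
            w = w - b[-1] * vs[-2]
        for u in vs:                         # full reorthogonalization for stability
            w = w - (u @ w) * u
        beta = np.linalg.norm(w)
        if beta < 1e-12:                     # Krylov space exhausted
            break
        b.append(beta)                       # hopping of the chain
        vs.append(w / beta)
    return np.array(a), np.array(b)

# tight-binding square lattice L x L; "impurity" couples to the corner site
L = 10
idx = lambda i, j: i * L + j
H = np.zeros((L * L, L * L))
for i in range(L):
    for j in range(L):
        if i + 1 < L:
            H[idx(i, j), idx(i + 1, j)] = H[idx(i + 1, j), idx(i, j)] = -1.0
        if j + 1 < L:
            H[idx(i, j), idx(i, j + 1)] = H[idx(i, j + 1), idx(i, j)] = -1.0

seed = np.zeros(L * L)
seed[0] = 1.0
a, b = lattice_to_chain(H, seed, 30)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)   # the equivalent 1-D chain
```

Seen from the impurity site, the chain T reproduces the lattice's local spectral properties, which is what makes the subsequent DMRG treatment of the interacting problem tractable.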
Spherical cavity-expansion forcing function in PRONTO 3D for application to penetration problems
Warren, T.L.; Tabbara, M.R.
1997-05-01
In certain penetration events the primary mode of deformation of the target can be approximated by known analytical expressions. In the context of an analysis code, this approximation eliminates the need for modeling the target as well as the need for a contact algorithm. This technique substantially reduces execution time. In this spirit, a forcing function which is derived from a spherical-cavity expansion analysis has been implemented in PRONTO 3D. This implementation is capable of computing the structural and component responses of a projectile due to three dimensional penetration events. Sample problems demonstrate good agreement with experimental and analytical results.
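A cavity-expansion forcing function replaces the target with an analytical normal stress on the projectile surface, commonly of the quadratic-in-velocity form sigma = A + B*rho*v^2. The rigid-body sketch below integrates the resulting deceleration and checks it against the closed-form penetration depth for this force law; all material and projectile constants are illustrative, not values from the report:

```python
import math

# normal stress on the nose: sigma = A + B * rho * v^2 (illustrative constants)
A, B = 4.0e8, 1.0      # Pa, dimensionless
rho = 2300.0           # target density, kg/m^3
m, R = 10.0, 0.04      # projectile mass (kg) and shank radius (m)
V0 = 800.0             # impact velocity, m/s

area = math.pi * R**2
v, x, dt = V0, 0.0, 1e-6
while v > 0.0:
    F = area * (A + B * rho * v * v)   # resisting force from the forcing function
    v -= F / m * dt
    x += v * dt

# closed-form depth for this force law: P = m/(2*area*B*rho) * ln(1 + B*rho*V0^2/A)
P = m / (2.0 * area * B * rho) * math.log(1.0 + B * rho * V0**2 / A)
print(x, P)
```

Because the target's resistance is reduced to this surface load, no target mesh or contact algorithm is needed, which is the execution-time saving the abstract highlights.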
Korsakov, Andrey V; Zhukov, Vladimir P; Vandenabeele, Peter
2010-08-01
Raman-based geobarometry has recently become increasingly popular because it is an elegant way to obtain information on peak metamorphic conditions or the entire pressure-temperature-time (P-T-t) path of metamorphic rocks, especially those formed under ultrahigh-pressure (UHP) conditions. However, several problems need to be solved to get reliable estimates of metamorphic conditions. In this paper we present some examples of difficulties which can arise during the Raman spectroscopy study of solid inclusions from ultrahigh-pressure metamorphic rocks.
[Application of problem-based learning in teaching practice of Science of Meridians and Acupoints].
Wang, Xiaoyan; Tang, Jiqin; Ying, Zhenhao; Zhang, Yongchen
2015-02-01
Science of Meridians and Acupoints is the bridge between basic medicine and clinical medicine of acupuncture and moxibustion. This teaching practice was conducted using the problem-based learning (PBL) mode, in combination with clinical design problems, with the students in the leading role and guided by teachers. In order to stimulate students' enthusiasm for active learning, the authors implemented the class teaching through typical clinical design questions, study-group presentations, an emphasis on drawing meridian running courses and acupoint locations, summarization and analysis, and comprehensive evaluation, so that the students' comprehensive innovative ability and the teaching quality could be improved.
Cooperative learning: a new application of problem-based learning in mental health training.
Bahar-Ozvariş, Sevkat; Cetin, Füsun Cuhadaroğlu; Turan, Sevgi; Peters, Antoinette S
2006-09-01
Interaction in problem-based learning (PBL) tutorials is not necessarily cooperative, which may account for variation in learning outcomes. Therefore, a cooperative assessment structure was introduced in a PBL course and the difference examined between this method and individual, lecture-based learning in mental health training. Experimental student groups gained more knowledge between pre- and post-test than did control groups, and the experimental students who scored low on the pre-test made the greatest gains. Groups that reported greater cooperation tended to have higher achievement scores. Experimental students felt that cooperation helped them learn but it also took more time and was sometimes chaotic.
Problems of Development and Application of Metal Matrix Composite Powders for Additive Technologies
NASA Astrophysics Data System (ADS)
Korosteleva, Elena N.; Pribytkov, Gennadii A.; Krinitcyn, Maxim G.; Baranovskii, Anton V.; Korzhova, Victoria V.
2016-07-01
The paper considers the problem of structure formation in composites with a carbide phase and a metal binder under self-propagating high-temperature synthesis (SHS) of powder mixtures. The relation between metal binder content and the structure and wear resistance of coatings was studied. It has been shown that the dispersion of the carbide phase and the volume content of metal binder in the composite powder structure could be regulated purposefully for all of the studied composites. It was found that the structure of the surfaced coating is fully inherited from the composite powders. Modification or coarsening of the structure due to recrystallization or coagulation of the carbide phase during deposition and sputtering does not occur.
NASA Astrophysics Data System (ADS)
Misawa, Tetsuya
Recently, a wholesale electric power exchange has been founded in Japan. As the electricity market develops, schemes for managing electricity price risk will become necessary. In financial markets and in the preceding electricity markets, various “derivatives” on assets in those markets are often used as tools to hedge price risk. This paper gives a short commentary on some fundamental concepts of derivatives and pricing theory in financial engineering, and discusses problems in applying the financial-engineering approach to electricity derivatives.
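The pricing theory referred to in this commentary can be illustrated with a short, self-contained Python sketch (not part of the record): Monte Carlo pricing of a European call under geometric Brownian motion, checked against the Black-Scholes closed form. All parameter values are illustrative, and real electricity prices are famously not GBM, which is part of the difficulty the abstract alludes to.

```python
import math
import random

def black_scholes_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)

def monte_carlo_call(s0, k, r, sigma, t, n_paths=50_000, seed=1):
    """Risk-neutral Monte Carlo: average discounted payoff over GBM endpoints."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(st - k, 0.0)
    return math.exp(-r * t) * total / n_paths

if __name__ == "__main__":
    bs = black_scholes_call(100.0, 100.0, 0.05, 0.2, 1.0)
    mc = monte_carlo_call(100.0, 100.0, 0.05, 0.2, 1.0)
    print(f"Black-Scholes: {bs:.4f}, Monte Carlo: {mc:.4f}")
```

The two estimates agree to within Monte Carlo sampling error, which is the basic consistency check behind derivative-pricing codes of this kind.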
Morse Theory for Symmetric Functionals on the Sphere and an Application to a Bifurcation Problem.
1984-04-01
If the problem exhibits some symmetry, the eigenvalues of A'(0) are generally degenerate. Under suitable assumptions, we prove that the number of ... extend the Böhme-Marino result to more general situations (Th. 3.1). Since in our situation we cannot use the L. S. category, the choice of the Morse ... Hessian H_f at the point x_w. If f is C² and ... is given by an isolated, degenerate critical point x, then, in general, p(t, x_w) is not equal to t^d (see
NASA Technical Reports Server (NTRS)
Krzywoblocki, M. Z. V.
1974-01-01
The application of elements of quantum (wave) mechanics to some special problems in macroscopic fluid dynamics is discussed. Emphasis is placed on the flow of a viscous, incompressible fluid around a circular cylinder. The following subjects are considered: (1) the flow of a nonviscous fluid around a circular cylinder, (2) the restrictions imposed on the stream function by the number of dimensions of space, and (3) the flow past three-dimensional bodies in a viscous fluid, particularly past a circular cylinder in the symmetrical case.
NASA Astrophysics Data System (ADS)
Ebaid, Abdelhalim; Wazwaz, Abdul-Majid; Alali, Elham; Masaedeh, Basem S.
2017-03-01
Very recently, it was observed that the temperature of nanofluids is ultimately governed by second-order ordinary differential equations with variable coefficients of exponential order. Such coefficients were then transformed to polynomial type by using new independent variables. In this paper, a class of second-order ordinary differential equations with variable coefficients of polynomial type has been solved analytically. The analytical solution is expressed in terms of a hypergeometric function with generalized parameters. Moreover, the present results have been applied to selected nanofluid problems in the literature. The exact solutions in the literature are recovered as special cases of our generalized analytical solution.
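As a concrete illustration of hypergeometric solutions of this kind (a sketch, not the paper's generalized solution), the confluent hypergeometric function M(a,b,x) can be summed directly from its series, and one can verify numerically that it satisfies Kummer's equation x y'' + (b − x) y' − a y = 0, a prototypical second-order ODE with polynomial coefficients:

```python
import math

def kummer_m(a, b, x, terms=60):
    """Confluent hypergeometric M(a,b,x) = sum_n (a)_n/(b)_n * x^n/n!,
    summed term by term via the rising-factorial recurrence."""
    term = 1.0
    total = 1.0
    for n in range(terms):
        term *= (a + n) / (b + n) * x / (n + 1)
        total += term
    return total

def kummer_residual(a, b, x, h=1e-4):
    """Residual of Kummer's equation x*y'' + (b - x)*y' - a*y = 0,
    with derivatives estimated by central finite differences."""
    y = kummer_m(a, b, x)
    yp = (kummer_m(a, b, x + h) - kummer_m(a, b, x - h)) / (2 * h)
    ypp = (kummer_m(a, b, x + h) - 2 * y + kummer_m(a, b, x - h)) / h**2
    return x * ypp + (b - x) * yp - a * y

if __name__ == "__main__":
    print(kummer_m(1.0, 1.0, 1.0))         # collapses to e when a == b
    print(kummer_residual(0.5, 1.5, 1.0))  # near zero: the ODE is satisfied
```

For a = b the series collapses to the exponential, a convenient sanity check on the implementation.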
NASA Astrophysics Data System (ADS)
Scaramuzzino, F.
2009-09-01
This paper presents a qualitative analysis of the solution of a pure exchange general economic equilibrium problem depending on two independent parameters. Some recent results obtained by the author in the static and dynamic cases are collected. These results are applied in a particular parametric case: attention is focused on a numerical application for which the existence of the solution of the time-dependent parametric variational inequality that describes the equilibrium conditions has been proved by means of the direct method. Using MATLAB computation after a linear interpolation, the equilibrium curves have been visualized.
NASA Technical Reports Server (NTRS)
Morris, R. V.; Mendell, W. W.; Neely, S. C.
1982-01-01
An understanding of the reflectance spectra of scattering media is vital for the appropriate interpretation of the reflectance spectra of planetary surfaces. When the absorption coefficient (k) and the mean size of the scattering centers are small, the Kubelka-Munk (K-M) theory of diffuse reflectance is valid. Since small values of k are characteristic of a wide variety of geologically important materials over a significant range of wavelength, the K-M theory should be applicable to appropriate portions of the reflectance spectra of these media if the dimensions of the scattering centers are sufficiently small. To test the utility of the K-M theory, a comparison is conducted of a set of theoretically generated spectra with a set of independently measured experimental spectra. The similarities found in the behavior of the two sets of spectra demonstrate the applicability of the K-M theory to the understanding of physical phenomena. Aspects of wavelength-dependent scattering are investigated.
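The K-M remission function that links diffuse reflectance to the absorption/scattering ratio can be sketched in a few lines of Python (an illustration, not material from the record): F(R∞) = (1 − R∞)² / (2R∞) = k/s, which inverts in closed form.

```python
import math

def km_function(r_inf):
    """Kubelka-Munk remission function F(R) = (1 - R)^2 / (2R) = k/s."""
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

def km_reflectance(k_over_s):
    """Invert F(R) = q for the physical root 0 < R <= 1:
    R^2 - 2R(1 + q) + 1 = 0  =>  R = 1 + q - sqrt(q^2 + 2q)."""
    q = k_over_s
    return 1.0 + q - math.sqrt(q * q + 2.0 * q)

if __name__ == "__main__":
    for q in (0.01, 0.1, 1.0, 10.0):
        r = km_reflectance(q)
        print(f"k/s = {q:6.2f} -> R_inf = {r:.4f} (round trip {km_function(r):.4f})")
```

Small k/s gives reflectance near 1, consistent with the K-M regime of weakly absorbing, finely divided scattering media described above.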
Oexle, Konrad
2006-05-01
Probabilities or risks may change when new information is available. Common sense frequently fails in assessing this change. In such cases, Bayes' theorem may be applied. It is easy to derive and has abundant applications in biology and medicine. Some examples of the application of Bayes' theorem are presented here, such as carrier risk estimation in X-chromosomal disorders, maximal manifestation probability of a dominant trait with unknown penetrance, combination of genetic and non-genetic information, and linkage analysis. The presentation addresses the non-specialist who asks for valid and consistent explanations. The conclusion to be drawn is that Bayes' theorem is an accessible and helpful tool for probability calculations in genetics.
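The carrier-risk calculation mentioned above can be made concrete with the standard textbook example (a sketch; the numbers are the classic illustration, not taken from this article): a woman whose mother is an obligate carrier of an X-linked recessive disorder has a prior carrier risk of 1/2, and each unaffected son halves the likelihood under the carrier hypothesis.

```python
from fractions import Fraction

def carrier_posterior(prior, n_unaffected_sons):
    """Bayes' theorem: P(carrier | data) = P(data | carrier) P(carrier) / P(data).
    An unaffected son has probability 1/2 if the mother is a carrier,
    and probability 1 if she is not."""
    p_data_carrier = Fraction(1, 2) ** n_unaffected_sons
    p_data_noncarrier = Fraction(1)
    num = p_data_carrier * prior
    den = num + p_data_noncarrier * (1 - prior)
    return num / den

if __name__ == "__main__":
    print(carrier_posterior(Fraction(1, 2), 3))  # 1/9
```

Three healthy sons lower the carrier risk from 1/2 to 1/9, the kind of counterintuitive update where, as the abstract notes, common sense frequently fails.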
Applications of shallow high-resolution seismic reflection to various environmental problems
Miller, R.D.; Steeples, D.W.
1994-01-01
Shallow seismic reflection has been successfully applied to environmental problems in a variety of geologic settings. Increased dynamic range of recording equipment and decreased cost of processing hardware and software have made seismic reflection a cost-effective means of imaging shallow geologic targets. Seismic data possess sufficient resolution in many areas to detect faulting with displacement of less than 3 m and beds as thin as 1 m. We have detected reflections from depths as shallow as 2 m. Subsurface voids associated with abandoned coal mines at depths of less than 20 m can be detected and mapped. Seismic reflection has been successful in mapping the disturbed subsurface associated with dissolution mining of salt. A graben detected and traced by seismic reflection was shown to be a preferential pathway for leachate leaking from a chemical storage pond. As shown by these case histories, shallow high-resolution seismic reflection has the potential to significantly enhance the economics and efficiency of preventing and/or solving many environmental problems. © 1994.
Synchronous motion in the Kinoshita problem. Application to satellites and binary asteroids
NASA Astrophysics Data System (ADS)
Breiter, S.; Melendo, B.; Bartczak, P.; Wytrzyszczak, I.
2005-07-01
A Lie-Poisson integrator with Wisdom-Holman type splitting is constructed for the problem of a rigid body and a sphere (the Kinoshita problem). The algorithm propagates not only the position, momentum and angular momentum vector of the system, but also the tangent vector of "infinitesimal displacements". The latter allows the evaluation of the maximum Lyapunov exponent or the MEGNO indicator of Cincotta and Simó. Three exemplary cases are studied: the motion of Hyperion, a fictitious binary asteroid with Hyperion as one of the components, and the binary asteroid 90 Antiope. In all cases the attitude instability of the rotation state with the spin vector normal to an equatorial orbit influences the stability of the system at lower rotation rates. The MEGNO maps with variations restricted to the orbital plane for position and momentum, and to the orbit-normal direction for the angular momentum, resemble usual Poincaré sections. But if no restriction is imposed on the variations, some stable zones turn into highly chaotic regions, often retaining the shape of their boundaries.
Application of SEAWAT to select variable-density and viscosity problems
Dausman, Alyssa M.; Langevin, Christian D.; Thorne, Danny T.; Sukop, Michael C.
2010-01-01
SEAWAT is a combined version of MODFLOW and MT3DMS, designed to simulate three-dimensional, variable-density, saturated groundwater flow. The most recent version of the SEAWAT program, SEAWAT Version 4 (or SEAWAT_V4), supports equations of state for fluid density and viscosity. In SEAWAT_V4, fluid density can be calculated as a function of one or more MT3DMS species, and optionally, fluid pressure. Fluid viscosity is calculated as a function of one or more MT3DMS species, and the program also includes additional functions for representing the dependence of fluid viscosity on temperature. This report documents testing of and experimentation with SEAWAT_V4 with six previously published problems that include various combinations of density-dependent flow due to temperature variations and/or concentration variations of one or more species. Some of the problems also include variations in viscosity that result from temperature differences in water and oil. Comparisons between the results of SEAWAT_V4 and other published results are generally consistent with one another, with minor differences considered acceptable.
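The equation-of-state idea underlying variable-density codes of this kind can be sketched with a linear model (a simplified illustration; the slope values below are typical textbook numbers, not taken from the report or from SEAWAT's documentation):

```python
def fluid_density(conc, temp, rho_ref=1000.0, conc_ref=0.0, temp_ref=25.0,
                  drho_dc=0.7, drho_dt=-0.375):
    """Linear equation of state: fluid density (kg/m^3) as a function of
    solute concentration (kg/m^3) and temperature (deg C).
    drho_dc ~ 0.7 is a typical salinity slope; drho_dt is illustrative."""
    return (rho_ref
            + drho_dc * (conc - conc_ref)
            + drho_dt * (temp - temp_ref))

if __name__ == "__main__":
    # Seawater-like salinity of 35 kg/m^3 at the reference temperature:
    print(fluid_density(35.0, 25.0))  # 1024.5
```

Coupling several species is a matter of summing one such slope term per species, which is conceptually how multi-species density dependence enters the flow equations.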
Discrete and continuous fractional persistence problems - the positivity property and applications
NASA Astrophysics Data System (ADS)
Cresson, Jacky; Szafrańska, Anna
2017-03-01
In this article, we study the continuous and discrete fractional persistence problem, which asks for the persistence of properties of a given classical (α = 1) differential equation in the fractional case (here using fractional Caputo derivatives) and in the associated numerical schemes (here with discrete Grünwald-Letnikov derivatives). Our main concerns are positivity, order preservation, equilibrium points, and the stability of these points. We formulate explicit conditions under which a fractional system preserves positivity. We also deduce sufficient conditions to ensure order preservation. From these results we deduce a fractional persistence theorem which ensures that positivity, order preservation, equilibrium points, and stability are preserved under a Caputo fractional embedding of a given differential equation. At the discrete level, the problem is more complicated. Following a strategy initiated by R. Mickens dealing with nonlocal approximations, we define a nonstandard finite difference scheme for fractional differential equations based on discrete Grünwald-Letnikov derivatives, which preserves positivity unconditionally with respect to the discretization increment. We deduce a discrete version of the fractional persistence theorem covering positivity and equilibrium points. We then apply our results to study a fractional prey-predator model introduced by Javidi et al.
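A minimal illustration of the discrete side (my sketch, not the authors' nonstandard scheme): the Grünwald-Letnikov weights follow a one-line recurrence, and an implicit GL discretization of the fractional decay equation D^α y = −y stays positive for 0 < α < 1 because every weight after the first is negative.

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k C(alpha, k),
    via the recurrence w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def fractional_decay(alpha, y0, h, steps):
    """Implicit GL scheme for D^alpha y = -y:
    h^-alpha * sum_k w_k y_{n-k} = -y_n, solved for y_n at each step.
    For k >= 1 the weights are negative, so the memory sum keeps y_n > 0."""
    w = gl_weights(alpha, steps)
    y = [y0]
    for n in range(1, steps + 1):
        memory = sum(w[k] * y[n - k] for k in range(1, n + 1))
        y.append(-memory / (1.0 + h**alpha))
    return y

if __name__ == "__main__":
    y = fractional_decay(0.5, 1.0, 0.1, 50)
    print(y[:3], all(v > 0 for v in y))
```

At α = 1 the recurrence gives w = (1, −1, 0, 0, …) and the scheme collapses to implicit Euler, so positivity of the classical case is recovered as a special case.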
NASA Astrophysics Data System (ADS)
Flaga, Kazimierz; Furtak, Kazimierz
2015-03-01
Steel-concrete composite structures have been used in bridge engineering for decades, owing to the rational utilisation of the strength properties of the two materials. At the same time, the reinforced concrete (or prestressed) deck slab is more favourable than the orthotropic steel plate used in steel bridges (higher mass, better vibration damping, longer life). Composite girder bridges are the most common in practice, particularly for highway bridges of small and medium spans, though spans may reach over 200 m; in larger spans steel truss girders are applied. Bridge composite structures are also employed in cable-stayed bridge decks with main girder spans of the order of 600-800 m. The aim of the article is to present the construction process and the strength analysis problems concerning this type of structure. Much attention is paid to the design and calculation of the shear connectors characteristic of the discussed objects. The authors focus mainly on the issues of single composite structures. The effect of assembly states on the stresses and strains in composite members is highlighted. A separate set of problems is devoted to the influence of rheological factors, i.e., concrete shrinkage and creep, as well as thermal factors, on the stresses and strains and the redistribution of internal forces.
Applications of the Kustaanheimo-Stiefel transformation of the perturbed two-body problem
NASA Technical Reports Server (NTRS)
Bond, V. R.
1973-01-01
The Newtonian differential equations of motion for the two-body problem can be transformed into four linear harmonic-oscillator equations by simultaneously applying the regularization step dt/ds = r and the Kustaanheimo-Stiefel (KS) transformation. The regularization step changes the independent variable from time to a new variable s, and the KS transformation transforms the position and velocity vectors from Cartesian space into a four-dimensional space. A derivation of a uniform, regular solution for the perturbed two-body problem in the four-dimensional space is presented. The variation-of-parameters technique is used to develop expressions for the derivatives of ten elements (which are constants in the unperturbed motion) for the general case that includes both perturbations which can arise from a potential and perturbations which cannot be derived from a potential. This ten-element solution has mixed secular terms that degrade the long-term accuracy during numerical integration. Therefore, to eliminate these terms, the solution is modified by introducing two additional elements.
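The KS map itself is short enough to write down explicitly (a sketch in one common sign convention; conventions differ across references). It sends u ∈ R⁴ to x ∈ R³ with |x| = |u|², which is the geometric heart of the regularization:

```python
def ks_transform(u):
    """Kustaanheimo-Stiefel map from R^4 to R^3 (one common convention).
    The image satisfies |x| = u1^2 + u2^2 + u3^2 + u4^2 = r."""
    u1, u2, u3, u4 = u
    return (u1**2 - u2**2 - u3**2 + u4**2,
            2.0 * (u1 * u2 - u3 * u4),
            2.0 * (u1 * u3 + u2 * u4))

if __name__ == "__main__":
    import random
    rng = random.Random(0)
    u = tuple(rng.uniform(-1.0, 1.0) for _ in range(4))
    x = ks_transform(u)
    r = sum(c * c for c in u)
    print(abs(sum(c * c for c in x) ** 0.5 - r))  # ~0: |x| equals |u|^2
```

Because radial distance becomes a quadratic form in u, the Kepler singularity at r = 0 is smoothed out, which is what turns the equations into harmonic oscillators in the new variables.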
Robust moving mesh algorithms for hybrid stretched meshes: Application to moving boundaries problems
NASA Astrophysics Data System (ADS)
Landry, Jonathan; Soulaïmani, Azzeddine; Luke, Edward; Ben Haj Ali, Amine
2016-12-01
A robust Mesh-Mover Algorithm (MMA) approach is designed to adapt meshes of moving boundaries problems. A new methodology is developed from the best combination of well-known algorithms in order to preserve the quality of initial meshes. In most situations, MMAs distribute mesh deformation while preserving a good mesh quality. However, invalid meshes are generated when the motion is complex and/or involves multiple bodies. After studying a few MMA limitations, we propose the following approach: use the Inverse Distance Weighting (IDW) function to produce the displacement field, then apply the Geometric Element Transformation Method (GETMe) smoothing algorithms to improve the resulting mesh quality, and use an untangler to revert negative elements. The proposed approach has been proven efficient to adapt meshes for various realistic aerodynamic motions: a symmetric wing that has suffered large tip bending and twisting and the high-lift components of a swept wing that has moved to different flight stages. Finally, the fluid flow problem has been solved on meshes that have moved and they have produced results close to experimental ones. However, for situations where moving boundaries are too close to each other, more improvements need to be made or other approaches should be taken, such as an overset grid method.
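The IDW step of the approach above can be sketched compactly (illustrative Python, not the authors' implementation): each interior node receives a displacement that is a distance-weighted average of the prescribed boundary displacements.

```python
def idw_displacement(node, boundary_nodes, boundary_disps, power=3.0, eps=1e-12):
    """Inverse Distance Weighting in 2D: the displacement at `node` is the
    weighted average of boundary displacements, with weights 1/d^power.
    A node coinciding with a boundary node gets that node's displacement."""
    num = [0.0, 0.0]
    den = 0.0
    for bn, bd in zip(boundary_nodes, boundary_disps):
        d2 = (node[0] - bn[0]) ** 2 + (node[1] - bn[1]) ** 2
        if d2 < eps:
            return list(bd)
        w = d2 ** (-power / 2.0)
        num[0] += w * bd[0]
        num[1] += w * bd[1]
        den += w
    return [num[0] / den, num[1] / den]

if __name__ == "__main__":
    # Unit square: the right edge moves +0.1 in x, the left edge is fixed.
    bnodes = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
    bdisps = [(0.0, 0.0), (0.0, 0.0), (0.1, 0.0), (0.1, 0.0)]
    print(idw_displacement((0.5, 0.5), bnodes, bdisps))
```

Interior displacements interpolate smoothly between the boundary values; the smoothing and untangling passes described in the abstract are then needed precisely because this interpolation alone does not guarantee element quality.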
NASA Technical Reports Server (NTRS)
Jacobson, Allan S.; Berkin, Andrew L.
1995-01-01
The Linked Windows Interactive Data System (LinkWinds) is a prototype visual data exploration system resulting from a NASA Jet Propulsion Laboratory (JPL) program of research into the application of graphical methods for rapidly accessing, displaying, and analyzing large multivariate, multidisciplinary data sets. Running under UNIX, it is an integrated multi-application executing environment using a data-linking paradigm to dynamically interconnect and control multiple windows containing a variety of displays and manipulators. This paradigm, resulting in a system similar to a graphical spreadsheet, is not only a powerful method for organizing large amounts of data for analysis, but leads to a highly intuitive, easy-to-learn user interface. It provides great flexibility in rapidly interacting with large masses of complex data to detect trends, correlations, and anomalies. The system, containing an expanding suite of non-domain-specific applications, provides for the ingestion of a variety of database formats and hard-copy output of all displays. Remote networked workstations running LinkWinds may be interconnected, providing a multiuser science environment (MUSE) for collaborative data exploration by a distributed science team. The system is being developed in close collaboration with investigators in a variety of science disciplines using both archived and real-time data. It is currently being used to support the Microwave Limb Sounder (MLS) in orbit aboard the Upper Atmosphere Research Satellite (UARS). This paper describes the application of LinkWinds to this data to rapidly detect features, such as the ozone hole configuration, and to analyze correlations between chemical constituents of the atmosphere.
Huang, Zhengxing; Dong, Wei; Duan, Huilong; Li, Haomin
2014-01-01
Clinical pathways leave traces, described as event sequences with regard to a mixture of various latent treatment behaviors. Measuring similarities between patient traces can profitably be exploited further as a basis for providing insights into the pathways, complementing existing techniques of clinical pathway analysis (CPA), which mainly focus on aggregated data seen from an external perspective. Most existing methods measure similarities between patient traces by computing the relative distance between their event sequences. However, clinical pathways, as typical human-centered processes, often take place in an unstructured fashion, i.e., clinical events occur arbitrarily without a particular order. Imposing order on the chaos of clinical pathways may degrade the accuracy of similarity measures between patient traces, and may impair the efficiency of further analysis tasks. In this paper, we present a behavioral topic analysis approach to measure similarities between patient traces. More specifically, a probabilistic graphical model, i.e., latent Dirichlet allocation (LDA), is employed to discover latent treatment behaviors of patient traces for clinical pathways such that similarities of pairwise patient traces can be measured based on their underlying behavioral topical features. The presented method provides a basis for further applications in CPA. In particular, three possible applications are introduced in this paper, i.e., patient trace retrieval, clustering, and anomaly detection. The proposed approach and the presented applications are evaluated via a real-world dataset of several specific clinical pathways collected from a Chinese hospital.
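Once topic mixtures are in hand, the similarity step can be sketched in a few lines (an illustration with hypothetical topic vectors, not the paper's LDA pipeline): each trace is represented by its distribution over latent treatment behaviors, and pairwise similarity is computed on those mixtures, e.g. by cosine similarity.

```python
import math

def cosine_similarity(p, q):
    """Similarity of two traces represented by topic-mixture vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return dot / (norm_p * norm_q)

if __name__ == "__main__":
    # Hypothetical mixtures over 3 latent treatment behaviors:
    trace_a = [0.7, 0.2, 0.1]   # mostly behavior 0
    trace_b = [0.6, 0.3, 0.1]   # similar profile
    trace_c = [0.1, 0.1, 0.8]   # very different profile
    print(cosine_similarity(trace_a, trace_b))  # high
    print(cosine_similarity(trace_a, trace_c))  # low
```

Retrieval, clustering and anomaly detection, the three applications named in the abstract, can all be built directly on such a pairwise similarity.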
NASA Astrophysics Data System (ADS)
Afshar, M. H.
2007-04-01
This paper exploits the unique feature of the Ant Colony Optimization Algorithm (ACOA), namely its incremental solution-building mechanism, to develop partially constrained ACO algorithms for the solution of optimization problems with explicit constraints. The method is based on the provision of a tabu list for each ant at each decision point of the problem, so that some constraints of the problem are satisfied by construction. The application of the method to the problem of storm water network design is formulated and presented. The network nodes are considered as the decision points, and the nodal elevations of the network are used as the decision variables of the optimization problem. Two partially constrained ACO algorithms are formulated and applied to a benchmark example of storm water network design, and the results are compared with those of the original unconstrained algorithm and existing methods. In the first algorithm the positive slope constraints are satisfied explicitly and the rest are satisfied by using the penalty method, while in the second one the constraints on the maximum ratio of flow depth to diameter are also satisfied explicitly via the tabu list. The method proves very effective and efficient in locating optimal solutions, and the resulting ACO algorithms show good convergence characteristics. The proposed algorithms are also shown to be relatively insensitive to the initial colony used, compared to the original algorithm. Furthermore, the method proves itself capable of finding an optimal or near-optimal solution, independent of the discretisation level and the size of the colony used.
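The tabu-list idea can be illustrated with a toy sketch (hypothetical data, not the paper's storm-water formulation): at each node the ant may only pick elevations below the previous one, so the positive-slope constraint is satisfied by construction rather than by penalty.

```python
import random

def build_profile(candidate_elevs, pheromone, rng):
    """One ant builds a nodal elevation profile.
    The tabu list removes candidates violating the positive-slope
    constraint (each node must be lower than the previous one);
    selection among the rest is roulette-wheel on pheromone."""
    profile = []
    prev = float("inf")
    for node, elevs in enumerate(candidate_elevs):
        allowed = [e for e in elevs if e < prev]
        if not allowed:
            raise ValueError(f"no feasible elevation at node {node}")
        weights = [pheromone[(node, e)] for e in allowed]
        choice = rng.choices(allowed, weights=weights, k=1)[0]
        profile.append(choice)
        prev = choice
    return profile

if __name__ == "__main__":
    rng = random.Random(42)
    candidates = [[10.0, 9.5, 9.0], [9.2, 8.8, 8.4], [8.6, 8.0, 7.5]]
    tau = {(n, e): 1.0 for n, elevs in enumerate(candidates) for e in elevs}
    p = build_profile(candidates, tau, rng)
    print(p, all(a > b for a, b in zip(p, p[1:])))
```

Every profile any ant builds is feasible with respect to the filtered constraint, so the pheromone update only has to steer the search toward cheap solutions, not away from infeasible ones.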
Haider, M A; Guilak, F
2000-06-01
The micropipette aspiration test has been used extensively in recent years as a means of quantifying cellular mechanics and molecular interactions at the microscopic scale. However, previous studies have generally modeled the cell as an infinite half-space in order to develop an analytical solution for a viscoelastic solid cell. In this study, an axisymmetric boundary integral formulation of the governing equations of incompressible linear viscoelasticity is presented and used to simulate the micropipette aspiration contact problem. The cell is idealized as a homogeneous and isotropic continuum with constitutive equation given by three-parameter (E, τ1, τ2) standard linear viscoelasticity. The formulation is used to develop a computational model via a "correspondence principle" in which the solution is written as the sum of a homogeneous (elastic) part and a nonhomogeneous part, which depends only on past values of the solution. Via a time-marching scheme, the solution of the viscoelastic problem is obtained by employing an elastic boundary element method with modified boundary conditions. The accuracy and convergence of the time-marching scheme are verified using an analytical solution. An incremental reformulation of the scheme is presented to facilitate the simulation of micropipette aspiration, a nonlinear contact problem. In contrast to the half-space model (Sato et al., 1990), this computational model accounts for nonlinearities in the cell response that result from a consideration of geometric factors including the finite cell dimension (radius R), curvature of the cell boundary, evolution of the cell-micropipette contact region, and curvature of the edges of the micropipette (inner radius a, edge curvature radius epsilon). Using 60 quadratic boundary elements, a micropipette aspiration creep test with ramp time t* = 0.1 s and ramp pressure p*/E = 0.8 is simulated for the cases a/R = 0.3, 0.4, 0.5 using mean parameter values for primary chondrocytes
Irreducible tensors and their applications in problems of dynamics of solids
NASA Astrophysics Data System (ADS)
Urman, Yu. M.
2007-12-01
One difficulty encountered in solving mechanical problems with complicated interaction is to express either the moments of forces or the force function via the phase variables of the problem. Here various transformations of coordinate systems are used, because interactions are determined by a relation between tensor variables, one of which refers to the body and the other to the field. In this connection, the usual definition of a tensor in Cartesian coordinates is inconvenient, because the components of a tensor of rank l ≥ 2 can be arranged as several linear combinations that behave differently under rotations of the coordinate system. Naturally, one needs to define tensors in such a way that their components, and linear combinations of these, are transformed in a unified manner under rotations of the coordinate system. This requirement is satisfied by irreducible tensors. The mathematical apparatus of irreducible tensors was created to satisfy the requirements of quantum mechanics and turned out to be rather universal. As far as the author knows, this apparatus was first used in mechanics by G. G. Denisov and the author of the present paper [1]. Using this apparatus, one can see the clear physical meaning of complicated interactions, express these interactions in invariant form, easily perform transformations from one coordinate system to another coordinate system rotated relative to the first, consider rather complicated types of interactions by writing them in compact form explicitly depending on the phase variables of the problem, easily use the symmetry of both the rigid body and the force field structure, and perform the averaging procedure for the entire object rather than componentwise. The present paper further develops the paper [1]. We present a brief introduction to the theory of irreducible tensors. We show that the force function of various interactions between a rigid body and a force field can be represented as the scalar product
NASA Astrophysics Data System (ADS)
Nabar, Rahul
Recent advances in theoretical techniques and computational hardware have made it possible to apply Density Functional Theory (DFT) methods to realistic problems in heterogeneous catalysis. Hydrocarbon processing is economically and strategically a very important industrial sector in today's world. In this thesis, we employ DFT methods to examine several important problems in hydrocarbon processing. Fischer Tropsch Synthesis (FTS) is a mature technology to convert synthesis gas derived from coal, natural gas or biomass into liquid fuels, specifically diesel. Iron is an active FTS catalyst, but the absence of detailed reaction mechanisms makes it difficult to maximize activity and optimize product distribution. We evaluate thermochemistry, kinetics and Rate Determining Steps (RDS) for Fischer Tropsch Synthesis on several models of Fe catalysts: Fe(110), Fe(211) and Pt-promoted Fe(110). Our studies indicated that CO dissociation is likely to be the RDS under most reaction conditions, but the DFT-calculated activation energy (Ea) for direct CO dissociation was too large to explain the observed catalyst activity. Consequently we demonstrate that H-assisted CO dissociation pathways are competitive with direct CO dissociation on both Co and Fe catalysts and could be responsible for a major fraction of the reaction flux (especially at high CO coverages). We then extend this alternative mechanistic model to close-packed facets of nine transition metal catalysts (Fe, Co, Ni, Ru, Rh, Pd, Os, Ir and Pt). H-assisted CO dissociation offers a kinetically easier route on each of the metals studied. DFT methods are also applied to another problem from the petroleum industry: discovery of poison-resistant, bimetallic, alloy catalysts (poisons: C, S, Cl, P). Our systematic screening studies identify several Near Surface Alloys (NSAs) that are expected to be highly poison-resistant yet stable, avoiding adsorbate-induced reconstruction. Adsorption trends are also correlated with
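The kinetic argument, that a lower effective barrier for H-assisted CO dissociation can dominate the reaction flux, can be quantified with a simple Arrhenius estimate (the barrier values below are hypothetical, for illustration only; they are not the thesis' DFT numbers):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_rate(prefactor, ea_ev, temp_k):
    """Arrhenius rate constant k = nu * exp(-Ea / (kB T))."""
    return prefactor * math.exp(-ea_ev / (K_B * temp_k))

if __name__ == "__main__":
    T = 500.0   # a typical FTS temperature scale, K
    nu = 1e13   # generic attempt frequency, 1/s
    k_direct = arrhenius_rate(nu, 1.5, T)    # hypothetical direct CO barrier
    k_assisted = arrhenius_rate(nu, 1.0, T)  # hypothetical H-assisted barrier
    print(f"rate ratio (assisted/direct) at {T:.0f} K: {k_assisted / k_direct:.3g}")
```

Even a 0.5 eV reduction in the effective barrier changes the rate by roughly five orders of magnitude at these temperatures, which is why a kinetically easier assisted pathway can carry most of the flux.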
ERIC Educational Resources Information Center
Gale, David; And Others
Four units make up the contents of this document. The first examines applications of finite mathematics to business and economics. The user is expected to learn the method of optimization in optimal assignment problems. The second module presents applications of difference equations to economics and social sciences, and shows how to: 1) interpret…
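The optimal assignment problem from the first unit can be solved by brute force for small instances (a sketch; the classical method for larger problems is the Hungarian algorithm):

```python
from itertools import permutations

def optimal_assignment(cost):
    """Minimize the total cost of assigning n workers to n jobs by
    enumerating all n! permutations (fine for small n)."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

if __name__ == "__main__":
    # cost[i][j]: cost of assigning worker i to job j (illustrative numbers)
    cost = [[4, 1, 3],
            [2, 0, 5],
            [3, 2, 2]]
    perm, total = optimal_assignment(cost)
    print(perm, total)  # worker i -> job perm[i]
```

For this matrix the optimum assigns workers 0, 1, 2 to jobs 1, 0, 2 at a total cost of 5; checking a few permutations by hand is a good classroom exercise in the spirit of the module.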
PROGRESS AND PROBLEMS IN THE APPLICATION OF FOCUSED ULTRASOUND FOR BLOOD-BRAIN BARRIER DISRUPTION
Vykhodtseva, Natalia; McDannold, Nathan; Hynynen, Kullervo
2008-01-01
Advances in neuroscience have resulted in the development of new diagnostic and therapeutic agents for potential use in the central nervous system (CNS). However, the ability to deliver the majority of these agents to the brain is limited by the blood–brain barrier (BBB), a specialized structure of the blood vessel wall that hampers transport and diffusion from the blood to the brain. Many CNS disorders could be treated with drugs, enzymes, genes, or large-molecule biotechnological products such as recombinant proteins, if they could cross the BBB. This article reviews the problems of the BBB presence in treating the vast majority of CNS diseases and the efforts to circumvent the BBB through the design of new drugs and the development of more sophisticated delivery methods. Recent advances in the development of noninvasive, targeted drug delivery by MRI-guided ultrasound-induced BBB disruption are also summarized. PMID:18511095
Solving the Self-Interaction Problem in Kohn-Sham Density Functional Theory. Application to Atoms
Daene, M.; Gonis, A.; Nicholson, D. M.; Stocks, G. M.
2014-10-14
Previously, we proposed a computational methodology that addresses the elimination of the self-interaction error from the Kohn–Sham formulation of the density functional theory. We demonstrated how the exchange potential can be obtained, and presented results of calculations for atomic systems up to Kr carried out within a Cartesian coordinate system. In our paper, we provide complete details of this self-interaction free method formulated in spherical coordinates based on the explicit equidensity basis ansatz. We also prove analytically that derivatives obtained using this method satisfy the Virial theorem for spherical orbitals, where the problem can be reduced to one dimension. We present the results of calculations of ground-state energies of atomic systems throughout the periodic table carried out within the exchange-only mode.
A general model for moving boundary problems -- Application to drying of porous media
Silva, M.A.
2000-03-01
This work presents a general model to describe momentum, heat and mass transfer for moving boundary problems. The equations are obtained by supposing an instantaneous superposition of a moving volume with velocity ν_s (Lagrangian reference frame) over a stationary volume in the stream velocity ν (Eulerian reference frame). The set of equations for multicomponent single-phase systems is applied to porous media (multi-phase systems) using the volume-averaging method. Depending on the assumptions about the behavior of the system, it is possible to obtain the different models proposed in the literature, showing the generality of the model proposed in this work. Numerical results were compared to experimental data on kaolin drying during the shrinking stage. These results showed good agreement.
NASA Astrophysics Data System (ADS)
Poulin, Vivian; Serpico, Pasquale Dario
2015-03-01
The standard theory of electromagnetic cascades onto a photon background predicts a quasiuniversal shape for the resulting nonthermal photon spectrum. This has been applied to very disparate fields, including nonthermal big bang nucleosynthesis (BBN). However, once the energy of the injected photons falls below the pair-production threshold the spectral shape is much harder, a fact that has been overlooked in past literature. This loophole may have important phenomenological consequences, since it generically alters the BBN bounds on nonthermal relics; for instance, it allows us to reopen the possibility of purely electromagnetic solutions to the so-called "cosmological lithium problem," which were thought to be excluded by other cosmological constraints. We show this with a proof-of-principle example and a simple particle physics model, compared with previous literature.
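The "quasiuniversal" cascade shape referred to above is commonly quoted as a broken power law (a sketch using the commonly quoted exponents; the break energy E_X and pair-production cutoff E_C depend on the plasma temperature, and the paper's point is precisely that this shape fails when the injection energy falls below the pair-production threshold):

```python
def cascade_spectrum(e, e_x, e_c):
    """Quasiuniversal nonthermal photon spectrum from EM cascades
    (commonly quoted shape, arbitrary normalization):
    f ~ E^-1.5 below the break E_X, f ~ E^-2 between E_X and the
    pair-production cutoff E_C, and zero above E_C."""
    if e >= e_c:
        return 0.0
    if e < e_x:
        return (e / e_x) ** -1.5
    return (e / e_x) ** -2.0

if __name__ == "__main__":
    e_x, e_c = 1.0, 10.0  # illustrative break and cutoff energies
    for e in (0.1, 0.5, 1.0, 5.0, 20.0):
        print(e, cascade_spectrum(e, e_x, e_c))
```

The spectrum is continuous at the break and vanishes above the cutoff; the harder sub-threshold shape discussed in the abstract would replace this form and thereby shift the BBN photodisintegration bounds.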
Chin, Eu Gene; Ebesutani, Chad; Young, John
2013-06-01
The tripartite model of anxiety and depression has received strong support among child and adolescent populations. Clinical samples of children and adolescents in these studies, however, have usually been referred for treatment of anxiety and depression. This study investigated the fit of the tripartite model with a complicated sample of residential youths with externalizing problems. Structural Equation Modeling was used to test the tripartite model relationships between negative affect, positive affect, and mood symptoms. Multiple fit indices were used to provide a reliable and conservative evaluation of the model. As predicted, the tripartite model provided a good fit for symptoms of emotional disorders in this complicated sample of children and adolescents. Implications of these findings are discussed in terms of the utility of the tripartite model in understanding anxiety and depression in more diverse populations and recommendations for residential assessment.
Coarsening strategies for unstructured multigrid techniques with application to anisotropic problems
NASA Technical Reports Server (NTRS)
Morano, E.; Mavriplis, D. J.; Venkatakrishnan, V.
1995-01-01
Over the years, multigrid has been demonstrated as an efficient technique for solving inviscid flow problems. However, for viscous flows, convergence rates often degrade. This is generally due to the required use of stretched meshes (i.e., the aspect-ratio AR = delta y/delta x is much less than 1) in order to capture the boundary layer near the body. Usual techniques for generating a sequence of grids that produce proper convergence rates on isotropic meshes are not adequate for stretched meshes. This work focuses on the solution of Laplace's equation, discretized through a Galerkin finite-element formulation on unstructured stretched triangular meshes. A coarsening strategy is proposed and results are discussed.
Coarsening Strategies for Unstructured Multigrid Techniques with Application to Anisotropic Problems
NASA Technical Reports Server (NTRS)
Morano, E.; Mavriplis, D. J.; Venkatakrishnan, V.
1996-01-01
Over the years, multigrid has been demonstrated as an efficient technique for solving inviscid flow problems. However, for viscous flows, convergence rates often degrade. This is generally due to the required use of stretched meshes (i.e. the aspect-ratio AR = (delta)y/(delta)x much less than 1) in order to capture the boundary layer near the body. Usual techniques for generating a sequence of grids that produce proper convergence rates on isotropic meshes are not adequate for stretched meshes. This work focuses on the solution of Laplace's equation, discretized through a Galerkin finite-element formulation on unstructured stretched triangular meshes. A coarsening strategy is proposed and results are discussed.
Solving the Self-Interaction Problem in Kohn-Sham Density Functional Theory. Application to Atoms
Daene, M.; Gonis, A.; Nicholson, D. M.; ...
2014-10-14
Previously, we proposed a computational methodology that addresses the elimination of the self-interaction error from the Kohn–Sham formulation of the density functional theory. We demonstrated how the exchange potential can be obtained, and presented results of calculations for atomic systems up to Kr carried out within a Cartesian coordinate system. In this paper, we provide complete details of this self-interaction-free method formulated in spherical coordinates based on the explicit equidensity basis ansatz. We also prove analytically that derivatives obtained using this method satisfy the virial theorem for spherical orbitals, where the problem can be reduced to one dimension. We present the results of calculations of ground-state energies of atomic systems throughout the periodic table carried out within the exchange-only mode.
Pereira, Paulo J; Moshchalkov, Victor V; Chibotaru, Liviu F
2012-11-01
We present a method for finding the condensate distribution at the nucleation of superconductivity for arbitrary polygons. The method is based on conformal mapping of the analytical solution of the linearized Ginzburg-Landau problem for the disk and uses the superconducting gauge for the magnetic potential proposed earlier. As a demonstration of the method's accuracy, we calculate the distribution of the order parameter in regular polygons and compare the obtained solutions with available numerical results. As an example of an irregular polygon, we consider a deformed hexagon and show that its calculation with the proposed method requires the same level of computational effort as the regular ones. Finally, we extend the method to samples with arbitrary smooth boundaries. With this, we have performed simulations for an experimental sample, which show perfect agreement with experimental data.
Adaptive use of prior information in inverse problems: an application to neutron depth profiling
NASA Astrophysics Data System (ADS)
Levenson, Mark S.; Coakley, Kevin J.
2000-03-01
A flexible class of Bayesian models is proposed to solve linear inverse problems. The models generalize linear regularization methods such as Tikhonov regularization and are motivated by the ideas of the image restoration model of Johnson et al (1991 IEEE Trans. Pattern Anal. Machine Intell. 13 413-25). The models allow for the existence of sharp boundaries between regions of different intensities in the signal, as well as the incorporation of prior information on the locations of the boundaries. The use of the prior boundary information is adaptive to the data. The models are applied to data collected to study a multilayer diamond-like carbon film sample using a nondestructive testing procedure known as neutron depth profiling.
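The linear regularization methods that these Bayesian models generalize can be made concrete. Below is a minimal numpy sketch of classical (zeroth-order) Tikhonov regularization for a linear inverse problem; the Gaussian blur kernel, noise level and regularization weight are our own illustrative assumptions, not values from the paper.

```python
import numpy as np

def tikhonov_solve(A, b, lam, L=None):
    """Solve min_x ||Ax - b||^2 + lam * ||L x||^2 via the normal equations.

    With L = I this is zeroth-order Tikhonov regularization, the
    deterministic special case generalized by the Bayesian models above.
    """
    n = A.shape[1]
    if L is None:
        L = np.eye(n)
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

# Ill-posed toy deconvolution: Gaussian blur plus a little noise.
rng = np.random.default_rng(0)
n = 50
x_true = np.zeros(n)
x_true[15:35] = 1.0                                  # boxcar signal
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)  # blur matrix
b = A @ x_true + 1e-3 * rng.standard_normal(n)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]       # noise blows up
x_reg = tikhonov_solve(A, b, lam=1e-2)               # stable estimate
```

Even a tiny amount of noise ruins the unregularized solution because the blur matrix has nearly vanishing singular values; the penalty term suppresses that amplification, which is the behavior the adaptive Bayesian boundary models refine.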
NASA Astrophysics Data System (ADS)
von Rudorff, Guido Falk; Wehmeyer, Christoph; Sebastiani, Daniel
2014-06-01
We adapt a swarm-intelligence-based optimization method (the artificial bee colony algorithm, ABC) to enhance its parallel scaling properties and to improve the escaping behavior from deep local minima. Specifically, we apply the approach to the geometry optimization of Lennard-Jones clusters. We illustrate the performance and the scaling properties of the parallelization scheme for several system sizes (5-20 particles). Our main findings are specific recommendations for ranges of the parameters of the ABC algorithm which yield maximal performance for Lennard-Jones clusters and Morse clusters. The suggested parameter ranges for these different interaction potentials turn out to be very similar; thus, we believe that our reported values are fairly general for the ABC algorithm applied to chemical optimization problems.
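As a rough illustration of the kind of method being tuned, here is a minimal serial artificial bee colony sketch applied to a small Lennard-Jones cluster. All parameter values (colony size, trial limit, cycle count) and the 4-particle test case are our own illustrative choices, not the parameter recommendations of the paper.

```python
import numpy as np

def lj_energy(x):
    """Total Lennard-Jones energy; x is a flat array of 3D coordinates."""
    p = x.reshape(-1, 3)
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    r = d[np.triu_indices(len(p), k=1)]          # unique pair distances
    return float(np.sum(4.0 * (r**-12 - r**-6)))

def abc_minimize(f, dim, n_food=20, limit=30, cycles=200, bound=2.0, seed=0):
    """Minimal artificial bee colony: employed, onlooker and scout phases."""
    rng = np.random.default_rng(seed)
    foods = rng.uniform(-bound, bound, size=(n_food, dim))
    vals = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)
    best_x, best_v = None, np.inf

    def try_neighbour(i):
        k = int(rng.integers(n_food - 1))
        if k >= i:
            k += 1                               # random partner != i
        j = rng.integers(dim)                    # perturb one coordinate
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1.0, 1.0) * (foods[i, j] - foods[k, j])
        v = f(cand)
        if v < vals[i]:                          # greedy selection
            foods[i], vals[i], trials[i] = cand, v, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):                  # employed bees
            try_neighbour(i)
        fit = 1.0 / (1.0 + np.maximum(vals, 0.0)) + np.maximum(-vals, 0.0)
        for i in rng.choice(n_food, size=n_food, p=fit / fit.sum()):
            try_neighbour(i)                     # onlookers, fitness-weighted
        i_best = int(np.argmin(vals))
        if vals[i_best] < best_v:                # remember the global best
            best_v, best_x = vals[i_best], foods[i_best].copy()
        for i in range(n_food):                  # scouts abandon stale sources
            if trials[i] > limit:
                foods[i] = rng.uniform(-bound, bound, dim)
                vals[i] = f(foods[i])
                trials[i] = 0
    return best_x, best_v

x_best, e_best = abc_minimize(lj_energy, dim=4 * 3)
```

The `limit` counter driving the scout phase is exactly the kind of parameter whose ranges the paper tunes; the escaping behavior from deep local minima depends on how aggressively stale food sources are abandoned.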
Robustness in linear quadratic feedback design with application to an aircraft control problem
NASA Technical Reports Server (NTRS)
Patel, R. V.; Sridhar, B.; Toda, M.
1977-01-01
Some new results concerning robustness and asymptotic properties of error bounds of a linear quadratic feedback design are applied to an aircraft control problem. An autopilot for the flare control of the Augmentor Wing Jet STOL Research Aircraft (AWJSRA) is designed based on Linear Quadratic (LQ) theory and the results developed in this paper. The variation of the error bounds to changes in the weighting matrices in the LQ design is studied by computer simulations, and appropriate weighting matrices are chosen to obtain a reasonable error bound for variations in the system matrix and at the same time meet the practical constraints for the flare maneuver of the AWJSRA. Results from the computer simulation of a satisfactory autopilot design for the flare control of the AWJSRA are presented.
Numerical zoom for multiscale problems with an application to nuclear waste disposal
Apoung Kamga, Jean-Baptiste (E-mail: apoung@ann.jussieu.fr); Pironneau, Olivier (E-mail: Olivier.Pironneau@upmc.fr)
2007-05-20
We analyse here a computational technique, and error estimates, for the numerical solution of problems with multiple scales in which the small scale is confined to geometrically small regions, such as jumps of coefficients on curves and surfaces, or complex variations of coefficients in small regions where numerical zooms can be made. The method is an adaptation of the Hilbert Subspace Decomposition Method studied by the second author in a different context, so the method is restated here together with all known results. Combined with the layer decomposition of [S. Delpino, O. Pironneau, Asymptotic analysis and layer decomposition for the Couplex exercise, in: Alain Bourgeat, Michel Kern (Eds.), Computational Geosciences, vol. 8, No. 2, Kluwer Academic Publishers, 2004, pp. 149-162], the method is applied to the numerical assessment of a nuclear waste repository site.
Chuvilin, Andrey N.; Smirnov, Igor P.; Mosina, Alena G.; Varizhuk, Anna M.; Pozmogova, Galina E.
2016-01-01
A common problem of the preparation of hexachlorofluorescein labeled oligonucleotides is the transformation of the fluorophore to an arylacridine derivative under standard ammonolysis conditions. We show here that the arylacridine byproduct with distinct optical characteristics cannot be efficiently separated from the major product by HPLC or electrophoretic methods, which hampers precise physicochemical experiments with the labeled oligonucleotides. Studies of the transformation mechanism allowed us to select optimal conditions for avoiding the side reaction. The novel method for the post-synthetic deblocking of hexachlorofluorescein-labeled oligodeoxyribonucleotides described in this paper prevents the formation of the arylacridine derivative, enhances the yield of target oligomers, and allows them to be proper real-time PCR probes. PMID:27861573
Applications of Transport/Reaction Codes to Problems in Cell Modeling
MEANS, SHAWN A.; RINTOUL, MARK DANIEL; SHADID, JOHN N.
2001-11-01
We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.
NASA Astrophysics Data System (ADS)
Fillman, Jake
2017-03-01
We study Jacobi matrices that are uniformly approximated by periodic operators. We show that if the rate of approximation is sufficiently rapid, then the associated quantum dynamics are ballistic in a rather strong sense; namely, the (normalized) Heisenberg evolution of the position operator converges strongly to a self-adjoint operator that is injective on the space of absolutely summable sequences. In particular, this means that all transport exponents corresponding to well-localized initial states are equal to one. Our result may be applied to a class of quantum many-body problems. Specifically, we establish a lower bound on the Lieb-Robinson velocity for an isotropic XY spin chain on the integers with limit-periodic couplings.
NASA Astrophysics Data System (ADS)
Inamoto, Tsutomu; Tamaki, Hisashi; Murao, Hajime
In this paper, we present a modified dynamic programming (DP) method. The method is basically the same as the value iteration method (VI), a representative DP method, except that the system's state transition model is preprocessed to reduce its complexity; we call it dynamic programming on reduced models (DPRM). The reduction is achieved by explicitly considering the causes of the system's probabilistic behavior and then cutting off causes with low occurrence probabilities. In computational illustrations, VI, DPRM, and the real-time Q-learning method (RTQ) are applied to elevator operation problems, which can be modeled using Markov decision processes. The results show that DPRM can compute quasi-optimal value functions, which yield more effective allocations of elevators than the value functions computed by RTQ, in less computational time than VI. This characteristic is notable when the traffic pattern is complicated.
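The core idea, value iteration run on a transition model whose low-probability entries have been cut off and renormalized, can be sketched as follows. The 3-state, 2-action MDP and the cutoff value are invented for illustration; this is not the paper's elevator model.

```python
import numpy as np

def reduce_model(P, eps):
    """Model-reduction preprocessing: drop transition probabilities below
    eps (an assumed cutoff) and renormalize each row."""
    Q = np.where(P < eps, 0.0, P)
    return Q / Q.sum(axis=-1, keepdims=True)

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Standard value iteration for a finite MDP.
    P[a, s, s'] are transition probabilities, R[a, s] expected rewards."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * (P @ V)          # action values Q[a, s]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy 3-state, 2-action MDP (all numbers illustrative).
P = np.array([[[0.88, 0.10, 0.02],      # action 0
               [0.05, 0.90, 0.05],
               [0.02, 0.08, 0.90]],
              [[0.70, 0.28, 0.02],      # action 1
               [0.02, 0.18, 0.80],
               [0.10, 0.10, 0.80]]])
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.5, 1.0]])

V_full, pi_full = value_iteration(P, R)
V_red, pi_red = value_iteration(reduce_model(P, eps=0.05), R)
```

On this toy model the reduced transition matrix is sparser but yields the same greedy policy and a value function close to the full one, which is the quasi-optimality trade-off DPRM exploits.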
Application of local Lyapunov exponents to maneuver design and navigation in the three-body problem
NASA Technical Reports Server (NTRS)
Anderson, Rodney L.; Lo, Martin W.; Born, George H.
2003-01-01
Dynamical systems theory has recently been employed to design trajectories within the three-body problem for several missions. This research applied one stability technique, the calculation of local Lyapunov exponents, to such trajectories. Local Lyapunov exponents give an indication of the effects that perturbations or maneuvers will have on trajectories over a specified time. A numerical comparison was first made between the local Lyapunov exponents and the distances that random perturbations traveled from a nominal trajectory, and the local Lyapunov exponents were found to correspond well with the perturbations that caused the greatest deviation from the nominal. This would allow them to be used as an indicator of the points where it is important to reduce navigation uncertainties.
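For readers unfamiliar with the technique, a local (finite-time) Lyapunov exponent can be computed by propagating the state transition matrix along a trajectory and taking the logarithm of its largest singular value over the horizon. The sketch below is our own illustration on a linear saddle flow, whose exact largest exponent is known, not on the mission trajectories studied here.

```python
import numpy as np

def local_lyapunov(f, jac, x0, T, dt=1e-3):
    """Largest local (finite-time) Lyapunov exponent over horizon T.

    Integrates dx/dt = f(x) together with the state transition matrix,
    dPhi/dt = J(x(t)) Phi, using RK4, then returns log(sigma_max(Phi))/T.
    """
    x = np.asarray(x0, dtype=float)
    Phi = np.eye(len(x))

    def rhs(x, Phi):
        return f(x), jac(x) @ Phi

    for _ in range(int(round(T / dt))):
        kx1, kP1 = rhs(x, Phi)
        kx2, kP2 = rhs(x + 0.5 * dt * kx1, Phi + 0.5 * dt * kP1)
        kx3, kP3 = rhs(x + 0.5 * dt * kx2, Phi + 0.5 * dt * kP2)
        kx4, kP4 = rhs(x + dt * kx3, Phi + dt * kP3)
        x = x + dt / 6 * (kx1 + 2 * kx2 + 2 * kx3 + kx4)
        Phi = Phi + dt / 6 * (kP1 + 2 * kP2 + 2 * kP3 + kP4)
    # log of the largest stretch factor, averaged over the horizon
    return float(np.log(np.linalg.norm(Phi, 2)) / T)

# Saddle flow dx/dt = (a*x, -a*y): the exact largest exponent is a.
a = 0.7
lle = local_lyapunov(f=lambda x: np.array([a * x[0], -a * x[1]]),
                     jac=lambda x: np.array([[a, 0.0], [0.0, -a]]),
                     x0=[1.0, 1.0], T=2.0)
```

Points along a trajectory where this quantity is large are where small maneuver or navigation errors grow fastest, matching the use described in the abstract.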
Poulin, Vivian; Serpico, Pasquale Dario
2015-03-06
The standard theory of electromagnetic cascades onto a photon background predicts a quasiuniversal shape for the resulting nonthermal photon spectrum. This has been applied to very disparate fields, including nonthermal big bang nucleosynthesis (BBN). However, once the energy of the injected photons falls below the pair-production threshold the spectral shape is much harder, a fact that has been overlooked in past literature. This loophole may have important phenomenological consequences, since it generically alters the BBN bounds on nonthermal relics; for instance, it allows us to reopen the possibility of purely electromagnetic solutions to the so-called "cosmological lithium problem," which were thought to be excluded by other cosmological constraints. We show this with a proof-of-principle example and a simple particle physics model, compared with previous literature.
NASA Astrophysics Data System (ADS)
Calzetta, E.; Hu, B. L.
1987-01-01
We discuss the generalization to curved spacetime of a path-integral formalism of quantum field theory based on the sum over paths first going forward in time in the presence of one external source from an in vacuum to a state defined on a hypersurface of constant time in the future, and then backwards in time in the presence of a different source to the same in vacuum. This closed-time-path formalism which generalizes the conventional method based on in-out vacuum persistence amplitudes yields real and causal effective actions, field equations, and expectation values. We apply this method to two problems in semiclassical cosmology. First we study the back reaction of particle production in a radiation-filled Bianchi type-I universe with a conformal scalar field. Unlike the in-out formalism which yields complex geometries the real and causal effective action here yields equations for real effective geometries, with more readily interpretable results. It also provides a clear identification of particle production as a dissipative process in semiclassical theories. In the second problem we calculate the vacuum expectation value of the stress-energy tensor for a nonconformal massive λφ4 theory in a Robertson-Walker universe. This study serves to illustrate the use of Feynman diagrams and higher-loop calculations in this formalism. It also demonstrates the economy of this method in the calculation of expectation values over the mode-sum Bogolubov transformation methods ordinarily applied to matrix elements calculated in the conventional in-out approach. The capability of the closed-time-path formalism of dealing with Feynman, causal, and correlation functions on the same footing makes it a potentially powerful and versatile technique for treating nonequilibrium statistical properties of dynamical systems as in early-Universe quantum processes.
Yang, W.; Wu, H.; Cao, L.
2012-07-01
More and more MOX fuel has been used all over the world in the past several decades. Compared with UO{sub 2} fuel, it has some new features. For example, the neutron spectrum is harder, and more resonance interference effects arise within the resonance energy range because the MOX fuel contains more resonant nuclides. In this paper, the wavelet scaling function expansion method is applied to study the resonance behavior of plutonium isotopes within MOX fuel. The wavelet scaling function expansion continuous-energy self-shielding method was developed recently; it has been validated and verified by comparison to Monte Carlo calculations. In this method, continuous-energy cross-sections are utilized within the resonance energy range, which means that it is capable of solving problems with serious resonance interference effects without iterative calculations. Therefore, this method is naturally suited to the MOX fuel resonance calculation problem. Furthermore, plutonium isotopes have fierce oscillations of the total cross-section within the thermal energy range, especially {sup 240}Pu and {sup 242}Pu. To take the thermal resonance effect of plutonium isotopes into consideration, the wavelet scaling function expansion continuous-energy resonance calculation code WAVERESON is enhanced by applying the free gas scattering kernel to obtain the continuous-energy scattering source within the thermal energy range (2.1 eV to 4.0 eV), in contrast to the resonance energy range, in which the elastic scattering kernel is utilized. Finally, all of the calculation results of WAVERESON are compared with MCNP calculations. (authors)
NASA Astrophysics Data System (ADS)
Mitchell, Lawrence; Müller, Eike Hermann
2016-12-01
The implementation of efficient multigrid preconditioners for elliptic partial differential equations (PDEs) is a challenge due to the complexity of the resulting algorithms and corresponding computer code. For sophisticated (mixed) finite element discretisations on unstructured grids an efficient implementation can be very time consuming and requires the programmer to have in-depth knowledge of the mathematical theory, parallel computing and optimisation techniques on manycore CPUs. In this paper we show how the development of bespoke multigrid preconditioners can be simplified significantly by using a framework which allows the expression of each component of the algorithm at the correct abstraction level. Our approach (1) allows the expression of the finite element problem in a language which is close to the mathematical formulation of the problem, (2) guarantees the automatic generation and efficient execution of parallel optimised low-level computer code and (3) is flexible enough to support different abstraction levels and give the programmer control over details of the preconditioner. We use the composable abstractions of the Firedrake/PyOP2 package to demonstrate the efficiency of this approach for the solution of strongly anisotropic PDEs in atmospheric modelling. The weak formulation of the PDE is expressed in Unified Form Language (UFL) and the lower PyOP2 abstraction layer allows the manual design of computational kernels for a bespoke geometric multigrid preconditioner. We compare the performance of this preconditioner to a single-level method and hypre's BoomerAMG algorithm. The Firedrake/PyOP2 code is inherently parallel and we present a detailed performance analysis for a single node (24 cores) on the ARCHER supercomputer. Our implementation utilises a significant fraction of the available memory bandwidth and shows very good weak scaling on up to 6,144 compute cores.
Remote sensing application for identifying wetland sites on Cyprus: problems and prospects
NASA Astrophysics Data System (ADS)
Markogianni, Vassilik; Tzirkalli, Elli; Gücel, Salih; Dimitriou, Elias; Zogaris, Stamatis
2014-08-01
Wetland features in seasonally semi-arid islands pose particular difficulties in identification, inventory and conservation assessment. Our survey presents an application that utilizes images of a newly launched sensor, Landsat 8, to rapidly identify inland water bodies and produce a screening-level island-wide inventory of wetlands for the first time in Cyprus. The method treats all lentic water bodies (artificial and natural) and areas holding semi-aquatic vegetation as wetland sites. The results show that 179 sites are delineated by the remote sensing application, and when this is supplemented by expert-guided identification and ground surveys during favourable wet-season conditions the total number of inventoried wetland sites is 315. The number of wetland sites is surprisingly large since it does not include micro-wetlands (under 2000 m2 or 0.2 ha) or widespread narrow lotic and riparian stream reaches. In Cyprus, a number of different wetland types occur, often in temporary or ephemerally flooded conditions, and they are usually of very small areal extent. Many wetlands are artificial or semi-artificial water bodies, and numerous natural small wetland features are often degraded by anthropogenic changes or exist as remnant patches and are therefore heavily modified compared to their original natural state. The study shows that there is an urgent need for integrated and multidisciplinary study and monitoring of wetland cover due to climate change effects and/or anthropogenic interventions. Small wetlands are particularly vulnerable, while many artificial wetlands are not managed for biodiversity values. The remote sensing and GIS applications are efficient tools for this initial screening-level inventory. The need for baseline inventory information collection in support of wetland conservation is multi-scalar and requires an adaptive protocol to guide effective conservation planning.
Delémont, O; Martin, J-C
2007-04-11
Fire modelling has been gaining more and more interest in the community of forensic fire investigation. Despite an attractiveness that is partially justified, the application of fire models in that field of investigation raises some difficulties. Therefore, understanding the basic principles of the two main categories of fire models, and knowing their effective potential and their limitations, is crucial for a valid and reliable application in forensic science. The present article gives an overview of the principles and basics that characterise the two kinds of fire models: zone models and field models. Whereas the former are developed on the basis of mathematical relations from empirical observations, such as the stratification of fluid zones, and give a relatively broad view of mass and energy exchanges in an enclosure, the latter are based on the fundamentals of fluid mechanics and represent the application of Computational Fluid Dynamics (CFD) to fire scenarios. Consequently, the data obtained from these two categories of fire models differ in nature, quality and quantity. First used in a fire safety perspective, fire models are not easily applied to assess parts of a forensic fire investigation. A suggestion is proposed for the role of fire modelling in this domain of competence: a new tool for the evaluation of alternative hypotheses of origin and cause by considering the dynamic development of the fire. An example of a real case where such an approach was followed is explained, and the comparison of the obtained results with traces revealed during the on-site investigation is discussed.
The Development and Application of Novel Methods for the Solution of EMP Shielding Problems.
1981-02-01
Emmert-Streib, Frank; Dehmer, Matthias; Haibe-Kains, Benjamin
2014-01-01
In recent years gene regulatory networks (GRNs) have attracted a lot of interest and many methods have been introduced for their statistical inference from gene expression data. However, despite their popularity, GRNs are widely misunderstood. For this reason, we provide in this paper a general discussion and perspective of gene regulatory networks. Specifically, we discuss their meaning, the consistency among different network inference methods, ensemble methods, the assessment of GRNs, the estimated number of existing GRNs and their usage in different application domains. Furthermore, we discuss open questions and necessary steps in order to utilize gene regulatory networks in a clinical context and for personalized medicine. PMID:25364745
Application of Micro-XRF for Nuclear Materials Characterization and Problem Solving
Worley, Christopher G.; Tandon, Lav; Martinez, Patrick T.; Decker, Diana L.; Schwartz, Daniel S.
2012-08-02
Micro-X-ray fluorescence (MXRF) has been used for more than 20 years, but to date it has been underutilized for the spatially resolved elemental characterization of nuclear materials (NM). Scanning electron microscopy (SEM) with EDX is much more common for NM characterization at the micro scale, but MXRF fills the gap at larger scales, from tens of microns up to cm{sup 2}. We present four interesting NM applications that demonstrate the unique value of MXRF for plutonium work. Although SEM has much higher resolution, MXRF is clearly better for these larger-scale samples, especially non-conducting ones. MXRF is useful for quickly identifying insoluble particles in Pu/Np oxide; it was vital to locating Pu particles on HEPA filters over cm{sup 2} areas, which were then extracted for SEM morphology and particle size distribution analysis; and it is perfect for surface swipes, which are far too large for practical SEM imaging and whose loose residue would contaminate the SEM vacuum chamber. MXRF imaging of ER plutonium metal warrants further studies to explore the elemental heterogeneity of the metal.
The role of soil in NBT applications to landmine detection problem
Obhodas, Jasmina; Sudac, Davorin; Nad, Karlo; Valkovic, Vlado; Nebbia, Giancarlo; Viesti, Giuseppe
2003-08-26
Long-term observations of soil water content, as well as determination of the physical and chemical properties of different types of soils in Croatia, were made in order to provide the necessary background information for landmine explosive detection. Soil water content is the key attribute of soil as a background in neutron backscattering technique (NBT) landmine detection. If the critical value of the soil water content is reached, the detection of landmine explosives is not possible; it is recommended that the soil moisture content for NBT application not exceed 0.1 kg.kg-1 [1]. Nineteen representative samples of different soil types from different parts of Croatia were collected in order to establish a soil bank with the necessary physical and chemical properties determined for each type of soil. In addition, soil water content was measured on a daily and weekly basis at several locations in Croatia. This procedure also included daily soil moisture measurements in a test field made of different types of soils from several locations in Croatia, done in order to evaluate the behavior of different types of soils under the same weather conditions.
Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems
NASA Technical Reports Server (NTRS)
Casper, Jay; Dorrepaal, J. Mark
1990-01-01
The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equation of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.
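The one-dimensional building block of such schemes, adaptive stencil selection followed by high-order reconstruction from cell averages, can be sketched as follows. This is a third-order 1D sketch using the standard shifted-stencil coefficients; the paper's two-dimensional cross-gradient reconstruction and Roe solver are beyond this illustration.

```python
import numpy as np

# Standard third-order reconstruction coefficients: with left shift r, the
# value at x_{i+1/2} is sum_j C[r][j]*ubar[i-r+j] over stencil {i-r,...,i-r+2}.
C = {0: (1/3, 5/6, -1/6), 1: (-1/6, 5/6, 1/3), 2: (1/3, -7/6, 11/6)}

def eno3_interface(ubar, i):
    """Third-order ENO value at the right interface x_{i+1/2} of cell i.

    The stencil grows one cell at a time toward the side whose undivided
    difference is smaller -- the adaptive choice that keeps the
    reconstruction essentially non-oscillatory near discontinuities.
    """
    left = i                                  # stencil is {left,...,left+k-1}
    for k in (1, 2):
        d_left = np.diff(ubar[left - 1: left + k], n=k)[0]    # extend left
        d_right = np.diff(ubar[left: left + k + 1], n=k)[0]   # extend right
        if abs(d_left) < abs(d_right):
            left -= 1
    r = i - left
    return float(np.dot(C[r], ubar[left: left + 3]))

h = 0.1
x = h * np.arange(12)                         # cell centres
ubar = x**2 + h**2 / 12                       # exact cell averages of x^2
smooth = eno3_interface(ubar, 5)              # exact for quadratics: (0.55)^2
step = eno3_interface(np.array([0.0, 0, 0, 0, 1, 1, 1, 1]), 2)
```

On the smooth data the reconstruction is exact (third order is exact for quadratic cell averages), while for the step data the stencil selection refuses to cross the jump, so no spurious oscillation is introduced.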
NASA Astrophysics Data System (ADS)
Yang, H.
2015-12-01
used to create a multi-parameter and multi-physics ensemble. The ensemble forecast system is implemented operationally for San Diego Gas & Electric Company to improve system operations.
Liang, X B; Si, J
2001-01-01
This paper investigates the existence, uniqueness, and global exponential stability (GES) of the equilibrium point for a large class of neural networks with globally Lipschitz continuous activations including the widely used sigmoidal activations and the piecewise linear activations. The provided sufficient condition for GES is mild and some conditions easily examined in practice are also presented. The GES of neural networks in the case of locally Lipschitz continuous activations is also obtained under an appropriate condition. The analysis results given in the paper extend substantially the existing relevant stability results in the literature, and therefore expand significantly the application range of neural networks in solving optimization problems. As a demonstration, we apply the obtained analysis results to the design of a recurrent neural network (RNN) for solving the linear variational inequality problem (VIP) defined on any nonempty and closed box set, which includes the box constrained quadratic programming and the linear complementarity problem as the special cases. It can be inferred that the linear VIP has a unique solution for the class of Lyapunov diagonally stable matrices, and that the synthesized RNN is globally exponentially convergent to the unique solution. Some illustrative simulation examples are also given.
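A discrete-time sketch of a projection-type recurrent network of this general class, applied to the linear VIP on a box, is given below. The dynamics, step sizes and the 2-by-2 example are our own generic illustration with invented numbers, not the paper's exact synthesis.

```python
import numpy as np

def vip_rnn(M, q, lo, hi, alpha=0.2, dt=0.1, steps=2000):
    """Euler simulation of a projection-type recurrent network for the
    linear VIP on a box: dx/dt = P_box(x - alpha*(M x + q)) - x.

    An equilibrium x* satisfies x* = P_box(x* - alpha*(M x* + q)), the
    fixed-point characterization of the box-constrained VI solution.
    """
    x = np.zeros_like(q, dtype=float)
    for _ in range(steps):
        x = x + dt * (np.clip(x - alpha * (M @ x + q), lo, hi) - x)
    return x

M = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
q = np.array([-2.0, -4.0])
x_star = vip_rnn(M, q, lo=0.0, hi=1.0)   # converges to (1/3, 1)
```

Since M here is symmetric positive definite (hence Lyapunov diagonally stable), the VIP has a unique solution and the simulated trajectory converges to it exponentially, illustrating the global exponential convergence the paper establishes for this matrix class.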
NASA Astrophysics Data System (ADS)
Sano, Jennifer; Ganguly, Jibamitra; Hervig, Richard; Dohmen, Ralf; Zhang, Xiaoyu
2011-08-01
We have determined the Nd3+ diffusion kinetics in natural enstatite crystals as a function of temperature, f(O2) and crystallographic direction at 1 bar pressure and applied these data to several terrestrial and planetary problems. The diffusion is found to be anisotropic, with the diffusion parallel to the c-axial direction being significantly greater than that parallel to the a- and b-axes. Also, D(//a) is likely to be somewhat greater than D(//b). Diffusion experiments parallel to the b-axial direction as a function of f(O2) do not show a significant dependence of D(Nd3+) on f(O2) within the range defined by the IW buffer and 1.5 log units above the WM buffer. The observed diffusion anisotropy and weak f(O2) effect on D(Nd3+) may be understood by considering the crystal structure of enstatite and the likely diffusion pathways. Using the experimental data for D(Nd3+), we calculated the closure temperature of the Sm-Nd geochronological system in enstatite during cooling as a function of cooling rate, grain size and geometry, initial (peak) temperature and diffusion direction. We have also evaluated the approximate domain of validity of closure temperatures calculated on the basis of an infinite plane sheet model for finite plane sheets showing anisotropic diffusion. These results provide a quantitative framework for the interpretation of Sm-Nd mineral ages of orthopyroxene in planetary samples. We discuss the implications of our experimental data for the problems of melting and subsolidus cooling of mantle rocks, and the resetting of Sm-Nd mineral ages in mesosiderites. It is found that a cooling model proposed earlier [Ganguly J., Yang H., Ghose S., 1994. Thermal history of mesosiderites: Quantitative constraints from compositional zoning and Fe-Mg ordering in orthopyroxene. Geochim. Cosmochim. Acta 58, 2711-2723] could lead to the observed ~90 Ma difference between the U-Pb age and the Sm-Nd mineral age for mesosiderites, thus obviating the need for a model of
Applications of fractured continuum model to enhanced geothermal system heat extraction problems.
Kalinina, Elena A; Klise, Katherine A; McKenna, Sean A; Hadgu, Teklu; Lowry, Thomas S
2014-01-01
This paper describes applications of the fractured continuum model to different enhanced geothermal system reservoir conditions. The capability of the fractured continuum model to generate the fracture characteristics expected in enhanced geothermal system reservoir environments is demonstrated for single and multiple sets of fractures. Fracture characteristics are defined by fracture strike, dip, spacing, and aperture. The paper demonstrates how the fractured continuum model can be extended to represent continuous fractured features, such as long fractures, and conditions in which the fracture density varies across different depth intervals. Simulations of heat transport using different fracture settings were compared with regard to their heat-extraction effectiveness. The best heat extraction was obtained when the fractures were horizontal. A conventional heat-extraction scheme with vertical wells was compared to an alternative scheme with horizontal wells. Heat extraction with the horizontal wells was significantly better than with the vertical wells when the injector was at the bottom.
Application of remote sensing to selected problems within the state of California
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator); Benson, A. S.; Estes, J. E.; Johnson, C.
1981-01-01
Specific case studies undertaken to demonstrate the usefulness of remote sensing technology to resource managers in California are highlighted. Applications discussed include the mapping and quantification of wildland fire fuels in Mendocino and Shasta Counties as well as in the Central Valley; the development of a digital spectral/terrain data set for Colusa County; the Forsythe Planning Experiment to maximize the usefulness of inputs from LANDSAT and geographic information systems to county planning in Mendocino County; the development of a digital data bank for Big Basin State Park in Santa Cruz County; the detection of salinity-related cotton canopy reflectance differences in the Central Valley; and the surveying of avocado acreage and that of other fruit and nut crops in Southern California. Special studies include the interpretability of high-altitude, large-format photography of forested areas for coordinated resource planning, using U-2 photographs of the NASA Bucks Lake Forestry test site in the Plumas National Forest in the Sierra Nevada Mountains.
Textile-Based Electronic Components for Energy Applications: Principles, Problems, and Perspective
Kaushik, Vishakha; Lee, Jaehong; Hong, Juree; Lee, Seulah; Lee, Sanggeun; Seo, Jungmok; Mahata, Chandreswar; Lee, Taeyoon
2015-01-01
Textile-based electronic components have gained interest in the fields of science and technology. Recent developments in nanotechnology have enabled the integration of electronic components into textiles while retaining desirable characteristics such as flexibility, strength, and conductivity. Various materials were investigated in detail to obtain current conductive textile technology, and the integration of electronic components into these textiles shows great promise for common everyday applications. The harvest and storage of energy in textile electronics is a challenge that requires further attention in order to enable complete adoption of this technology in practical implementations. This review focuses on the various conductive textiles, their methods of preparation, and textile-based electronic components. We also focus on fabrication and the function of textile-based energy harvesting and storage devices, discuss their fundamental limitations, and suggest new areas of study. PMID:28347078
Hesitant fuzzy soft sets with application in multicriteria group decision making problems.
Wang, Jian-qiang; Li, Xin-E; Chen, Xiao-hong
2015-01-01
Soft sets have been regarded as a useful mathematical tool for dealing with uncertainty. In recent years, many scholars have shown an intense interest in soft sets and have extended standard soft sets to intuitionistic fuzzy soft sets, interval-valued fuzzy soft sets, and generalized fuzzy soft sets. In this paper, hesitant fuzzy soft sets are defined by combining fuzzy soft sets with hesitant fuzzy sets, and some operations on hesitant fuzzy soft sets based on the Archimedean t-norm and Archimedean t-conorm are defined. In addition, four aggregation operators, the HFSWA, HFSWG, GHFSWA, and GHFSWG operators, are given. Based on these operators, a multicriteria group decision making approach with hesitant fuzzy soft sets is proposed. To demonstrate its accuracy and applicability, this approach is finally applied to a numerical example.
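As a rough illustration of how such an aggregation operator works, the sketch below implements the hesitant fuzzy weighted average in its common algebraic (t-conorm based) form: every combination of membership degrees from the hesitant elements is aggregated as 1 - ∏(1-γᵢ)^wᵢ. The element values and weights are invented for illustration, and the paper's HFSWA operator may differ in detail.

```python
from itertools import product

def hfswa(elements, weights):
    """Hesitant fuzzy weighted average (algebraic t-conorm form).

    elements: list of hesitant fuzzy elements, each a list of
              membership degrees in [0, 1].
    weights:  list of weights summing to 1.
    Returns the aggregated hesitant fuzzy element (sorted list).
    """
    result = set()
    for combo in product(*elements):          # one degree from each element
        val = 1.0
        for gamma, w in zip(combo, weights):
            val *= (1.0 - gamma) ** w
        result.add(round(1.0 - val, 6))
    return sorted(result)

h1 = [0.2, 0.4]   # hesitant degrees for criterion 1 (example values)
h2 = [0.5]        # hesitant degrees for criterion 2 (example values)
agg = hfswa([h1, h2], [0.5, 0.5])
print(agg)
```

With equal weights, each aggregated degree lies between the contributing degrees' complements, and the result remains a valid hesitant fuzzy element.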
NASA Astrophysics Data System (ADS)
Cobos, Agustín C.; Poma, Ana L.; Alvarez, Guillermo D.; Sanz, Darío E.
2016-10-01
We introduce an alternative method to calculate the steady-state solution of the angular photon flux by numerically evolving the time-dependent Boltzmann transport equation (BTE). After a proper discretization, the transport equation was converted into an ordinary system of differential equations that can be iterated as a weighted Richardson algorithm. In this approach the time variable regulates the iteration process, and the convergence criterion is based on physical parameters. Positivity and convergence were assessed from first principles, and a modified Courant-Friedrichs-Lewy condition was devised to guarantee convergence. The Penelope Monte Carlo method was used to test the convergence and accuracy of our approach for different phase-space discretizations. Benchmarking was performed by calculating total fluence and photon spectra in different one-dimensional geometries irradiated with 60Co and 6 MV photon beams, and radiological applications were devised.
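The pseudo-time iteration idea can be illustrated on a toy 1-D analogue (an assumed advection-absorption model, not the authors' BTE discretization): march du/dt + a du/dx = q - s·u forward with an upwind scheme and a CFL-limited step until a residual-based convergence criterion is met, which yields the steady-state solution. All coefficients here are illustrative.

```python
# Pseudo-time marching to the steady state of a 1-D advection-
# absorption equation, upwind in space. The step size obeys a
# CFL-style bound so the explicit iteration stays positive and stable.
a, s, q = 1.0, 0.5, 1.0            # speed, absorption, source (assumed)
nx, L = 50, 1.0
dx = L / nx
dt = 0.9 * dx / (a + s * dx)       # CFL-like stability bound
u = [0.0] * (nx + 1)               # u[0] = 0: no inflow at the boundary
for it in range(100000):
    new = u[:]
    maxdiff = 0.0
    for i in range(1, nx + 1):
        rhs = q - s * u[i] - a * (u[i] - u[i - 1]) / dx
        new[i] = u[i] + dt * rhs
        maxdiff = max(maxdiff, abs(new[i] - u[i]))
    u = new
    if maxdiff < 1e-12:            # convergence criterion on the residual
        break
print(it, u[-1])
```

The converged profile approaches the analytic steady state u(x) = (q/s)(1 - exp(-s x / a)), so u(1) should be close to 2(1 - e^-0.5) ≈ 0.787 up to discretization error.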
Determination of the zincate diffusion coefficient and its application to alkaline battery problems
NASA Technical Reports Server (NTRS)
May, C. E.; Kautz, Harold E.
1978-01-01
The diffusion coefficient for the zincate ion at 24 C was found to be 9.9 x 10^-7 cm^2/sec (+ or - 30 percent) in 45 percent potassium hydroxide and 1.4 x 10^-7 cm^2/sec (+ or - 25 percent) in 40 percent sodium hydroxide. Comparison of these values with literature values at different potassium hydroxide concentrations shows that the Stokes-Einstein equation is obeyed. The diffusion coefficient is characteristic of the zincate ion (not the cation) and independent of its concentration. Calculations with the measured diffusion coefficient show that the zinc concentration in an alkaline zincate half cell becomes uniform throughout in tens of hours by diffusion alone. Diffusion equations applicable to finite-size chambers are derived. Details and discussion of the experimental method are also given.
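A quick back-of-the-envelope check of the "tens of hours" claim uses the characteristic diffusion time t ~ L^2/D. The diffusion coefficient is the measured KOH value quoted above; the 0.2 cm chamber dimension is an assumed illustrative value, not a figure from the report.

```python
# Order-of-magnitude estimate of the time for zinc concentration to
# homogenize by diffusion alone, via t ~ L**2 / D.
D_koh = 9.9e-7          # cm^2/s, zincate in 45% KOH (from the abstract)
L = 0.2                 # cm, assumed characteristic chamber dimension
t_hours = L**2 / D_koh / 3600.0
print(round(t_hours, 1))
```

This comes out at roughly eleven hours, consistent with the abstract's "tens of hours" for chambers of this size and larger.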
Some Unsolved Problems, Questions, and Applications of the Brightsen Nucleon Cluster Model
NASA Astrophysics Data System (ADS)
Smarandache, Florentin
2010-10-01
The Brightsen Model stands in opposition to the Standard Model; it was built on John Wheeler's Resonating Group Structure Model and on Linus Pauling's Close-Packed Spheron Model. Among the Brightsen Model's predictions and applications we cite the fact that it derives the average number of prompt neutrons per fission event, provides a theoretical way of understanding low-temperature/low-energy reactions and of approaching artificially induced fission, and predicts that forces within nucleon clusters are stronger than forces between such clusters within isotopes; it also predicts unmatter entities inside nuclei that result from the stable and neutral union of matter and antimatter, and so on. These predictions remain to be tested in the future at the new CERN laboratory.
NASA Astrophysics Data System (ADS)
Guo, Weian; Li, Wuzhao; Zhang, Qun; Wang, Lei; Wu, Qidi; Ren, Hongliang
2014-11-01
In evolutionary algorithms, elites are crucial for maintaining good features in solutions. However, too many elites can make the evolutionary process stagnate without enhancing performance. This article employs particle swarm optimization (PSO) and biogeography-based optimization (BBO) to propose a hybrid algorithm, termed biogeography-based particle swarm optimization (BPSO), which makes a large number of elites effective in searching for optima. In this algorithm, the whole population is split into several subgroups; BBO is employed to search within each subgroup, and PSO performs the global search. Since not all of the population is used in PSO, this structure overcomes the premature convergence of the original PSO. Time-complexity analysis shows that the novel algorithm does not increase time consumption. Fourteen numerical benchmarks and four engineering problems with constraints are used to test the BPSO. To better deal with constraints, a fuzzy strategy for the number of elites is investigated. The simulation results validate the feasibility and effectiveness of the proposed algorithm.
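A minimal sketch of the hybrid structure described above, under assumed parameter values and a sphere benchmark: a BBO-style migration shares dimensions among ranked solutions inside each subgroup, and a standard PSO velocity update performs the global search. This is not the authors' implementation; the migration rates and coefficients are simplified illustrations.

```python
import random

def sphere(x):                       # benchmark objective to minimize
    return sum(v * v for v in x)

random.seed(1)
dim, n_groups, group_size = 5, 4, 5
pop = [[random.uniform(-5, 5) for _ in range(dim)]
       for _ in range(n_groups * group_size)]
vel = [[0.0] * dim for _ in pop]
pbest = [p[:] for p in pop]
gbest = min(pop, key=sphere)[:]

for gen in range(200):
    # BBO-like migration inside each subgroup: worse solutions copy
    # dimensions from better-ranked ones with rank-dependent probability.
    for g in range(n_groups):
        grp = sorted(range(g * group_size, (g + 1) * group_size),
                     key=lambda i: sphere(pop[i]))
        for rank, i in enumerate(grp[1:], start=1):
            for d in range(dim):
                if random.random() < rank / group_size:   # immigration rate
                    donor = random.choice(grp[:rank])
                    pop[i][d] = pop[donor][d]
    # PSO update for the global search across the whole population.
    for i, p in enumerate(pop):
        for d in range(dim):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - p[d])
                         + 1.5 * random.random() * (gbest[d] - p[d]))
            p[d] += vel[i][d]
        if sphere(p) < sphere(pbest[i]):
            pbest[i] = p[:]
        if sphere(p) < sphere(gbest):
            gbest = p[:]
print(sphere(gbest))
```

With inertia 0.7 and acceleration coefficients 1.5 the update satisfies the usual PSO convergence conditions, so the global best settles near the sphere optimum at the origin.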
Mercer, D.E.
1991-01-01
The objectives are threefold: (1) to perform an analytical survey of household production theory as it relates to natural-resource problems in less-developed countries, (2) to develop a household production model of fuelwood decision making, (3) to derive a theoretical framework for travel-cost demand studies of international nature tourism. The model of household fuelwood decision making provides a rich array of implications and predictions for empirical analysis. For example, it is shown that fuelwood and modern fuels may be either substitutes or complements depending on the interaction of the gross-substitution and income-expansion effects. Therefore, empirical analysis should precede adoption of any inter-fuel substitution policies such as subsidizing kerosene. The fuelwood model also provides a framework for analyzing the conditions and factors determining entry and exit by households into the wood-burning subpopulation, a key for designing optimal household energy policies in the Third World. The international nature tourism travel cost model predicts that the demand for nature tourism is an aggregate of the demand for the individual activities undertaken during the trip.
An analysis of the demarcation problem in science and its application to therapeutic touch theory.
Newbold, David; Roberts, Julia
2007-12-01
This paper analyses the demarcation problem from the perspective of four philosophers: Popper, Kuhn, Lakatos and Feyerabend. To Popper, pseudoscience uses induction to generate theories and performs experiments only to seek to verify them; falsifiability is what determines the scientific status of a theory. Taking a historical approach, Kuhn observed that scientists did not follow Popper's rule and might ignore falsifying data unless it was overwhelming; to Kuhn, puzzle-solving within a paradigm is science. Lakatos attempted to resolve this debate by suggesting that history shows science occurring in research programmes that compete according to how progressive they are. The leading idea of a programme can evolve, driven by its heuristic to make predictions that can be supported by evidence. Feyerabend claimed that Lakatos was selective in his examples and that the whole history of science shows there is no universal rule of scientific method; imposing one on the scientific community impedes progress. These positions are used in turn to examine the scientific status of therapeutic touch theory. The paper concludes that imposing a single rule of method can impede progress in the face of multiple epistemologies, and that the choice of scientific approach should be a pragmatic one based on the aims of the programme.
Low Dimensional Tools for Flow-Structure Interaction Problems: Application to Micro Air Vehicles
NASA Technical Reports Server (NTRS)
Schmit, Ryan F.; Glauser, Mark N.; Gorton, Susan A.
2003-01-01
A low-dimensional tool for flow-structure interaction problems based on Proper Orthogonal Decomposition (POD) and modified Linear Stochastic Estimation (mLSE) has been proposed and applied to a Micro Air Vehicle (MAV) wing. The method utilizes the dynamic strain measurements from the wing to estimate the POD expansion coefficients, from which an estimation of the velocity in the wake can be obtained. For this experiment the MAV wing was set at five different angles of attack, from 0 deg to 20 deg. The tunnel velocities varied from 44 to 58 ft/sec, with corresponding Reynolds numbers of 46,000 to 70,000. A stereo Particle Image Velocimetry (PIV) system was used to measure the wake of the MAV wing simultaneously with the signals from the twelve dynamic strain gauges mounted on the wing. With 20 out of 2400 POD modes, a reasonable estimation of the flow field was observed; increasing the number of POD modes yields a better estimation of the flow field. Utilizing the simultaneously sampled strain gauges and flow-field measurements in conjunction with mLSE, a reasonable estimation of the flow field is obtained even with the lower-energy modes. With these results, the methodology for estimating the wake flow field from dynamic strain gauges alone is validated.
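The core of linear stochastic estimation can be shown in its simplest one-sensor, one-mode form: the POD expansion coefficient a(t) is estimated from a simultaneously sampled sensor signal s(t) through the correlation-based gain k = ⟨a s⟩ / ⟨s s⟩. The synthetic data below (a sinusoidal coefficient, a noisy strain-like signal) are assumed for illustration and are unrelated to the MAV experiment; the paper's mLSE uses many sensors and modes.

```python
import math, random

# One-sensor / one-mode linear stochastic estimation sketch.
random.seed(0)
n = 2000
a = [math.sin(0.01 * t) for t in range(n)]             # "true" POD coefficient
s = [2.0 * ai + 0.1 * random.gauss(0, 1) for ai in a]  # strain ~ scaled a + noise

# Correlation-based LSE gain and the resulting estimate of a(t).
k = sum(x * y for x, y in zip(a, s)) / sum(y * y for y in s)
a_est = [k * y for y in s]
err = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, a_est)) / n)
print(round(k, 3), round(err, 4))
```

Because the signal is a scaled copy of the coefficient plus small noise, the gain recovers roughly the inverse scaling (about 0.5 here) and the RMS estimation error stays at the noise level; with multiple sensors and modes the scalar gain becomes a least-squares gain matrix.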
On application of vector optimization in the problem of formation of portfolio of counterparties
NASA Astrophysics Data System (ADS)
Gorbich, A. L.; Medvedeva, M. A.; Medvedev, M. A.
2016-12-01
For the effective functioning of any enterprise it is necessary to choose the right partners: suppliers of raw material and buyers of finished products with which the company interacts in the course of its business. However, the presence of a large number of enterprises on the market makes choosing the most appropriate among them very difficult and requires the ability to objectively assess possible partners on the basis of a multilateral analysis of their activities. This analysis can be carried out by solving multiobjective problems of mathematical programming using the methods of vector optimization. The work considers existing methods of selecting counterparties, as well as the theoretical foundations of the proposed methodology. It also describes a computer program that analyzes the raw data on contractors and allows choosing the best portfolio of suppliers for an enterprise. A particular difficulty in selecting counterparties is that today's market has a large number of enterprises engaged in similar activities. A successful choice of contractor helps to avoid unpleasant situations and financial losses, and provides a reliable partner for implementing the production strategy of the company.
Corrected Newtonian potentials in the two-body problem with applications
NASA Astrophysics Data System (ADS)
Anisiu, M.-C.; Szücs-Csillik, I.
2016-12-01
The paper deals with an analytical study of various corrected Newtonian potentials. We offer a complete description of the corrected potentials for the entire range of the parameters involved. These parameters can be fixed for different models in order to obtain good concordance with known data. Some of the potentials are generated by continued fractions, and another one is derived from the Newtonian potential by adding a logarithmic correction. The zonal potential, which models the motion of a satellite moving in the equatorial plane of the Earth, is also considered. The range of the parameters for which the potentials do or do not behave similarly to the Newtonian one is pointed out. The shape of the potentials is displayed for all the significant cases, as well as the orbit of the Raduga-1M 2 satellite in the field generated by the continued fractional potential U3, and then by the zonal one. For the continued fractional potential U2 we study the basic problem of the existence and linear stability of circular orbits; we prove that such orbits exist and are linearly stable. This qualitative study offers the possibility of choosing the adequate potential, either for modeling the motion of planets or satellites, or for explaining some phenomena at galactic scale.
NASA Technical Reports Server (NTRS)
Head, J. W.; Belton, M.; Greeley, R.; Pieters, C.; Mcewen, A.; Neukum, G.; Mccord, T.
1993-01-01
The Lunar Scout Missions (payload: x-ray fluorescence spectrometer, high-resolution stereocamera, neutron spectrometer, gamma-ray spectrometer, imaging spectrometer, gravity experiment) will provide a global data set for the chemistry, mineralogy, geology, topography, and gravity of the Moon. These data will in turn provide an important baseline for the further scientific exploration of the Moon by all-purpose landers and micro-rovers, and sample return missions from sites shown to be of primary interest from the global orbital data. These data would clearly provide the basis for intelligent selection of sites for the establishment of lunar base sites for long-term scientific and resource exploration and engineering studies. The two recent Galileo encounters with the Moon (December, 1990 and December, 1992) illustrate how modern technology can be applied to significant lunar problems. We emphasize the regional results of the Galileo SSI to show the promise of geologic unit definition and characterization as an example of what can be done with the global coverage to be obtained by the Lunar Scout Missions.
Geochemical Atlas of Slovakia and examples of its applications to environmental problems
NASA Astrophysics Data System (ADS)
Rapant, S.; Bodiš, D.; Vrana, K.; Cvečková, V.; Kordík, J.; Krčmová, K.; Slaninka, I.
2009-03-01
Results of comprehensive geochemical mapping and thematic studies of the Slovak territory (rocks, soils, stream sediments, groundwaters, biomass, and radioactivity) in the first half of the 1990s led to several new research programmes in Slovakia, within the frame of which new methodologies for geochemical data evaluation and map visualization were elaborated. This study describes the application and elaboration of data from the Geochemical Atlas of the Slovak Republic at national and regional levels. Based on the index of environmental risk (IER = ΣPEC/PNEC), the level of contamination of the geological component of the environment in Slovakia was evaluated. Approximately 10.5% of Slovakia's territory was characterized as environmentally disturbed to highly disturbed. In the areas where environmental loadings have accumulated, 14 regions with environmental risks due to high element concentrations were defined. Model calculations of health-risk estimates based on the Geochemical Atlas databases for groundwater and soils indicate that the possible risk of carcinogenic diseases from groundwater arsenic contents is high in more than 10% of Slovakia, whereas the chronic risk is negligible. To determine the background and threshold levels, a combined statistical-geochemical approach was developed and applied, as an example, to groundwater at the national level as well as to single groundwater bodies. The results of applying the statistical method to a whole groundwater body (GWB) were compared with the background values for anthropogenically non-influenced areas in the GWB. The final background value took into account time variations and the spatial distribution of the element in the GWB. Furthermore, based on the Geochemical Atlas database for groundwater, groundwater bodies potentially at qualitative risk were delineated for the whole of Slovakia. From a total of 101 groundwater bodies 17 were characterized as being at risk and 22
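The index of environmental risk used above is a simple sum of concentration ratios, IER = ΣPEC/PNEC. The sketch below computes it for a handful of made-up element concentrations (these are illustrative values, not Atlas data).

```python
# Index of environmental risk: sum of predicted environmental
# concentration over predicted no-effect concentration, per element.
pec  = {"As": 30.0, "Pb": 80.0, "Zn": 150.0}   # example PEC values
pnec = {"As": 10.0, "Pb": 70.0, "Zn": 200.0}   # example PNEC values
ier = sum(pec[e] / pnec[e] for e in pec)
print(round(ier, 3))
```

Any single ratio above 1 means one element alone exceeds its no-effect level, so a large IER flags an environmentally disturbed area.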
Temperature and dynamics problems of ultracapacitors in stationary and mobile applications
NASA Astrophysics Data System (ADS)
Michel, Hartmut
Ultracapacitors, as powerful energy storage systems, are used in various areas of power electronics. Depending on the application, the temperature and dynamic properties of these components have to be considered. These properties strongly depend on the characteristics of the basic materials of the capacitors. The frequency and temperature dependence of the capacitance, as well as of the internal resistance (ESR), is mainly governed by the activated-carbon electrodes and the electrolyte. Under operating conditions, differences of 15% and more in the capacitance due to different electrode structures are observed. Owing to the reduced solubility of the conducting salt and the increased viscosity of the solvents at temperatures below the freezing point, the conductivity of the electrolyte drops drastically with decreasing temperature. Thus, increases of the ESR between 200 and 700% between room temperature and -30 °C, depending on the electrolyte, are registered. Because of the slightly different self-discharge of the single capacitors, equivalent to a voltage drop of 4-12% within 3 days, each individual cell inside a module has to be protected by a cell-voltage balancing unit. With an active cell-voltage balancing unit connected in parallel to the capacitors, a voltage drop is leveled out within 1 h. In addition to the electrical characteristics of the ultracapacitors, the thermal properties of the single cells as well as of the modules have to be considered in the design-in of these storage devices. With cooling elements integrated into the surface of the module casing and forced cooling, the effective current load can be nearly doubled. Based on this know-how, ultracapacitor modules were designed which fulfill all the requirements of automotive and industrial electronics applications.
Topology-based kernels with application to inference problems in Alzheimer's disease.
Pachauri, Deepti; Hinrichs, Chris; Chung, Moo K; Johnson, Sterling C; Singh, Vikas
2011-10-01
Alzheimer's disease (AD) research has recently witnessed a great deal of activity focused on developing new statistical learning tools for automated inference using imaging data. The workhorse for many of these techniques is the support vector machine (SVM) framework (or more generally kernel-based methods). Most of these require, as a first step, specification of a kernel matrix K between input examples (i.e., images). The inner product between images I(i) and I(j) in a feature space can generally be written in closed form and so it is convenient to treat K as "given." However, in certain neuroimaging applications such an assumption becomes problematic. As an example, it is rather challenging to provide a scalar measure of similarity between two instances of highly attributed data such as cortical thickness measures on cortical surfaces. Note that cortical thickness is known to be discriminative for neurological disorders, so leveraging such information in an inference framework, especially within a multi-modal method, is potentially advantageous. But despite being clinically meaningful, relatively few works have successfully exploited this measure for classification or regression. Motivated by these applications, our paper presents novel techniques to compute similarity matrices for such topologically-based attributed data. Our ideas leverage recent developments to characterize signals (e.g., cortical thickness) motivated by the persistence of their topological features, leading to a scheme for simple constructions of kernel matrices. As a proof of principle, on a dataset of 356 subjects from the Alzheimer's Disease Neuroimaging Initiative study, we report good performance on several statistical inference tasks without any feature selection, dimensionality reduction, or parameter tuning.
2014-01-01
within the weld. Design/methodology/approach The improved GMAW process model is next applied to the case of butt-welding of MIL A46100 (a...improved GMAW process model pertaining to the spatial distribution of the material microstructure and properties within the MIL A46100 butt-weld are
Bianchi, R.; Marino, C.M.
1997-10-01
The availability of a new aerial survey capability operated by CNR/LARA (National Research Council - Airborne Laboratory for Environmental Research), the AA5000 MIVIS (Multispectral Infrared and Visible Imaging Spectrometer) spectroradiometer on board a CASA 212/200 aircraft, enables scientists to obtain innovative data sets for different approaches to the definition and understanding of a variety of environmental and engineering problems. The spectral bandwidths of the 102 MIVIS channels are chosen to meet the needs of scientific research for advanced applications of remote sensing data. In such a configuration MIVIS can offer significant contributions to problem solving in sectors as wide-ranging as geologic exploration, agricultural crop studies, forestry, land-use mapping, hydrogeology, and oceanography. In 1994-96 LARA was active over different test sites in joint ventures with JPL (Pasadena), various European institutions, and Italian universities and research institutes. These aerial surveys allow the national and international scientific community to apply hyperspectral remote sensing to environmental problems of very broad interest. The sites surveyed in Italy, France and Germany include a variety of targets such as quarries, landfills, karst cavity areas, landslides, coastlines, and geothermal areas. The deployments have gathered more than 300 GBytes of MIVIS data in more than 30 hours of VLDS data recording. The purpose of this work is to present and comment on the procedures and the results, at research and operational levels, of the past campaigns, with special reference to the study of environmental and engineering problems.
NASA Astrophysics Data System (ADS)
Tran, T.
With the onset of the SmallSat era, the RSO catalog is expected to see continuing growth in the near future. This presents a significant challenge to the current sensor tasking of the SSN. The Air Force needs a sensor tasking system that is robust, efficient, scalable, and able to respond in real time to interruptive events that can change the tracking requirements of the RSOs. Furthermore, the system must be capable of using processed data from heterogeneous sensors to improve tasking efficiency. The SSN sensor tasking can be regarded as an economic problem of supply and demand: the amount of tracking data needed by each RSO represents the demand side, while the SSN sensor tasking represents the supply side. As the number of RSOs to be tracked grows, demand exceeds supply, and the decision-maker is faced with the problem of how to allocate resources in the most efficient manner. Braxton recently developed a framework called Multi-Objective Resource Optimization using Genetic Algorithm (MOROUGA) as one of its modern COTS software products. This optimization framework takes advantage of the maturing technology of evolutionary computation of the last 15 years, and was applied successfully to the resource allocation of an AFSCN-like problem. In any resource allocation problem there are five key elements: (1) the resource pool, (2) the tasks using the resources, (3) a set of constraints on the tasks and the resources, (4) the objective functions to be optimized, and (5) the demand levied on the resources. In this paper we explain in detail how the design features of this optimization framework apply directly to the SSN sensor tasking domain. We also discuss our validation effort and present results for the AFSCN resource allocation domain using a prototype based on this optimization framework.
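The five elements listed above (resource pool, tasks, constraints, objective, demand) can be sketched as a toy genetic-algorithm allocation: a chromosome assigns each task to a sensor, capacity constraints are enforced in the fitness, and the objective is the amount of demand served. All demands, capacities, and GA parameters are invented for illustration; this is not Braxton's MOROUGA code, and a real SSN tasker would be multi-objective.

```python
import random

random.seed(42)
demand   = [4, 2, 5, 3, 1, 6]          # tracking passes each task wants (assumed)
capacity = [7, 6, 5]                   # passes each sensor can supply (assumed)
n_tasks, n_sensors = len(demand), len(capacity)

def fitness(assign):
    """assign[i] = sensor index for task i; reward demand served
    without exceeding any sensor's capacity (constraint handling)."""
    load = [0] * n_sensors
    served = 0
    for i, sensor in enumerate(assign):
        if load[sensor] + demand[i] <= capacity[sensor]:
            load[sensor] += demand[i]
            served += demand[i]
    return served

pop = [[random.randrange(n_sensors) for _ in range(n_tasks)]
       for _ in range(30)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                   # survivors
    children = []
    while len(children) < 20:          # one-point crossover of elites
        p1, p2 = random.sample(elite, 2)
        cut = random.randrange(1, n_tasks)
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.2:      # mutation: reassign one task
            child[random.randrange(n_tasks)] = random.randrange(n_sensors)
        children.append(child)
    pop = elite + children
best = max(pop, key=fitness)
print(fitness(best))
```

Total demand (21) exceeds total capacity (18) here, mirroring the supply/demand imbalance the abstract describes, so the GA's job is to choose which demand to leave unserved.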
Moon, Francis C.
2002-04-01
Large numbers of fluid-elastic structures are part of many power plant systems, and vibration of these systems is sometimes responsible for plant shutdowns. Earlier research at Cornell in this area centered on the nonlinear dynamics of fluid-elastic systems with low degrees of freedom. The focus of current research is the dynamics of thousands of closely arrayed structures in a cross flow under both fluid and impact forces. This research is relevant to two areas. (1) First, fluid-structural problems continue to be important in the power industry, especially in heat-exchange systems where up to thousands of pipe-like structures interact with a fluid medium. [Three years ago in Japan, for example, the Monju nuclear power plant was shut down owing to a failure attributed to flow-induced vibrations.] (2) The second area of relevance is nonlinear systems and complexity phenomena: issues such as spatio-temporal dynamics, localization, coherent patterns, entropy measures, and other complexity issues. Early research on flow-induced vibrations in tube-row and array structures in cross flow goes back to Roberts in 1966 and Connors in 1970. These studies used linear models, as did much of the later work in the 1980s. Nonlinear studies of cross-flow-induced vibrations have been undertaken in the last decade. The research at Cornell sponsored by DOE has explored nonlinear phenomena in fluid-structure problems. In the work at Cornell we have documented a subcritical Hopf bifurcation for flow around a single row of flexible tubes and have developed an analytical model based on nonlinear system identification techniques (Thothadri, 1998; Thothadri and Moon, 1998, 1999). These techniques have been applied to a wind tunnel experiment with a row of seven cylinders in a cross flow. These system identification methods have been used to calculate fluid force models that have replicated certain quantitative vibration limit cycle behavior of the
Application of Semi Active Control Techniques to the Damping Suppression Problem of Solar Sail Booms
NASA Technical Reports Server (NTRS)
Adetona, O.; Keel, L. H.; Whorton, M. S.
2007-01-01
Solar sails provide a propellant-free form of space propulsion. These are large flat surfaces that generate thrust when they are impacted by light. When attached to a space vehicle, the thrust generated can propel the vehicle to great distances at significant speeds. For optimal performance the sail must be kept from excessive vibration. Active control techniques can provide the best performance, but they require an external power source that may add significant parasitic mass to the solar sail, whereas solar sails require low mass for optimal performance. Secondly, active control techniques typically require a good system model to ensure stability and performance, and the accuracy in a space environment of solar sail models validated on Earth is questionable. An alternative approach is passive vibration techniques, which do not require an external power supply and do not destabilize the system. A third alternative is referred to as semi-active control. This approach tries to obtain the best of both active and passive control while avoiding their pitfalls: an active control law is designed for the system, and passive control techniques are used to implement it. As a result, no external power supply is needed and the system cannot be destabilized. Though it typically underperforms active control techniques, semi-active control has been shown to outperform passive control approaches and can be unobtrusively installed on a solar sail boom. Motivated by this, the objective of this research is to study the suitability of a piezoelectric (PZT) patch actuator/sensor based semi-active control system for the vibration suppression problem of solar sail booms. Accordingly, we develop a suitable mathematical and computer model for such studies and demonstrate the capabilities of the proposed approach with computer simulations.
Covariant Image Representation with Applications to Classification Problems in Medical Imaging.
Seo, Dohyung; Ho, Jeffrey; Vemuri, Baba C
2016-01-01
Images are often considered as functions defined on the image domains, and as functions, their (intensity) values are usually considered to be invariant under the image domain transforms. This functional viewpoint is both influential and prevalent, and it provides the justification for comparing images using functional L(p) -norms. However, with the advent of more advanced sensing technologies and data processing methods, the definition and the variety of images has been broadened considerably, and the long-cherished functional paradigm for images is becoming inadequate and insufficient. In this paper, we introduce the formal notion of covariant images and study two types of covariant images that are important in medical image analysis, symmetric positive-definite tensor fields and Gaussian mixture fields, images whose sample values covary i.e., jointly vary with image domain transforms rather than being invariant to them. We propose a novel similarity measure between a pair of covariant images considered as embedded shapes (manifolds) in the ambient space, a Cartesian product of the image and its sample-value domains. The similarity measure is based on matching the two embedded low-dimensional shapes, and both the extrinsic geometry of the ambient space and the intrinsic geometry of the shapes are incorporated in computing the similarity measure. Using this similarity as an affinity measure in a supervised learning framework, we demonstrate its effectiveness on two challenging classification problems: classification of brain MR images based on patients' age and (Alzheimer's) disease status and seizure detection from high angular resolution diffusion magnetic resonance scans of rat brains.
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely the tire-road friction coefficient, slip angle, roll angle, and rollover index, can be known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, due to unknown and changing plant parameters, and due to the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time-varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz-assumption-based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs. An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is
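The dissertation's LMI-derived gains for nonlinear vehicle dynamics are not reproduced here, but the core observer idea it builds on, correcting a model prediction with the measured output error so the estimation error decays, can be sketched for a simple linear discrete-time system. All matrices and gains below are illustrative assumptions chosen by hand, not values from the dissertation:

```python
# Minimal discrete-time Luenberger observer sketch. The plant, output map, and
# observer gain below are hand-picked illustrative values (A - L*C is stable),
# not the dissertation's LMI-based design for nonlinear vehicle dynamics.
A = [[1.0, 0.1],
     [0.0, 0.9]]   # plant dynamics: x_{k+1} = A x_k
C = [1.0, 0.0]     # measurement: y_k = C x_k (only the first state is measured)
L = [0.5, 0.3]     # observer gain, chosen so the error dynamics are stable

def step(A, x):
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

def estimation_error(x0, xhat0, n):
    """Run plant and observer n steps; return the final estimation error norm."""
    x, xhat = list(x0), list(xhat0)
    for _ in range(n):
        y = C[0]*x[0] + C[1]*x[1]           # measurement from the true plant
        yhat = C[0]*xhat[0] + C[1]*xhat[1]  # observer's predicted output
        x = step(A, x)
        pred = step(A, xhat)
        # correct the model prediction with the output error
        xhat = [pred[0] + L[0]*(y - yhat),
                pred[1] + L[1]*(y - yhat)]
    return ((x[0]-xhat[0])**2 + (x[1]-xhat[1])**2) ** 0.5

err_start = estimation_error([1.0, -1.0], [0.0, 0.0], 0)
err_end   = estimation_error([1.0, -1.0], [0.0, 0.0], 50)
```

With the gain above, the error dynamics matrix A - L*C has eigenvalues 0.8 and 0.6, so the estimate converges to the true state from any initial mismatch; the LMI machinery in the dissertation extends this guarantee to the nonlinear, bounded-Jacobian case.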
NASA Astrophysics Data System (ADS)
Seyedhosseini, Seyed Mohammad; Makui, Ahmad; Shahanaghi, Kamran; Torkestani, Sara Sadat
2016-05-01
Determining the best location for a facility to remain profitable over its lifetime is an important decision for public and private firms, which is why dynamic location problems (DLPs) are of critical significance. This paper presents a comprehensive review of research published on DLPs from 1968 to the present and classifies it into two parts. First, mathematical models are organized by their characteristics: type of parameters (deterministic, probabilistic, or stochastic), number and type of objective functions, numbers of commodities and modes, relocation time, number of relocations and relocating facilities, time horizon, budget and capacity constraints, and applicability. The second part presents solution algorithms, main specifications, applications, and some real-world case studies of DLPs. We conclude that the current DLP literature has studied distribution systems and production-distribution systems, under simplifying assumptions that sidestep the complexity of these models, more than any other field, while hierarchical service networks, reliability, sustainability, relief management, waiting time for services (queuing theory), and the risk of facility disruption need further investigation. The categories, solution methods, applicability assessments, gaps, and analyses presented in this paper suggest directions for future research.
Image Problems Deplete the Number of Women in Academic Applicant Pools
NASA Astrophysics Data System (ADS)
Sears, Anna L. W.
Despite near numeric parity in graduate schools, women and men in science and mathematics may not perceive the same opportunities for career success. Instead, female doctoral students' career ambitions may often be influenced by perceptions of irreconcilable conflicts between personal and academic goals. This article reports the results of a career goals survey of math and science doctoral students at the University of California, Davis. Fewer women than men began their doctoral programs seeking academic research careers. Of those who initially favored academic research, twice as many women as men downgraded these ambitions during graduate school. Women were more likely to feel geographically constrained by family ties and to express concern about balancing work and family, long work hours, and tenure clock inflexibility. These results partially explain why the percentage of women in academic applicant pools is often well below their percentage among Ph.D. recipients. The current barriers to gender equity thus cannot be completely ameliorated by increasing the number of women in the pipeline or by altered hiring practices; changes must be undertaken to make academic research careers more flexible, family friendly, and attractive to women.
Activated carbons derived from oil palm empty-fruit bunches: application to environmental problems.
Alam, Md Zahangir; Muyibi, Suleyman A; Mansor, Mariatul F; Wahid, Radziah
2007-01-01
Activated carbons derived from oil palm empty fruit bunches (EFB) were investigated to assess their suitability for removing phenol from aqueous solution by adsorption. Two types of activation, thermal activation at 300, 500 and 800 degrees C and physical activation at 150 degrees C (boiling treatment), were used to produce the activated carbons. A control (untreated EFB) was used for comparison with the adsorption capacity of the activated carbons produced by these processes. The results indicated that the activated carbon produced at 800 degrees C showed the maximum adsorption capacity for phenol in aqueous solution. Batch adsorption studies showed an equilibrium time of 6 h for the 800 degrees C activated carbon. Adsorption capacity was higher at lower pH (2-3) and at higher initial phenol concentrations (200-300 mg/L). The equilibrium data fitted the Freundlich adsorption isotherm better than the Langmuir. Kinetic studies of phenol adsorption onto the activated carbons were also performed to evaluate the adsorption rate. The estimated production cost of activated carbon from EFB (USD 0.50/kg of activated carbon) was lower than that of activated carbons from other sources and processes.
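The Freundlich-versus-Langmuir comparison in the abstract rests on fitting the isotherm q_e = K_F * C_e^(1/n) to equilibrium data, commonly done as a linear least-squares fit of log q_e against log C_e. The sketch below uses hypothetical data generated from assumed coefficients, not the paper's EFB measurements:

```python
import math

# Hypothetical equilibrium data generated from K_F = 2.0 and 1/n = 0.5
# (assumed illustrative values, not the paper's EFB results).
C = [10.0, 50.0, 100.0, 200.0, 300.0]   # equilibrium concentration, mg/L
q = [2.0 * c ** 0.5 for c in C]         # equilibrium uptake, mg/g

def freundlich_fit(C, q):
    """Fit q = K_F * C**(1/n) by least squares on log q = log K_F + (1/n) log C."""
    x = [math.log(c) for c in C]
    y = [math.log(v) for v in q]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    intercept = ybar - slope * xbar
    return math.exp(intercept), slope   # (K_F, 1/n)

K_F, inv_n = freundlich_fit(C, q)
```

In practice one would fit both the Freundlich and the linearized Langmuir form to the measured data and compare correlation coefficients, which is how studies like this one decide which isotherm describes the system better.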
The microwave thermal thruster and its application to the launch problem
NASA Astrophysics Data System (ADS)
Parkin, Kevin L. G.
Nuclear thermal thrusters long ago bypassed the 50-year-old specific impulse (Isp) limitation of conventional thrusters, using nuclear powered heat exchangers in place of conventional combustion to heat a hydrogen propellant. These heat exchanger thrusters experimentally achieved an Isp of 825 seconds, but with a thrust-to-weight ratio (T/W) of less than ten they have thus far been too heavy to propel rockets into orbit. This thesis proposes a new idea to achieve both high Isp and high T/W: the microwave thermal thruster. This thruster covers the underside of a rocket aeroshell with a lightweight microwave-absorbent heat exchange layer that may double as a re-entry heat shield. By illuminating the layer with microwaves directed from a ground-based phased array, an Isp of 700--900 seconds and a T/W of 50--150 is possible using a hydrogen propellant. The single propellant simplifies vehicle design, and the high Isp increases payload fraction and structural margins. These factors combined could have a profound effect on the economics of building and reusing rockets. A laboratory-scale microwave thermal heat exchanger is constructed using a single channel in a cylindrical microwave resonant cavity, and a new type of coupled electromagnetic-conduction-convection model is developed to simulate it. The resonant cavity approach to small-scale testing reveals several drawbacks, including an unexpected oscillatory behavior. Stable operation of the laboratory-scale thruster is nevertheless successful, and the simulations are consistent with the experimental results. In addition to proposing a new type of propulsion and demonstrating it, this thesis provides three other principal contributions: The first is a new perspective on the launch problem, placing it in a wider economic context. The second is a new type of ascent trajectory that significantly reduces the diameter, and hence cost, of the ground-based phased array. The third is an eclectic collection of data, techniques, and
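The link the abstract draws between higher Isp and higher payload fraction follows from the Tsiolkovsky rocket equation, delta-v = Isp * g0 * ln(m0/mf). A quick sketch with assumed illustrative masses (not figures from the thesis) shows how moving from a conventional chemical Isp of roughly 450 s to the proposed 700--900 s range stretches the achievable delta-v for the same mass ratio:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m0, mf):
    """Tsiolkovsky rocket equation: ideal velocity change in m/s."""
    return isp_s * G0 * math.log(m0 / mf)

# Assumed illustrative masses (not from the thesis): 100 t at liftoff, 15 t final.
m0, mf = 100_000.0, 15_000.0
dv_chemical  = delta_v(450.0, m0, mf)   # high-performing chemical engine
dv_microwave = delta_v(800.0, m0, mf)   # mid-range microwave thermal Isp
```

Because delta-v scales linearly with Isp at a fixed mass ratio, the same vehicle could instead hold delta-v constant and carry a substantially larger payload, which is the economic argument the thesis develops.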
Application of geoelectrical methods in the DS sinkhole problem, Israel and Jordan
NASA Astrophysics Data System (ADS)
Levi, E.; Abueladas, A.-R.; Al-Zoubi, A.; Akkawi, E.; Ezersky, M.
2012-04-01
We consider a new approach to using geoelectric methods for studying both the uppermost part of the section and the salt layer conditions at sinkhole development sites. Electrical Resistivity Tomography (ERT) is used here to detect shallow deformations in subsurface sediments. Resistivity prospecting yields information about both the lateral and vertical distribution of resistivity through the geological section and can therefore be used qualitatively and quantitatively to identify structures and features at shallow depths. As follows from the modified Archie's law, the resistivity of unsaturated sediments is determined by their porosity: the higher the porosity, the higher the resistivity. It also depends on the volume of electrolyte in the pores and on the resistivity of the fluid. Note that, according to available mechanical models, the higher porosity at sinkhole development sites is caused by the presence of voids at depth. 2D and 3D mapping was carried out in the Mineral Beach area in Israel and at the Ghor Al-Haditha site in Jordan. The ERT method revealed a high-resistivity anomaly of some thousands of Ohm-m located along the salt edge. The Transient Electromagnetic Method (TEM), also referred to as the Time Domain Electromagnetic Method (TDEM), is sensitive to the bulk resistivity (conductivity) of the studied medium, especially in the low-resistivity range. The TEM method in its FAST modification was used to study the salt layer conditions (salt porosity, depth of the salt top, and thickness of the salt layer) and the distribution of bulk resistivity in the vicinity of the salt border (to resolve the problem of water salinity). The methodology includes numerous measurements across the sinkhole development areas. The TEM method was earlier used extensively worldwide for locating the fresh-saline water interface in coastal areas and for estimating groundwater salinity. In our study we have mapped salt layer geometrical parameters (e.g. depth to
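The porosity-resistivity reasoning above follows an Archie-type relation; a minimal sketch, with assumed textbook coefficients rather than values calibrated for the Dead Sea sites, is rho = a * rho_w * phi^(-m) * S_w^(-n), where air-filled voids drive water saturation S_w down and bulk resistivity up:

```python
def archie_resistivity(rho_w, phi, s_w, a=1.0, m=2.0, n=2.0):
    """Archie's law for bulk resistivity (Ohm-m).
    rho_w: pore-fluid resistivity; phi: porosity; s_w: water saturation.
    a, m, n are empirical coefficients set here to common textbook values,
    not values calibrated for the sites in this study."""
    return a * rho_w * phi ** (-m) * s_w ** (-n)

# With a roughly fixed amount of pore water, a void-rich (higher-porosity)
# zone has lower saturation, so its bulk resistivity rises sharply -- the
# signature the ERT surveys look for above developing sinkholes.
rho_intact = archie_resistivity(rho_w=1.0, phi=0.20, s_w=1.0)
rho_voided = archie_resistivity(rho_w=1.0, phi=0.35, s_w=0.4)
```

The sketch reproduces only the qualitative contrast exploited in the surveys; quantitative interpretation at the Dead Sea sites requires site-specific coefficients and fluid resistivities.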
NASA Astrophysics Data System (ADS)
Fairhurst, Robert; Barrow, Wendy; Rollinson, Gavyn
2010-05-01
Automated scanning electron microscopy-energy dispersive x-ray spectrometer (SEM-EDS) based mineral identification systems such as QEMSCAN have been in development for over 20 years, primarily as a tool to understand mineral liberation and element distribution in the metal mining industry. This powerful technique is now being used in non-mining applications such as metamorphic petrology, where accurate mineral identification and metamorphic fabrics are key to deciphering the metamorphic history of samples. The QEMSCAN was developed by CSIRO for application in the mining industry, where it is used to understand mineralogy, texture, mineral associations, the presence of gangue minerals and deleterious elements that may potentially interfere with mineral processing and planning, and the overall impact of mineralogy on grinding and flotation processes. It is capable of identifying most rock-forming minerals in milliseconds from their characteristic x-ray spectra. The collected x-ray spectra are compared to entries in a database containing the species identification profiles (SIPs) and are assigned a label accordingly. QEMSCAN is capable of searching large sample areas at high resolution, resulting in the accurate and precise determination of all minerals present. Reports that were originally developed for the mining geologist can be equally useful to the petrologist, e.g. phase/mineral maps, modal mineral abundances and mineral association reports. Identification of key minerals is of great importance in determining the petrologic history of a sample. These key minerals may be few in number and present as small microinclusions (less than 100 μm), making them difficult to identify, if at all, with the petrographic microscope. Therefore, imaging by electron microprobe or scanning electron microscope is the method traditionally used. However, because of the small field of view available on these instruments at a magnification necessary to resolve micron-sized relicts and
NASA Astrophysics Data System (ADS)
Bell, Thomas Alexander
Photon-dominated regions occur in many regions of astrophysical interest and an understanding of their underlying chemical and physical processes can provide an insight into the conditions within them. This thesis describes the development and implementation of a time-dependent version of the UCL_PDR model, including comprehensive benchmarking as part of an international effort to understand the differences between individual models and improve their agreement in key areas. The code has been applied to calculate theoretical values of the CO-to-H2 conversion factor, X_CO, to investigate its sensitivity to physical parameter variation. X_CO is found to vary significantly from its canonical value under certain conditions and by over an order of magnitude in the case of high density or low metallicity. By fitting observed line intensity ratios in a sample of nearby galaxies, PDR models have been constructed to represent the conditions found in a range of galaxy types. These are used to derive appropriate values of X_CO for such objects and to investigate the possibility of using higher transition lines of CO as more reliable mass tracers in these environments. A parameter space search has also been conducted using the model to look for conditions that produce significant column densities of H2 with low levels of emission that would be undetectable or overlooked by current surveys. A plausible region of parameter space is found to produce such molecular dark matter, capable of concealing significant masses of gas that may form reservoirs for future star formation. Additional applications of the UCL_PDR model are discussed, including a study of the chemistry within transient microstructure in the diffuse interstellar medium and models of the time-dependent expansion of a molecular shell around a massive star cluster, applied to observations of the central starburst in M 82.
Application of the TEMPEST computer code to canister-filling heat transfer problems
Farnsworth, R.K.; Faletti, D.W.; Budden, M.J.
1988-03-01
Pacific Northwest Laboratory (PNL) researchers used the TEMPEST computer code to simulate thermal cooldown behavior of nuclear waste glass after it was poured into steel canisters for long-term storage. The objective of this work was to determine the accuracy and applicability of the TEMPEST code when used to compute canister thermal histories. First, experimental data were obtained to provide the basis for comparing TEMPEST-generated predictions. Five canisters were instrumented with appropriately located radial and axial thermocouples. The canisters were filled using the pilot-scale ceramic melter (PSCM) at PNL. Each canister was filled in either a continuous or a batch filling mode. One of the canisters was also filled within a turntable simulant (a group of cylindrical shells with heat transfer resistances similar to those in an actual melter turntable). This was necessary to provide a basis for assessing the ability of the TEMPEST code to also model the transient cooling of canisters in a melter turntable. The continuous-fill model, Version M, was found to predict temperatures with more accuracy. The turntable simulant experiment demonstrated that TEMPEST can adequately model the asymmetric temperature field caused by the turntable geometry. Further, TEMPEST can acceptably predict the canister cooling history within a turntable, despite code limitations in computing simultaneous radiation and convection heat transfer between shells, along with uncertainty in stainless-steel surface emissivities. Based on the successful performance of TEMPEST Version M, development was initiated to incorporate 1) full viscous glass convection, 2) a dynamically adaptive grid that automatically follows the glass/air interface throughout the transient, and 3) a full enclosure radiation model to allow radiation heat transfer to non-nearest neighbor cells. 5 refs., 47 figs., 17 tabs.
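The canister cooldown physics that TEMPEST resolves in full (coupled conduction, convection, and radiation on a moving glass/air interface) can be caricatured by a one-dimensional explicit finite-difference conduction model. Everything below (geometry, material properties, time step) is an illustrative assumption, not TEMPEST input data:

```python
# 1-D explicit finite-difference cooling of a hot slab held between fixed-
# temperature walls -- an illustrative caricature of canister cooldown, not a
# TEMPEST model.  Material properties and grid are assumed values.
alpha = 1e-6          # thermal diffusivity, m^2/s (glass-like, assumed)
dx, dt = 0.01, 20.0   # grid spacing (m) and time step (s)
r = alpha * dt / dx**2    # explicit-scheme stability requires r < 0.5
assert r < 0.5

def cool(T, wall, steps):
    """March interior nodes forward in time; end nodes are held at wall temp."""
    T = list(T)
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, len(T) - 1):
            Tn[i] = T[i] + r * (T[i-1] - 2*T[i] + T[i+1])  # heat equation stencil
        Tn[0] = Tn[-1] = wall
        T = Tn
    return T

T0 = [25.0] + [1000.0] * 9 + [25.0]   # molten glass between cool walls, deg C
T_later = cool(T0, 25.0, 2000)
peak = max(T_later)
```

Even this toy model exhibits the stability constraint (r < 0.5) that makes explicit transient conduction expensive on fine grids, one reason production codes like TEMPEST use more sophisticated numerics.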
Application of partially-coupled hydro-mechanical schemes to multiphase flow problems
NASA Astrophysics Data System (ADS)
Tillner, Elena; Kempka, Thomas
2016-04-01
Utilization of subsurface reservoirs by fluid storage or production generally triggers pore pressure changes and volumetric strains in reservoirs and cap rocks. The assessment of hydro-mechanical effects can be undertaken using different process coupling strategies. The fully-coupled geomechanics and flow simulation, constituting a monolithic system of equations, is rarely applied for simulations involving multiphase fluid flow due to the high computational efforts required. Pseudo-coupled simulations are driven by static tabular data on porosity and permeability changes as function of pore pressure or mean stress, resulting in a rather limited flexibility when encountering complex subsurface utilization schedules and realistic geological settings. Partially-coupled hydro-mechanical simulations can be distinguished into one-way and iterative two-way coupled schemes, whereby the latter one is based on calculations of flow and geomechanics, taking into account the iterative exchange of coupling parameters between the two respective numerical simulators until convergence is achieved. In contrast, the one-way coupling scheme is determined by the provision of pore pressure changes calculated by the flow simulator to the geomechanical simulator neglecting any feedback. In the present study, partially-coupled two-way schemes are discussed in view of fully-coupled single-phase flow and geomechanics, and their applicability to multiphase flow simulations. For that purpose, we introduce a comparison study between the different coupling schemes, using selected benchmarks to identify the main requirements for the partially-coupled approach to converge with the numerical solution of the fully-coupled one.
NASA Technical Reports Server (NTRS)
Mutterperl, William
1944-01-01
A method of conformal transformation is developed that maps an airfoil into a straight line, the line being chosen as the extended chord line of the airfoil. The mapping is accomplished by operating directly with the airfoil ordinates. The absence of any preliminary transformation is found to shorten the work substantially compared with previous methods. Use is made of the superposition of solutions to obtain a rigorous counterpart of the approximate methods of thin-airfoil theory. The method is applied to the solution of the direct and inverse problems for arbitrary airfoils and pressure distributions. Numerical examples are given. Applications to more general types of regions, in particular to biplanes and to cascades of airfoils, are indicated. (author)
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Elliott, J. P.; Spreiter, J. R.
1983-01-01
An investigation was conducted to continue the development of perturbation procedures and associated computational codes for rapidly determining approximations to nonlinear flow solutions, with the purpose of establishing a method for minimizing computational requirements associated with parametric design studies of transonic flows in turbomachines. The results reported here concern the extension of the previously developed successful method for single-parameter perturbations to simultaneous multiple-parameter perturbations, and the preliminary application of the multiple-parameter procedure in combination with an optimization method to a blade design/optimization problem. In order to provide as severe a test as possible of the method, attention is focused in particular on transonic flows which are highly supercritical. Flows past both isolated blades and compressor cascades, involving simultaneous changes in both flow and geometric parameters, are considered. Comparisons with the corresponding exact nonlinear solutions display remarkable accuracy and range of validity, in direct correspondence with previous results for single-parameter perturbations.
ERIC Educational Resources Information Center
Laxman, Kumar
2010-01-01
Problem-based learning (PBL) is an instructional approach that is organized around the investigation and resolution of problems. Problems are neither uniform nor similar. Jonassen (1997, 2000) in his design theory of problem solving has categorized problems into two broad types--well-structured and ill-structured. He has also described a host of…
The long-solved problem of the best-fit straight line: application to isotopic mixing lines
Wehr, Richard; Saleska, Scott R.
2017-01-03
It has been almost 50 years since York published an exact and general solution for the best-fit straight line to independent points with normally distributed errors in both x and y. York's solution is highly cited in the geophysical literature but almost unknown outside of it, so that there has been no ebb in the tide of books and papers wrestling with the problem. Much of the post-1969 literature on straight-line fitting has sown confusion not merely by its content but by its very existence. The optimal least-squares fit is already known; the problem is already solved. Here we introduce the non-specialist reader to York's solution and demonstrate its application in the interesting case of the isotopic mixing line, an analytical tool widely used to determine the isotopic signature of trace gas sources for the study of biogeochemical cycles. The most commonly known linear regression methods – ordinary least-squares regression (OLS), geometric mean regression (GMR), and orthogonal distance regression (ODR) – have each been recommended as the best method for fitting isotopic mixing lines. In fact, OLS, GMR, and ODR are all special cases of York's solution that are valid only under particular measurement conditions, and those conditions do not hold in general for isotopic mixing lines. Using Monte Carlo simulations, we quantify the biases in OLS, GMR, and ODR under various conditions and show that York's general – and convenient – solution is always the least biased.
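York's solution referenced above is a short iterative computation. A sketch for the uncorrelated-error case (correlation r = 0 in York's notation), exercised on hypothetical exactly-collinear data rather than real isotopic measurements, looks like this:

```python
# York's (1969) best-fit straight line for points with known errors in both
# x and y, specialized to uncorrelated x/y errors.  Data below are hypothetical.
def york_fit(x, y, sx, sy, b0=1.0, tol=1e-12, max_iter=100):
    """Return (intercept a, slope b) of York's weighted best-fit line."""
    wx = [1.0 / s**2 for s in sx]   # weights = inverse variances in x
    wy = [1.0 / s**2 for s in sy]   # weights = inverse variances in y
    b = b0
    for _ in range(max_iter):
        # combined weight of each point for the current slope estimate
        W = [wxi * wyi / (wxi + b*b*wyi) for wxi, wyi in zip(wx, wy)]
        xbar = sum(w*xi for w, xi in zip(W, x)) / sum(W)
        ybar = sum(w*yi for w, yi in zip(W, y)) / sum(W)
        U = [xi - xbar for xi in x]
        V = [yi - ybar for yi in y]
        beta = [w * (u/wyi + b*v/wxi)
                for w, u, v, wxi, wyi in zip(W, U, V, wx, wy)]
        b_new = (sum(w*bi*v for w, bi, v in zip(W, beta, V))
                 / sum(w*bi*u for w, bi, u in zip(W, beta, U)))
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    a = ybar - b * xbar
    return a, b

# Hypothetical mixing-line-like data lying exactly on y = 2x + 1:
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [2*xi + 1 for xi in x]
a, b = york_fit(x, y, sx=[0.1]*5, sy=[0.2]*5)
```

Setting all sx to zero-variance limits recovers OLS, and equal effective weights recover ODR, which is the sense in which the paper calls those methods special cases of York's solution.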
Electromagnetic Extended Finite Elements for High-Fidelity Multimaterial Problems LDRD Final Report
Siefert, Christopher; Bochev, Pavel Blagoveston; Kramer, Richard Michael Jack; Voth, Thomas Eugene; Cox, James
2014-09-01
Surface effects are critical to the accurate simulation of electromagnetics (EM) as current tends to concentrate near material surfaces. Sandia EM applications, which include exploding bridge wires for detonator design, electromagnetic launch of flyer plates for material testing and gun design, lightning blast-through for weapon safety, electromagnetic armor, and magnetic flux compression generators, all require accurate resolution of surface effects. These applications operate in a large deformation regime, where body-fitted meshes are impractical and multimaterial elements are the only feasible option. State-of-the-art methods use various mixture models to approximate the multi-physics of these elements. The empirical nature of these models can significantly compromise the accuracy of the simulation in this very important surface region. We propose to substantially improve the predictive capability of electromagnetic simulations by removing the need for empirical mixture models at material surfaces. We do this by developing an eXtended Finite Element Method (XFEM) and an associated Conformal Decomposition Finite Element Method (CDFEM) which satisfy the physically required compatibility conditions at material interfaces. We demonstrate the effectiveness of these methods for diffusion and diffusion-like problems on node, edge and face elements in 2D and 3D. We also present preliminary work on h-hierarchical elements and remap algorithms.
ERIC Educational Resources Information Center
Lindstrom, Peter A.; And Others
This document consists of four units. The first of these views calculus applications to work, area, and distance problems. It is designed to help students gain experience in: 1) computing limits of Riemann sums; 2) computing definite integrals; and 3) solving elementary area, distance, and work problems by integration. The second module views…