Reduced-Order Modeling: New Approaches for Computational Physics
NASA Technical Reports Server (NTRS)
Beran, Philip S.; Silva, Walter A.
2001-01-01
In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
Reverse time migration by Krylov subspace reduced order modeling
NASA Astrophysics Data System (ADS)
Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali
2018-04-01
Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations associated with the performance cost of RTM are the intensive computation of the forward and backward simulations, the overall time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our proposed method uses the Krylov subspace method to compute certain mode shapes of the velocity model as an orthogonal basis for the reduced order model. RTM by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of RTM. The synthetic model results showed that the suggested method can decrease the computational costs of RTM by several orders of magnitude compared with RTM by the finite element method.
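The Krylov-subspace construction described above can be illustrated with a standard Arnoldi iteration. The Python/NumPy sketch below is generic, not the paper's implementation; `A` stands in for whatever discretized operator of the velocity model is being reduced, and `b` for a starting vector.

```python
import numpy as np

def arnoldi_basis(A, b, m):
    """Build an orthonormal basis Q of the Krylov subspace
    span{b, A b, ..., A^(m-1) b} via the Arnoldi iteration."""
    n = len(b)
    Q = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v = v - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if j + 1 < m:
            if H[j + 1, j] < 1e-12:             # invariant subspace reached
                return Q[:, : j + 1], H[: j + 2, : j + 1]
            Q[:, j + 1] = v / H[j + 1, j]
    return Q, H
```

The columns of `Q` then serve as the orthogonal basis onto which the full operator is projected.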
Wind Farm Flow Modeling using an Input-Output Reduced-Order Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter
Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost but retain the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
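The snapshot-POD step used in this and several of the entries below can be sketched with an SVD of mean-subtracted snapshots. This is a generic illustration, not any one paper's code; the snapshot matrix layout (one flow-field sample per column) is an assumption.

```python
import numpy as np

def pod_modes(snapshots, r):
    """snapshots: (n_dof, n_snap) matrix of flow-field samples.
    Returns the r dominant POD modes, their energy fractions,
    and the mean field that was subtracted."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                       # fluctuations about the mean
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)               # fraction of captured variance
    return U[:, :r], energy[:r], mean
```

A new snapshot `x` is then represented by its modal coordinates `a = modes.T @ (x - mean.ravel())`, which is the low-dimensional state a system identification technique would operate on.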
Training Maneuver Evaluation for Reduced Order Modeling of Stability & Control Properties Using Computational Fluid Dynamics
2013-03-01
...reduced order model is created. Finally, previous research in this area of study will be examined, and its application to this research will be...
Bringing MapReduce Closer To Data With Active Drives
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Prathapan, S.; Warmka, R.; Wyatt, B.; Halem, M.; Trantham, J. D.; Markey, C. A.
2017-12-01
Moving computation closer to the data location has been a much-theorized improvement to computation for decades. The increase in processor performance and the decrease in processor size and power requirements, combined with the increase in data-intensive computing, have created a push to move computation as close to data as possible. We will show the next logical step in this evolution in computing: moving computation directly to storage. Hypothetical systems, known as Active Drives, were proposed as early as 1998. These Active Drives would have a general-purpose CPU on each disk, allowing computations to be performed on them without the need to transfer the data to the computer over the system bus or via a network. We will utilize Seagate's Active Drives to perform general-purpose parallel computing using the MapReduce programming model directly on each drive. We will detail how the MapReduce programming model can be adapted to the Active Drive compute model to perform general-purpose computing with results comparable to traditional MapReduce computations performed via Hadoop. We will show how an Active Drive-based approach significantly reduces the amount of data leaving the drive when performing two common algorithms: subsetting and gridding. We will show that an Active Drive-based design significantly improves data transfer speeds into and out of drives compared to Hadoop's HDFS, while keeping compute speeds comparable to Hadoop's.
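The MapReduce programming model that the abstract adapts to Active Drives reduces to three steps: map each record to key-value pairs, shuffle by key, and reduce each group. A minimal in-process Python sketch follows; the gridding example with 10-degree latitude bins is an illustrative assumption, not taken from the paper.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal in-process MapReduce: the shuffle step groups
    mapper output by key before reduction."""
    groups = defaultdict(list)
    for rec in records:
        for key, value in mapper(rec):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Gridding example: average observations into 10-degree latitude bins.
obs = [(12.3, 1.0), (14.8, 3.0), (41.2, 5.0)]        # (latitude, value)
mapper = lambda rec: [(int(rec[0] // 10) * 10, rec[1])]
reducer = lambda key, vals: sum(vals) / len(vals)
print(map_reduce(obs, mapper, reducer))              # {10: 2.0, 40: 5.0}
```

Running the map and reduce steps on the drive itself means only the small per-bin aggregates, not the raw observations, ever cross the bus.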
The Research of the Parallel Computing Development from the Angle of Cloud Computing
NASA Astrophysics Data System (ADS)
Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun
2017-10-01
Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing brings parallel computing into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
Reduced-Order Modeling: Cooperative Research and Development at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Beran, Philip S.; Cesnik, Carlos E. S.; Guendel, Randal E.; Kurdila, Andrew; Prazenica, Richard J.; Librescu, Liviu; Marzocca, Piergiovanni; Raveh, Daniella E.
2001-01-01
Cooperative research and development activities at the NASA Langley Research Center (LaRC) involving reduced-order modeling (ROM) techniques are presented. Emphasis is given to reduced-order methods and analyses based on Volterra series representations, although some recent results using Proper Orthogonal Decomposition (POD) are discussed as well. Results are reported for a variety of computational and experimental nonlinear systems to provide clear examples of the use of reduced-order models, particularly within the field of computational aeroelasticity. The need for and the relative performance (speed, accuracy, and robustness) of reduced-order modeling strategies are documented. The development of unsteady aerodynamic state-space models directly from computational fluid dynamics analyses is presented in addition to analytical and experimental identifications of Volterra kernels. Finally, future directions for this research activity are summarized.
NASA Astrophysics Data System (ADS)
Xu, M.; van Overloop, P. J.; van de Giesen, N. C.
2011-02-01
Model predictive control (MPC) of open channel flow is becoming an important tool in water management. The complexity of the prediction model has a large influence on the MPC application in terms of control effectiveness and computational efficiency. The Saint-Venant equations (the SV model in this paper) and the Integrator Delay (ID) model are either accurate but computationally costly, or simple but restricted in the flow changes they allow. In this paper, a reduced Saint-Venant (RSV) model is developed by applying a model reduction technique, Proper Orthogonal Decomposition (POD), to the SV equations. The RSV model keeps the main flow dynamics and functions over a large flow range but is easier to implement in MPC. In the test case of a modeled canal reach, the numbers of states and disturbances in the RSV model are about 45 and 16 times smaller, respectively, than in the SV model. The computational time of MPC with the RSV model is significantly reduced, while the controller remains effective. Thus, the RSV model is a promising means of balancing control effectiveness and computational efficiency.
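Once a POD basis is in hand, the projection step that produces the small prediction model is standard. For a linearized state-space model dx/dt = A x + B u and an orthonormal basis Phi with x ≈ Phi a, the reduced operators follow by Galerkin projection. The sketch below shows only this generic step, not the derivation of the RSV model from the Saint-Venant equations.

```python
import numpy as np

def galerkin_reduce(A, B, Phi):
    """Project a linear state-space model dx/dt = A x + B u onto an
    orthonormal basis Phi (columns orthonormal), giving the reduced
    model da/dt = Ar a + Br u with x approximated by Phi a."""
    Ar = Phi.T @ A @ Phi
    Br = Phi.T @ B
    return Ar, Br
```

With a basis of, say, 10 columns replacing thousands of channel states, each MPC optimization solves a problem whose dimension is that of `Ar`, which is where the reported speed-up comes from.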
Tharmmaphornphilas, Wipawee; Green, Benjamin; Carnahan, Brian J; Norman, Bryan A
2003-01-01
This research developed worker schedules by using administrative controls and a computer programming model to reduce the likelihood of worker hearing loss. By rotating the workers through different jobs during the day it was possible to reduce their exposure to hazardous noise levels. Computer simulations were made based on data collected in a real setting. Worker schedules currently used at the site are compared with proposed worker schedules from the computer simulations. For the worker assignment plans found by the computer model, the authors calculate a significant decrease in time-weighted average (TWA) sound level exposure. The maximum daily dose that any worker is exposed to is reduced by 58.8%, and the maximum TWA value for the workers is reduced by 3.8 dB from the current schedule.
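The daily dose and TWA quantities reported above follow standard OSHA-style formulas (90 dBA criterion level, 5 dB exchange rate). The sketch below assumes those conventions; the study itself may use slightly different parameters.

```python
import math

def allowed_hours(level_dba):
    """Permissible exposure time at a given sound level under an
    OSHA-style 90 dBA criterion with a 5 dB exchange rate."""
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

def daily_dose(exposures):
    """Percent noise dose for one worker's day.
    exposures: list of (hours, dBA) pairs."""
    return 100.0 * sum(h / allowed_hours(l) for h, l in exposures)

def twa(dose_percent):
    """Eight-hour time-weighted average sound level from the daily dose."""
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

# Rotating a worker out of a loud job for half the day cuts the dose:
print(daily_dose([(8, 95)]))           # 200.0 (full day at 95 dBA)
print(daily_dose([(4, 95), (4, 85)]))  # 125.0 (half day rotated to 85 dBA)
```

A scheduling model like the one in the study searches over such rotations to minimize each worker's maximum dose.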
NASA Astrophysics Data System (ADS)
Raghupathy, Arun; Ghia, Karman; Ghia, Urmila
2008-11-01
Compact Thermal Models (CTMs) representing IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with fewer computational resources can be effectively used in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom or variables in the computations for such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary-condition-independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
Due to the computational cost of running regional-scale numerical air quality models, reduced form models (RFM) have been proposed as computationally efficient simulation tools for characterizing the pollutant response to many different types of emission reductions. The U.S. Envi...
REVEAL: An Extensible Reduced Order Model Builder for Simulation and Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Khushbu; Sharma, Poorva; Ma, Jinliang
2013-04-30
Many science domains need to build computationally efficient and accurate representations of high fidelity, computationally expensive simulations. These computationally efficient versions are known as reduced-order models. This paper presents the design and implementation of a novel reduced-order model (ROM) builder, the REVEAL toolset. This toolset generates ROMs based on science- and engineering-domain specific simulations executed on high performance computing (HPC) platforms. The toolset encompasses a range of sampling and regression methods that can be used to generate a ROM, automatically quantifies the ROM accuracy, and provides support for an iterative approach to improve ROM accuracy. REVEAL is designed to be extensible in order to utilize the core functionality with any simulator that has published input and output formats. It also defines programmatic interfaces to include new sampling and regression techniques so that users can 'mix and match' mathematical techniques to best suit the characteristics of their model. In this paper, we describe the architecture of REVEAL and demonstrate its usage with a computational fluid dynamics model used in carbon capture.
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, D.; Alfonsi, A.; Talbot, P.
2016-10-01
The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much shorter time (microseconds instead of hours/days).
Leahy, P.P.
1982-01-01
The Trescott computer program for modeling groundwater flow in three dimensions has been modified to (1) treat aquifer and confining bed pinchouts more realistically and (2) reduce the computer memory requirements needed for the input data. Using the original program, simulation of aquifer systems with nonrectangular external boundaries may result in a large number of nodes that are not involved in the numerical solution of the problem, but require computer storage. (USGS)
A Reduced Order Model for Whole-Chip Thermal Analysis of Microfluidic Lab-on-a-Chip Systems
Wang, Yi; Song, Hongjun; Pant, Kapil
2013-01-01
This paper presents a Krylov subspace projection-based Reduced Order Model (ROM) for whole microfluidic chip thermal analysis, including conjugate heat transfer. Two key steps in the reduced order modeling procedure are described in detail, including (1) the acquisition of a 3D full-scale computational model in the state-space form to capture the dynamic thermal behavior of the entire microfluidic chip; and (2) the model order reduction using the Block Arnoldi algorithm to markedly lower the dimension of the full-scale model. Case studies using a practically relevant thermal microfluidic chip are undertaken to establish the capability and to evaluate the computational performance of the reduced order modeling technique. The ROM is compared against the full-scale model and exhibits good agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) and over three orders-of-magnitude acceleration in computational speed. The salient model reusability and real-time simulation capability render it amenable for operational optimization and in-line thermal control and management of microfluidic systems and devices. PMID:24443647
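The Block Arnoldi step generalizes the single-vector iteration to a matrix of inputs: it orthonormalizes the block Krylov sequence {B, AB, A²B, ...}. A generic sketch, assuming a state-space system with operator `A` and input matrix `B` (not the paper's implementation):

```python
import numpy as np

def block_arnoldi(A, B, m):
    """Orthonormal basis for the block Krylov subspace
    span{B, A B, ..., A^(m-1) B}, built with QR factorizations."""
    V, _ = np.linalg.qr(B)
    blocks = [V]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        Q = np.hstack(blocks)
        W = W - Q @ (Q.T @ W)    # block Gram-Schmidt against earlier blocks
        W = W - Q @ (Q.T @ W)    # repeated once for numerical stability
        V, _ = np.linalg.qr(W)
        blocks.append(V)
    return np.hstack(blocks)
```

Projecting the full-scale state-space operators onto this basis (`Ar = V.T @ A @ V`, and similarly for the input and output matrices) yields the small system whose transfer behavior matches the full model's leading moments.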
Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon
2012-01-01
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
Modeling and simulation of ocean wave propagation using lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Nuraiman, Dian
2017-10-01
In this paper, we present the modeling and simulation of ocean wave propagation from the deep sea to the shoreline. Simulating such a large domain carries a high computational cost. We propose to couple a 1D shallow water equations (SWE) model with a 2D incompressible Navier-Stokes equations (NSE) model in order to reduce the computational cost. The coupled model is solved using the lattice Boltzmann method (LBM) with the lattice Bhatnagar-Gross-Krook (BGK) scheme. Additionally, a special method is implemented to treat the complex behavior of the free surface close to the shoreline. The results show that the coupled model can reduce computational cost significantly compared to the full NSE model.
Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis
NASA Technical Reports Server (NTRS)
Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.
2015-01-01
This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
NASA Astrophysics Data System (ADS)
Mitry, Mina
Often, computationally expensive engineering simulations can impede the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
Comparison of different models for non-invasive FFR estimation
NASA Astrophysics Data System (ADS)
Mirramezani, Mehran; Shadden, Shawn
2017-11-01
Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
Fast Learning for Immersive Engagement in Energy Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M
The fast computation which is critical for immersive engagement with and learning from energy simulations would be furthered by developing a general method for creating rapidly computed simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost with response times - typically less than one minute of wall-clock time - suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClanahan, Richard; De Leon, Phillip L.
2014-08-20
The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
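Greedy mixture-reduction schemes such as Runnalls' repeatedly merge the pair of components whose merge is cheapest under a divergence bound; the core operation is a moment-preserving merge of two components. Shown below in 1-D for clarity (real SR systems use multivariate components with diagonal covariances); this is an illustration of the merge step, not the paper's tree-hash system.

```python
def merge_components(w1, mu1, var1, w2, mu2, var2):
    """Moment-preserving merge of two 1-D Gaussian mixture components:
    the result has the same total weight, mean, and variance as the
    two-component sub-mixture it replaces."""
    w = w1 + w2
    a1, a2 = w1 / w, w2 / w
    mu = a1 * mu1 + a2 * mu2
    var = a1 * var1 + a2 * var2 + a1 * a2 * (mu1 - mu2) ** 2
    return w, mu, var
```

Applying such merges level by level yields the coarse-to-fine tree of Gaussians that lets the hash system evaluate only a small subtree of components per frame instead of the full UBM.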
Noise Estimation in Electroencephalogram Signal by Using Volterra Series Coefficients
Hassani, Malihe; Karami, Mohammad Reza
2015-01-01
The Volterra model is widely used for nonlinearity identification in practical applications. In this paper, we employed the Volterra model to find the nonlinear relation between the electroencephalogram (EEG) signal and its noise, which is a novel approach to estimating noise in EEG signals. We show that by employing this method, we can considerably improve the signal-to-noise ratio by a factor of at least 1.54. An important issue in implementing the Volterra model is its computational complexity, especially when the degree of nonlinearity is increased. Hence, in many applications it is essential to reduce the complexity of computation. In this paper, we use a property of the EEG signal and propose a new, good approximation of the delayed input signal by its adjacent samples in order to reduce the computation of finding the Volterra series coefficients. The computational complexity is reduced by a ratio of at least 1/3 when the filter memory is 3. PMID:26284176
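A discrete second-order Volterra model computes the output from linear and quadratic combinations of delayed inputs. The sketch below is a generic truncated model with memory `M`; the kernels `h1`, `h2` are placeholders, not coefficients identified in the paper, and the quadratic term is what makes the coefficient count (and hence the computation the paper reduces) grow quickly with `M`.

```python
import numpy as np

def volterra2(x, h1, h2):
    """Output of a truncated second-order Volterra model with memory M:
    y[n] = sum_k h1[k] x[n-k]
         + sum_{k1,k2} h2[k1,k2] x[n-k1] x[n-k2]."""
    M = len(h1)
    y = np.zeros(len(x))
    for n in range(len(x)):
        # vector of the M most recent inputs, zero-padded before t=0
        past = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        y[n] = h1 @ past + past @ h2 @ past
    return y
```

With `h2` set to zero this reduces to an ordinary FIR filter, which is a quick way to sanity-check an identified model.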
Parameterized reduced-order models using hyper-dual numbers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
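Hyper-dual numbers extend ordinary dual numbers to yield exact second derivatives; the first-order dual-number sketch below shows the underlying idea of carrying a derivative part through arithmetic with no truncation error. This illustrates the arithmetic only, not the report's Craig-Bampton implementation.

```python
class Dual:
    """First-order dual number a + b*eps with eps**2 = 0. Propagating
    eps through arithmetic yields exact first derivatives; hyper-dual
    numbers add further eps terms to capture exact second derivatives."""
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule falls out of (a + b eps)(c + d eps), eps^2 = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)
    __rmul__ = __mul__

def derivative(f, x):
    """Exact df/dx at x for any f built from the overloaded operations."""
    return f(Dual(x, 1.0)).eps

print(derivative(lambda x: x * x * x + 2 * x, 2.0))  # 14.0 (= 3*2^2 + 2)
```

Evaluating a parameterized model once with such numbers gives the sensitivities of the reduced matrices to the parameters, without the step-size tuning that finite differences would require.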
NASA Astrophysics Data System (ADS)
Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya
2016-11-01
We construct a reduced-order model (ROM) to study the Wall Shear Stress (WSS) distributions in image-based patient-specific aneurysm models. The magnitude of WSS has been shown to be a critical factor in the growth and rupture of human aneurysms. We start the process by running a training case using a Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters, such that these parameters cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases from the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters very efficiently with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-compute the simulation for small changes in the system parameters.
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
Reduced Order Modeling of Combustion Instability in a Gas Turbine Model Combustor
NASA Astrophysics Data System (ADS)
Arnold-Medabalimi, Nicholas; Huang, Cheng; Duraisamy, Karthik
2017-11-01
Hydrocarbon fuel based propulsion systems are expected to remain relevant in aerospace vehicles for the foreseeable future. Design of these devices is complicated by combustion instabilities. The capability to model and predict these effects at reduced computational cost is a requirement for both design and control of these devices. This work focuses on computational studies on a dual swirl model gas turbine combustor in the context of reduced order model development. Full fidelity simulations are performed utilizing URANS and Hybrid RANS-LES with finite rate chemistry. Following this, data decomposition techniques are used to extract a reduced basis representation of the unsteady flow field. These bases are first used to identify sensor locations to guide experimental interrogations and controller feedback. Following this, initial results on developing a control-oriented reduced order model (ROM) will be presented. The capability of the ROM will be further assessed based on different operating conditions and geometric configurations.
Structural mode significance using INCA. [Interactive Controls Analysis computer program]
NASA Technical Reports Server (NTRS)
Bauer, Frank H.; Downing, John P.; Thorpe, Christopher J.
1990-01-01
Structural finite element models are often too large to be used in the design and analysis of control systems. Model reduction techniques must be applied to reduce the structural model to manageable size. In the past, engineers either performed the model order reduction by hand or used distinct computer programs to retrieve the data, to perform the significance analysis and to reduce the order of the model. To expedite this process, the latest version of INCA has been expanded to include an interactive graphical structural mode significance and model order reduction capability.
Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...
Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.
Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy
2018-01-23
Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time, which prevents the use of such models in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This will be done through a new model of motor unit (MU)-specific electrical source based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations on single generated MU action potentials (MUAPs) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a classical workstation. Graphical Abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90%, inducing only small deviations in the simulated HD-sEMG signals.
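The NRMSE comparison above is a one-line metric. A sketch with one common normalization (peak-to-peak range of the reference; the paper's exact definition may differ), using synthetic stand-ins for the fiber-source and MU-source signals:

```python
# NRMSE sketch between a reference signal and its approximation.
# The signals are synthetic stand-ins, not simulated MUAPs.
import numpy as np

def nrmse(reference, approx):
    """RMS error normalized by the peak-to-peak range of the reference."""
    err = np.sqrt(np.mean((reference - approx) ** 2))
    return err / (reference.max() - reference.min())

t = np.linspace(0.0, 1.0, 500)
fine = np.sin(2 * np.pi * 5 * t)                    # fiber-scale stand-in
coarse = fine + 0.01 * np.cos(2 * np.pi * 50 * t)   # MU-scale stand-in
print(nrmse(fine, coarse))  # well under the paper's 2% threshold
```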
Du, Dongping; Yang, Hui; Ednie, Andrew R; Bennett, Eric S
2016-09-01
Glycan structures account for up to 35% of the mass of cardiac sodium (Nav) channels. To investigate whether and how reduced sialylation affects Nav activity and cardiac electrical signaling, we conducted a series of in vitro experiments on ventricular apex myocytes under two different glycosylation conditions, reduced protein sialylation (ST3Gal4(-/-)) and full glycosylation (control). Although aberrant electrical signaling is observed under reduced sialylation, realizing a better understanding of the mechanistic details of pathological variations in INa and AP is difficult without performing in silico studies. However, computer models of Nav channels and cardiac myocytes involve great complexity, e.g., a high-dimensional parameter space and nonlinear, nonconvex equations. Traditional linear and nonlinear optimization methods have encountered many difficulties in model calibration. This paper presents a new statistical metamodeling approach for efficient computer experiments and optimization of Nav models. First, we utilize a fractional factorial design to identify control variables from the large set of model parameters, thereby reducing the dimensionality of the parametric space. Further, we develop a Gaussian process model as a surrogate of expensive and time-consuming computer models and then identify the next best design point that yields the maximal probability of improvement. This process iterates until convergence, and the performance is evaluated and validated with real-world experimental data. Experimental results show the proposed algorithm achieves superior performance in modeling the kinetics of Nav channels under a variety of glycosylation conditions. As a result, in silico models provide a better understanding of glyco-altered mechanistic details in state transitions and distributions of Nav channels.
Notably, ST3Gal4(-/-) myocytes are shown to have higher probabilities accumulated in intermediate inactivation during the repolarization and yield a shorter refractory period than WTs. The proposed statistical design of computer experiments is generally extensible to many other disciplines that involve large-scale and computationally expensive models.
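The "probability of improvement" step in the metamodeling loop above can be sketched with a minimal Gaussian-process surrogate. Everything here is illustrative: a toy 1-D objective, an RBF kernel, and a hand-rolled GP in place of the paper's Nav-channel calibration model.

```python
# Probability-of-improvement sketch with a minimal noise-free GP surrogate
# (RBF kernel; toy objective stands in for the expensive Nav model).
import numpy as np
from math import erf, sqrt

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(x_train, y_train, x_query):
    K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))  # jitter
    Ks = rbf(x_train, x_query)
    mu = Ks.T @ np.linalg.solve(K, y_train)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def prob_improvement(mu, var, best):
    # P(f(x) < best) under the GP posterior, for minimization.
    z = (best - mu) / np.sqrt(var)
    return np.array([0.5 * (1.0 + erf(zi / sqrt(2.0))) for zi in z])

f = lambda x: (x - 0.6) ** 2                 # toy calibration objective
x_train = np.array([0.0, 0.3, 1.0])          # designs evaluated so far
y_train = f(x_train)
x_query = np.linspace(0.0, 1.0, 101)
mu, var = gp_posterior(x_train, y_train, x_query)
pi = prob_improvement(mu, var, y_train.min())
x_next = x_query[np.argmax(pi)]              # next best design point
print(x_next)
```

In the paper's loop, `x_next` would be handed to the expensive simulator, the GP refit, and the process repeated until convergence.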
Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea
2015-09-01
The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is steadily growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
MaMR: High-performance MapReduce programming model for material cloud applications
NASA Astrophysics Data System (ADS)
Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng
2017-02-01
With the increasing data size in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data sets, and the processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications, called MaMR, that supports multiple different Map and Reduce functions running concurrently based on a hybrid shared-memory BSP model. An optimized data sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework present effective performance improvements compared to previous work.
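The map/reduce/merge pipeline described above can be sketched as a toy in-process example. The pipeline names and the material records are illustrative; MaMR itself runs these phases concurrently on a cluster.

```python
# Toy map -> shuffle -> reduce pipeline with an explicit merge phase,
# sketching the MaMR idea of combining outputs of multiple concurrent
# map/reduce functions (data and names are illustrative).
from collections import defaultdict

def map_phase(records, map_fn):
    shuffled = defaultdict(list)          # shuffle: group values by key
    for rec in records:
        for key, value in map_fn(rec):
            shuffled[key].append(value)
    return shuffled

def reduce_phase(shuffled, reduce_fn):
    return {k: reduce_fn(vs) for k, vs in shuffled.items()}

def merge_phase(*results):
    # Merge step: combine outputs of independent map/reduce pipelines.
    merged = defaultdict(dict)
    for name, result in results:
        for k, v in result.items():
            merged[k][name] = v
    return dict(merged)

# Two pipelines over the same material records: count samples per element
# and accumulate measured mass per element, then merge by element key.
records = [("Fe", 55.8), ("Si", 28.1), ("Fe", 55.8), ("Cu", 63.5)]
counts = reduce_phase(map_phase(records, lambda r: [(r[0], 1)]), sum)
masses = reduce_phase(map_phase(records, lambda r: [(r[0], r[1])]), sum)
merged = merge_phase(("count", counts), ("mass", masses))
print(merged)
```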
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the complex uncertainty and multiple physical scales in the models. To efficiently address this difficulty, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques.
Highlights:
• Multi-element least square HDMR is proposed to treat stochastic models.
• The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR.
• Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions.
• Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.
Evaluation of Geometrically Nonlinear Reduced Order Models with Nonlinear Normal Modes
Kuether, Robert J.; Deaner, Brandon J.; Hollkamp, Joseph J.; ...
2015-09-15
Several reduced-order modeling strategies have been developed to create low-order models of geometrically nonlinear structures from detailed finite element models, allowing one to compute the dynamic response of the structure at a dramatically reduced cost. However, the parameters of these reduced-order models are estimated by applying a series of static loads to the finite element model, and the quality of the reduced-order model can be highly sensitive to the amplitudes of the static load cases used and to the type/number of modes used in the basis. Our paper proposes to combine reduced-order modeling and numerical continuation to estimate the nonlinear normal modes of geometrically nonlinear finite element models. Not only does this make it possible to compute the nonlinear normal modes far more quickly than existing approaches, but the nonlinear normal modes are also shown to be an excellent metric by which the quality of the reduced-order model can be assessed. Hence, the second contribution of this work is to demonstrate how nonlinear normal modes can be used as a metric by which nonlinear reduced-order models can be compared. Moreover, various reduced-order models with hardening nonlinearities are compared for two different structures to demonstrate these concepts: a clamped–clamped beam model, and a more complicated finite element model of an exhaust panel cover.
NASA Technical Reports Server (NTRS)
Oluwole, Oluwayemisi O.; Wong, Hsi-Wu; Green, William
2012-01-01
AdapChem software enables high efficiency, low computational cost, and enhanced accuracy in computational fluid dynamics (CFD) numerical simulations used for combustion studies. The software dynamically allocates smaller, reduced chemical models instead of the larger, full chemistry models to evolve the calculation while ensuring that the same accuracy is obtained for steady-state CFD reacting-flow simulations. The software enables detailed chemical kinetic modeling in combustion CFD simulations. AdapChem adapts the reaction mechanism used in the CFD to the local reaction conditions. Instead of a single, comprehensive reaction mechanism throughout the computation, a dynamic distribution of smaller, reduced models is used to accurately capture the chemical kinetics at a fraction of the cost of the traditional single-mechanism approach.
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power, enabling more refined descriptions of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits their full effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
NASA Astrophysics Data System (ADS)
Stockton, Gregory R.
2011-05-01
Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers, as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of data computation in a computer center is becoming limited by the amount of available power, space and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are spent annually to keep data centers cool. The cooling and air flows dynamically change, departing from the 3-D computational fluid dynamic models predicted during construction; as time goes by, the efficiency and effectiveness of the actual cooling depart even farther from the predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and make appropriate corrections and repairs, the required power for data centers can be dramatically reduced, which reduces costs and also improves reliability.
A model-reduction approach to the micromechanical analysis of polycrystalline materials
NASA Astrophysics Data System (ADS)
Michel, Jean-Claude; Suquet, Pierre
2016-03-01
The present study is devoted to the extension to polycrystals of a model-reduction technique introduced by the authors, called the nonuniform transformation field analysis (NTFA). This new reduced model is obtained in two steps. First the local fields of internal variables are decomposed on a reduced basis of modes as in the NTFA. Second the dissipation potential of the phases is replaced by its tangent second-order (TSO) expansion. The reduced evolution equations of the model can be entirely expressed in terms of quantities which can be pre-computed once and for all. Roughly speaking, these pre-computed quantities depend only on the average and fluctuations per phase of the modes and of the associated stress fields. The accuracy of the new NTFA-TSO model is assessed by comparison with full-field simulations on two specific applications, creep of polycrystalline ice and response of polycrystalline copper to a cyclic tension-compression test. The new reduced evolution equations are faster than the full-field computations by two orders of magnitude in both examples.
NASA Technical Reports Server (NTRS)
Seldner, K.
1976-01-01
The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multi-variable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed. Selected results comparing high and low order models are presented. The LQR control algorithms can be programmed on a digital computer, which will control the engine simulation over the desired flight envelope.
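One simple way to reduce the order of a linear model, as described above, is modal truncation: keep only the slowest (dominant) eigenmodes. The sketch below uses a synthetic 5-state system with separated time scales, not the F100 engine model.

```python
# Modal-truncation sketch for reducing a linear model x' = A x,
# keeping the slowest modes (illustrative system, not the F100 model).
import numpy as np

rng = np.random.default_rng(4)
# Diagonalizable stable system with well-separated time scales.
eigs = np.array([-0.5, -1.0, -40.0, -80.0, -120.0])
T = rng.standard_normal((5, 5))
A = T @ np.diag(eigs) @ np.linalg.inv(T)

w, V = np.linalg.eig(A)
keep = np.argsort(-w.real)[:2]          # two slowest (dominant) modes
Vr = V[:, keep]                         # reduced modal basis
# Project the dynamics onto the kept modes: solve Vr @ Ar = A @ Vr.
Ar = np.linalg.lstsq(Vr, A @ Vr, rcond=None)[0]
print(np.sort(np.linalg.eigvals(Ar).real))   # recovers the slow eigenvalues
```

The fast modes (-40, -80, -120) decay almost instantly at the control bandwidth of interest, so the 2-state model reproduces the dominant engine-like response.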
Controller design via structural reduced modeling by FETM
NASA Technical Reports Server (NTRS)
Yousuff, Ajmal
1987-01-01
The Finite Element-Transfer Matrix (FETM) method has been developed to reduce the computations involved in analysis of structures. This widely accepted method, however, has certain limitations, and does not address the issues of control design. To overcome these, a modification of the FETM method has been developed. The new method easily produces reduced models tailored toward subsequent control design. Other features of this method are its ability to: (1) extract open loop frequencies and mode shapes with less computations, (2) overcome limitations of the original FETM method, and (3) simplify the design procedures for output feedback, constrained compensation, and decentralized control. This report presents the development of the new method, generation of reduced models by this method, their properties, and the role of these reduced models in control design. Examples are included to illustrate the methodology.
A genetic algorithm for solving supply chain network design model
NASA Astrophysics Data System (ADS)
Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.
2013-09-01
Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.
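A genetic algorithm for a network design problem can be sketched as follows. The encoding and cost function here are illustrative assumptions (a simple facility-location flavor), not the paper's chromosome structure:

```python
# Minimal genetic-algorithm sketch for a toy distribution-network design
# problem (illustrative encoding, not the paper's). Chromosome: boolean
# vector marking which candidate facilities are open; cost combines
# facility opening costs with cheapest-assignment costs per customer.
import numpy as np

rng = np.random.default_rng(1)
n_fac, n_cust = 6, 20
open_cost = rng.uniform(5, 15, n_fac)
assign_cost = rng.uniform(1, 10, (n_fac, n_cust))

def cost(chrom):
    if not chrom.any():                 # infeasible: no facility open
        return np.inf
    return open_cost[chrom].sum() + assign_cost[chrom].min(axis=0).sum()

pop = rng.integers(0, 2, (30, n_fac)).astype(bool)
for _ in range(60):
    fit = np.array([cost(c) for c in pop])
    parents = pop[np.argsort(fit)[:10]]           # truncation selection
    kids = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_fac)              # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(n_fac) < 0.05         # bit-flip mutation
        kids.append(child)
    pop = np.array(kids)

best = min(pop, key=cost)
print(best, cost(best))
```

The paper's contribution is a chromosome structure that guarantees feasibility by construction; the boolean encoding above instead penalizes infeasible individuals with infinite cost.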
NASA Astrophysics Data System (ADS)
Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia
2017-10-01
Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied for the boundary value problem on the micro-scale. This involves the Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Therein three methods for hyper-reduction, differing in how the nonlinearity is approximated and the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors- GNAT) is favoured to obtain an optimal projection and a robust reduced model.
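The Gappy-POD style hyper-reduction mentioned above reconstructs a full nonlinear term from a handful of sampled entries via a least-squares fit in the POD basis. A minimal sketch with synthetic data (the basis and sample points are random stand-ins for the micro-scale quantities):

```python
# Gappy-POD sketch: reconstruct a full field from a few sampled entries
# using a precomputed POD basis (synthetic, illustrative data).
import numpy as np

rng = np.random.default_rng(2)
n, r, m = 100, 4, 10       # full dim, basis size, sample points (m >= r)
Phi = np.linalg.qr(rng.standard_normal((n, r)))[0]  # orthonormal POD basis
P = rng.choice(n, size=m, replace=False)            # sampled (mask) indices

f_full = Phi @ rng.standard_normal(r)               # field in span(Phi)
# Least-squares fit of basis coefficients using only the m sampled entries.
coeffs, *_ = np.linalg.lstsq(Phi[P], f_full[P], rcond=None)
f_rec = Phi @ coeffs                                # reconstructed full field
print(np.linalg.norm(f_full - f_rec))
```

For a field exactly in the span of the basis the reconstruction is exact; in the micro-scale setting the residual of this fit is precisely what distinguishes the Galerkin and GNAT-type projections discussed in the abstract.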
Model reduction of the numerical analysis of Low Impact Developments techniques
NASA Astrophysics Data System (ADS)
Brunetti, Giuseppe; Šimůnek, Jirka; Wöhling, Thomas; Piro, Patrizia
2017-04-01
Mechanistic models have proven to be accurate and reliable tools for the numerical analysis of the hydrological behavior of Low Impact Development (LID) techniques. However, their widespread adoption is limited by their complexity and computational cost. Recent studies have tried to address this issue by investigating the application of new techniques, such as surrogate-based modeling. However, current results are still limited and fragmented. One such approach, the Model Order Reduction (MOR) technique, can represent a valuable tool for reducing the computational complexity of a numerical problem by computing an approximation of the original model. While this technique has been extensively used in water-related problems, no studies have evaluated its use in LID modeling. Thus, the main aim of this study is to apply the MOR technique to the development of a reduced order model (ROM) for the numerical analysis of the hydrologic behavior of LIDs, in particular green roofs. The model should be able to correctly reproduce all the hydrological processes of a green roof while reducing the computational cost. The proposed model decouples the subsurface water dynamics of a green roof into (a) one-dimensional (1D) vertical flow through the green roof itself and (b) one-dimensional saturated lateral flow along the impervious rooftop. The green roof is horizontally discretized in N elements. Each element represents a vertical domain, which can have different properties or boundary conditions. The 1D Richards equation is used to simulate flow in the substrate and drainage layers. Simulated outflow from the vertical domain is used as a recharge term for saturated lateral flow, which is described using the kinematic wave approximation of the Boussinesq equation. The proposed model has been compared with the mechanistic model HYDRUS-2D, which numerically solves the Richards equation for the whole domain.
The HYDRUS-1D code has been used for the description of vertical flow, while a Finite Volume Scheme has been adopted for lateral flow. Two scenarios involving flat and steep green roofs were analyzed. Results confirmed the accuracy of the reduced order model, which was able to reproduce both subsurface outflow and the moisture distribution in the green roof, significantly reducing the computational cost.
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have a significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantages of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady-state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix, defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
Fitting the Reduced RUM with Mplus: A Tutorial
ERIC Educational Resources Information Center
Chiu, Chia-Yi; Köhn, Hans-Friedrich; Wu, Huey-Min
2016-01-01
The Reduced Reparameterized Unified Model (Reduced RUM) is a diagnostic classification model for educational assessment that has received considerable attention among psychometricians. However, the computational options for researchers and practitioners who wish to use the Reduced RUM in their work, but do not feel comfortable writing their own…
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-08
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensively data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation in raw data simulation, which otherwise greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration.
NASA Astrophysics Data System (ADS)
Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi
2017-06-01
To guarantee safety, high efficiency and long lifetime for a lithium-ion battery, an advanced battery management system requires a physics-meaningful yet computationally efficient battery model. The pseudo-two-dimensional (P2D) electrochemical model can provide physical information about the lithium concentration and potential distributions across the cell dimension. However, the extensive computation burden caused by the temporal and spatial discretization limits its real-time application. In this research, we propose a new simplified electrochemical model (SEM) by modifying the boundary conditions for the electrolyte diffusion equations, which significantly facilitates the analytical solving process. Then, to obtain a reduced order transfer function, the Padé approximation method is adopted to simplify the derived transcendental impedance solution. The proposed model with the reduced order transfer function can be computed efficiently while preserving physical meaning through the presence of parameters such as the solid/electrolyte diffusion coefficients (Ds & De) and particle radius. The simulation illustrates that the proposed simplified model maintains high accuracy for electrolyte phase concentration (Ce) predictions, with 0.8% and 0.24% modeling error, respectively, when compared to the rigorous model under 1C-rate pulse charge/discharge and urban dynamometer driving schedule (UDDS) profiles. Meanwhile, this simplified model yields a significantly reduced computational burden, which benefits its real-time application.
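The Padé step above converts a transcendental response into a low-order rational transfer function by matching Taylor coefficients. A self-contained sketch, using e^{-s} (a pure delay) as a stand-in for the paper's transcendental impedance:

```python
# Padé-approximation sketch: build an [L/M] rational approximant from
# Taylor coefficients, here applied to e^{-s} as a stand-in for the
# model's transcendental impedance solution.
import numpy as np
from math import factorial

def pade_coeffs(c, L, M):
    """[L/M] Padé approximant from Taylor coefficients c[0..L+M].
    Returns numerator a[0..L] and denominator b[0..M] with b[0] = 1."""
    c = np.asarray(c, float)
    # Denominator from: sum_{j=1..M} b_j c[L+i-j] = -c[L+i],  i = 1..M
    A = np.array([[c[L + i - j] if L + i - j >= 0 else 0.0
                   for j in range(1, M + 1)] for i in range(1, M + 1)])
    b = np.concatenate([[1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])])
    # Numerator from the low-order coefficient matches.
    a = np.array([sum(b[j] * c[k - j] for j in range(0, min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b

# Taylor series of e^{-s} about s = 0: sum (-s)^k / k!
c = [(-1.0) ** k / factorial(k) for k in range(6)]
a, b = pade_coeffs(c, 3, 2)                  # reduced [3/2] transfer function
s = 0.5
approx = np.polyval(a[::-1], s) / np.polyval(b[::-1], s)
print(abs(approx - np.exp(-s)))              # small truncation error near s = 0
```

For this example the routine recovers the classical [3/2] approximant of e^{-s}, (1 - 3s/5 + 3s²/20 - s³/60)/(1 + 2s/5 + s²/20), and the error at s = 0.5 is far below 0.01%.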
Proper Orthogonal Decomposition in Optimal Control of Fluids
NASA Technical Reports Server (NTRS)
Ravindran, S. S.
1999-01-01
In this article, we present a reduced order modeling approach suitable for active control of fluid dynamical systems, based on proper orthogonal decomposition (POD). The rationale behind reduced order modeling is that numerical simulation of the Navier-Stokes equations is still too costly for the purpose of optimization and control of unsteady flows. We examine the possibility of obtaining reduced order models that reduce the computational complexity associated with the Navier-Stokes equations while capturing the essential dynamics by using the POD. The POD allows extraction of an optimal set of basis functions, often only a few, from a computational or experimental database through an eigenvalue analysis. The solution is then obtained as a linear combination of this optimal set of basis functions by means of Galerkin projection. This makes it attractive for optimal control and estimation of systems governed by partial differential equations. Here we use it for active control of fluid flows governed by the Navier-Stokes equations. We show that the resulting reduced order model can be very efficient for the computations of optimization and control problems in unsteady flows. Finally, implementational issues and numerical experiments are presented for simulations and optimal control of fluid flow through channels.
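The POD-Galerkin recipe above (snapshots → basis → projected dynamics) can be sketched for a linear system. The sketch below uses a synthetic stable linear ODE in place of the Navier-Stokes equations, so the projection step is the point, not the physics:

```python
# POD-Galerkin sketch: project a large linear ODE x' = A x onto a POD
# basis and compare the reduced trajectory to the full one
# (synthetic stable system, explicit Euler time stepping).
import numpy as np

rng = np.random.default_rng(3)
n, r = 80, 5
A = -np.eye(n) + 0.02 * rng.standard_normal((n, n))  # stable operator
x0 = rng.standard_normal(n)

def integrate(Amat, y0, dt=1e-3, steps=2000):
    y, traj = y0.copy(), [y0.copy()]
    for _ in range(steps):
        y = y + dt * (Amat @ y)       # explicit Euler, small dt
        traj.append(y.copy())
    return np.array(traj).T           # columns are snapshots in time

snapshots = integrate(A, x0)
U = np.linalg.svd(snapshots, full_matrices=False)[0][:, :r]  # POD basis
Ar = U.T @ A @ U                      # Galerkin-projected operator, r x r
xr = integrate(Ar, U.T @ x0)          # integrate in the reduced space
err = np.linalg.norm(snapshots - U @ xr) / np.linalg.norm(snapshots)
print(err)   # small: 5 modes capture the dominant dynamics
```

In the control setting, the same projection is applied to the actuated system, so the optimization loop integrates only the r-dimensional model.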
The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.
Ene, Florentina; Delassus, Patrick; Morris, Liam
2014-08-01
The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and the computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods employing these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysm models. Rigid-wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls, which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for capturing the fluid flow phenomena. © IMechE 2014.
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
Sherlock, M.; Brodrick, J. P.; Ridgers, C. P.
2017-08-08
Here, we compare the reduced non-local electron transport model developed by Schurtz et al. to Vlasov-Fokker-Planck simulations. Two new test cases are considered: the propagation of a heat wave through a high density region into a lower density gas, and a one-dimensional hohlraum ablation problem. We find that the reduced model reproduces the peak heat flux well in the ablation region but significantly over-predicts the coronal preheat. The suitability of the reduced model for computing non-local transport effects other than thermal conductivity is considered by comparing the computed distribution function to the Vlasov-Fokker-Planck distribution function. It is shown that even when the reduced model reproduces the correct heat flux, the distribution function is significantly different to the Vlasov-Fokker-Planck prediction. Two simple modifications are considered which improve agreement between models in the coronal region.
Boundary Condition for Modeling Semiconductor Nanostructures
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Oyafuso, Fabiano; von Allmen, Paul; Klimeck, Gerhard
2006-01-01
A recently proposed boundary condition for atomistic computational modeling of semiconductor nanostructures (particularly, quantum dots) is an improved alternative to two prior such boundary conditions. As explained, this boundary condition helps to reduce the amount of computation while maintaining accuracy.
Animal-Related Computer Simulation Programs for Use in Education and Research. AWIC Series Number 1.
ERIC Educational Resources Information Center
Engler, Kevin P.
Computer models have definite limitations regarding the representation of biological systems, but they do have useful applications in reducing the number of animals used to study physiological systems, especially for educational purposes. This guide lists computer models that simulate living systems and can be used to demonstrate physiological,…
Computer-Aided Process Model For Carbon/Phenolic Materials
NASA Technical Reports Server (NTRS)
Letson, Mischell A.; Bunker, Robert C.
1996-01-01
Computer program implements thermochemical model of processing of carbon-fiber/phenolic-matrix composite materials into molded parts of various sizes and shapes. Directed toward improving fabrication of rocket-engine-nozzle parts, also used to optimize fabrication of other structural components, and material-property parameters changed to apply to other materials. Reduces costs by reducing amount of laboratory trial and error needed to optimize curing processes and to predict properties of cured parts.
NASA Astrophysics Data System (ADS)
Michel, Jean-Claude; Suquet, Pierre
2016-05-01
In 2003 the authors proposed a model-reduction technique, called the Nonuniform Transformation Field Analysis (NTFA), based on a decomposition of the local fields of internal variables on a reduced basis of modes, to analyze the effective response of composite materials. The present study extends and improves on this approach in different directions. It is first shown that when the constitutive relations of the constituents derive from two potentials, this structure is passed on to the NTFA model. Another structure-preserving model, the hybrid NTFA model of Fritzen and Leuschner, is analyzed and found to differ (slightly) from the primal NTFA model (it does not exhibit the same variational upper bound character). To avoid the "on-line" computation of local fields required by the hybrid model, new reduced evolution equations for the reduced variables are proposed, based on an expansion to second order (TSO) of the potential of the hybrid model. The coarse dynamics can then be entirely expressed in terms of quantities which can be pre-computed once and for all. Roughly speaking, these pre-computed quantities depend only on the average and fluctuations per phase of the modes and of the associated stress fields. The accuracy of the new NTFA-TSO model is assessed by comparison with full-field simulations. The acceleration provided by the new coarse dynamics over the full-field computations (and over the hybrid model) is spectacular: three orders of magnitude larger than the acceleration due to the reduction of unknowns alone.
NASA Technical Reports Server (NTRS)
Hashemi-Kia, Mostafa; Toossi, Mostafa
1990-01-01
A computational procedure for the reduction of large finite element models was developed. This procedure is used to obtain a significantly reduced model while retaining the essential global dynamic characteristics of the full-size model. This reduction procedure is applied to the airframe finite element model of AH-64A Attack Helicopter. The resulting reduced model is then validated by application to a vibration reduction study.
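The abstract does not spell out the reduction procedure; Guyan (static) condensation is a classic example of this kind of finite element model reduction and is sketched below. Masters, slaves, and the 4-DOF chain are illustrative inventions:

```python
import numpy as np
from scipy.linalg import eigh

def guyan_reduce(K, M, masters):
    """Guyan (static) condensation: keep master DOFs, condense out slaves."""
    n = K.shape[0]
    slaves = [i for i in range(n) if i not in masters]
    Kms = K[np.ix_(masters, slaves)]
    Kss = K[np.ix_(slaves, slaves)]
    # Transformation: slave DOFs follow the masters statically
    T = np.vstack([np.eye(len(masters)), -np.linalg.solve(Kss, Kms.T)])
    order = masters + slaves                      # match T's row ordering
    Kp = K[np.ix_(order, order)]
    Mp = M[np.ix_(order, order)]
    return T.T @ Kp @ T, T.T @ Mp @ T

# 4-DOF spring-mass chain; interior masses made tiny so the low-frequency
# dynamics live on the retained (end) DOFs
k = 1000.0
K = k * np.array([[ 2, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  2]], dtype=float)
M = np.diag([1.0, 1e-3, 1e-3, 1.0])
Kr, Mr = guyan_reduce(K, M, [0, 3])

w_full = np.sqrt(eigh(K, M, eigvals_only=True))
w_red = np.sqrt(eigh(Kr, Mr, eigvals_only=True))
print(w_full[:2], w_red)  # the reduced model retains the lowest modes
```

The same idea, applied recursively or with dynamic corrections, underlies most large-airframe reduction schemes.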
Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting
Carlberg, Kevin; Ray, Jaideep; van Bloemen Waanders, Bart
2015-02-14
Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of the system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equations at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. As a result, the goal is to generate an accurate initial guess so that the Newton solver requires many fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.
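The warm-starting idea can be demonstrated on a scalar implicit time-stepper. The paper uses Gappy POD for the forecast; plain polynomial extrapolation of the solution history is used here as a stand-in to show the effect on Newton iteration counts:

```python
import numpy as np

def newton(f, df, x0, tol=1e-10, max_it=50):
    """Scalar Newton iteration, returning the root and iteration count."""
    x, it = x0, 0
    while abs(f(x)) > tol and it < max_it:
        x -= f(x) / df(x)
        it += 1
    return x, it

# Implicit Euler for y' = -y**3: solve y_new + dt*y_new**3 - y_old = 0
dt, y, hist = 0.1, 1.0, []
its_plain, its_fore = 0, 0
for step in range(20):
    g = lambda z, y_old=y: z + dt * z**3 - y_old
    dg = lambda z: 1 + 3 * dt * z**2
    # plain guess: previous solution
    _, n_plain = newton(g, dg, y)
    # forecast guess: quadratic extrapolation of the recent history
    if len(hist) >= 3:
        guess = np.polyval(np.polyfit(range(3), hist[-3:], 2), 3)
    else:
        guess = y
    y_new, n_fore = newton(g, dg, guess)
    its_plain += n_plain
    its_fore += n_fore
    hist.append(y_new)
    y = y_new

print(its_plain, its_fore)  # forecasting never needs more iterations here
```

In the paper's setting the forecast lives in the reduced (POD) coordinates and each avoided Newton iteration saves a linear-system solve.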
Reduced complexity structural modeling for automated airframe synthesis
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1987-01-01
A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional-cantilever and joined wing configurations.
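The fully stressed design rule referenced above is simple enough to sketch: each member's area is scaled by its stress ratio until every member reaches the allowable stress. The two-member forces, allowable, and starting areas below are made-up illustration values:

```python
import numpy as np

sigma_allow = 250.0e6               # Pa, allowable stress (illustrative)
forces = np.array([5.0e4, 1.2e5])   # N, axial force in each member
areas = np.array([1.0e-3, 1.0e-3])  # m^2, initial sizing guess

for _ in range(20):
    stress = forces / areas
    ratio = stress / sigma_allow
    areas = areas * ratio           # FSD update: resize by stress ratio
    if np.all(np.abs(ratio - 1) < 1e-9):
        break

print(areas)  # each member ends up exactly fully stressed
```

For statically determinate members the iteration converges almost immediately; in the paper's beam model the member forces are re-analyzed between resizing passes.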
First-order finite-Larmor-radius fluid modeling of tearing and relaxation in a plasma pincha)
NASA Astrophysics Data System (ADS)
King, J. R.; Sovinec, C. R.; Mirnov, V. V.
2012-05-01
Drift and Hall effects on magnetic tearing, island evolution, and relaxation in pinch configurations are investigated using a non-reduced first-order finite-Larmor-radius (FLR) fluid model with the nonideal magnetohydrodynamics (MHD) with rotation, open discussion (NIMROD) code [C.R. Sovinec and J. R. King, J. Comput. Phys. 229, 5803 (2010)]. An unexpected result with a uniform pressure profile is a drift effect that reduces the growth rate when the ion sound gyroradius (ρs) is smaller than the tearing-layer width. This drift is present only with warm-ion FLR modeling, and analytics show that it arises from ∇B and poloidal curvature represented in the Braginskii gyroviscous stress. Nonlinear single-helicity computations with experimentally relevant ρs values show that the warm-ion gyroviscous effects reduce saturated-island widths. Computations with multiple nonlinearly interacting tearing fluctuations find that m = 1 core-resonant-fluctuation amplitudes are reduced by a factor of two relative to single-fluid modeling by the warm-ion effects. These reduced core-resonant-fluctuation amplitudes compare favorably to edge coil measurements in the Madison Symmetric Torus (MST) reversed-field pinch [R. N. Dexter et al., Fusion Technol. 19, 131 (1991)]. The computations demonstrate that fluctuations induce both MHD- and Hall-dynamo emfs during relaxation events. The presence of a Hall-dynamo emf implies a fluctuation-induced Maxwell stress, and the simulation results show net transport of parallel momentum. The computed magnitude of force densities from the Maxwell and competing Reynolds stresses, and changes in the parallel flow profile, are qualitatively and semi-quantitatively similar to measurements during relaxation in MST.
NASA Astrophysics Data System (ADS)
Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.
2013-12-01
A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop software application used to characterize spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: one computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers to submit serial or parallel jobs using scheduling policies, resource monitoring, and a job-queuing mechanism. This poster will show how MAD# reduces the execution time of the characterization of random fields using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1200 hours for all 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-01
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, making this a comprehensively data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which otherwise greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward concerning the programming model, HDFS configuration, and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration. PMID:28075343
Controller design via structural reduced modeling by FETM
NASA Technical Reports Server (NTRS)
Yousuff, A.
1986-01-01
The Finite Element - Transfer Matrix (FETM) method has been developed to reduce the computations involved in the analysis of structures. This widely accepted method, however, has certain limitations, and does not directly produce reduced models for control design. To overcome these shortcomings, a modification of the FETM method has been developed. The modified FETM method easily produces reduced models that are tailored toward subsequent control design. Other features of this method are its ability to: (1) extract open-loop frequencies and mode shapes with fewer computations, (2) overcome limitations of the original FETM method, and (3) simplify the procedures for output feedback, constrained compensation, and decentralized control. This semiannual report presents the development of the modified FETM and, through an example, illustrates its applicability to an output feedback and a decentralized control design.
Performance Benchmark for a Prismatic Flow Solver
2007-03-26
The Lower-Upper Symmetric Gauss-Seidel (LU-SGS) implicit method [1] is used for time integration to reduce the computational time. A one-equation turbulence model by Goldberg and … numerical flux computations. [Sharov, D. and Nakahashi, K., "Reordering of Hybrid Unstructured Grids for Lower-Upper Symmetric Gauss-Seidel Computations," AIAA Journal, Vol. 36]
Petersson, K J F; Friberg, L E; Karlsson, M O
2010-10-01
Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations for which no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive, time consuming, and account for a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equation system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole, differential equation system at given time intervals, outside of the differential equation solver. This approach was tested on nine models defined as differential equations, with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equation solver, were less than 12% for all fixed-effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar, and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation, such as covariate inclusions or bootstraps.
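The idea of evaluating part of the system outside the solver can be sketched generically: an expensive, slowly varying sub-expression of the right-hand side is tabulated on a coarse time grid and interpolated inside the solver instead of being recomputed at every step. The "expensive" term here is artificial, and NONMEM specifics are not reproduced:

```python
import numpy as np
from scipy.integrate import solve_ivp

def expensive_input(t):
    """Stand-in for a costly, slowly varying model component."""
    return np.exp(-0.1 * t) * (1 + 0.01 * np.sin(t))

# Pre-tabulate on a coarse grid, outside the ODE solver
t_grid = np.linspace(0, 10, 41)
u_grid = expensive_input(t_grid)

def rhs_exact(t, y):
    return -y + expensive_input(t)        # recomputed every solver call

def rhs_tabulated(t, y):
    return -y + np.interp(t, t_grid, u_grid)  # cheap lookup instead

sol_a = solve_ivp(rhs_exact, (0, 10), [0.0], rtol=1e-8, atol=1e-10)
sol_b = solve_ivp(rhs_tabulated, (0, 10), [0.0], rtol=1e-8, atol=1e-10)
print(abs(sol_a.y[0, -1] - sol_b.y[0, -1]))  # small difference in model fit
```

The trade-off mirrors the paper's finding: runtime drops while parameter estimates and predictions change only slightly, provided the tabulated terms vary slowly relative to the grid.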
Development of Reduced-Order Models for Aeroelastic and Flutter Prediction Using the CFL3Dv6.0 Code
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Bartels, Robert E.
2002-01-01
A reduced-order model (ROM) is developed for aeroelastic analysis using the CFL3D version 6.0 computational fluid dynamics (CFD) code, recently developed at the NASA Langley Research Center. This latest version of the flow solver includes a deforming mesh capability, a modal structural definition for nonlinear aeroelastic analyses, and a parallelization capability that provides a significant increase in computational efficiency. Flutter results for the AGARD 445.6 Wing computed using CFL3D v6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are then computed using the CFL3Dv6 code and transformed into state-space form. Important numerical issues associated with the computation of the impulse responses are presented. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is used to rapidly compute aeroelastic transients including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly.
A comparison of non-local electron transport models relevant to inertial confinement fusion
NASA Astrophysics Data System (ADS)
Sherlock, Mark; Brodrick, Jonathan; Ridgers, Christopher
2017-10-01
We compare the reduced non-local electron transport model developed by Schurtz et al. to Vlasov-Fokker-Planck simulations. Two new test cases are considered: the propagation of a heat wave through a high density region into a lower density gas, and a 1-dimensional hohlraum ablation problem. We find the reduced model reproduces the peak heat flux well in the ablation region but significantly over-predicts the coronal preheat. The suitability of the reduced model for computing non-local transport effects other than thermal conductivity is considered by comparing the computed distribution function to the Vlasov-Fokker-Planck distribution function. It is shown that even when the reduced model reproduces the correct heat flux, the distribution function is significantly different to the Vlasov-Fokker-Planck prediction. Two simple modifications are considered which improve agreement between models in the coronal region. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Reducing software mass through behavior control. [of planetary roving robots
NASA Technical Reports Server (NTRS)
Miller, David P.
1992-01-01
Attention is given to the tradeoff between communication and computation for a planetary rover (both subsystems are very power-intensive, and either can be the major driver of the rover's power subsystem, and therefore of the rover's minimum mass and size). Software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced, are discussed. Novel approaches to autonomous control, called behavior control, employ an entirely different approach, and for many tasks will yield a similar or superior level of autonomy to traditional control techniques while greatly reducing the computational demand. Traditional systems have several expensive processes that operate serially, while behavior techniques employ robot capabilities that run in parallel. Traditional systems make extensive world models, while behavior control systems use minimal world models or none at all.
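A minimal behavior-control arbiter in the spirit described: independent behaviors evaluate the same sensor data "in parallel" and a fixed priority ordering picks the active command, with no world model. The behavior names, sensor fields, and commands below are toy inventions:

```python
def avoid_obstacle(sensors):
    """Highest priority: react to nearby obstacles."""
    if sensors["obstacle_dist"] < 0.5:
        return "turn_left"
    return None   # behavior not triggered

def seek_goal(sensors):
    """Drive toward the goal when it is visible."""
    return "forward" if sensors["goal_visible"] else None

def wander(sensors):
    """Default behavior: always produces a command."""
    return "forward_slow"

BEHAVIORS = [avoid_obstacle, seek_goal, wander]  # highest priority first

def arbitrate(sensors):
    """Return the command of the highest-priority triggered behavior."""
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command

print(arbitrate({"obstacle_dist": 0.3, "goal_visible": True}))   # turn_left
print(arbitrate({"obstacle_dist": 2.0, "goal_visible": False}))  # forward_slow
```

Because each behavior is a cheap local rule, the whole loop runs in constant time per cycle, which is the computational saving the abstract points to.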
Modeling the effect of shroud contact and friction dampers on the mistuned response of turbopumps
NASA Technical Reports Server (NTRS)
Griffin, Jerry H.; Yang, M.-T.
1994-01-01
The contract has been revised. Under the revised scope of work, a reduced-order model has been developed that can be used to predict the steady-state response of mistuned bladed disks. The approach has been implemented in a computer code, LMCC. It is concluded that the reduced-order model displays structural fidelity comparable to that of a finite element model of an entire bladed disk system with significantly improved computational efficiency, and that, when the disk is stiff, both the finite element model and LMCC predict significantly more amplitude variation than was predicted by earlier models. This second result may have important practical ramifications, especially in the case of integrally bladed disks.
Hadwin, Paul J; Peterson, Sean D
2017-04-01
The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering, aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when greater than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
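A minimal extended Kalman filter illustrates the estimation strategy discussed: the nonlinear measurement model is linearized at each step via its Jacobian. The vocal fold model itself is far more involved; the scalar state, measurement function, and noise levels below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def h(x):
    """Nonlinear measurement model (synthetic stand-in)."""
    return x**2

def h_jac(x):
    """Its Jacobian, relinearized by the EKF at every step."""
    return 2 * x

x_true, x_est, P = 2.0, 1.0, 1.0   # true state, initial estimate, covariance
Q, R = 1e-4, 0.05                  # process and measurement noise variances
for _ in range(200):
    z = h(x_true) + rng.normal(0, np.sqrt(R))   # noisy measurement
    # predict (random-walk state model)
    P = P + Q
    # update
    H = h_jac(x_est)
    K = P * H / (H * P * H + R)                 # Kalman gain
    x_est = x_est + K * (z - h(x_est))
    P = (1 - K * H) * P

print(abs(x_est - x_true))  # the estimate settles near the true value
```

Each EKF step is a handful of matrix operations, versus thousands of forward-model evaluations per step for a particle filter, which is the source of the two-orders-of-magnitude speedup the abstract reports.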
Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...
Reduced-Order Models for the Aeroelastic Analysis of Ares Launch Vehicles
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Vatsa, Veer N.; Biedron, Robert T.
2010-01-01
This document presents the development and application of unsteady aerodynamic, structural dynamic, and aeroelastic reduced-order models (ROMs) for the ascent aeroelastic analysis of the Ares I-X flight test and Ares I crew launch vehicles using the unstructured-grid, aeroelastic FUN3D computational fluid dynamics (CFD) code. The purpose of this work is to perform computationally-efficient aeroelastic response calculations that would be prohibitively expensive via computation of multiple full-order aeroelastic FUN3D solutions. These efficient aeroelastic ROM solutions provide valuable insight regarding the aeroelastic sensitivity of the vehicles to various parameters over a range of dynamic pressures.
NASA Astrophysics Data System (ADS)
Heller, Johann; Flisgen, Thomas; van Rienen, Ursula
The computation of electromagnetic fields and parameters derived thereof for lossless radio frequency (RF) structures filled with isotropic media is an important task for the design and operation of particle accelerators. Unfortunately, these computations are often highly demanding with regard to computational effort. The entire computational demand of the problem can be reduced using decomposition schemes in order to solve the field problems on standard workstations. This paper presents one of the first detailed comparisons between the recently proposed state-space concatenation approach (SSC) and a direct computation for an accelerator cavity with coupler-elements that break the rotational symmetry.
Active Subspace Methods for Data-Intensive Inverse Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qiqi
2017-04-27
The project has developed theory and computational tools to exploit active subspaces to reduce the dimension in statistical calibration problems. This dimension reduction enables MCMC methods to calibrate otherwise intractable models. The same theoretical and computational tools can also reduce the measurement dimension for calibration problems that use large stores of data.
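The active-subspace idea can be shown concretely: eigendecompose the average outer product of gradients to find the few directions to which a model actually responds. The ten-dimensional test function below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x):
    """Gradient of f(x) = exp(0.7*x1 + 0.3*x2), constant in the other
    8 dimensions (a synthetic model with a 1-D active subspace)."""
    g = np.zeros_like(x)
    v = np.exp(0.7 * x[0] + 0.3 * x[1])
    g[0], g[1] = 0.7 * v, 0.3 * v
    return g

# Monte Carlo estimate of C = E[grad f grad f^T]
samples = rng.uniform(-1, 1, size=(2000, 10))
C = np.mean([np.outer(grad_f(x), grad_f(x)) for x in samples], axis=0)

eigval, eigvec = np.linalg.eigh(C)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # descending order

print(eigval[:3])     # one dominant eigenvalue: a 1-D active subspace
print(eigvec[:2, 0])  # the leading direction aligns with (0.7, 0.3)
```

In a calibration setting, MCMC then explores only the coordinates along the leading eigenvectors, which is the dimension reduction the abstract refers to.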
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
A Simplified Biosphere Model for Global Climate Studies.
NASA Astrophysics Data System (ADS)
Xue, Y.; Sellers, P. J.; Kinter, J. L.; Shukla, J.
1991-03-01
The Simple Biosphere Model (SiB) as described in Sellers et al. is a biophysically based model of land surface-atmosphere interaction. For some general circulation model (GCM) climate studies, further simplifications are desirable to achieve greater computational efficiency and, more importantly, to consolidate the parametric representation. Three major reductions in the complexity of SiB have been achieved in the present study. The diurnal variation of surface albedo is computed in SiB by means of a comprehensive yet complex calculation. Since the diurnal cycle is quite regular for each vegetation type, this calculation can be simplified considerably. The effect of root zone soil moisture on stomatal resistance is substantial, but the computation in SiB is complicated and expensive. We have developed approximations that simulate the effects of reduced soil moisture more simply, keeping the essence of the biophysical concepts used in SiB. The surface stress and the fluxes of heat and moisture between the top of the vegetation canopy and an atmospheric reference level have been parameterized in an off-line version of SiB based upon the studies by Businger et al. and Paulson. We have developed a linear relationship between the Richardson number and aerodynamic resistance. Finally, the second vegetation layer of the original model does not appear explicitly after simplification. Compared to the model of Sellers et al., we have reduced the number of input parameters from 44 to 21. A comparison of results using the reduced-parameter biosphere model with those from the original formulation in a GCM and a zero-dimensional model shows the simplified version to reproduce the original results quite closely. After simplification, the computational requirement of SiB was reduced by about 55%.
Ravazzani, Giovanni; Ghilardi, Matteo; Mendlik, Thomas; Gobiet, Andreas; Corbari, Chiara; Mancini, Marco
2014-01-01
Assessing the future effects of climate change on water availability requires an understanding of how precipitation and evapotranspiration rates will respond to changes in atmospheric forcing. Use of simplified hydrological models is required because of the lack of meteorological forcings with the high space and time resolutions required to model hydrological processes in mountainous river basins, and the necessity of reducing computational costs. The main objective of this study was to quantify the differences between a simplified hydrological model, which uses only precipitation and temperature to compute the hydrological balance when simulating the impact of climate change, and an enhanced version of the model, which solves the energy balance to compute the actual evapotranspiration. For the meteorological forcing of the future scenario, at-site bias-corrected time series based on two regional climate models were used. A quantile-based error-correction approach was used to downscale the regional climate model simulations to a point scale and to reduce their error characteristics. The study shows that a simple temperature-based approach for computing the evapotranspiration is sufficiently accurate for performing hydrological impact investigations of climate change for the Alpine river basin studied.
Identification of Computational and Experimental Reduced-Order Models
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Hong, Moeljo S.; Bartels, Robert E.; Piatak, David J.; Scott, Robert C.
2003-01-01
The identification of computational and experimental reduced-order models (ROMs) for the analysis of unsteady aerodynamic responses and for efficient aeroelastic analyses is presented. For the identification of a computational aeroelastic ROM, the CFL3Dv6.0 computational fluid dynamics (CFD) code is used. Flutter results for the AGARD 445.6 Wing and for a Rigid Semispan Model (RSM) computed using CFL3Dv6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are computed using the CFL3Dv6.0 code and transformed into state-space form. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is then used to rapidly compute aeroelastic transients, including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly. For the identification of experimental unsteady pressure ROMs, results are presented for two configurations: the RSM and a Benchmark Supercritical Wing (BSCW). Both models were used to acquire unsteady pressure data due to pitching oscillations on the Oscillating Turntable (OTT) system at the Transonic Dynamics Tunnel (TDT). A deconvolution scheme involving a step input in pitch and the resultant step response in pressure, for several pressure transducers, is used to identify the unsteady pressure impulse responses. The identified impulse responses are then used to predict the pressure responses due to pitching oscillations at several frequencies. Comparisons with the experimental data are then presented.
Computational methods to predict railcar response to track cross-level variations
DOT National Transportation Integrated Search
1976-09-01
The rocking response of railroad freight cars to track cross-level variations is studied using: (1) a reduced complexity digital simulation model, and (2) a quasi-linear describing function analysis. The reduced complexity digital simulation model em...
Model-Invariant Hybrid Computations of Separated Flows for RCA Standard Test Cases
NASA Technical Reports Server (NTRS)
Woodruff, Stephen
2016-01-01
NASA's Revolutionary Computational Aerosciences (RCA) subproject has identified several smooth-body separated flows as standard test cases to emphasize the challenge these flows present for computational methods and their importance to the aerospace community. Results of computations of two of these test cases, the NASA hump and the FAITH experiment, are presented. The computations were performed with the model-invariant hybrid LES-RANS formulation, implemented in the NASA code VULCAN-CFD. The model-invariant formulation employs gradual LES-RANS transitions and compensation for model variation to provide more accurate and efficient hybrid computations. Comparisons revealed that the LES-RANS transitions employed in these computations were sufficiently gradual that the compensating terms were unnecessary. Agreement with experiment was achieved only after reducing the turbulent viscosity to mitigate the effect of numerical dissipation. The stream-wise evolution of peak Reynolds shear stress was employed as a measure of turbulence dynamics in separated flows useful for evaluating computations.
Channel Model Optimization with Reflection Residual Component for Indoor MIMO-VLC System
NASA Astrophysics Data System (ADS)
Chen, Yong; Li, Tengfei; Liu, Huanlin; Li, Yichao
2017-12-01
A fast channel modeling method is studied to solve the problem of computing the reflection channel gain for multiple input multiple output visible light communication (MIMO-VLC) systems. To limit the computational complexity, which grows with the number of reflections, no more than three reflections are taken into consideration in VLC. We treat a higher-order reflection link as a composition of multiple line-of-sight links and introduce a reflection residual component to characterize higher-order reflections (more than two reflections). Computer simulation results are presented for the point-to-point channel impulse response, received optical power, and received signal-to-noise ratio. Theoretical analysis and simulation results show that the proposed method can effectively reduce the computational complexity of higher-order reflections in channel modeling.
New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program
NASA Technical Reports Server (NTRS)
Strain, D.; Levy, R.
1986-01-01
The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.
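The savings from reflective symmetry come from splitting any load into symmetric and antisymmetric parts, each of which can be solved on a half model and then superposed. A minimal sketch of the superposition idea, with an illustrative stiffness matrix and reflection permutation (the actual half-model assembly in JPL-IDEAS is not shown):

```python
import numpy as np

def solve_by_symmetry(K, f, perm):
    # K is assumed to commute with the reflection P (node i maps to perm[i]).
    # Split the load into symmetric and antisymmetric parts, solve each, and
    # superpose. In symmetry-adapted coordinates each of these solves
    # decouples into half-size blocks, which is where the savings arise;
    # here we solve the full system for brevity.
    P = np.eye(len(f))[perm]
    f_sym = 0.5 * (f + P @ f)
    f_anti = 0.5 * (f - P @ f)
    u_sym = np.linalg.solve(K, f_sym)   # reflection-symmetric response
    u_anti = np.linalg.solve(K, f_anti) # reflection-antisymmetric response
    return u_sym + u_anti
```

By linearity the superposed solution matches a direct solve; the symmetry of each part is what permits the half-size reduced models.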
Remote sensing image ship target detection method based on visual attention model
NASA Astrophysics Data System (ADS)
Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong
2017-11-01
Traditional methods for detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can allocate computing resources selectively according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. With this in mind, a method for ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method reduces computational complexity while improving detection accuracy, and improves the detection efficiency for ship targets in remote sensing images.
BESIII Physical Analysis on Hadoop Platform
NASA Astrophysics Data System (ADS)
Huo, Jing; Zang, Dongsong; Lei, Xiaofeng; Li, Qiang; Sun, Gongxing
2014-06-01
In the past 20 years, computing clusters have been widely used for High Energy Physics data processing. Jobs running on a traditional cluster with a data-to-computing structure have to read large volumes of data over the network to the computing nodes for analysis, making I/O latency a bottleneck of the whole system. The new distributed computing technology based on the MapReduce programming model has many advantages, such as high concurrency, high scalability, and high fault tolerance, and it can benefit us in dealing with Big Data. This paper brings the idea of using the MapReduce model to BESIII physics analysis and presents a new data analysis system structure based on the Hadoop platform, which not only greatly improves the efficiency of data analysis but also reduces the cost of building the system. Moreover, this paper establishes an event pre-selection system based on the event-level metadata (TAGs) database to optimize the data analysis procedure.
ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING
The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...
Computational modeling in the optimization of corrosion control to reduce lead in drinking water
An international “proof-of-concept” research project (UK, US, CA) will present its findings during this presentation. An established computational modeling system developed in the UK is being calibrated and validated in U.S. and Canadian case studies. It predicts LCR survey resul...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, David; Agarwal, Deborah A.; Sun, Xin
2011-09-01
The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
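The hierarchical decomposition idea can be illustrated on the deterministic skeleton of a series-parallel-reducible task system: sequential sections execute one after another, so their times add, while parallel branches run concurrently, so the slowest branch dominates. A minimal sketch with deterministic task times (the paper's operational approach handles the stochastic queueing case, which is not reproduced here):

```python
def completion_time(node):
    # Recursively reduce a series-parallel task system expressed as nested
    # tuples: ('task', t), ('seq', child, ...), or ('par', child, ...).
    # 'seq' sections add their times; 'par' branches finish at the slowest.
    kind = node[0]
    if kind == 'task':
        return node[1]
    times = [completion_time(child) for child in node[1:]]
    return sum(times) if kind == 'seq' else max(times)
```

This bottom-up reduction mirrors the hierarchical decomposition: each subtree collapses to a single equivalent task before its parent is evaluated.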
Model reduction for agent-based social simulation: coarse-graining a civil violence model.
Zou, Yu; Fonoberov, Vladimir A; Fonoberova, Maria; Mezic, Igor; Kevrekidis, Ioannis G
2012-06-01
Agent-based modeling (ABM) constitutes a powerful computational tool for the exploration of phenomena involving emergent dynamic behavior in the social sciences. This paper demonstrates a computer-assisted approach that bridges the significant gap between the single-agent microscopic level and the macroscopic (coarse-grained population) level, where fundamental questions must be rationally answered and policies guiding the emergent dynamics devised. Our approach will be illustrated through an agent-based model of civil violence. This spatiotemporally varying ABM incorporates interactions between a heterogeneous population of citizens [active (insurgent), inactive, or jailed] and a population of police officers. Detailed simulations exhibit an equilibrium punctuated by periods of social upheavals. We show how to effectively reduce the agent-based dynamics to a stochastic model with only two coarse-grained degrees of freedom: the number of jailed citizens and the number of active ones. The coarse-grained model captures the ABM dynamics while drastically reducing the computation time (by a factor of approximately 20).
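A two-variable coarse-grained model of this kind can be sketched as a discrete-time stochastic process in the active and jailed counts. The transition probabilities below are illustrative placeholders, not the rates identified from the civil violence ABM:

```python
import numpy as np

def coarse_step(active, jailed, n_total, rng,
                p_join=0.02, p_arrest=0.2, p_release=0.05):
    # One step of a two-degree-of-freedom stochastic coarse model:
    # quiet citizens become active, active citizens are arrested,
    # jailed citizens are released. Rates are hypothetical.
    quiet = n_total - active - jailed
    new_active = rng.binomial(quiet, p_join)
    arrests = rng.binomial(active, p_arrest)
    releases = rng.binomial(jailed, p_release)
    active = active + new_active - arrests
    jailed = jailed + arrests - releases
    return active, jailed
```

Simulating only these two counts, instead of every agent, is what yields the large speedup reported for the coarse-grained model.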
Properties of Vector Preisach Models
NASA Technical Reports Server (NTRS)
Kahler, Gary R.; Patel, Umesh D.; Torre, Edward Della
2004-01-01
This paper discusses rotational anisotropy and rotational accommodation of magnetic particle tape. These effects have a performance impact during the reading and writing of the recording process. We introduce the reduced vector model as the basis for the computations. Rotational magnetization models must accurately compute the anisotropic characteristics of ellipsoidally magnetizable media. An ellipticity factor is derived for these media that computes the two-dimensional magnetization trajectory for all applied fields. An orientation correction must be applied to the computed rotational magnetization. For isotropic materials, an orientation correction has been developed and presented. For anisotropic materials, an orientation correction is introduced.
Computer Modeling of Direct Metal Laser Sintering
NASA Technical Reports Server (NTRS)
Cross, Matthew
2014-01-01
A computational approach to modeling direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with imbedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
Computational Aeroelastic Analyses of a Low-Boom Supersonic Configuration
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Sanetrik, Mark D.; Chwalowski, Pawel; Connolly, Joseph
2015-01-01
An overview of NASA's Commercial Supersonic Technology (CST) Aeroservoelasticity (ASE) element is provided with a focus on recent computational aeroelastic analyses of a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The overview includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, unstructured CFD grids, and CFD-based aeroelastic analyses. In addition, a summary of the work involving the development of aeroelastic reduced-order models (ROMs) and the development of an aero-propulso-servo-elastic (APSE) model is provided.
Mirror neurons and imitation: a computationally guided review.
Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael
2006-04-01
Neurophysiology reveals the properties of individual mirror neurons in the macaque while brain imaging reveals the presence of 'mirror systems' (not individual neurons) in the human. Current conceptual models attribute high level functions such as action understanding, imitation, and language to mirror neurons. However, only the first of these three functions is well-developed in monkeys. We thus distinguish current opinions (conceptual models) on mirror neuron function from more detailed computational models. We assess the strengths and weaknesses of current computational models in addressing the data and speculations on mirror neurons (macaque) and mirror systems (human). In particular, our mirror neuron system (MNS), mental state inference (MSI) and modular selection and identification for control (MOSAIC) models are analyzed in more detail. Conceptual models often overlook the computational requirements for posited functions, while too many computational models adopt the erroneous hypothesis that mirror neurons are interchangeable with imitation ability. Our meta-analysis underlines the gap between conceptual and computational models and points out the research effort required from both sides to reduce this gap.
Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che
2014-01-16
To improve the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and to enhance the performance of traditional evolutionary algorithms, parallel-model evolutionary algorithms are advisable. To overcome the latter and to speed up the computation, cloud computing is a promising solution; the most popular approach is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with desired behaviors and that the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation.
By coupling the parallel population-based optimization method and the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way to infer large networks.
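The MapReduce decomposition used here is natural for population-based optimization: the map phase scores each candidate independently, and the reduce phase combines the scores. A minimal sketch with a sum-of-squared-errors fitness standing in for simulating a candidate gene network (Hadoop would distribute the map calls across nodes):

```python
from functools import reduce

def fitness(params, target):
    # Map task: score one candidate parameter set against the observed
    # profile. A real fitness would simulate the gene network; the
    # squared-error comparison here is a hypothetical stand-in.
    return sum((p - t) ** 2 for p, t in zip(params, target))

def best_candidate(population, target):
    # MapReduce-style evaluation: map scores candidates independently,
    # reduce keeps the best (lowest-error) candidate.
    scored = map(lambda c: (fitness(c, target), c), population)
    return reduce(lambda a, b: a if a[0] <= b[0] else b, scored)
```

Because each map call is independent, the same structure scales from a single process to a cloud cluster without changing the algorithm.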
A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach that exploits two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. First, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involved in nearly a hundred reactions, and for a field-scale coupled flow and transport model. In the first application, a single parallelizable loop is identified that consumes over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field-scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BiCGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, compute nodes equal in number to the adjustable parameters (when the forward difference is used for the Jacobian approximation), or twice that number (if the center difference is used), reduce the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analysis, where thousands of compute nodes can be efficiently utilized.
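The MPI-level parallelism exploits the fact that each column of a forward-difference Jacobian needs one extra model run and is independent of the others. A minimal sketch using threads in place of MPI ranks (the model function below is a hypothetical stand-in for a forward groundwater simulation):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def fd_jacobian(model, p, h=1e-6, workers=4):
    # Forward-difference Jacobian: column j perturbs parameter j only,
    # so all columns can be computed concurrently. The paper distributes
    # these runs with MPI; a thread pool stands in here.
    base = model(p)

    def column(j):
        q = p.copy()
        q[j] += h
        return (model(q) - base) / h

    with ThreadPoolExecutor(max_workers=workers) as ex:
        cols = list(ex.map(column, range(len(p))))
    return np.column_stack(cols)
```

With one worker per adjustable parameter, the whole Jacobian costs little more wall-clock time than a single forward run, which is the source of the calibration speedup.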
A Reduced-Order Model for Efficient Simulation of Synthetic Jet Actuators
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2003-01-01
A new reduced-order model of multidimensional synthetic jet actuators that combines the accuracy and conservation properties of full numerical simulation methods with the efficiency of simplified zero-order models is proposed. The multidimensional actuator is simulated by solving the time-dependent compressible quasi-1-D Euler equations, while the diaphragm is modeled as a moving boundary. The governing equations are approximated with a fourth-order finite difference scheme on a moving mesh such that one of the mesh boundaries coincides with the diaphragm. The reduced-order model of the actuator has several advantages. In contrast to the 3-D models, this approach provides conservation of mass, momentum, and energy. Furthermore, the new method is computationally much more efficient than the multidimensional Navier-Stokes simulation of the actuator cavity flow, while providing practically the same accuracy in the exterior flowfield. The most distinctive feature of the present model is its ability to predict the resonance characteristics of synthetic jet actuators; this is not practical when using the 3-D models because of the computational cost involved. Numerical results demonstrating the accuracy of the new reduced-order model and its limitations are presented.
NASA Astrophysics Data System (ADS)
Rachmawati, Vimala; Khusnul Arif, Didik; Adzkiya, Dieky
2018-03-01
Systems in nature often have large order, so their mathematical models have many state variables, which increases computation time. In addition, generally not all variables are known, so estimation is needed to infer quantities of the system that cannot be measured directly. In this paper, we discuss model reduction and estimation of state variables in a river system in order to measure the water level. Model reduction approximates a system with a lower-order system that has no significant error and whose dynamic behaviour is similar to the original system. The Singular Perturbation Approximation method is one such model reduction method, in which all state variables of the equilibrium system are partitioned into fast and slow modes. The Kalman filter algorithm is then used to estimate the state variables of stochastic dynamic systems, where estimates are computed by predicting the state variables from the system dynamics and correcting them with measurement data. Kalman filters are applied to estimate the state variables of both the original system and the reduced system. We then compare the estimation results and computation times of the original and reduced systems.
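The predict-correct cycle of the Kalman filter can be sketched directly. The minimal example below uses an illustrative two-state linear system standing in for a (reduced) river model, where only the first state, the "water level", is measured with noise:

```python
import numpy as np

def kalman_filter(A, C, Q, R, measurements, x0, P0):
    # Standard discrete-time Kalman filter: predict with the model,
    # then correct with each measurement.
    x, P, I = x0, P0, np.eye(len(x0))
    history = []
    for y in measurements:
        x = A @ x                       # predict state
        P = A @ P @ A.T + Q             # predict covariance
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)  # Kalman gain
        x = x + K @ (y - C @ x)         # correct with measurement
        P = (I - K @ C) @ P
        history.append(x.copy())
    return np.array(history)

# Illustrative system (not the river model from the paper).
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.25]])
x_true, truth, measurements = np.array([1.0, -1.0]), [], []
for _ in range(200):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    truth.append(x_true.copy())
    measurements.append(C @ x_true + rng.normal(0.0, 0.5, size=1))
truth = np.array(truth)
est = kalman_filter(A, C, Q, R, measurements, np.zeros(2), np.eye(2))
```

The same filter runs unchanged on a reduced system; only the sizes of A, C, Q, and P shrink, which is where the computational saving appears.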
BEST (bioreactor economics, size and time of operation) is an Excel™ spreadsheet-based model that is used in conjunction with the public domain geochemical modeling software, PHREEQCI. The BEST model is used in the design process of sulfate-reducing bacteria (SRB) field bioreacto...
Accelerating the spin-up of the coupled carbon and nitrogen cycle model in CLM4
Fang, Yilin; Liu, Chongxuan; Leung, Lai-Yung R.
2015-03-24
The commonly adopted biogeochemistry spin-up process in an Earth system model (ESM) is to run the model for hundreds to thousands of years subject to periodic atmospheric forcing to reach the dynamic steady state of the carbon–nitrogen (CN) models. A variety of approaches have been proposed to reduce the computation time of the spin-up process, and significant improvements in computational efficiency have been made recently. However, a long simulation time is still required to reach the common convergence criteria of the coupled carbon–nitrogen model. A gradient projection method was proposed and used to further reduce the computation time after examining the trend of the dominant carbon pools. The Community Land Model version 4 (CLM4) with a carbon and nitrogen component was used in this study. From point-scale simulations, we found that the method can reduce the computation time by 20–69% compared to one of the fastest approaches in the literature. We also found that the cyclic stability of total carbon for some cases differs from that of the periodic atmospheric forcing, and some cases even showed instability. Close examination showed that one case has a carbon periodicity much longer than that of the atmospheric forcing due to an annual fire disturbance lasting longer than half a year. The rest were caused by the instability of the water table calculation in the hydrology model of CLM4. The instability issue is resolved after we replaced the hydrology scheme in CLM4 with a flow model for variably saturated porous media.
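The flavor of trend-based spin-up acceleration can be illustrated with fixed-point extrapolation: watch successive iterates of a slow pool and project along the trend toward the steady state instead of iterating all the way there. A minimal scalar sketch using Aitken extrapolation (the paper's actual gradient projection method is not reproduced here):

```python
def spin_up(update, x0, tol=1e-8, max_iter=100000):
    # Plain spin-up: iterate the annual update until the change is tiny.
    x = x0
    for n in range(max_iter):
        x_next = update(x)
        if abs(x_next - x) < tol:
            return x_next, n + 1
        x = x_next
    return x, max_iter

def accelerated_spin_up(update, x0, tol=1e-8, max_iter=100000):
    # Take two update steps, then project along the observed trend to the
    # fixed point (Aitken extrapolation); repeat until converged.
    x, n = x0, 0
    while n < max_iter:
        x1 = update(x)
        x2 = update(x1)
        n += 2
        d1, d2 = x1 - x, x2 - x1
        if abs(d2) < tol:
            return x2, n
        denom = d2 - d1
        x = x2 if denom == 0 else x - d1 * d1 / denom
    return x, n
```

For a linearly converging pool, the extrapolation lands on the steady state almost immediately, while plain iteration needs thousands of model years.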
rpe v5: an emulator for reduced floating-point precision in large numerical simulations
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.
2017-06-01
This paper describes the rpe (reduced-precision emulator) library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
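The core idea of software-emulated reduced precision can be sketched in a few lines: round every intermediate result to a chosen number of significand bits and watch how an accumulation stagnates. This is only a minimal stand-in for the idea behind rpe (which wraps Fortran reals in a derived type), not its actual API.

```python
import numpy as np

def reduce_precision(x, sig_bits):
    """Round x to `sig_bits` significand bits, emulating reduced
    floating-point precision in software."""
    x = np.asarray(x, dtype=np.float64)
    m, e = np.frexp(x)                     # x = m * 2**e, 0.5 <= |m| < 1
    m = np.round(m * 2**sig_bits) / 2**sig_bits
    return np.ldexp(m, e)

# A toy "model step": accumulate a small increment many times.
def integrate(n, dt, sig_bits=None):
    total = 0.0
    for _ in range(n):
        total = total + dt
        if sig_bits is not None:
            total = reduce_precision(total, sig_bits)
    return float(total)

full = integrate(10000, 1e-4)               # ~double precision
half = integrate(10000, 1e-4, sig_bits=10)  # ~half-precision significand
print(full, half)
```

With 10 significand bits the running sum stagnates once the increment falls below half a unit in the last place, a classic symptom that this kind of emulation is designed to expose before committing to low-precision hardware.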
Mathematical Modeling of Diverse Phenomena
NASA Technical Reports Server (NTRS)
Howard, J. C.
1979-01-01
Tensor calculus is applied to the formulation of mathematical models of diverse phenomena. Aeronautics, fluid dynamics, and cosmology are among the areas of application. The feasibility of combining tensor methods and computer capability to formulate problems is demonstrated. The techniques described are an attempt to simplify the formulation of mathematical models by reducing the modeling process to a series of routine operations, which can be performed either manually or by computer.
A modified Finite Element-Transfer Matrix for control design of space structures
NASA Technical Reports Server (NTRS)
Tan, T.-M.; Yousuff, A.; Bahar, L. Y.; Konstandinidis, M.
1990-01-01
The Finite Element-Transfer Matrix (FETM) method was developed for reducing the computational efforts involved in structural analysis. While being widely used by structural analysts, this method does, however, have certain limitations, particularly when used for the control design of large flexible structures. In this paper, a new formulation based on the FETM method is presented. The new method effectively overcomes the limitations in the original FETM method, and also allows an easy construction of reduced models that are tailored for the control design. Other advantages of this new method include the ability to extract open loop frequencies and mode shapes with less computation, and simplification of the design procedures for output feedback, constrained compensation, and decentralized control. The development of this new method and the procedures for generating reduced models using this method are described in detail and the role of the reduced models in control design is discussed through an illustrative example.
Challenges in reducing the computational time of QSTS simulations for distribution system analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.
The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...
2017-12-27
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated in seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
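A much-simplified, 1-D caricature of such an adaptive design is sketched below: the hybrid score combines a distance (exploration) term with the disagreement between the current surrogate and a first-order Taylor extrapolation from the nearest sample. The score, stopping rule, and test function here are invented simplifications, not the actual TEAD formulas.

```python
import numpy as np

def f(x):  # stand-in for an expensive model
    return np.sin(3 * x) + 0.5 * x

# Coarse initial design; then adaptively add points where a first-order
# Taylor extrapolation disagrees most with the surrogate trend.
X = list(np.linspace(0.0, 2.0, 4))
Y = [f(x) for x in X]
cands = np.linspace(0.0, 2.0, 201)

for _ in range(8):
    Xa, Ya = np.array(X), np.array(Y)
    order = np.argsort(Xa)
    Xs, Ys = Xa[order], Ya[order]
    grad = np.gradient(Ys, Xs)              # finite-difference gradients
    score = []
    for c in cands:
        i = int(np.argmin(np.abs(Xs - c)))  # nearest existing sample
        d = abs(Xs[i] - c)                  # exploration term
        taylor = Ys[i] + grad[i] * (c - Xs[i])
        interp = np.interp(c, Xs, Ys)       # current (linear) surrogate
        score.append(d * (1.0 + abs(interp - taylor)))  # hybrid score
    xn = float(cands[int(np.argmax(score))])
    X.append(xn); Y.append(f(xn))

# Surrogate error after adaptive sampling
Xa = np.array(X); order = np.argsort(Xa)
Xs, Ys = Xa[order], np.array(Y)[order]
err = max(abs(np.interp(c, Xs, Ys) - f(c)) for c in cands)
print(len(X), round(err, 3))
```

Already-sampled candidates get a score of zero through the distance term, so the search automatically balances filling gaps against refining regions of high curvature.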
MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control
NASA Astrophysics Data System (ADS)
Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming
2017-09-01
Increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches in the circumstances of dynamic production. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the big volume and variety of quality-related data generated during the manufacturing process can be dealt with. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and the parallel computing power of MapReduce, Bayesian networks of impact factors on quality are built based on prior probability distributions and modified with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the increase in computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision-making for precision problem solving. The integration of big data analytics and the BN method offers a whole new perspective on manufacturing quality control.
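The map/reduce split described above can be illustrated with a toy serial sketch: mappers tally (factor, defect) co-occurrences over data splits, and the reducer merges the tallies into conditional probabilities of the kind a Bayesian network learner would consume. The machines, factors, and counts are invented; a real deployment would run the map phase as parallel Hadoop tasks.

```python
from collections import Counter
from functools import reduce
from itertools import islice

# Quality records: (machine, temperature_band, defect?)
records = [
    ("m1", "hot", 1), ("m1", "hot", 1), ("m1", "cold", 0),
    ("m2", "hot", 0), ("m2", "cold", 0), ("m2", "cold", 1),
] * 100

def chunks(it, n):
    it = iter(it)
    while batch := list(islice(it, n)):
        yield batch

# Map: each worker tallies (factor, value, defect) triples in its split.
def map_phase(batch):
    c = Counter()
    for machine, temp, defect in batch:
        c[("machine", machine, defect)] += 1
        c[("temp", temp, defect)] += 1
    return c

# Reduce: merge partial tallies, then normalize into P(defect | factor).
partials = [map_phase(b) for b in chunks(records, 50)]  # parallelizable
totals = reduce(lambda a, b: a + b, partials)

def p_defect_given(factor, value):
    d = totals[(factor, value, 1)]
    return d / (d + totals[(factor, value, 0)])

print(p_defect_given("machine", "m1"), p_defect_given("temp", "hot"))
```

Because `Counter` merging is associative, the reduce step can itself be parallelized as a tree, which is what gives the near-linear speedup with node count reported in the case study.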
Development of Comprehensive Reduced Kinetic Models for Supersonic Reacting Shear Layer Simulations
NASA Technical Reports Server (NTRS)
Zambon, A. C.; Chelliah, H. K.; Drummond, J. P.
2006-01-01
Large-scale simulations of multi-dimensional unsteady turbulent reacting flows with detailed chemistry and transport can be computationally extremely intensive even on distributed computing architectures. With the development of suitable reduced chemical kinetic models, the number of scalar variables to be integrated can be decreased, leading to a significant reduction in the computational time required for the simulation with limited loss of accuracy in the results. A general MATLAB-based automated mechanism reduction procedure is presented to reduce any complex starting mechanism (detailed or skeletal) with minimal human intervention. Based on the application of the quasi steady-state (QSS) approximation for certain chemical species and on the elimination of the fast reaction rates in the mechanism, several comprehensive reduced models, capable of handling different fuels such as C2H4, CH4 and H2, have been developed and thoroughly tested for several combustion problems (ignition, propagation and extinction) and physical conditions (reactant compositions, temperatures, and pressures). A key feature of the present reduction procedure is the explicit solution of the concentrations of the QSS species, needed for the evaluation of the elementary reaction rates. In contrast, previous approaches relied on an implicit solution due to the strong coupling between QSS species, requiring computationally expensive inner iterations. A novel algorithm, based on the definition of a QSS species coupling matrix, is presented to (i) introduce appropriate truncations to the QSS algebraic relations and (ii) identify the optimal sequence for the explicit solution of the concentration of the QSS species. With the automatic generation of the relevant source code, the resulting reduced models can be readily implemented into numerical codes.
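The coupling-matrix idea for ordering the explicit QSS solution can be sketched as a topological sort: a QSS species can be solved explicitly once every QSS species appearing in its algebraic relation has already been solved. The species names and coupling pattern below are hypothetical.

```python
import numpy as np

# Hypothetical QSS coupling matrix: entry (i, j) = 1 means the algebraic
# relation for QSS species i involves the concentration of QSS species j.
species = ["HCO", "CH3O", "CH2OH", "C2H3"]
coupling = np.array([
    [0, 0, 0, 0],   # HCO depends on no other QSS species
    [1, 0, 0, 0],   # CH3O depends on HCO
    [1, 1, 0, 0],   # CH2OH depends on HCO and CH3O
    [0, 0, 1, 0],   # C2H3 depends on CH2OH
])

# Kahn-style algorithm: repeatedly solve species whose dependencies are
# already solved, yielding an explicit (iteration-free) solution order.
def solution_order(C):
    n = len(C)
    unsolved = set(range(n))
    order = []
    while unsolved:
        ready = [i for i in unsolved
                 if all(C[i, j] == 0 or j not in unsolved for j in range(n))]
        if not ready:   # a dependency cycle: truncation would be needed
            raise ValueError("cyclic coupling; truncate weak couplings")
        for i in sorted(ready):
            order.append(i)
            unsolved.discard(i)
    return order

order = solution_order(coupling)
print([species[i] for i in order])
```

When the coupling matrix contains cycles, the abstract's truncation step corresponds to zeroing weak couplings until the graph becomes acyclic, after which this ordering exists.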
NASA Astrophysics Data System (ADS)
Rodriguez Marco, Albert
Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach of battery cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption--which is a fundamental necessity in order to make transfer functions--and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to render an approach to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics if operated near the setpoint at which they have been generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range.
This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) along the cell state-of-charge, temperature and C-rate range.
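A minimal sketch of the blending idea, using two invented one-state linear ROMs generated at different SOC setpoints: both pre-computed models are stepped in parallel and their outputs combined with a weight that varies linearly in SOC. The gains, time constants, and blend rule are illustrative assumptions, not the dissertation's models.

```python
import numpy as np

# Two hypothetical one-state ROMs, each linearized at a different SOC
# setpoint (all parameters invented): v' = (-v + R * i_app) / tau.
setpoints = {0.2: dict(R=0.05, tau=30.0),
             0.8: dict(R=0.03, tau=60.0)}

dt, i_app = 1.0, 2.0          # 1 s step, 2 A constant discharge
v = {s: 0.0 for s in setpoints}
soc = 0.65                    # current operating state of charge

# Run every pre-computed ROM in parallel (offline models, online use)...
for _ in range(120):
    for s, p in setpoints.items():
        v[s] += dt * (-v[s] + p["R"] * i_app) / p["tau"]

# ...then blend their outputs by proximity of the setpoints to the
# actual SOC (a simple linear blend between the two setpoints).
w = np.clip((soc - 0.2) / (0.8 - 0.2), 0.0, 1.0)
v_blend = (1 - w) * v[0.2] + w * v[0.8]
print(round(v[0.2], 4), round(v[0.8], 4), round(v_blend, 4))
```

In practice the blend would extend over a grid of setpoints in SOC, temperature, and C-rate, but the principle is the same: the online cost is a few small linear models plus a weighting, never a PDE solve.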
System reliability of randomly vibrating structures: Computational modeling and laboratory testing
NASA Astrophysics Data System (ADS)
Sundar, V. S.; Ammanagi, S.; Manohar, C. S.
2015-09-01
The problem of determination of system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations, which enables estimation of probability of failure with a significantly smaller number of samples than is needed in a direct simulation study. Notably, we show that the ideas from Girsanov's transformation based Monte Carlo simulations can be extended to conduct laboratory testing to assess system reliability of engineering structures with a reduced number of samples and hence reduced testing times. Illustrative examples include computational studies on a 10-degree of freedom nonlinear system model and laboratory/computational investigations on road load response of an automotive system tested on a four-post test rig.
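A discrete-time caricature of the Girsanov idea: add a drift to the driving noise so that failures occur often, and multiply each failure indicator by the likelihood ratio so the estimate remains unbiased. The AR(1) system, threshold, and drift below are invented, not the paper's 10-DOF model.

```python
import numpy as np

rng = np.random.default_rng(1)
a, sigma, level, N = 0.9, 0.3, 2.5, 100

def estimate(shift, n_samples=5000):
    """Estimate P(max_k X_k > level) for X_{k+1} = a X_k + sigma * w_k.
    A constant drift `shift` is added to the standard-normal noise (a
    discrete analogue of a Girsanov change of measure) and corrected by
    the likelihood ratio, so failures are sampled far more often."""
    hits = np.zeros(n_samples)
    for s in range(n_samples):
        x, logw = 0.0, 0.0
        for _ in range(N):
            w = rng.normal(shift, 1.0)
            logw += -shift * w + 0.5 * shift**2  # log likelihood ratio
            x = a * x + sigma * w
            if x > level:
                hits[s] = np.exp(logw)           # weighted failure count
                break
    return hits.mean(), hits.std() / np.sqrt(n_samples)

p_is, se_is = estimate(shift=0.5)
print(p_is, se_is)
```

The same reweighting logic carries over to laboratory testing: the test rig is driven by the biased excitation, and observed failures are de-biased by the recorded likelihood ratio.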
NASA Astrophysics Data System (ADS)
Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh
2015-12-01
Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2001-01-01
Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With a successful reduction of computation time, the issues of accuracy and appropriateness of turbulence models will become more tractable.
Parallelization of a hydrological model using the message passing interface
Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji
2013-01-01
With the increasing knowledge about the natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further hinder rapid modeling and analysis. Using the widely-applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology—Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a work station. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time cost becomes lower with an increasing number of processes (from two to five), this enhancement diminishes due to the accompanying increase in message passing between the master and all slave processes. Our case study demonstrates that the P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, the P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
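The master/worker split with its two tuning parameters (number of processes and the master's share of work) can be sketched as follows; a thread pool stands in for MPI processes, and the subbasin computation is a trivial placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_subbasin(sub_id):
    # Stand-in for one subbasin's water-balance computation.
    flow = 0.0
    for day in range(365):
        flow += 0.1 * (sub_id + 1)
    return sub_id, round(flow, 6)

subbasins = list(range(16))

# The two P-SWAT-style tuning parameters: worker count and the
# percentage of subbasins the master keeps for itself.
master_pct = 0.25
split = int(len(subbasins) * master_pct)
master_jobs, worker_jobs = subbasins[:split], subbasins[split:]

# Workers compute their share while the master works on its own share.
with ThreadPoolExecutor(max_workers=4) as ex:
    worker_results = list(ex.map(simulate_subbasin, worker_jobs))
master_results = [simulate_subbasin(s) for s in master_jobs]

results = dict(master_results + worker_results)
print(len(results), results[0])
```

Tuning `master_pct` trades the master's own compute against its coordination overhead, which is exactly the balance the case study optimized; with true MPI the message-passing cost per worker would also grow with process count.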
DESIGNING SULFATE-REDUCING BACTERIA FIELD BIOREACTORS USING THE BEST MODEL
BEST (bioreactor economics, size and time of operation) is a spreadsheet-based model that is used in conjunction with a public domain computer software package, PHREEQCI. BEST is intended to be used in the design process of sulfate-reducing bacteria (SRB) field bioreactors to pas...
Tencer, John; Carlberg, Kevin; Larsen, Marvin; ...
2017-06-17
Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media is important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.
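The reduced-basis step can be sketched on a toy 1-D attenuation problem standing in for the RTE: high-fidelity solves at a few ordinate directions supply snapshots, an SVD supplies a basis, and solutions at new quadrature directions come from a cheap least-squares (Galerkin-style) solve in that basis. The model problem and discretization are invented for illustration.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)

def solve_full(theta):
    """High-fidelity stand-in: 1-D attenuation along direction theta,
    mu * u' + kappa(x) * u = q with mu = cos(theta), implicit Euler."""
    mu = np.cos(theta)
    kappa = 1.0 + 0.5 * np.sin(6 * x)
    h = x[1] - x[0]
    u = np.zeros(n)
    for i in range(1, n):
        u[i] = (u[i-1] + h * 1.0 / mu) / (1.0 + h * kappa[i] / mu)
    return u

# Snapshots at a few ordinate directions -> reduced basis via SVD.
train = np.linspace(0.1, 1.2, 5)
S = np.column_stack([solve_full(t) for t in train])
U, sv, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :4]   # reduced basis of rank 4

def solve_reduced(theta):
    """Reduced solve: least-squares fit of the discrete equations
    restricted to the span of the basis V."""
    mu = np.cos(theta)
    kappa = 1.0 + 0.5 * np.sin(6 * x)
    h = x[1] - x[0]
    # Same lower-bidiagonal operator as solve_full, assembled explicitly.
    A = np.eye(n)
    for i in range(1, n):
        A[i, i] = 1.0 + h * kappa[i] / mu
        A[i, i-1] = -1.0
    b = np.concatenate(([0.0], np.full(n - 1, h / mu)))
    c, *_ = np.linalg.lstsq(A @ V, b, rcond=None)
    return V @ c

# Evaluate at a "new quadrature direction" not in the training set.
theta_new = 0.7
err = np.max(np.abs(solve_reduced(theta_new) - solve_full(theta_new)))
print(round(float(err), 4))
```

Because the intensity varies smoothly with direction, a basis built from five snapshots reproduces the solution at unseen directions to high accuracy, which is what makes high-order angular quadrature affordable.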
Closed-form solution of decomposable stochastic models
NASA Technical Reports Server (NTRS)
Sjogren, Jon A.
1990-01-01
Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
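The decomposition idea can be sketched with two independent subsystems: each small Markov model is solved on its own, and the combined failure probability follows from independence, so the 6-state product model is never formed. The chains and rates below are hypothetical, and the solver is a simple Euler integration rather than SHARPE's closed-form machinery.

```python
import numpy as np

def failure_prob(Q, t, steps=20000):
    """Probability of absorption in the last (failure) state by time t,
    via forward Euler on the Kolmogorov forward equations.
    Q[i][j] = transition rate from state i to state j."""
    n = Q.shape[0]
    p = np.zeros(n); p[0] = 1.0
    dt = t / steps
    for _ in range(steps):
        p = p + dt * (Q.T @ p)
    return p[-1]

# Subsystem A: working -> degraded -> failed (rates invented)
QA = np.array([[-2e-3, 2e-3, 0.0],
               [ 0.0, -1e-3, 1e-3],
               [ 0.0,  0.0,  0.0]])
# Subsystem B: working -> failed
QB = np.array([[-5e-4, 5e-4],
               [ 0.0,  0.0]])

t = 1000.0
pA, pB = failure_prob(QA, t), failure_prob(QB, t)

# Decomposition: the combined 6-state model is never built; the system
# fails if either independent subsystem fails.
p_sys = 1.0 - (1.0 - pA) * (1.0 - pB)
print(round(pA, 4), round(pB, 4), round(p_sys, 4))
```

The interesting cases in the abstract are precisely those where system failure is a function of subsystem states (here, a simple OR), so the combination is a cheap algebraic step on the subsystem solutions.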
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2012-01-01
This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp edged gust. This result is compared with the theoretical result. The present simulations will be compared with other CFD gust simulations. This paper also serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA simulated results of a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced order model, and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
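An ARX variant of the ARMA identification step can be sketched as a linear least-squares fit of lagged input/output regressors; the "truth" system below is an invented discrete second-order filter, not a FUN3D response, and the model is then exercised on a gust profile it was not trained on.

```python
import numpy as np

# "Truth" system standing in for the CFD response: a damped
# second-order discrete filter driven by the gust input u.
def simulate(u):
    y = np.zeros(len(u))
    for k in range(2, len(u)):
        y[k] = 1.6 * y[k-1] - 0.68 * y[k-2] + 0.05 * u[k-1] + 0.03 * u[k-2]
    return y

# Training signal: a one-minus-cosine gust.
t = np.arange(400)
u_train = np.where(t < 60, 0.5 * (1 - np.cos(2 * np.pi * t / 60)), 0.0)
y_train = simulate(u_train)

# Identify y[k] = a1 y[k-1] + a2 y[k-2] + b1 u[k-1] + b2 u[k-2]
# by linear least squares on the regressor matrix.
k = np.arange(2, len(t))
Phi = np.column_stack([y_train[k-1], y_train[k-2],
                       u_train[k-1], u_train[k-2]])
coef, *_ = np.linalg.lstsq(Phi, y_train[k], rcond=None)

# Predict the response to a *different* gust profile with the ROM.
u_new = np.where(t < 120, np.sin(np.pi * t / 120) ** 2, 0.0)
y_true = simulate(u_new)
y_rom = np.zeros(len(t))
for i in range(2, len(t)):
    y_rom[i] = coef @ np.array([y_rom[i-1], y_rom[i-2],
                                u_new[i-1], u_new[i-2]])
print(np.max(np.abs(y_rom - y_true)))
```

Once trained on one gust, the recursion above replaces the full simulation for any new gust input; in the paper's POD extension, the same idea is applied mode-by-mode to reconstruct full pressure distributions.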
Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R
2013-01-01
This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory with the C++ language greatly reduce the cumulative computer memory needed to solve multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results.
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent Base Models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper rely on computational algorithms and procedures implemented in Matlab to simulate agent based models, run on clusters that provide the high-performance parallel computation required. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
Computational and Organotypic Modeling of Microcephaly (Teratology Society)
Microcephaly is associated with reduced cortical surface area and ventricular dilations. Many genetic and environmental factors precipitate this malformation, including prenatal alcohol exposure and maternal Zika infection. This complexity motivates the engineering of computation...
NASA Astrophysics Data System (ADS)
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. 
To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed from "snapshots" of the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
Energy consumption program: A computer model simulating energy loads in buildings
NASA Technical Reports Server (NTRS)
Stoller, F. W.; Lansing, F. L.; Chai, V. W.; Higgins, S.
1978-01-01
The JPL energy consumption computer program developed as a useful tool in the ongoing building modification studies in the DSN energy conservation project is described. The program simulates building heating and cooling loads and computes thermal and electric energy consumption and cost. The accuracy of computations is not sacrificed, however, since the results lie within a ±10 percent margin compared to those read from energy meters. The program is carefully structured to reduce both the user's time and running cost by asking minimum information from the user and reducing many internal time-consuming computational loops. Many unique features were added to handle two-level electronics control rooms not found in any other program.
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1993-01-01
Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
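As a concrete illustration of the kind of analytical model discussed here, a Keystroke-Level Model (a simple member of the GOMS family) predicts task execution time by summing per-operator times. The operator times below are the commonly cited textbook defaults, and the task sequence is an invented "rename a file" example, not one from this paper.

```python
# Hedged sketch of a Keystroke-Level Model (KLM) estimate. Operator times
# are the classic textbook defaults; the task sequence is made up.
OPERATOR_TIME = {
    "K": 0.28,   # keystroke (average typist), seconds
    "P": 1.10,   # point with mouse
    "B": 0.10,   # mouse button press or release
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def klm_estimate(sequence):
    """Total predicted execution time for a list of KLM operators."""
    return sum(OPERATOR_TIME[op] for op in sequence)

# Hypothetical task: home to mouse, point at file, double-click, think,
# type an 8-character name, press Enter.
task = ["H", "P", "B", "B", "M"] + ["K"] * 8 + ["K"]   # final K = Enter
print(f"predicted time: {klm_estimate(task):.2f} s")
```

A designer can compare such estimates across candidate interfaces before any usability testing is run, which is the cost-saving the abstract describes.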
NASA Astrophysics Data System (ADS)
Mathai, Pramod P.
This thesis focuses on applying and augmenting 'Reduced Order Modeling' (ROM) techniques to large scale problems. ROM refers to the set of mathematical techniques that are used to reduce the computational expense of conventional modeling techniques, like finite element and finite difference methods, while minimizing the loss of accuracy that typically accompanies such a reduction. The first problem that we address pertains to the prediction of the level of heat dissipation in electronic and MEMS devices. With the ever decreasing feature sizes in electronic devices, and the accompanying rise in Joule heating, the electronics industry has, since the 1990s, identified a clear need for computationally cheap heat transfer modeling techniques that can be incorporated along with the electronic design process. We demonstrate how one can create reduced order models for simulating heat conduction in individual components that constitute an idealized electronic device. The reduced order models are created using Krylov Subspace Techniques (KST). We introduce a novel 'plug and play' approach, based on the small gain theorem in control theory, to interconnect these component reduced order models (according to the device architecture) to reliably and cheaply replicate whole device behavior. The final aim is to have this technique available commercially as a computationally cheap and reliable option that enables a designer to optimize for heat dissipation among competing VLSI architectures. Another place where model reduction is crucial to better design is Isoelectric Focusing (IEF) - the second problem in this thesis - which is a popular technique that is used to separate minute amounts of proteins from the other constituents that are present in a typical biological tissue sample.
Fundamental questions about how to design IEF experiments still remain because of the high dimensional and highly nonlinear nature of the differential equations that describe the IEF process as well as the uncertainty in the parameters of the differential equations. There is a clear need to design better experiments for IEF without the current overhead of expensive chemicals and labor. We show how with a simpler modeling of the underlying chemistry, we can still achieve the accuracy that has been achieved in existing literature for modeling small ranges of pH (hydrogen ion concentration) in IEF, but with far less computational time. We investigate a further reduction of time by modeling the IEF problem using the Proper Orthogonal Decomposition (POD) technique and show why POD may not be sufficient due to the underlying constraints. The final problem that we address in this thesis concerns a certain class of dynamics with high stiffness - in particular, differential algebraic equations. With the help of simple examples, we show how the traditional POD procedure will fail to model certain high stiffness problems due to a particular behavior of the vector field which we will denote as twist. We further show how a novel augmentation to the traditional POD algorithm can model-reduce problems with twist in a computationally cheap manner without any additional data requirements.
Estimation of zeta potential of electroosmotic flow in a microchannel using a reduced-order model.
Park, H M; Hong, S M; Lee, J S
2007-10-01
A reduced-order model is derived for electroosmotic flow in a microchannel of nonuniform cross section using the Karhunen-Loève Galerkin (KLG) procedure. The resulting reduced-order model is shown to predict electroosmotic flows accurately with minimal consumption of computer time for a wide range of zeta potential zeta and dielectric constant epsilon. Using the reduced-order model, a practical method is devised to estimate zeta from the velocity measurements of the electroosmotic flow in the microchannel. The proposed method is found to estimate zeta with reasonable accuracy even with noisy velocity measurements.
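The Karhunen-Loève Galerkin idea can be sketched generically: extract empirical eigenfunctions from snapshots of the full model, then solve the Galerkin-projected equations in that small basis. The toy system below (a 1-D diffusion-reaction solve with a parameter standing in for the zeta potential) is invented for illustration and is not the paper's electroosmotic-flow model.

```python
import numpy as np

# Minimal KLG-style sketch (not the authors' code): snapshot basis plus a
# Galerkin-projected reduced solve for a new parameter value.
n = 100
# 1-D Laplacian; zeta scales a reaction term (a stand-in for the
# zeta-potential dependence of the real flow problem).
lap = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2
b = np.ones(n)

def full_solve(zeta):
    """The 'expensive' full-order model: a 100x100 linear solve."""
    return np.linalg.solve(-lap + zeta * np.eye(n), b)

# Snapshots over training parameters -> truncated empirical basis.
snaps = np.stack([full_solve(z) for z in np.linspace(1.0, 50.0, 12)], axis=1)
basis = np.linalg.svd(snaps, full_matrices=False)[0][:, :4]

def reduced_solve(zeta):
    """Galerkin projection: solve a 4x4 system instead of 100x100."""
    A = -lap + zeta * np.eye(n)
    Ar = basis.T @ A @ basis
    return basis @ np.linalg.solve(Ar, basis.T @ b)

zeta_test = 27.3
err = (np.linalg.norm(full_solve(zeta_test) - reduced_solve(zeta_test))
       / np.linalg.norm(full_solve(zeta_test)))
print(f"reduced-model relative error: {err:.2e}")
```

The same cheap forward map is what makes the parameter-estimation step in the abstract practical: the reduced model can be evaluated thousands of times inside an inversion loop at negligible cost.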
NASA Astrophysics Data System (ADS)
Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.
2017-12-01
In order to estimate surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTM) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate the top of the atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look up table interpolation can improve the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce our computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high spatial resolution satellite missions.
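The look-up-table limitation the abstract describes is easy to demonstrate: for a nonlinear response, linear interpolation error falls off only as the node grid densifies, which is what motivates a trained surrogate. The "radiance" curve below is an invented stand-in, not a VLIDORT output.

```python
import numpy as np

# Illustrative sketch of the look-up-table issue: interpolating a strongly
# nonlinear curve needs a dense node grid to keep errors small.
f = lambda x: np.exp(-3 * x) * np.cos(8 * x)   # invented nonlinear response

x_eval = np.linspace(0.0, 1.0, 2001)
truth = f(x_eval)

def lut_max_error(n_nodes):
    """Max error of linear interpolation from an n-node look-up table."""
    nodes = np.linspace(0.0, 1.0, n_nodes)
    return np.max(np.abs(np.interp(x_eval, nodes, f(nodes)) - truth))

for n in (5, 20, 80, 320):
    print(f"{n:4d} nodes -> max interpolation error {lut_max_error(n):.2e}")
```

The error shrinks roughly quadratically with node spacing, so driving it down by table densification multiplies memory and RTM calls; a neural network fitted to well-chosen samples can reach comparable accuracy from far fewer stored numbers, which is the trade the abstract reports.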
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keasler, Samuel J., E-mail: samuel.keasler@vcsu.edu; Department of Science, Valley City State University, 101 College Street SW, Valley City, North Dakota 58072; Siepmann, J. Ilja
2015-10-28
Simulations are used to investigate the vapor-to-liquid nucleation of water for several different force fields at various sets of physical conditions. The nucleation free energy barrier is found to be extremely sensitive to the force field at the same absolute conditions. However, when the results are compared at the same supersaturation and reduced temperature or the same metastability parameter and reduced temperature, then the differences in the nucleation free energies of the different models are dramatically reduced. This finding suggests that comparisons of experimental data and computational predictions are most meaningful at the same relative conditions and emphasizes the importance of knowing the phase diagram of a given computational model, but such information is usually not available for models where the interaction energy is determined directly from electronic structure calculations.
When Does Model-Based Control Pay Off?
Kool, Wouter; Cushman, Fiery A; Gershman, Samuel J
2016-08-01
Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to "model-free" and "model-based" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, do not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand. PMID:27564094
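The model-free/model-based contrast can be illustrated with a toy environment (invented here, not the authors' task): a two-armed bandit whose reward probabilities reverse mid-run. A model-based agent that "plans" with the true current probabilities adapts instantly, while a model-free agent relying on a cached look-up table lags behind the change.

```python
import numpy as np

# Toy sketch of the model-free vs. model-based contrast. The environment
# and parameters are invented for illustration.
rng = np.random.default_rng(0)
p = np.array([0.8, 0.2])        # reward probability of each action
q = np.array([0.5, 0.5])        # model-free cached action values
alpha = 0.1                     # model-free learning rate
mf_reward = mb_reward = 0.0
for t in range(2000):
    if t == 1000:
        p = p[::-1].copy()       # reversal: the best action switches
    a_mf = int(np.argmax(q))     # greedy on the cached look-up table
    a_mb = int(np.argmax(p))     # "planning" with the (known) true model
    r = rng.random() < p[a_mf]
    mf_reward += r
    mb_reward += rng.random() < p[a_mb]
    q[a_mf] += alpha * (r - q[a_mf])   # incremental table update
print(f"model-free total reward: {mf_reward:.0f}, "
      f"model-based: {mb_reward:.0f}")
```

The model-based agent pays its cost elsewhere (here the planning is free because the model is tiny); the abstract's point is that the hallmark laboratory tasks fail to make that cost-accuracy trade bite, and a redesigned task restores it.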
MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning
Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man
2015-01-01
Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation. PMID:26681933
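The data-parallel idea behind MapReduce training can be sketched without a cluster: each "mapper" computes the loss gradient on its data shard, and the "reducer" sums the shard gradients. Because the loss is a sum over examples, the reduced gradient matches the serial one exactly. The linear model below is an invented mini-example, not the paper's Hadoop implementation.

```python
import numpy as np

# Sketch of map/reduce gradient computation for a sum-based loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=1200)
w = np.zeros(5)

def shard_gradient(Xs, ys, w):
    """Mapper: gradient of the squared-error loss on one data shard."""
    return 2.0 * Xs.T @ (Xs @ w - ys)

serial = shard_gradient(X, y, w)                      # one big computation
shards = np.array_split(np.arange(1200), 4)           # four "map" tasks
reduced = sum(shard_gradient(X[i], y[i], w) for i in shards)  # "reduce" = sum
print("max |serial - mapreduce| =", np.max(np.abs(serial - reduced)))
```

The same decomposition applies to backpropagation gradients of a neural network, which is why the training step parallelizes cleanly over data volume.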
A high-order multiscale finite-element method for time-domain acoustic-wave modeling
NASA Astrophysics Data System (ADS)
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-05-01
Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid cells in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
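The core multiscale ingredient can be shown in 1-D for the first-order case: a basis function on one coarse element is obtained by solving the local elliptic problem -(a(x) u')' = 0 with boundary values 0 and 1 on a fine grid that resolves a rough coefficient a(x). Unlike the standard linear hat function, the result bends to follow the medium. The oscillatory coefficient below is invented for illustration.

```python
import numpy as np

# 1-D sketch of a (first-order) multiscale basis function: solve
# -(a(x) u')' = 0 on one coarse element, u(0)=0, u(1)=1, on a fine grid.
nf = 400                                  # fine cells in one coarse element
xe = np.linspace(0.0, 1.0, nf + 1)        # local coordinate on the element
xm = 0.5 * (xe[:-1] + xe[1:])             # cell midpoints
a = 1.0 + 0.9 * np.sin(40 * np.pi * xm)   # highly oscillatory medium (invented)

# Finite-difference assembly with a(x) sampled at cell interfaces.
h = 1.0 / nf
A = (np.diag(a[:-1] + a[1:]) - np.diag(a[1:-1], 1)
     - np.diag(a[1:-1], -1)) / h**2
rhs = np.zeros(nf - 1)
rhs[-1] = a[-1] / h**2                    # Dirichlet value 1 at the right end
phi = np.concatenate([[0.0], np.linalg.solve(A, rhs), [1.0]])

linear = xe                               # standard P1 hat on the element
print(f"max deviation from linear hat: {np.max(np.abs(phi - linear)):.3f}")
```

The deviation from the linear hat is exactly the fine-scale medium information that the multiscale basis carries onto the coarse mesh; the paper's contribution is constructing a whole high-order family of such functions tied to Gauss-Lobatto-Legendre points.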
Complex Instruction Set Quantum Computing
NASA Astrophysics Data System (ADS)
Sanders, G. D.; Kim, K. W.; Holton, W. C.
1998-03-01
In proposed quantum computers, electromagnetic pulses are used to implement logic gates on quantum bits (qubits). Gates are unitary transformations applied to coherent qubit wavefunctions and a universal computer can be created using a minimal set of gates. By applying many elementary gates in sequence, desired quantum computations can be performed. This reduced instruction set approach to quantum computing (RISC QC) is characterized by serial application of a few basic pulse shapes and a long coherence time. However, the unitary matrix of the overall computation is ultimately a unitary matrix of the same size as any of the elementary matrices. This suggests that we might replace a sequence of reduced instructions with a single complex instruction using an optimally tailored pulse. We refer to this approach as complex instruction set quantum computing (CISC QC). One trades the requirement for long coherence times for the ability to design and generate potentially more complex pulses. We consider a model system of coupled qubits interacting through nearest neighbor coupling and show that CISC QC can reduce the time required to perform quantum computations.
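The observation driving the proposal is simple linear algebra: any sequence of elementary gate unitaries multiplies into a single unitary of the same dimension, so one tailored pulse could in principle implement the whole product at once. A small two-qubit example (gate sequence invented for illustration):

```python
import numpy as np

# Compose a "RISC" gate sequence into one "CISC" unitary of the same size.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard
T = np.diag([1.0, np.exp(1j * np.pi / 4)])           # pi/8 phase gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

# Example program on 2 qubits: H on qubit 0, T on qubit 1, then CNOT.
seq = [np.kron(H, I2), np.kron(I2, T), CNOT]
U = np.linalg.multi_dot(seq[::-1])      # matrices apply right-to-left

# The composite is still a single 4x4 unitary -- the "complex instruction".
print("composite is unitary:", np.allclose(U.conj().T @ U, np.eye(4)))
# prints: composite is unitary: True
```

Finding a physical pulse whose induced evolution equals U is the hard part; the payoff is replacing many serial pulses (and their coherence-time budget) with one.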
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDaniel, Dwayne; Dulikravich, George; Cizmas, Paul
2017-11-27
This report summarizes the objectives, tasks and accomplishments made during the three year duration of this research project. The report presents the results obtained by applying advanced computational techniques to develop reduced-order models (ROMs) in the case of reacting multiphase flows based on high fidelity numerical simulation of gas-solids flow structures in risers and vertical columns obtained by the Multiphase Flow with Interphase eXchanges (MFIX) software. The research includes a numerical investigation of reacting and non-reacting gas-solids flow systems and computational analysis that will involve model development to accelerate the scale-up process for the design of fluidization systems by providing accurate solutions that match the full-scale models. The computational work contributes to the development of a methodology for obtaining ROMs that is applicable to the system of gas-solid flows. Finally, the validity of the developed ROMs is evaluated by comparing the results against those obtained using the MFIX code. Additionally, the robustness of existing POD-based ROMs for multiphase flows is improved by avoiding non-physical solutions of the gas void fraction and ensuring that the reduced kinetics models used for reactive flows in fluidized beds are thermodynamically consistent.
Proctor, CJ; Macdonald, C; Milner, JM; Rowan, AD; Cawston, TE
2014-01-01
Objective: To use a novel computational approach to examine the molecular pathways involved in cartilage breakdown and to use computer simulation to test possible interventions for reducing collagen release. Methods: We constructed a computational model of the relevant molecular pathways using the Systems Biology Markup Language, a computer-readable format of a biochemical network. The model was constructed using our experimental data showing that interleukin-1 (IL-1) and oncostatin M (OSM) act synergistically to up-regulate collagenase protein levels and activity and initiate cartilage collagen breakdown. Simulations were performed using the COPASI software package. Results: The model predicted that simulated inhibition of JNK or p38 MAPK, and overexpression of tissue inhibitor of metalloproteinases 3 (TIMP-3) led to a reduction in collagen release. Overexpression of TIMP-1 was much less effective than that of TIMP-3 and led to a delay, rather than a reduction, in collagen release. Simulated interventions of receptor antagonists and inhibition of JAK-1, the first kinase in the OSM pathway, were ineffective. So, importantly, the model predicts that it is more effective to intervene at targets that are downstream, such as the JNK pathway, rather than those that are close to the cytokine signal. In vitro experiments confirmed the effectiveness of JNK inhibition. Conclusion: Our study shows the value of computer modeling as a tool for examining possible interventions by which to reduce cartilage collagen breakdown. The model predicts that interventions that either prevent transcription or inhibit the activity of collagenases are promising strategies and should be investigated further in an experimental setting. PMID:24757149
Inverse kinematics of a dual linear actuator pitch/roll heliostat
NASA Astrophysics Data System (ADS)
Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh
2017-06-01
This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.
Distributed geospatial model sharing based on open interoperability standards
Feng, Min; Liu, Shuguang; Euliss, Ned H.; Fang, Yin
2009-01-01
Numerous geospatial computational models have been developed based on sound principles and published in journals or presented in conferences. However, modelers have made few advances in the development of computable modules that facilitate sharing during model development or utilization. Constraints hampering development of model sharing technology include limitations on computing, storage, and connectivity; traditional stand-alone and closed network systems cannot fully support sharing and integrating geospatial models. To address this need, we have identified methods for sharing geospatial computational models using Service Oriented Architecture (SOA) techniques and open geospatial standards. The service-oriented model sharing service is accessible using any tools or systems compliant with open geospatial standards, making it possible to utilize vast scientific resources available from around the world to solve highly sophisticated application problems. The methods also allow model services to be empowered by diverse computational devices and technologies, such as portable devices and GRID computing infrastructures. Based on the generic and abstract operations and data structures required for Web Processing Service (WPS) standards, we developed an interactive interface for model sharing to help reduce interoperability problems for model use. Geospatial computational models are shared on model services, where the computational processes provided by models can be accessed through tools and systems compliant with WPS. We developed a platform to help modelers publish individual models in a simplified and efficient way. Finally, we illustrate our technique using wetland hydrological models we developed for the prairie pothole region of North America.
Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; ...
2015-12-21
Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. Lastly, these results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.
NASA Astrophysics Data System (ADS)
Hou, Zeyu; Lu, Wenxi
2018-05-01
Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
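The surrogate-modeling pattern the abstract relies on can be sketched generically: sample the expensive simulator a modest number of times, fit a fast approximator, then run the optimization loop against the approximator. Kernel ridge regression is used below as a simple stand-in for the paper's KELM, and the "simulation model" is an invented cheap function playing the role of the contaminant-transport solver.

```python
import numpy as np

# Minimal surrogate sketch (kernel ridge regression as a KELM stand-in).
def simulation_model(x):            # pretend each call takes hours
    return np.sin(3 * x) + 0.5 * x**2

X_train = np.linspace(-2, 2, 25)    # 25 "expensive" training runs
y_train = simulation_model(X_train)

def rbf(a, b, gamma=2.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

K = rbf(X_train, X_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X_train)), y_train)

def surrogate(x):                   # microseconds per call
    return rbf(np.atleast_1d(x), X_train) @ alpha

x_grid = np.linspace(-2, 2, 1001)
err = np.max(np.abs(surrogate(x_grid) - simulation_model(x_grid)))
best = x_grid[np.argmin(surrogate(x_grid))]  # "optimization" on the surrogate
print(f"max surrogate error: {err:.1e}; surrogate minimizer: {best:.3f}")
```

The abstract's finding that more training samples do not always help corresponds here to the usual surrogate trade-off: past a point, extra samples mainly add conditioning and fitting cost rather than accuracy.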
Salient regions detection using convolutional neural networks and color volume
NASA Astrophysics Data System (ADS)
Liu, Guang-Hai; Hou, Yingkun
2018-03-01
Convolutional neural network is an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on the LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues, and then we integrate depth cues and color volume into saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on MSRA1000 and ECSSD datasets.
A novel patient-specific model to compute coronary fractional flow reserve.
Kwon, Soon-Sung; Chung, Eui-Chul; Park, Jin-Seo; Kim, Gook-Tae; Kim, Jun-Woo; Kim, Keun-Hong; Shin, Eun-Seok; Shim, Eun Bo
2014-09-01
The fractional flow reserve (FFR) is a widely used clinical index to evaluate the functional severity of coronary stenosis. A computer simulation method based on patients' computed tomography (CT) data is a plausible non-invasive approach for computing the FFR. This method can provide a detailed solution for the stenosed coronary hemodynamics by coupling computational fluid dynamics (CFD) with the lumped parameter model (LPM) of the cardiovascular system. In this work, we have implemented a simple computational method to compute the FFR. As this method uses only coronary arteries for the CFD model and includes only the LPM of the coronary vascular system, it provides simpler boundary conditions for the coronary geometry and is computationally more efficient than existing approaches. To test the efficacy of this method, we simulated a three-dimensional straight vessel using CFD coupled with the LPM. The computed results were compared with those of the LPM. To validate this method in terms of clinically realistic geometry, a patient-specific model of stenosed coronary arteries was constructed from CT images, and the computed FFR was compared with clinically measured results. We evaluated the effect of a model aorta on the computed FFR and compared this with a model without the aorta. Computationally, the model without the aorta was more efficient than that with the aorta, reducing the CPU time required for computing a cardiac cycle to 43.4%. Copyright © 2014. Published by Elsevier Ltd.
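The FFR definition itself reduces to a ratio of mean pressures, which a lumped (0-D) model makes concrete: a stenosis resistance in series with the hyperemic microvascular resistance sets the distal pressure. The numbers below are illustrative only; the paper's method couples full 3-D CFD of the coronary tree to its lumped model rather than using this back-of-envelope form.

```python
# Back-of-envelope lumped-parameter sketch of the FFR definition.
P_AORTA = 100.0    # mean aortic pressure, mmHg (illustrative)
P_VENOUS = 0.0
R_MICRO = 1.0      # hyperemic microvascular resistance (arbitrary units)

def ffr(r_stenosis):
    """Distal pressure / aortic pressure for a series-resistance model."""
    flow = (P_AORTA - P_VENOUS) / (r_stenosis + R_MICRO)
    p_distal = P_AORTA - flow * r_stenosis
    return p_distal / P_AORTA

for r in (0.0, 0.1, 0.25, 0.5):
    print(f"stenosis resistance {r:.2f} -> FFR = {ffr(r):.2f}")
```

Clinically, FFR below roughly 0.80 is commonly treated as functionally significant; the simulation pipeline in the abstract aims to obtain this number from CT geometry without an invasive pressure-wire measurement.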
A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem.
Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming
2015-01-01
Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereby reduce airspace congestion and flight delays. Conventional ATFM models assume that the capacities of airports or airspace sectors are all predetermined. However, capacity uncertainties due to the dynamics of convective weather may make deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints compared to stochastic models with nonanticipative constraints, which means our proposed model reduces the computational complexity.
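A weighted biobjective 0-1 model of this kind can be sketched by brute-force enumeration on a toy instance. The flight set, delay costs, risk contributions, and weights below are hypothetical, and a real SATNFO instance needs an integer-programming solver rather than enumeration.

```python
from itertools import product

# Toy weighted biobjective 0-1 program: x_i = 1 delays flight i. Minimize
# w1 * (total delay) + w2 * (operational risk), subject to a capacity limit
# on on-time departures. All numbers are hypothetical.
delays = [10, 15, 30]            # delay cost if flight i is delayed
risks = [0.30, 0.10, 0.05]       # risk contribution if flight i departs on time
capacity = 2                     # at most 2 on-time departures
w1, w2 = 1.0, 100.0              # objective weights

best_cost, best_plan = float("inf"), None
for plan in product([0, 1], repeat=len(delays)):
    if sum(1 - x for x in plan) > capacity:      # capacity constraint
        continue
    delay_cost = sum(d * x for d, x in zip(delays, plan))
    risk_cost = sum(r * (1 - x) for r, x in zip(risks, plan))
    cost = w1 * delay_cost + w2 * risk_cost
    if cost < best_cost:
        best_cost, best_plan = cost, plan
```

Sweeping the weights (w1, w2) traces out the trade-off between delay and risk, which is the point of the biobjective formulation.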
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
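The variance-reduction ingredient can be sketched as a two-level control-variate estimator: many cheap samples of a reduced model plus a few samples of the high-fidelity/reduced difference. The functions below are hypothetical correlated stand-ins, not HDG or reduced-basis solvers.

```python
import random

# Two-level control-variate sketch of the multilevel idea: the estimator is
# unbiased for E[f_high] but most samples only evaluate the cheap f_low.
random.seed(0)

def f_high(x):                 # stand-in for the high-fidelity output
    return x * x + 0.01 * x

def f_low(x):                  # stand-in for a correlated reduced-basis output
    return x * x

n_low, n_high = 100_000, 100   # many cheap samples, few expensive ones
xs_low = [random.random() for _ in range(n_low)]
xs_high = [random.random() for _ in range(n_high)]

# E[f_high] = E[f_low] + E[f_high - f_low]; the second term has small variance
# because the two models are strongly correlated, so few samples suffice.
estimate = (sum(f_low(x) for x in xs_low) / n_low
            + sum(f_high(x) - f_low(x) for x in xs_high) / n_high)
```

For uniform x on [0, 1] the exact value is 1/3 + 0.005, and the estimator reaches it with only 100 "expensive" evaluations.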
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Sanetrik, Mark D.; Chwalowski, Pawel; Connolly, Joseph; Kopasakis, George
2016-01-01
An overview of recent applications of the FUN3D CFD code to computational aeroelastic, sonic boom, and aeropropulsoservoelasticity (APSE) analyses of a low-boom supersonic configuration is presented. The overview includes details of the computational models developed, including multiple unstructured CFD grids suitable for aeroelastic and sonic boom analyses. In addition, aeroelastic Reduced-Order Models (ROMs) are generated and used to rapidly compute the aeroelastic response and flutter boundaries at multiple flight conditions.
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
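The reliability rule has a compact caricature: if the surrogate value plus or minus its error bound cannot cross the event threshold, the sample is reliable; otherwise the high-fidelity model is evaluated. Here f, s, and the uniform bound E are toy stand-ins for the paper's adjoint-based error estimates.

```python
import math

# Trust the surrogate wherever its error bound cannot flip the event
# indicator; fall back to the expensive model only near the limit state.

def f(x):                      # "high-fidelity" model (toy)
    return x * x

def s(x):                      # surrogate; |f(x) - s(x)| <= E by construction
    return x * x + 0.04 * math.sin(10.0 * x)

E = 0.04                       # pointwise error bound on the surrogate
q = 0.25                       # event of interest: f(x) > q

samples = [i / 999.0 for i in range(1000)]
n_hifi = 0
hits = 0
for x in samples:
    if s(x) - E > q:           # reliably inside the event
        hits += 1
    elif s(x) + E < q:         # reliably outside the event
        pass
    else:                      # unreliable: bound straddles q -> evaluate f
        n_hifi += 1
        if f(x) > q:
            hits += 1

prob = hits / len(samples)
exact = sum(1 for x in samples if f(x) > q) / len(samples)
```

As the paper proves for its estimates, the hybrid count matches the all-high-fidelity count exactly, while the expensive model is called only near the limit state.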
NASA Astrophysics Data System (ADS)
Matha, Denis; Sandner, Frank; Schlipf, David
2014-12-01
Design verification of wind turbines is performed by simulation of design load cases (DLC) defined in the IEC 61400-1 and -3 standards or equivalent guidelines. Due to the resulting large number of necessary load simulations, here a method is presented to reduce the computational effort for DLC simulations significantly by introducing a reduced nonlinear model and simplified hydro- and aerodynamics. The advantage of the formulation is that the nonlinear ODE system only contains basic mathematical operations and no iterations or internal loops, which makes it very computationally efficient. Global turbine extreme and fatigue loads such as rotor thrust, tower base bending moment and mooring line tension, as well as platform motions, are outputs of the model. They can be used to identify critical and less critical load situations to be then analysed with a higher fidelity tool and so speed up the design process. Results from these reduced model DLC simulations are presented and compared to higher fidelity models. Results in frequency and time domain as well as extreme and fatigue load predictions demonstrate that good agreement between the reduced and advanced model is achieved, allowing less critical DLC simulations to be efficiently excluded and the most critical subset of cases to be identified for a given design. Additionally, the model is applicable for brute force optimization of floater control system parameters.
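The "basic operations only" character of such a reduced model can be sketched with a single fore-aft tower degree of freedom driven by a prescribed thrust history: each time step is pure arithmetic with no internal iterations. The parameter values and thrust signal below are hypothetical, not the paper's model.

```python
import math

# One fore-aft tower DOF stepped with semi-implicit Euler; every update is
# basic arithmetic. All parameters and the thrust history are hypothetical.
m, c, k = 5.0e5, 1.0e4, 2.0e6       # modal mass [kg], damping, stiffness
h = 90.0                            # hub height [m]
dt, t_end = 0.01, 60.0

x, v = 0.0, 0.0                     # tower-top deflection [m] and velocity
max_moment = 0.0
t = 0.0
while t < t_end:
    thrust = 6.0e5 + 1.0e5 * math.sin(0.6 * t)   # toy rotor thrust [N]
    a = (thrust - c * v - k * x) / m             # Newton's second law
    v += dt * a                                  # semi-implicit Euler step
    x += dt * v
    max_moment = max(max_moment, h * thrust)     # tower-base moment [N m]
    t += dt
```

Extreme-load proxies such as max_moment are exactly the kind of output used to rank DLCs before re-running only the critical ones in a higher-fidelity tool.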
Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics
Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul
2015-03-11
Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's "Lagrangian ingredients" (the Riemannian metric, the potential-energy function, the dissipation function, and the external force) and subsequently derives reduced-order equations of motion by applying the (forced) Euler-Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.
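One structural property mentioned above, preservation of symmetry and positive definiteness under projection, is easy to see in a small sketch: the Galerkin-reduced matrix Phi^T K Phi inherits both properties from K. The matrices below are hypothetical.

```python
# Galerkin projection preserves symmetry and positive definiteness:
# K_r = Phi^T K Phi inherits both from K. Pure-Python linear algebra.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Symmetric positive-definite "stiffness" of a 4-mass spring chain.
K = [[ 2.0, -1.0,  0.0,  0.0],
     [-1.0,  2.0, -1.0,  0.0],
     [ 0.0, -1.0,  2.0, -1.0],
     [ 0.0,  0.0, -1.0,  2.0]]

# A 4x2 reduced basis (columns need not be orthonormal for the property).
Phi = [[1.0, 0.0],
       [1.0, 1.0],
       [0.0, 1.0],
       [0.0, 1.0]]

K_r = matmul(transpose(Phi), matmul(K, Phi))   # reduced 2x2 operator
```

The point of the paper's gappy-POD and sparsification techniques is to keep exactly this inheritance when the reduced matrices are themselves approximated for speed.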
ERIC Educational Resources Information Center
Carleton, Renee E.
2012-01-01
Computer-aided learning (CAL) is used increasingly to teach anatomy in post-secondary programs. Studies show that augmentation of traditional cadaver dissection and model examination by CAL can be associated with positive student learning outcomes. In order to reduce costs associated with the purchase of skeletons and models and to encourage study…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified to account for over 97% of the total computational time using GPROF. Addition of a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects over 90% cache miss rates. With this loop rewritten, a similar speedup as in the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network or multiple compute nodes on a cluster as slaves using parallel PEST to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
NASA Astrophysics Data System (ADS)
Cousquer, Yohann; Pryet, Alexandre; Atteia, Olivier; Ferré, Ty P. A.; Delbart, Célestine; Valois, Rémi; Dupuy, Alain
2018-03-01
The inverse problem of groundwater models is often ill-posed and model parameters are likely to be poorly constrained. Identifiability is improved if diverse data types are used for parameter estimation. However, some models, including detailed solute transport models, are further limited by prohibitive computation times. This often precludes the use of concentration data for parameter estimation, even if those data are available. In the case of surface water-groundwater (SW-GW) models, concentration data can provide SW-GW mixing ratios, which efficiently constrain the estimate of exchange flow, but are rarely used. We propose to reduce computational limits by simulating SW-GW exchange at a sink (well or drain) based on particle tracking under steady state flow conditions. Particle tracking is used to simulate advective transport. A comparison between the particle tracking surrogate model and an advective-dispersive model shows that dispersion can often be neglected when the mixing ratio is computed for a sink, allowing for use of the particle tracking surrogate model. The surrogate model was implemented to solve the inverse problem for a real SW-GW transport problem with heads and concentrations combined in a weighted hybrid objective function. The resulting inversion showed markedly reduced uncertainty in the transmissivity field compared to calibration on head data alone.
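The particle-tracking surrogate can be caricatured as follows: back-track particles from the sink and classify each by origin; the surface-water mixing ratio is the fraction arriving from the river. The geometry and the dividing-streamline rule below are entirely hypothetical; a real implementation traces particles through a computed steady-state velocity field.

```python
import random

# Caricature of the advective (dispersion-free) mixing-ratio computation at a
# sink. The rule "offsets |y0| < 8 m originate at the river" is a toy stand-in
# for actual back-tracking through a flow solution.
random.seed(1)

capture_half_width = 20.0   # transverse extent of the well capture zone [m]
river_half_width = 8.0      # toy dividing streamline: these offsets reach the river

n_particles = 5000
from_sw = 0
for _ in range(n_particles):
    y0 = random.uniform(-capture_half_width, capture_half_width)
    if abs(y0) < river_half_width:   # origin classification (toy rule)
        from_sw += 1

mixing_ratio = from_sw / n_particles   # surface-water fraction at the well
```

Because only advection is simulated, each particle carries a deterministic origin label, which is what makes the surrogate so much cheaper than an advective-dispersive transport solve.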
Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Forest M.; Bochev, Pavel B.; Cameron-Smith, Philip J.
The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) Project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.
Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments
NASA Astrophysics Data System (ADS)
Munsky, Brian; Shepherd, Douglas
2014-03-01
Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.
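The full-distribution idea can be illustrated on the simplest mRNA birth-death model, whose chemical master equation is tractable by direct time stepping on a truncated state space and whose exact steady state is Poisson. This is a generic sketch with hypothetical rates, not the Interleukin 1-alpha model of the abstract.

```python
import math

# Chemical master equation for constitutive mRNA birth-death (transcription
# rate k, degradation rate g), stepped to steady state on a truncated state
# space and compared with the exact Poisson(k/g) distribution.
k, g = 10.0, 1.0
N = 60                       # truncation: copy numbers 0..N-1
dt, t_end = 1.0e-3, 20.0

p = [0.0] * N
p[0] = 1.0                   # start with zero transcripts
t = 0.0
while t < t_end:
    dp = [0.0] * N
    for n in range(N):
        out = g * n + (k if n + 1 < N else 0.0)   # no birth out of top state
        dp[n] -= out * p[n]
        if n + 1 < N:
            dp[n] += g * (n + 1) * p[n + 1]       # degradation from n+1
        if n >= 1:
            dp[n] += k * p[n - 1]                 # birth from n-1
    p = [p[n] + dt * dp[n] for n in range(N)]
    t += dt

mean_mrna = sum(n * p[n] for n in range(N))
poisson = [math.exp(-k / g) * (k / g) ** n / math.factorial(n) for n in range(N)]
```

A moment closure would report only the mean and variance; the master-equation solution gives the whole copy-number distribution, which is what single-molecule counting data can be fit against directly.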
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs.
In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
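The greedy selection step described above can be sketched on a toy parameterized family: repeatedly add the training sample that the current reduced basis approximates worst, then orthonormalize it into the basis. The parameterized "solutions" here are hypothetical vectors, not GMsFE basis functions.

```python
import math

# Greedy sampling over a training set: pick the worst-approximated sample,
# Gram-Schmidt it into the basis, repeat. All quantities are toy stand-ins.

def solution(mu):
    # Hypothetical parameterized field sampled at 20 points.
    return [math.sin(mu * i / 19.0) + mu for i in range(20)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def projection_error(u, basis):
    # Norm of the residual after projecting u onto an orthonormal basis.
    r = list(u)
    for q in basis:
        c = dot(r, q)
        r = [x - c * y for x, y in zip(r, q)]
    return math.sqrt(dot(r, r))

training = [0.5 + 0.25 * j for j in range(21)]   # parameter training grid
basis = []
for _ in range(4):
    errs = [projection_error(solution(mu), basis) for mu in training]
    best = max(range(len(training)), key=lambda j: errs[j])
    u = solution(training[best])
    for q in basis:                               # Gram-Schmidt step
        c = dot(u, q)
        u = [x - c * y for x, y in zip(u, q)]
    norm = math.sqrt(dot(u, u))
    basis.append([x / norm for x in u])

worst = max(projection_error(solution(mu), basis) for mu in training)
```

In the paper the error indicator comes from cross-validation or POD energy rather than exact projection error, but the greedy loop has this same shape.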
Extreme learning machine for reduced order modeling of turbulent geophysical flows.
San, Omer; Maulik, Romit
2018-04-01
We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.
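An extreme learning machine itself is compact enough to sketch: hidden-layer weights are drawn at random and frozen, and only the output weights are fit by regularized least squares. The 1D toy target below merely stands in for the eddy-viscosity closure; all sizes, seeds, and rates are hypothetical.

```python
import math
import random

# Minimal ELM: random frozen tanh hidden layer; output weights solved from
# the ridge-regularized normal equations by Gaussian elimination.
random.seed(2)
H = 10                                           # hidden neurons

w = [random.uniform(-2, 2) for _ in range(H)]    # random input weights (frozen)
b = [random.uniform(-1, 1) for _ in range(H)]    # random biases (frozen)

def hidden(x):
    return [math.tanh(w[j] * x + b[j]) for j in range(H)]

xs = [i / 49.0 for i in range(50)]
ys = [math.sin(3.0 * x) for x in xs]             # toy target function

G = [hidden(x) for x in xs]                      # hidden-layer outputs
lam = 1e-6                                       # ridge regularization
A = [[sum(G[i][r] * G[i][c] for i in range(len(xs))) + (lam if r == c else 0.0)
      for c in range(H)] for r in range(H)]
rhs = [sum(G[i][r] * ys[i] for i in range(len(xs))) for r in range(H)]

# Gaussian elimination with partial pivoting, then back-substitution.
for col in range(H):
    piv = max(range(col, H), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(col + 1, H):
        fac = A[r][col] / A[col][col]
        for c in range(col, H):
            A[r][c] -= fac * A[col][c]
        rhs[r] -= fac * rhs[col]
beta = [0.0] * H
for r in range(H - 1, -1, -1):
    beta[r] = (rhs[r] - sum(A[r][c] * beta[c] for c in range(r + 1, H))) / A[r][r]

def predict(x):
    h = hidden(x)
    return sum(beta[j] * h[j] for j in range(H))

train_err = max(abs(predict(x) - y) for x, y in zip(xs, ys))
```

Because training reduces to one linear solve, the closure can be refit cheaply inside a forward simulation, which is what makes the ELM attractive for dynamic eddy-viscosity estimation.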
NASA Astrophysics Data System (ADS)
Latypov, Marat I.; Kalidindi, Surya R.
2017-10-01
There is a critical need for the development and verification of practically useful multiscale modeling strategies for simulating the mechanical response of multiphase metallic materials with heterogeneous microstructures. In this contribution, we present data-driven reduced order models for effective yield strength and strain partitioning in such microstructures. These models are built employing the recently developed framework of Materials Knowledge Systems that employ 2-point spatial correlations (or 2-point statistics) for the quantification of the heterostructures and principal component analyses for their low-dimensional representation. The models are calibrated to a large collection of finite element (FE) results obtained for a diverse range of microstructures with various sizes, shapes, and volume fractions of the phases. The performance of the models is evaluated by comparing the predictions of yield strength and strain partitioning in two-phase materials with the corresponding predictions from a classical self-consistent model as well as results of full-field FE simulations. The reduced-order models developed in this work show an excellent combination of accuracy and computational efficiency, and therefore present an important advance towards computationally efficient microstructure-sensitive multiscale modeling frameworks.
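The 2-point statistics underlying the Materials Knowledge Systems framework can be sketched in one dimension: the autocorrelation S2(r) is the probability that two points separated by r both fall in a given phase. The line-scan microstructure below is hypothetical; in the paper these correlations are computed in 2D/3D and then compressed by principal component analysis.

```python
# Periodic 2-point autocorrelation of phase 1 for a binary 1D "microstructure".
micro = [1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1]   # hypothetical two-phase line scan
N = len(micro)

def s2(r):
    # Probability that two points a distance r apart are both phase 1.
    return sum(micro[i] * micro[(i + r) % N] for i in range(N)) / N

vf = sum(micro) / N                 # volume fraction of phase 1
stats = [s2(r) for r in range(N)]   # the feature vector fed to PCA
```

Two sanity properties hold by construction: S2(0) equals the volume fraction, and periodic autocorrelations are symmetric, S2(r) = S2(N - r).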
Computational Methods for HSCT-Inlet Controls/CFD Interdisciplinary Research
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Melcher, Kevin J.; Chicatelli, Amy K.; Hartley, Tom T.; Chung, Joongkee
1994-01-01
A program aimed at facilitating the use of computational fluid dynamics (CFD) simulations by the controls discipline is presented. The objective is to reduce the development time and cost for propulsion system controls by using CFD simulations to obtain high-fidelity system models for control design and as numerical test beds for control system testing and validation. An interdisciplinary team has been formed to develop analytical and computational tools in three discipline areas: controls, CFD, and computational technology. The controls effort has focused on specifying requirements for an interface between the controls specialist and CFD simulations and a new method for extracting linear, reduced-order control models from CFD simulations. Existing CFD codes are being modified to permit time accurate execution and provide realistic boundary conditions for controls studies. Parallel processing and distributed computing techniques, along with existing system integration software, are being used to reduce CFD execution times and to support the development of an integrated analysis/design system. This paper describes: the initial application for the technology being developed, the high speed civil transport (HSCT) inlet control problem; activities being pursued in each discipline area; and a prototype analysis/design system in place for interactive operation and visualization of a time-accurate HSCT-inlet simulation.
A microphysical parameterization of aqSOA and sulfate formation in clouds
NASA Astrophysics Data System (ADS)
McVay, Renee; Ervens, Barbara
2017-07-01
Sulfate and secondary organic aerosol (cloud aqSOA) can be chemically formed in cloud water. Model implementation of these processes represents a computational burden due to the large number of microphysical and chemical parameters. Chemical mechanisms have been condensed by reducing the number of chemical parameters. Here an alternative is presented to reduce the number of microphysical parameters (number of cloud droplet size classes). In-cloud mass formation is surface and volume dependent due to surface-limited oxidant uptake and/or size-dependent pH. Box and parcel model simulations show that using the effective cloud droplet diameter (proportional to total volume-to-surface ratio) reproduces sulfate and aqSOA formation rates within ≤30% as compared to full droplet distributions; other single diameters lead to much greater deviations. This single-class approach reduces computing time significantly and can be included in models when total liquid water content and effective diameter are available.
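The single-diameter reduction rests on the identity d_eff = 6 V_tot / S_tot: the effective diameter is proportional to the total volume-to-surface ratio, so one droplet class of that size mimics both surface- and volume-limited chemistry. A minimal check on a hypothetical droplet distribution:

```python
import math

# Effective diameter of a droplet population: d_eff = sum(n d^3) / sum(n d^2),
# which equals 6 * (total volume) / (total surface area). Sizes are hypothetical.
diameters = [8.0, 12.0, 16.0, 20.0, 28.0]   # droplet diameters [um]
counts = [120, 300, 250, 90, 15]            # number of droplets per class

sum_d2 = sum(n * d ** 2 for n, d in zip(counts, diameters))
sum_d3 = sum(n * d ** 3 for n, d in zip(counts, diameters))
d_eff = sum_d3 / sum_d2

# Verify against per-droplet volume V = pi/6 d^3 and surface S = pi d^2.
v_tot = sum(n * math.pi / 6.0 * d ** 3 for n, d in zip(counts, diameters))
s_tot = sum(n * math.pi * d ** 2 for n, d in zip(counts, diameters))
```

Replacing the full size distribution by this single class is exactly the microphysical reduction the abstract reports to be accurate within about 30%.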
Simplifying silicon burning: Application of quasi-equilibrium to (alpha) network nucleosynthesis
NASA Technical Reports Server (NTRS)
Hix, W. R.; Thielemann, F.-K.; Khokhlov, A. M.; Wheeler, J. C.
1997-01-01
While the need for accurate calculation of nucleosynthesis and the resulting rate of thermonuclear energy release within hydrodynamic models of stars and supernovae is clear, the computational expense of these nucleosynthesis calculations often forces a compromise in accuracy to reduce the computational cost. To redress this trade-off of accuracy for speed, the authors present an improved nuclear network which takes advantage of quasi-equilibrium in order to reduce the number of independent nuclei, and hence the computational cost of nucleosynthesis, without significant reduction in accuracy. In this paper they discuss the first application of this method, the further reduction in size of the minimal alpha network. The resultant QSE-reduced alpha network is twice as fast as the conventional alpha network it replaces and requires the tracking of half as many abundance variables, while accurately estimating the rate of energy generation. Such reduction in cost is particularly necessary for future generations of multi-dimensional models for supernovae.
Improved dense trajectories for action recognition based on random projection and Fisher vectors
NASA Astrophysics Data System (ADS)
Ai, Shihui; Lu, Tongwei; Xiong, Yudian
2018-03-01
As an important application of intelligent monitoring systems, action recognition in video has become a very important research area of computer vision. In order to improve the accuracy of action recognition in video with improved dense trajectories, an advanced encoding method is introduced that combines Fisher Vectors with Random Projection. The method reduces the characteristic trajectory by projecting the high-dimensional trajectory descriptor into a low-dimensional subspace via Random Projection, based on defining and analyzing a Gaussian mixture model. A GMM-FV hybrid model is introduced to encode the trajectory feature vector and reduce its dimension. The computational complexity is reduced by Random Projection, which shrinks the Fisher coding vector. Finally, a linear SVM is used as the classifier to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with some existing algorithms, the results showed that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
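The dimension-reduction step relies on random projections approximately preserving pairwise distances (the Johnson-Lindenstrauss property). Below is a sketch with hypothetical dimensions, not the paper's GMM-FV pipeline.

```python
import math
import random

# Gaussian random projection from D to d dimensions; with variance 1/d the
# projection approximately preserves Euclidean distances. Dimensions are
# hypothetical.
random.seed(3)
D, d = 1000, 200                       # original and reduced dimensions

R = [[random.gauss(0.0, 1.0 / math.sqrt(d)) for _ in range(D)] for _ in range(d)]

def project(x):
    return [sum(R[i][j] * x[j] for j in range(D)) for i in range(d)]

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

x = [random.gauss(0, 1) for _ in range(D)]
y = [random.gauss(0, 1) for _ in range(D)]

# Ratio of projected to original distance; should be close to 1.
ratio = dist(project(x), project(y)) / dist(x, y)
```

Because distances survive the projection, a linear SVM trained on the reduced vectors sees nearly the same geometry as on the full Fisher Vectors, at a fraction of the cost.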
Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models
2015-03-16
Global sensitivity analysis of the reduced-order coagulation model was performed using the variance-based method of Sobol to estimate which parameters controlled the performance of the reduced-order model; the uncertainty reported for each sensitivity value was the maximum estimated by the Sobol method.
ERIC Educational Resources Information Center
Mazzotti, Valerie L.; Wood, Charles L.; Test, David W.; Fowler, Catherine H.
2012-01-01
Instruction about goal setting can increase students' self-determination and reduce problem behavior. Computer-assisted instruction could offer teachers another format for teaching goal setting and self-determination. This study used a multiple probes across participants design to examine the effects of computer-assisted instruction on students'…
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity, as real-time and streaming data, in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses a distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. MRPack uses the available computing resources by dynamically managing task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. A performance study of the proposed system shows that it is time-, I/O-, and memory-efficient compared to default MapReduce. The proposed approach reduces the execution time by 200% with an approximately 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID: 26305223
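The multi-key idea — running several related algorithms in one job by tagging intermediate keys with an algorithm identifier — can be sketched with an in-memory map/reduce. The two toy algorithms here (word count and word-length sum) are illustrative stand-ins, not MRPack's actual workloads:

```python
from collections import defaultdict

# Two hypothetical "related algorithms" sharing one pass over the data:
# algorithm 1 counts words, algorithm 2 sums word lengths.
def mapper(record):
    for word in record.split():
        yield ("count", word), 1           # composite key: (algo_id, word)
        yield ("length", word), len(word)

def reducer(key, values):
    return key, sum(values)

def run_job(records):
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)      # "shuffle" on the composite key
    return dict(reducer(k, v) for k, v in groups.items())

out = run_job(["a bb a", "bb ccc"])
print(out[("count", "bb")])    # 2
print(out[("length", "ccc")])  # 3
```

Because the algorithm identifier is part of the shuffle key, both result sets come out of a single pass over the input, which is the essence of packing related algorithms into one MR job.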
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Dynamic Average-Value Modeling of Doubly-Fed Induction Generator Wind Energy Conversion Systems
NASA Astrophysics Data System (ADS)
Shahab, Azin
In a Doubly-fed Induction Generator (DFIG) wind energy conversion system, the rotor of a wound-rotor induction generator is connected to the grid via a partial-scale ac/ac power electronic converter that controls the rotor frequency and speed. In this research, detailed models of the DFIG wind energy conversion system with Sinusoidal Pulse-Width Modulation (SPWM) and Optimal Pulse-Width Modulation (OPWM) schemes for the power electronic converter are developed in PSCAD/EMTDC. As computer simulation using the detailed models tends to be computationally intensive and time-consuming, and is sometimes impractical in terms of speed, two modified approaches (switching-function modeling and average-value modeling) are proposed to reduce the simulation execution time. The results demonstrate that the two proposed approaches reduce the simulation execution time while the simulation results remain close to those obtained using the detailed model simulation.
Software Framework for Advanced Power Plant Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
John Widmann; Sorin Munteanu; Aseem Jain
2010-08-01
This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.
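The ROM "training" workflow — fit a cheap surrogate to a handful of expensive CFD runs, then evaluate the surrogate inside the flowsheet — can be sketched as follows. The flow/temperature relationship is a made-up stand-in (chosen polynomial here so the fit is exact; real CFD data would leave a residual):

```python
import numpy as np

# Hypothetical CFD samples: outlet temperature vs. inlet flow rate.
flow = np.linspace(1.0, 5.0, 9)               # design points (CFD runs)
temp = 300.0 + 5.0 * flow + 2.0 * flow**2     # stand-in for CFD results

# "Train" a cheap ROM: least-squares quadratic fit to the CFD samples.
coeffs = np.polyfit(flow, temp, deg=2)
rom = np.poly1d(coeffs)

# The ROM can now be evaluated inside a flowsheet at negligible cost.
print(round(float(rom(2.0)), 1))  # 318.0
```

The expensive CFD runs happen once, offline; every subsequent flowsheet evaluation is a polynomial lookup.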
Cloud computing task scheduling strategy based on improved differential evolution algorithm
NASA Astrophysics Data System (ADS)
Ge, Junwei; He, Qian; Fang, Yiqiu
2017-04-01
To optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to balance global and local search ability. Performance tests were carried out on the CloudSim simulation platform. The experimental results show that the improved differential evolution algorithm can reduce task execution time and save user cost, effectively realizing optimal scheduling of cloud computing tasks.
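A minimal DE/rand/1/bin loop shows the mutation-crossover-selection cycle the abstract builds on. The paper's dynamic selection and mutation strategies are omitted, and the smooth toy cost function below stands in for a real scheduling fitness:

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin loop: F scales mutation, CR controls crossover."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True        # at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = fitness(trial)
            if f_trial <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], float(fit.min())

# Toy stand-in for a scheduling cost: squared distance to an ideal allocation.
best, cost = differential_evolution(lambda x: float(np.sum(x**2)),
                                    bounds=[(-5, 5)] * 4)
print(f"best cost: {cost:.2e}")
```

In the cloud-scheduling setting, each individual would encode a task-to-VM assignment and the fitness would measure makespan and cost rather than a sphere function.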
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258× speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID: 22191916
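The Map/Reduce split of the simulation can be sketched in plain Python: map tasks generate photon histories independently (here with a deliberately crude absorption/escape model, not MC321's physics), and a reduce task scores the absorbed weight:

```python
import random

def map_task(n_photons, mu_a=0.1, mu_s=0.9, seed=0):
    """Map: simulate photon histories; emit (shell, absorbed-weight) pairs."""
    rng = random.Random(seed)
    albedo = mu_s / (mu_a + mu_s)
    emitted = []
    for _ in range(n_photons):
        weight, depth = 1.0, 0
        while weight > 1e-4:
            depth += 1
            absorbed = weight * (1.0 - albedo)   # partial absorption per step
            emitted.append((min(depth, 9), absorbed))
            weight -= absorbed
            if rng.random() < 0.1:               # crude escape probability
                break
    return emitted

def reduce_task(pairs):
    """Reduce: sum absorbed weight by shell index."""
    totals = {}
    for shell, w in pairs:
        totals[shell] = totals.get(shell, 0.0) + w
    return totals

# Two "map tasks" (e.g. separate cluster nodes), one reduce.
pairs = map_task(500, seed=1) + map_task(500, seed=2)
totals = reduce_task(pairs)
print(sum(totals.values()) <= 1000.0)  # True: absorbed cannot exceed emitted
```

Because map tasks share nothing, a failed node's photon batch can simply be re-run, which is where the fault tolerance reported in the abstract comes from.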
Modeling Macro- and Micro-Scale Turbulent Mixing and Chemistry in Engine Exhaust Plumes
NASA Technical Reports Server (NTRS)
Menon, Suresh
1998-01-01
Simulation of turbulent mixing and chemical processes in the near-field plume and plume-vortex regimes has been successfully carried out recently using a reduced gas-phase kinetics mechanism, which substantially decreased the computational cost. A detailed mechanism including gas-phase HOx, NOx, and SOx chemistry between the aircraft exhaust and the ambient air in near-field aircraft plumes is compiled. A reduced mechanism capturing the major chemical pathways is developed. Predictions by the reduced mechanism are found to be in good agreement with those by the detailed mechanism. With the reduced chemistry, CPU time is reduced by a factor of more than 3.5 for the near-field plume modeling. Distributions of major chemical species are obtained and analyzed. The computed sensitivities of major species with respect to each reaction step are deduced to identify the dominant gas-phase kinetic reaction pathways in the jet plume. Both the near-field plume and the plume-vortex regimes were investigated using advanced mixing models. In the near field, a stand-alone mixing model was used to investigate the impact of turbulent mixing on the micro- and macro-scale mixing processes using a reduced reaction kinetics model. The plume-vortex regime was simulated using a large-eddy simulation model. The vortex plumes behind Boeing 737 and 747 aircraft were simulated along with the relevant kinetics. Many features of the computed flow field show reasonable agreement with data. The entrainment of the engine plumes into the wing-tip vortices, and also the partial detrainment of the plume, were numerically captured. The impact of fluid mechanics on the chemical processes was also studied. Results show that there are significant differences between spatial and temporal simulations, especially in the predicted SO3 concentrations. This has important implications for the prediction of sulfuric acid aerosols in the wake and may partly explain the discrepancy between past numerical studies (that employed parabolic or temporal approximations) and the measured data. Finally, to address the major uncertainty in near-field plume modeling related to the plume processing of sulfur compounds, an advanced model was developed to evaluate its impact on the chemical processes in the near wake. A comprehensive aerosol model is developed and coupled with chemical kinetics and the axisymmetric turbulent jet flow models. The integrated model is used to simulate microphysical processes in the near-field jet plume, including sulfuric acid and water binary homogeneous nucleation, coagulation, non-equilibrium heteromolecular condensation, and sulfur-induced soot activation. The formation and evolution of aerosols are computed and analyzed. The computed results show that a large number of ultra-fine (0.3-0.6 nm in radius) volatile H2SO4-H2O embryos are generated in the near-field plume. These embryos further grow in size by self-coagulation and condensation. Soot particles can be activated by both heterogeneous nucleation and scavenging of H2SO4-H2O aerosols. These activated soot particles can serve as water condensation nuclei for contrail formation. Conditions under which ice contrails can form behind aircraft are studied. The sensitivities of the threshold temperature for contrail formation with respect to aircraft propulsion efficiency, relative humidity, and ambient pressure are evaluated. The computed aerosol properties for different extents of fuel sulfur conversion to S(VI) (SO3 and H2SO4) in the engine are examined, and the results are found to be sensitive to this conversion fraction.
LORAN-C LATITUDE-LONGITUDE CONVERSION AT SEA: PROGRAMMING CONSIDERATIONS.
McCullough, James R.; Irwin, Barry J.; Bowles, Robert M.
1985-01-01
Comparisons are made of the precision of arc-length routines as computer precision is reduced. Overland propagation delays are discussed and illustrated with observations from offshore New England. Present practice of LORAN-C error budget modeling is then reviewed with the suggestion that additional terms be considered in future modeling. Finally, some detailed numeric examples are provided to help with new computer program checkout.
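The precision comparison the abstract describes can be reproduced with a haversine arc-length routine evaluated in double and single precision. The coordinates are illustrative (roughly offshore New England), and the spherical-Earth formula is a simplification of real LORAN-C practice:

```python
import numpy as np

def haversine(lat1, lon1, lat2, lon2, dtype):
    """Great-circle arc length (km) on a spherical Earth, at a chosen precision."""
    lat1, lon1, lat2, lon2 = (dtype(np.radians(v))
                              for v in (lat1, lon1, lat2, lon2))
    r = dtype(6371.0)  # mean Earth radius, km
    a = (np.sin((lat2 - lat1) / dtype(2)) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / dtype(2)) ** 2)
    return dtype(2) * r * np.arcsin(np.sqrt(a))

# Two points one degree of longitude apart at 41.5 N, double vs. single precision.
d64 = haversine(41.5, -70.7, 41.5, -69.7, np.float64)
d32 = haversine(41.5, -70.7, 41.5, -69.7, np.float32)
print(abs(float(d64) - float(d32)) < 0.01)  # True: error well under 10 m here
```

For short baselines the single-precision answer is close, but the subtraction of nearby angles is exactly where reduced precision erodes accuracy, which motivates the careful programming considerations discussed in the report.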
NASA Astrophysics Data System (ADS)
Michaelis, A.; Wang, W.; Melton, F. S.; Votava, P.; Milesi, C.; Hashimoto, H.; Nemani, R. R.; Hiatt, S. H.
2009-12-01
As the length and diversity of the global earth observation data records grow, modeling and analyses of biospheric conditions increasingly require multiple terabytes of data from a diversity of models and sensors. With network bandwidth beginning to flatten, transmission of these data from centralized data archives presents an increasing challenge, and the costs associated with local storage and management of data and compute resources are often significant for individual research and application development efforts. Sharing community-valued intermediary data sets, results, and codes from individual efforts with others who are not in directly funded collaboration can also be challenging with respect to time, cost, and expertise. We propose a modeling, data, and knowledge center that houses NASA satellite data, climate data, and ancillary data, where a focused community may come together to share modeling and analysis codes, scientific results, knowledge, and expertise on a centralized platform, named the Ecosystem Modeling Center (EMC). With the recent development of new technologies for secure hardware virtualization, an opportunity exists to create specific modeling, analysis, and compute environments that are customizable, archivable, and transferable. Allowing users to instantiate such environments on large compute infrastructures that are directly connected to large data archives may significantly reduce the costs and time associated with scientific efforts by freeing users from redundantly retrieving and integrating data sets and building modeling and analysis codes. The EMC platform also lets users benefit indirectly from others' expertise through prefabricated compute environments, potentially reducing study ramp-up times.
The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce
NASA Astrophysics Data System (ADS)
Chen, Xi; Zhou, Liqing
2015-12-01
With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive remote sensing imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster that parallelizes the MeanShift segmentation algorithm under the MapReduce model. This not only preserves segmentation quality but also improves segmentation speed, better meeting real-time requirements. The parallel MeanShift segmentation algorithm based on MapReduce is therefore of practical significance and value.
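The core MeanShift step each worker applies — shifting a point toward the local density mode under a Gaussian kernel — looks like this. The 2-D point clouds stand in for per-tile pixel features, and the MapReduce distribution layer is omitted:

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth=1.0, iters=50):
    """Shift a query point to the local density mode: the flat analogue of
    the MeanShift update each mapper would apply within its image tile."""
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

rng = np.random.default_rng(0)
# Two well-separated "pixel feature" clusters.
pts = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(5, 0.3, (100, 2))])
mode = mean_shift_mode(pts, start=[4.5, 4.5])
print(np.allclose(mode, [5, 5], atol=0.3))  # True: converges to the near mode
```

In the MapReduce version, each map task would run this update over the pixels of its tile and the reduce phase would merge modes across tile boundaries.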
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor)
2007-01-01
A reconstruction technique for reducing the computational burden of 3D image processing, wherein the reconstruction procedure comprises an inverse model and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transform to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.
Dynamic reduction of dimensions of a document vector in a document search and retrieval system
Jiao, Yu; Potok, Thomas E.
2011-05-03
The method and system of the invention involve processing each new document (20) coming into the system into a document vector (16), and creating a document vector with reduced dimensionality (17) for comparison with the data model (15) without recomputing the data model (15). These operations are carried out by a first computer (11) while a second computer (12) updates the data model (18), which can comprise an initial large group of documents (19) and is premised on computing an initial data model (13, 14, 15) to provide a reference point for determining document vectors from documents processed from the data stream (20).
Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail
2016-11-14
We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase, which can be efficiently implemented in numerical simulations utilizing the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by the pulse stretching factor. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
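The essence of the trick can be demonstrated numerically: multiplying off the known quadratic phase leaves a nearly transform-limited pulse whose spectrum fits a grid smaller by roughly the stretching factor. The parameters below are arbitrary, and this sketch only shows the bandwidth collapse, not the full two-transform algorithm:

```python
import numpy as np

n = 4096
t = np.linspace(-200.0, 200.0, n)
chirp_rate = 0.1                      # known quadratic-phase coefficient
envelope = np.exp(-(t / 50.0) ** 2)   # stretched pulse envelope
pulse = envelope * np.exp(1j * chirp_rate * t**2)

def rms_bandwidth(signal):
    """RMS spectral width of a sampled signal."""
    spec = np.abs(np.fft.fftshift(np.fft.fft(signal))) ** 2
    f = np.fft.fftshift(np.fft.fftfreq(n, d=t[1] - t[0]))
    mean = (f * spec).sum() / spec.sum()
    return np.sqrt(((f - mean) ** 2 * spec).sum() / spec.sum())

# Removing the known quadratic phase collapses the occupied bandwidth,
# so a much smaller frequency grid suffices for the remaining transform.
dechirped = pulse * np.exp(-1j * chirp_rate * t**2)
ratio = rms_bandwidth(pulse) / rms_bandwidth(dechirped)
print(ratio > 100)  # True: the chirp dominates the occupied bandwidth
```

The published algorithm handles the removed phase analytically, so the final spectrum is still exact; this sketch only quantifies why the grid can shrink.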
Lee, David; Park, Sang-Hoon; Lee, Sang-Goog
2017-10-07
In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.
Zhan, Yijian; Meschke, Günther
2017-07-08
The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense.
Zhan, Yijian
2017-01-01
PMID: 28773130
Computationally efficient methods for modelling laser wakefield acceleration in the blowout regime
NASA Astrophysics Data System (ADS)
Cowan, B. M.; Kalmykov, S. Y.; Beck, A.; Davoine, X.; Bunkers, K.; Lifschitz, A. F.; Lefebvre, E.; Bruhwiler, D. L.; Shadwick, B. A.; Umstadter, D. P.
2012-08-01
Electron self-injection and acceleration until dephasing in the blowout regime is studied for a set of initial conditions typical of recent experiments with 100-terawatt-class lasers. Two different approaches to computationally efficient, fully explicit, 3D particle-in-cell modelling are examined. First, the Cartesian code vorpal (Nieter, C. and Cary, J. R. 2004 VORPAL: a versatile plasma simulation code. J. Comput. Phys. 196, 538) using a perfect-dispersion electromagnetic solver precisely describes the laser pulse and bubble dynamics, taking advantage of coarser resolution in the propagation direction, with a proportionally larger time step. Using third-order splines for macroparticles helps suppress the sampling noise while keeping the usage of computational resources modest. The second way to reduce the simulation load is using reduced-geometry codes. In our case, the quasi-cylindrical code calder-circ (Lifschitz, A. F. et al. 2009 Particle-in-cell modelling of laser-plasma interaction using Fourier decomposition. J. Comput. Phys. 228(5), 1803-1814) uses decomposition of fields and currents into a set of poloidal modes, while the macroparticles move in the Cartesian 3D space. Cylindrical symmetry of the interaction allows using just two modes, reducing the computational load to roughly that of a planar Cartesian simulation while preserving the 3D nature of the interaction. This significant economy of resources allows using fine resolution in the direction of propagation and a small time step, making numerical dispersion vanishingly small, together with a large number of particles per cell, enabling good particle statistics. Quantitative agreement of two simulations indicates that these are free of numerical artefacts. Both approaches thus retrieve the physically correct evolution of the plasma bubble, recovering the intrinsic connection of electron self-injection to the nonlinear optical evolution of the driver.
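The poloidal-mode decomposition used by calder-circ can be illustrated with a discrete Fourier transform over the azimuthal angle. The sample field below is synthetic, built from exactly the m = 0 and m = 1 content that dominates a linearly polarized, cylindrically symmetric interaction:

```python
import numpy as np

# Field sampled on a ring in the azimuthal (poloidal) angle theta.
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
field = 2.0 + 0.5 * np.cos(theta) - 0.3 * np.sin(theta)  # m = 0 and m = 1 only

modes = np.fft.fft(field) / theta.size          # complex mode amplitudes f_m
kept = np.zeros_like(modes)
kept[[0, 1, -1]] = modes[[0, 1, -1]]            # keep m = 0 and m = +/-1

reconstructed = np.fft.ifft(kept * theta.size).real
print(np.allclose(reconstructed, field))        # True: two modes suffice
```

Evolving only these few radial mode profiles instead of a full 3D grid is what reduces the computational load to roughly that of a planar simulation, as the abstract notes.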
Modeling and Simulation of Explosively Driven Electromechanical Devices
NASA Astrophysics Data System (ADS)
Demmie, Paul N.
2002-07-01
Components that store electrical energy in ferroelectric materials and produce currents when their permittivity is explosively reduced are used in a variety of applications. The modeling and simulation of such devices is a challenging problem since one has to represent the coupled physics of detonation, shock propagation, and electromagnetic field generation. The high fidelity modeling and simulation of complicated electromechanical devices was not feasible prior to having the Accelerated Strategic Computing Initiative (ASCI) computers and the ASCI developed codes at Sandia National Laboratories (SNL). The EMMA computer code is used to model such devices and simulate their operation. In this paper, I discuss the capabilities of the EMMA code for the modeling and simulation of one such electromechanical device, a slim-loop ferroelectric (SFE) firing set.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huff, Kathryn D.
Component-level and system-level abstraction of detailed computational geologic repository models has resulted in four rapid computational models of hydrologic radionuclide transport at varying levels of detail. Those models are described, as is their implementation in Cyder, a software library of interchangeable radionuclide transport models appropriate for representing natural and engineered barrier components of generic geology repository concepts. A proof-of-principle demonstration was also conducted in which these models were used to represent the natural and engineered barrier components of a repository concept in a reducing, homogeneous, generic geology. This base case demonstrates integration of the Cyder open source library with the Cyclus computational fuel cycle systems analysis platform to facilitate calculation of repository performance metrics with respect to fuel cycle choices. (authors)
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.; Hanford, Amanda D.; Shepherd, Micah R.; Campbell, Robert L.; Smith, Edward C.
2010-01-01
A computational approach for simulating the effects of rolling element and journal bearings on the vibration and sound transmission through gearboxes has been demonstrated. The approach, using ARL/Penn State's CHAMP methodology, uses Component Mode Synthesis of housing and shafting modes computed using Finite Element (FE) models to allow for rapid adjustment of bearing impedances in gearbox models. The approach has been demonstrated on NASA GRC's test gearbox with three different bearing configurations: in the first condition, traditional rolling element (ball and roller) bearings were installed, and in the second and third conditions, the traditional bearings were replaced with journal and wave bearings (wave bearings are journal bearings with a multi-lobed wave pattern on the bearing surface). A methodology for computing the stiffnesses and damping in journal and wave bearings has been presented, and demonstrated for the journal and wave bearings used in the NASA GRC test gearbox. The FE model of the gearbox, along with the rolling element bearing coupling impedances, was analyzed to compute dynamic transfer functions between forces applied to the meshing gears and accelerations on the gearbox housing, including several locations near the bearings. A Boundary Element (BE) acoustic model was used to compute the sound radiated by the gearbox. Measurements of the Gear Mesh Frequency (GMF) tones were made by NASA GRC at several operational speeds for the rolling element and journal bearing gearbox configurations. Both the measurements and the CHAMP numerical model indicate that the journal bearings reduce vibration and noise for the second harmonic of the gear meshing tones, but show no clear benefit to using journal bearings to reduce the amplitudes of the fundamental gear meshing tones. Also, the numerical model shows that the gearbox vibrations and radiated sound are similar for journal and wave bearing configurations.
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
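The batch-processing pattern — many independent stochastic realizations distributed over workers — can be sketched with Python's standard thread pool. The lognormal-conductivity "model run" below is a stand-in for a real MODFLOW execution:

```python
import concurrent.futures
import random
import statistics

def run_realization(seed):
    """Stand-in for one MODFLOW-style stochastic model run: sample a
    heterogeneous conductivity field and return a scalar summary."""
    rng = random.Random(seed)
    conductivities = [rng.lognormvariate(0.0, 1.0) for _ in range(100)]
    return statistics.fmean(conductivities)

# Batch-process 500 realizations across worker threads, as the distributed
# framework in the paper does across cluster nodes.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(run_realization, range(500)))

print(len(results))  # 500
```

Because each realization is seeded independently and shares no state, the ensemble is embarrassingly parallel, which is exactly why distributed batch execution yields the near-linear speedups reported.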
An analysis of the viscous flow through a compact radial turbine by the average passage approach
NASA Technical Reports Server (NTRS)
Heidmann, James D.; Beach, Timothy A.
1990-01-01
A steady, three-dimensional viscous average passage computer code is used to analyze the flow through a compact radial turbine rotor. The code models the flow as spatially periodic from blade passage to blade passage. Results from the code using varying computational models are compared with each other and with experimental data. These results include blade surface velocities and pressures, exit vorticity and entropy contour plots, shroud pressures, and spanwise exit total temperature, total pressure, and swirl distributions. The three computational models used are inviscid, viscous with no blade clearance, and viscous with blade clearance. It is found that modeling viscous effects improves correlation with experimental data, while modeling hub and tip clearances further improves some comparisons. Experimental results such as a local maximum of exit swirl, reduced exit total pressures at the walls, and exit total temperature magnitudes are explained by interpretation of the flow physics and computed secondary flows. Trends in the computed blade loading diagrams are similarly explained.
Mathematical and computational aspects of nonuniform frictional slip modeling
NASA Astrophysics Data System (ADS)
Gorbatikh, Larissa
2004-07-01
A mechanics-based model of non-uniform frictional sliding is studied from the mathematical and computational analysis point of view. This problem is of key importance for a number of applications (particularly geomechanical ones), where material interfaces undergo partial frictional sliding under compression and shear. We show that the problem reduces to Dirichlet's problem for monotonic loading and to Riemann's problem for cyclic loading. The problem may look like a traditional crack interaction problem; however, it is confounded by the fact that the locations of the n sliding intervals are not known. They are to be determined from the condition on the stress intensity factors: KII = 0 at the ends of the sliding zones. Computationally, this reduces to solving a system of 2n coupled nonlinear algebraic equations involving singular integrals with unknown limits of integration.
Niu, Ran; Skliar, Mikhail
2012-07-01
In this paper, we develop and validate a method to identify computationally efficient site- and patient-specific models of ultrasound thermal therapies from MR thermal images. The models of the specific absorption rate of the transduced energy and the temperature response of the therapy target are identified in the reduced basis of proper orthogonal decomposition of thermal images, acquired in response to a mild thermal test excitation. The method permits dynamic reidentification of the treatment models during the therapy by recursively utilizing newly acquired images. Such adaptation is particularly important during high-temperature therapies, which are known to substantially and rapidly change tissue properties and blood perfusion. The developed theory was validated for the case of focused ultrasound heating of a tissue phantom. The experimental and computational results indicate that the developed approach produces accurate low-dimensional treatment models despite temporal and spatial noise in MR images and a slow image acquisition rate.
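Identification in a reduced POD basis can be sketched with an SVD of a snapshot matrix. The synthetic "thermal images" below are built from two spatial modes, so two POD vectors capture them exactly; real MR images would need more modes plus noise handling:

```python
import numpy as np

# Synthetic thermal-image snapshots: each column is one flattened image.
x = np.linspace(0, 1, 200)
snapshots = np.column_stack([np.sin(np.pi * x) * np.exp(-0.1 * k)
                             + 0.5 * np.sin(2 * np.pi * x) * np.exp(-0.3 * k)
                             for k in range(30)])

# POD basis = leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :2]                 # the synthetic dynamics live in 2 modes

# Model identification then proceeds on 2 reduced coordinates per frame
# instead of 200 pixels per frame.
coords = basis.T @ snapshots     # shape (2, 30)
reconstruction = basis @ coords
err = np.linalg.norm(reconstruction - snapshots) / np.linalg.norm(snapshots)
print(err < 1e-10)  # True: two POD modes capture these snapshots exactly
```

Recursive reidentification during therapy amounts to appending new image columns and updating the low-dimensional coordinates, which is cheap compared with working at full pixel resolution.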
Dynamics of an HBV Model with Drug Resistance Under Intermittent Antiviral Therapy
NASA Astrophysics Data System (ADS)
Zhang, Ben-Gong; Tanaka, Gouhei; Aihara, Kazuyuki; Honda, Masao; Kaneko, Shuichi; Chen, Luonan
2015-06-01
This paper studies the dynamics of the hepatitis B virus (HBV) model and the therapy regimens of HBV disease. First, we propose a new mathematical model of HBV with drug resistance, and then analyze its qualitative and dynamical properties. Combining the clinical data and theoretical analysis, we demonstrate that our model is biologically plausible and also computationally viable. Second, we demonstrate that the intermittent antiviral therapy regimen is one of the possible strategies to treat this kind of complex disease. There are two main advantages of this regimen, i.e. it not only may delay the development of drug resistance, but also may reduce the duration of on-treatment time compared with the long-term continuous medication. Moreover, such an intermittent antiviral therapy can reduce the adverse side effects. Our theoretical model and computational results provide qualitative insight into the progression of HBV, and also a possible new therapy for HBV disease.
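A toy on/off-therapy simulation illustrates the claimed mechanism: if viral decay while on drug outweighs rebound while off, an intermittent regimen still drives the load down with less on-treatment time. The one-variable equation and all rate constants below are hypothetical and far simpler than the paper's HBV model:

```python
def viral_load(eff_on, period_on, period_off, days=200, dt=0.01):
    """Euler-integrate a toy viral-load equation dV/dt = (r*(1-eps) - c)*V,
    where drug efficacy eps switches between eff_on and 0 on a fixed cycle."""
    r, c = 0.6, 0.5           # hypothetical growth and clearance rates (1/day)
    v, t = 1.0, 0.0
    while t < days:
        phase = t % (period_on + period_off)
        eps = eff_on if phase < period_on else 0.0
        v += dt * (r * (1.0 - eps) - c) * v
        t += dt
    return v

untreated = viral_load(eff_on=0.0, period_on=0, period_off=30)
intermittent = viral_load(eff_on=0.5, period_on=20, period_off=10)
print(intermittent < untreated)  # True: net decay outweighs off-drug rebound
```

With these illustrative rates, 20 days on (net decay) followed by 10 days off (mild rebound) still shrinks the load each cycle, while using only two-thirds of the on-treatment time of continuous therapy.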
Development of a High Resolution 3D Infant Stomach Model for Surgical Planning
NASA Astrophysics Data System (ADS)
Chaudry, Qaiser; Raza, S. Hussain; Lee, Jeonggyu; Xu, Yan; Wulkan, Mark; Wang, May D.
Medical surgical procedures have not changed much during the past century due to the lack of an accurate, low-cost workbench for testing any new improvement. Increasingly cheap and powerful computer technologies have made computer-based surgery planning and training feasible. In our work, we have developed an accurate 3D stomach model, which aims to improve the surgical procedures that treat infant pediatric and neonatal gastro-esophageal reflux disease (GERD). We generate the 3D infant stomach model based on in vivo computed tomography (CT) scans of an infant. CT is a widely used clinical imaging modality that is inexpensive but of low spatial resolution. To improve the model accuracy, we use the high-resolution Visible Human Project (VHP) data in model building. Next, we add soft muscle material properties to make the 3D model deformable. Then we use virtual reality techniques, such as haptic devices, to make the 3D stomach model deform in response to touch forces. This accurate 3D stomach model provides a workbench for testing new GERD treatment surgical procedures. It has the potential to reduce or eliminate the extensive cost associated with animal testing when improving any surgical procedure and, ultimately, to reduce the risk associated with infant GERD surgery.
High-Performance Signal Detection for Adverse Drug Events using MapReduce Paradigm.
Fan, Kai; Sun, Xingzhi; Tao, Ying; Xu, Linhao; Wang, Chen; Mao, Xianling; Peng, Bo; Pan, Yue
2010-11-13
Post-marketing pharmacovigilance is important for public health, as many Adverse Drug Events (ADEs) are unknown when the drugs are approved for marketing. However, due to the large number of reported drugs and drug combinations, detecting ADE signals by mining these reports is becoming a challenging task in terms of computational complexity. Recently, a parallel programming model, MapReduce, has been introduced by Google to support large-scale data-intensive applications. In this study, we proposed a MapReduce-based algorithm for a common ADE detection approach, the Proportional Reporting Ratio (PRR), and tested it in mining spontaneous ADE reports from the FDA. The purpose is to investigate the possibility of using the MapReduce principle to speed up biomedical data-mining tasks, using this pharmacovigilance case as one specific example. The results demonstrated that the MapReduce programming model could improve the performance of a common signal detection algorithm for pharmacovigilance in a distributed computation environment at approximately linear speedup rates.
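A single-process emulation of the map and reduce phases for PRR can look as follows. The toy reports and the use of a `Counter` to stand in for the shuffle/reduce stage are illustrative; a real deployment would distribute the mapper and reducer across a Hadoop-style cluster:

```python
from collections import Counter

# Toy spontaneous reports: one (drug, event) pair per report (illustrative).
reports = [
    ("drugA", "nausea"), ("drugA", "nausea"), ("drugA", "rash"),
    ("drugB", "nausea"), ("drugB", "rash"), ("drugB", "rash"),
    ("drugB", "rash"), ("drugC", "nausea"),
]

# Map phase: each record emits keys for the 2x2 contingency-table margins.
def mapper(record):
    drug, event = record
    yield (drug, event), 1   # drug-event cell
    yield (drug, "*"), 1     # per-drug total
    yield ("*", event), 1    # per-event total
    yield ("*", "*"), 1      # grand total

# Reduce phase: sum counts per key (Counter stands in for shuffle+reduce).
counts = Counter()
for rec in reports:
    for key, val in mapper(rec):
        counts[key] += val

def prr(drug, event):
    """Proportional Reporting Ratio from the reduced counts."""
    a = counts[(drug, event)]                 # drug with event
    b = counts[(drug, "*")] - a               # drug without event
    c = counts[("*", event)] - a              # other drugs with event
    d = counts[("*", "*")] - a - b - c        # other drugs without event
    return (a / (a + b)) / (c / (c + d))

print(round(prr("drugA", "nausea"), 3))  # (2/3) / (2/5) = 1.667
```

Because each report's emissions are independent and the reduction is a commutative sum, the map phase parallelizes trivially over report partitions, which is what yields the near-linear speedup.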
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
A regression analysis was performed on tabular aerodynamic data to provide a representative aerodynamic model for coefficient estimation. This also reduced the storage requirements for the 'normal' model used to check out the estimation algorithms. The results of the regression analyses are presented. The computer routines for the filter portion of the estimation algorithm were developed, and the SRB predictive program was brought up on the computer. For the filter program, approximately 54 routines were developed; the routines were highly subsegmented to facilitate overlaying program segments within the partitioned storage space on the computer.
A Status Review of the Commercial Supersonic Technology (CST) Aeroservoelasticity (ASE) Project
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Sanetrik, Mark D.; Chwalowski, Pawel; Funk, Christy; Keller, Donald F.; Ringertz, Ulf
2016-01-01
An overview of recent progress regarding the computational aeroelastic and aeroservoelastic (ASE) analyses of a low-boom supersonic configuration is presented. The overview includes details of the computational models developed to date with a focus on unstructured CFD grids, computational aeroelastic analyses, sonic boom propagation studies that include static aeroelastic effects, and gust loads analyses. In addition, flutter boundaries using aeroelastic Reduced-Order Models (ROMs) are presented at various Mach numbers of interest. Details regarding a collaboration with the Royal Institute of Technology (KTH, Stockholm, Sweden) to design, fabricate, and test a full-span aeroelastic wind-tunnel model are also presented.
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Improving the Aircraft Design Process Using Web-based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.
2003-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
NASA Astrophysics Data System (ADS)
Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.
2017-08-01
Nowadays the optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define a simulation model correctly and rationally. The article deals with the choice of grid and computational domain parameters for optimization of centrifugal compressor impellers using computational fluid dynamics. Searching for and applying optimal parameters of the grid model, the computational domain, and the solver settings allows engineers to carry out high-accuracy modelling and to use computational capability effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure impeller at ψT=0.71 and a low-pressure impeller at ψT=0.43. The following parameters of the computational model were considered: the location of the inlet and outlet boundaries, type of mesh topology, size of mesh, and mesh parameter y+. Results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction of the computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time by up to 4 times. Besides, it is established that some parameters have a major impact on the result of the modelling.
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1989-01-01
An explicit flow solver, applicable to the hierarchy of model equations ranging from Euler to full Navier-Stokes, is combined with several techniques designed to reduce computational expense. The computational domain consists of local grid refinements embedded in a global coarse mesh, where the locations of these refinements are defined by the physics of the flow. Flow characteristics are also used to determine which set of model equations is appropriate for solution in each region, thereby reducing not only the number of grid points at which the solution must be obtained, but also the computational effort required to get that solution. Acceleration to steady-state is achieved by applying multigrid on each of the subgrids, regardless of the particular model equations being solved. Since each of these components is explicit, advantage can readily be taken of the vector- and parallel-processing capabilities of machines such as the Cray X-MP and Cray-2.
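The multigrid component can be illustrated in isolation with a minimal two-grid cycle for a 1D Poisson model problem; the smoother, transfer operators, and grid sizes below are textbook choices, not details from the paper:

```python
import numpy as np

# Model problem: -u'' = f on [0,1], u(0)=u(1)=0, second-order differences.
def smooth(u, f, h, sweeps=3, omega=2.0/3.0):
    """Weighted-Jacobi smoothing: damps the high-frequency error."""
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1] - 2.0*u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0*u[1:-1] - u[:-2] - u[2:]) / (h*h)
    return r

def coarse_solve(rc, hc):
    """Exact solve of the (tridiagonal) coarse-grid correction equation."""
    m = len(rc) - 2
    Ac = (2.0*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (hc*hc)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])
    return ec

def two_grid(u, f, h):
    u = smooth(u, f, h)                               # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros((len(u) + 1) // 2)                  # full-weighting restriction
    rc[1:-1] = 0.25*r[1:-3:2] + 0.5*r[2:-2:2] + 0.25*r[3:-1:2]
    ec = coarse_solve(rc, 2.0*h)
    idx = np.arange(len(u))
    u += np.interp(idx, idx[::2], ec)                 # linear prolongation
    return smooth(u, f, h)                            # post-smoothing

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                      # exact solution sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
print(err < 1e-2)
```

In a full multigrid hierarchy the coarse solve is itself a recursive two-grid cycle; here it is solved directly to keep the sketch short.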
An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems
NASA Astrophysics Data System (ADS)
Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.
2016-04-01
Model Order Reduction (MOR) methods are employed in many fields of Engineering in order to reduce the processing time of complex computational simulations. A usual approach to achieve this is the application of Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities in a dynamical system are present and this technique is employed several times along the simulation, it can be very inefficient. This work proposes a new adaptive strategy, which ensures low computational cost and small error to deal with this problem. This work also presents a new method to select snapshots, named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through a better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while keeping the quality of the model, without the use of the Proper Orthogonal Decomposition (POD).
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997)] algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!
Utilizing Direct Numerical Simulations of Transition and Turbulence in Design Optimization
NASA Technical Reports Server (NTRS)
Rai, Man M.
2015-01-01
Design optimization methods that use the Reynolds-averaged Navier-Stokes equations with the associated turbulence and transition models, or other model-based forms of the governing equations, may result in aerodynamic designs with actual performance levels that are noticeably different from the expected values because of the complexity of modeling turbulence/transition accurately in certain flows. Flow phenomena such as wake-blade interaction and trailing edge vortex shedding in turbines and compressors (examples of such flows) may require a computational approach that is free of transition/turbulence models, such as direct numerical simulations (DNS), for the underlying physics to be computed accurately. Here we explore the possibility of utilizing DNS data in designing a turbine blade section. The ultimate objective is to substantially reduce differences between predicted performance metrics and those obtained in reality. The redesign of a typical low-pressure turbine blade section with the goal of reducing total pressure loss in the row is provided as an example. The basic ideas presented here are of course just as applicable elsewhere in aerodynamic shape optimization as long as the computational costs are not excessive.
NASA Astrophysics Data System (ADS)
Young, Frederic; Siegel, Edward
Cook-Levin theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!
Procedures for Geometric Data Reduction in Solid Log Modelling
Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt
1995-01-01
One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.
NASA Astrophysics Data System (ADS)
Siade, Adam J.; Hall, Joel; Karelse, Robert N.
2017-11-01
Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. There exist a number of data-worth and experimental design strategies developed for this purpose. However, these studies often ignore issues relevant to real-world groundwater models, such as computational expense, existing observation data, and high parameter dimension. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification. Additionally, a heuristic methodology based on the concept of the greedy algorithm is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
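The greedy design idea can be sketched generically: starting from a prior information matrix, repeatedly add the candidate observation that most increases the D-optimality objective (the log-determinant of the information matrix). The synthetic sensitivity matrix and all sizes below are invented; the actual study builds on PEST-family tools and posterior parameter samples:

```python
import numpy as np

rng = np.random.default_rng(1)
n_params, n_candidates, n_pick = 4, 30, 5
H = rng.normal(size=(n_candidates, n_params))   # candidate obs. sensitivities
prior = 1e-3 * np.eye(n_params)                 # weak prior information

chosen, info = [], prior.copy()
for _ in range(n_pick):
    best, best_obj = None, -np.inf
    for j in range(n_candidates):
        if j in chosen:
            continue
        # D-optimality objective if candidate j were added.
        obj = np.linalg.slogdet(info + np.outer(H[j], H[j]))[1]
        if obj > best_obj:
            best, best_obj = j, obj
    chosen.append(best)
    info += np.outer(H[best], H[best])          # commit the best candidate

print(sorted(chosen))
```

A robust (minimax) variant would evaluate each candidate's objective over many posterior parameter samples and greedily optimize the worst case instead of a single matrix.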
Reducing usage of the computational resources by event driven approach to model predictive control
NASA Astrophysics Data System (ADS)
Misik, Stefan; Bradac, Zdenek; Cela, Arben
2017-08-01
This paper deals with real-time, optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computational-resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.
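One way such an event-driven modification can look (a sketch, not the authors' formulation): solve an unconstrained MPC problem for a double integrator, then re-solve only when the measured state drifts from the predicted one by more than a threshold, or when the planned input sequence runs out. The horizon, threshold, and noise level are invented:

```python
import numpy as np

# Double integrator, discretized with a 0.1 s step.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 10                      # prediction horizon

# Unconstrained MPC: stacked prediction x = F x0 + G u, minimize
# ||F x0 + G u||^2 + rho ||u||^2  ->  a linear least-squares problem in u.
F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        G[2*k:2*k+2, j:j+1] = np.linalg.matrix_power(A, k - j) @ B
rho = 0.01

def mpc_solve(x0):
    M = np.vstack([G, np.sqrt(rho) * np.eye(N)])
    b = np.concatenate([-F @ x0, np.zeros(N)])
    return np.linalg.lstsq(M, b, rcond=None)[0]   # planned input sequence

# Event-driven loop: re-solve only when the state drifts from the plan.
rng = np.random.default_rng(3)
x = np.array([1.0, 0.0])
plan, x_pred, solves = mpc_solve(x), x.copy(), 1
threshold, k = 0.05, 0
for step in range(100):
    if np.linalg.norm(x - x_pred) > threshold or k >= N:
        plan, k, solves = mpc_solve(x), 0, solves + 1   # the costly event
        x_pred = x.copy()
    u = plan[k]
    x_pred = A @ x_pred + (B * u).ravel()               # nominal prediction
    x = A @ x + (B * u).ravel() + rng.normal(0.0, 0.002, size=2)
    k += 1
print(solves)
```

With a small disturbance the optimization runs only a handful of times over 100 steps instead of at every step, which is the computational saving the event-driven approach targets.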
Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models
NASA Astrophysics Data System (ADS)
Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan
2017-04-01
Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced-order modeling tools, which couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced-order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are a part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
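The BSS core can be sketched with plain Lee-Seung multiplicative-update NMF on a synthetic mixture (the authors couple NMF with customized k-means and their MADS tooling; everything below is an illustrative stand-in):

```python
import numpy as np

# Multiplicative-update NMF (Lee & Seung): V ~ W H with W, H >= 0.
def nmf(V, k, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, (n, k))
    H = rng.uniform(0.1, 1.0, (k, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update source signatures
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update mixing weights
    return W, H

# Synthetic "hydrogeochemical" data: 3 hidden sources, mixed nonnegatively.
rng = np.random.default_rng(42)
sources = rng.uniform(0.0, 1.0, (3, 10))    # source signatures (10 analytes)
mixing = rng.uniform(0.0, 1.0, (50, 3))     # mixing ratios at 50 wells
V = mixing @ sources

W, H = nmf(V, k=3)
err = float(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
print(f"relative reconstruction error: {err:.4f}")
```

In the groundwater application the rows of `H` would play the role of candidate contamination-source signatures, which the customized clustering step then groups across repeated NMF runs.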
Reducing the Complexity of an Agent-Based Local Heroin Market Model
Heard, Daniel; Bobashev, Georgiy V.; Morris, Robert J.
2014-01-01
This project explores techniques for reducing the complexity of an agent-based model (ABM). The analysis involved a model developed from the ethnographic research of Dr. Lee Hoffer in the Larimer area heroin market, which involved drug users, drug sellers, homeless individuals and police. The authors used statistical techniques to create a reduced version of the original model which maintained simulation fidelity while reducing computational complexity. This involved identifying key summary quantities of individual customer behavior as well as overall market activity and replacing some agents with probability distributions and regressions. The model was then extended to allow external market interventions in the form of police busts. Extensions of this research perspective, as well as its strengths and limitations, are discussed. PMID:25025132
Heuristic Modeling for TRMM Lifetime Predictions
NASA Technical Reports Server (NTRS)
Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.
1996-01-01
Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial-off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful to such missions as the Tropical Rainfall Measuring Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
IoGET: Internet of Geophysical and Environmental Things
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudunuru, Maruti Kumar
The objective of this project is to provide novel and fast reduced-order models for onboard computation at sensor nodes for real-time analysis. The approach will require that LANL perform high-fidelity numerical simulations, construct simple reduced-order models (ROMs) using machine learning and signal processing algorithms, and use real-time data analysis for ROMs and compressive sensing at sensor nodes.
Practical Use of Computationally Frugal Model Analysis Methods
Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...
2015-03-21
Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2016-12-01
There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales, and guide the development of multiple-objective water predictive systems. Distributed raster-based models suffer from an overrepresentation of topography, leading to wasted computational effort that increases uncertainty due to greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolutions where spatial variability is low and fine resolutions where required. Model uncertainty is reduced by lessening the number of computational elements required relative to high-resolution rasters. CHM uses a novel multi-objective approach for unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open source libraries and high performance computing paradigms to provide a framework that allows for integrating current state-of-the-art process algorithms. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization, can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring melt period at a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements and preserved the spatial heterogeneity of predictions.
A comparative study of serial and parallel aeroelastic computations of wings
NASA Technical Reports Server (NTRS)
Byun, Chansup; Guruswamy, Guru P.
1994-01-01
A procedure for computing the aeroelasticity of wings on parallel multiple-instruction, multiple-data (MIMD) computers is presented. In this procedure, fluids are modeled using Euler equations, and structures are modeled using modal or finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. In the present parallel procedure, each computational domain is scalable. A parallel integration scheme is used to compute aeroelastic responses by solving fluid and structural equations concurrently. The computational efficiency issues of parallel integration of both fluid and structural equations are investigated in detail. This approach, which reduces the total computational time by a factor of almost 2, is demonstrated for a typical aeroelastic wing by using various numbers of processors on the Intel iPSC/860.
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are also numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
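The idea of solving the damped normal equations in a Krylov subspace, with one basis recycled across damping-parameter trials, can be sketched on a tiny curve-fitting problem. For this two-parameter toy the subspace ends up spanning the full parameter space, so it only illustrates the mechanics; the problem, sizes, and constants are invented:

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal basis of the Krylov subspace K_m(A, b)."""
    Q = [b / np.linalg.norm(b)]
    for _ in range(m - 1):
        w = A @ Q[-1]
        for q in Q:                      # Gram-Schmidt orthogonalization
            w -= (q @ w) * q
        nw = np.linalg.norm(w)
        if nw < 1e-12:
            break
        Q.append(w / nw)
    return np.stack(Q, axis=1)

def lm_krylov(r, J, x0, m=5, iters=20):
    """Levenberg-Marquardt; the damped normal equations are projected onto
    one Krylov subspace, recycled for every damping-parameter trial."""
    x, lam = x0.copy(), 1.0
    for _ in range(iters):
        Jx, rx = J(x), r(x)
        A, g = Jx.T @ Jx, -Jx.T @ rx
        if np.linalg.norm(g) < 1e-14:
            break
        Q = arnoldi(A, g, min(m, len(x)))
        for _ in range(30):              # damping-parameter search
            # Solve (A + lam I) d = g restricted to span(Q).
            small = Q.T @ A @ Q + lam * np.eye(Q.shape[1])
            d = Q @ np.linalg.solve(small, Q.T @ g)
            if np.sum(r(x + d) ** 2) < np.sum(rx ** 2):
                x, lam = x + d, lam * 0.5
                break
            lam *= 2.0
    return x

# Toy problem: fit y = exp(-t/tau) + c for (tau, c); data are noise-free.
t = np.linspace(0.0, 3.0, 40)
y = np.exp(-t / 1.5) + 0.2

def resid(p):
    tau, c = p
    return np.exp(-t / tau) + c - y

def jac(p):
    tau, c = p
    return np.stack([np.exp(-t / tau) * t / tau**2, np.ones_like(t)], axis=1)

p = lm_krylov(resid, jac, np.array([1.0, 0.0]))
print(np.round(p, 3))   # should recover roughly (1.5, 0.2)
```

The saving in the real method comes from the subspace dimension `m` being far smaller than the parameter count, so each damping trial solves an `m x m` system instead of a full-size one.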
Zhai, Peng-Wang; Hu, Yongxiang; Trepte, Charles R; Lucker, Patricia L
2009-02-16
A vector radiative transfer model has been developed for coupled atmosphere and ocean systems based on the Successive Order of Scattering (SOS) Method. The emphasis of this study is to make the model easy to use and computationally efficient. This model provides the full Stokes vector at arbitrary locations which can be conveniently specified by users. The model is capable of tracking and labeling different sources of the photons that are measured, e.g. water-leaving radiances and reflected skylight. This model also has the capability to separate fluorescence from multi-scattered sunlight. The delta-fit technique has been adopted to reduce the computational time associated with the strongly forward-peaked scattering phase matrices. The exponential-linear approximation has been used to reduce the number of discretized vertical layers while maintaining accuracy. This model is developed to serve the remote sensing community in harvesting physical parameters from multi-platform, multi-sensor measurements that target different components of the atmosphere-ocean system.
Reduced Stress Tensor and Dissipation and the Transport of Lamb Vector
NASA Technical Reports Server (NTRS)
Wu, Jie-Zhi; Zhou, Ye; Wu, Jian-Ming
1996-01-01
We develop a methodology to ensure that the stress tensor, regardless of its number of independent components, can be reduced to an exactly equivalent one which has the same number of independent components as the surface force. It is applicable to the momentum balance if the shear viscosity is constant. A direct application of this method to the energy balance also leads to a reduction of the dissipation rate of kinetic energy. Following this procedure, significant saving in analysis and computation may be achieved. For turbulent flows, this strategy immediately implies that a given Reynolds stress model can always be replaced by a reduced one before putting it into computation. Furthermore, we show how the modeling of Reynolds stress tensor can be reduced to that of the mean turbulent Lamb vector alone, which is much simpler. As a first step of this alternative modeling development, we derive the governing equations for the Lamb vector and its square. These equations form a basis of new second-order closure schemes and, we believe, should be favorably compared to that of traditional Reynolds stress transport equation.
Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.
Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata
2008-09-01
A novel algorithmic scheme for numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the speed of computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency. (c) 2008 Wiley Periodicals, Inc.
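The "adjustable relaxation coefficient" idea can be sketched on a 1D Laplace stand-in (not the 3D PNP system): the relaxation factor is nudged up while the residual keeps falling and cut back when the iteration stalls. All constants below are invented for illustration:

```python
import numpy as np

def solve_adaptive(n=64, tol=1e-8, max_iter=20000):
    """SOR for the discrete 1D Laplace equation with an adaptively
    tuned relaxation coefficient."""
    u = np.zeros(n)
    u[0] = 1.0                       # Dirichlet boundary values (u[-1] = 0)
    omega, prev_res = 1.0, np.inf
    for it in range(max_iter):
        # One successive-over-relaxation sweep.
        for i in range(1, n - 1):
            u[i] += omega * (0.5 * (u[i - 1] + u[i + 1]) - u[i])
        res = np.max(np.abs(u[1:-1] - 0.5 * (u[:-2] + u[2:])))
        if res < tol:
            return u, it + 1
        # Adjustable relaxation coefficient: accelerate while the residual
        # keeps falling, back off when progress stalls.
        omega = (min(omega * 1.01, 1.9) if res < prev_res
                 else max(omega * 0.7, 0.5))
        prev_res = res
    return u, max_iter

u, iters = solve_adaptive()
exact = np.linspace(1.0, 0.0, len(u))   # discrete harmonic = linear profile
print(iters)
```

Keeping the coefficient inside (0, 2) guarantees SOR convergence, so the adaptation only tunes the speed, which mirrors the abstract's claim that the improvements are independent of the detailed physical model.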
NASA Technical Reports Server (NTRS)
Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo
2015-01-01
Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.
Continuum Gyrokinetic Simulations of Turbulence in a Helical Model SOL with NSTX-type parameters
NASA Astrophysics Data System (ADS)
Hammett, G. W.; Shi, E. L.; Hakim, A.; Stoltzfus-Dueck, T.
2017-10-01
We have developed the Gkeyll code to carry out 3D2V full-F gyrokinetic simulations of electrostatic plasma turbulence in open-field-line geometries, using special versions of discontinuous-Galerkin algorithms to help with the computational challenges of the edge region. (Higher-order algorithms can also be helpful for exascale computing as they reduce the ratio of communications to computations.) Our first simulations with straight field lines were done for LAPD-type cases. Here we extend this to a helical model of an SOL plasma and show results for NSTX-type parameters. These simulations include the basic elements of a scrape-off layer: bad-curvature/interchange drive of instabilities, narrow sources to model plasma leaking from the core, and parallel losses with model sheath boundary conditions (our model allows currents to flow in and out of the walls). The formation of blobs is observed. By reducing the strength of the poloidal magnetic field, the heat flux at the divertor plate is observed to broaden. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental changes. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (on the order of 100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
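The variogram idea at the heart of VARS can be sketched in a few lines: for each input factor, measure the mean squared change in model response when only that factor is perturbed by a lag h. This is only the core directional-variogram quantity, not the full VARS framework, and the three-factor test function below is hypothetical.

```python
import random

def directional_variogram(f, base_points, dim, h):
    """gamma_dim(h) = 0.5 * E[(f(x + h*e_dim) - f(x))^2]: mean squared response
    change when only factor `dim` is perturbed by lag h -- the core quantity
    from which VARS builds its sensitivity metrics."""
    diffs = []
    for x in base_points:
        xp = list(x)
        xp[dim] += h
        diffs.append((f(xp) - f(x)) ** 2)
    return 0.5 * sum(diffs) / len(diffs)

# Hypothetical 3-factor test function: factor 0 dominates, factor 2 is inert.
f = lambda x: 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

random.seed(0)
pts = [[random.random() for _ in range(3)] for _ in range(200)]
gammas = [directional_variogram(f, pts, d, h=0.1) for d in range(3)]
# For this linear f, gamma_d(h) = 0.5 * (a_d * h)^2, so factor 0 ranks first.
```

Ranking factors by such variograms lets unimportant inputs (here, factor 2) be fixed before calibration, which is exactly the dimensionality reduction the abstract describes.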
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
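The error-controlled selection idea can be sketched as a greedy filter: keep a snapshot only when its projection error onto the current basis exceeds a tolerance, so redundant snapshots never consume memory. Gram-Schmidt below is a simple stand-in for the paper's single-pass incremental SVD, and the snapshot data is synthetic.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

def select_snapshots(snapshots, tol):
    """Greedy stand-in for adaptive snapshot selection: a snapshot is kept only
    when its projection error onto the current basis exceeds `tol`
    (Gram-Schmidt plays the role of the incremental SVD here)."""
    basis = []
    for s in snapshots:
        r = list(s)
        for q in basis:                       # remove components already spanned
            c = dot(r, q)
            r = [ri - c * qi for ri, qi in zip(r, q)]
        err = norm(r)
        if err > tol:                         # snapshot adds new information: keep it
            basis.append([ri / err for ri in r])
        # else: well approximated by the current basis, so discard (saves memory)
    return basis

# Ten synthetic snapshots that all live in a 2D subspace of R^4: only 2 survive.
snaps = [[i * 1.0, 2.0 * i, 0.5 * j, j * 1.0] for i, j in
         [(1, 0), (0, 1), (1, 1), (2, 3), (3, 1), (5, 2), (1, 4), (2, 2), (4, 4), (7, 1)]]
basis = select_snapshots(snaps, tol=1e-10)
```

As the abstract notes, tightening `tol` toward zero recovers all independent snapshots (and hence the full-order information), while a looser tolerance caps offline memory costs.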
A Computing Infrastructure for Supporting Climate Studies
NASA Astrophysics Data System (ADS)
Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team
2011-12-01
Climate change is one of the major challenges facing our planet in the 21st century. Scientists build many models to simulate the past and predict climate change over the next decades or century. Most of the models are at a low resolution, with some targeting high resolution in linkage to practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA effort on the Climate@Home project to build a supercomputer based on advanced computing technologies, such as cloud computing, grid computing, and others. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage the potential spike access to the centralized components, such as the grid computing server for dispatching model runs and collecting results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations, and to collect simulation results and contribution statistics; 3) a portal serves as the entry point for the project to provide management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can follow Twitter and Facebook to get the latest news about the project. This paper will introduce the latest progress of the project and demonstrate the operational system during the AGU fall meeting. It will also discuss how this technology can become a trailblazer for other climate studies and relevant sciences. It will share how the challenges in computation and software integration were solved.
PROM7: 1D modeler of solar filaments or prominences
NASA Astrophysics Data System (ADS)
Gouttebroze, P.
2018-05-01
PROM7 is an update of PROM4 (ascl:1306.004) and computes simple models of solar prominences and filaments using Partial Radiative Distribution (PRD). The models consist of plane-parallel slabs standing vertically above the solar surface. Each model is defined by 5 parameters: temperature, density, geometrical thickness, microturbulent velocity, and height above the solar surface. It solves the equations of radiative transfer, statistical equilibrium, ionization, and pressure equilibria, and computes electron and hydrogen level populations and hydrogen line profiles. Moreover, the code treats the calcium atom, which is reduced to 3 ionization states (Ca I, Ca II, Ca III). The Ca II ion has 5 levels, which are useful for computing the 2 resonance lines (H and K) and the infrared triplet (near 8500 Å).
Combining Static Analysis and Model Checking for Software Analysis
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)
2003-01-01
We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial order information is safe and the whole state space is explored.
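The alternation described above is a fixed-point iteration, which can be sketched abstractly: each analysis refines the other's result until neither changes. The toy "program" below (four actions, each touching one pointer) is entirely hypothetical and only illustrates the convergence structure, not the real analyses.

```python
def iterate_to_fixed_point(po, aliases, mc_pass, sa_pass):
    """Alternate model checking (which discovers aliasing) and static analysis
    (which refines the partial-order info) until neither result changes --
    the fixed point at which the partial-order reduction becomes safe."""
    while True:
        new_aliases = mc_pass(po, aliases)   # model checker's exploration
        new_po = sa_pass(new_aliases, po)    # static analyzer's refinement
        if new_aliases == aliases and new_po == po:
            return po, aliases               # converged: reduction is now safe
        po, aliases = new_po, new_aliases

# Toy setting: 4 actions, each using one pointer; p and q actually alias.
uses = {"a": "p", "b": "q", "c": "r", "d": "s"}
true_aliases = {("p", "q")}

def mc_pass(po, aliases):
    # exploring the (optimistically reduced) state space uncovers real aliases
    return aliases | true_aliases

def sa_pass(aliases, po):
    # drop "independent" action pairs whose pointers may alias
    return {(u, v) for (u, v) in po
            if (uses[u], uses[v]) not in aliases
            and (uses[v], uses[u]) not in aliases}

init_po = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
po, aliases = iterate_to_fixed_point(init_po, set(), mc_pass, sa_pass)
# The optimistic pair ("a", "b") is removed; the remaining independence survives.
```

The initial independence relation is optimistic (unsafe), and each round can only shrink it, which is why the iteration terminates at a safe fixed point.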
NASA Astrophysics Data System (ADS)
Sinsbeck, Michael; Tartakovsky, Daniel
2015-04-01
Infiltration into top soil can be described by alternative models with different degrees of fidelity: Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: Given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
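The trade-off described here reduces to simple arithmetic: total error = model error + sampling error, with the Monte Carlo term decaying like sigma/sqrt(N). The costs, budget, and error magnitudes below are hypothetical numbers chosen only to make the comparison concrete.

```python
import math

def total_error(model_error, sigma, n_samples):
    """Total predictive error = structural model error + Monte Carlo sampling
    error, the latter decaying as sigma / sqrt(N)."""
    return model_error + sigma / math.sqrt(n_samples)

# Hypothetical budget: the low-fidelity model is 100x cheaper per realization,
# so within the same CPU budget it affords 100x more Monte Carlo samples.
budget = 10_000                       # cost units
cost_hi, cost_lo = 100, 1             # per-realization cost
err_hi = total_error(0.00, sigma=1.0, n_samples=budget // cost_hi)  # 1/sqrt(100)
err_lo = total_error(0.02, sigma=1.0, n_samples=budget // cost_lo)  # 0.02 + 1/100
# Under this budget the reduced-complexity model wins despite its model error.
```

The comparison flips as the budget grows: with enough samples the sampling error of the high-fidelity model drops below the low-fidelity model's irreducible model error, which is exactly the ratio the abstract says determines the better choice.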
NASA Astrophysics Data System (ADS)
Mohammed, F.
2016-12-01
Landslide hazards such as fast-moving debris flows, slow-moving landslides, and other mass flows cause numerous fatalities, injuries, and damage. Landslide occurrences in fjords, bays, and lakes can additionally generate tsunamis with locally extremely high wave heights and runups. Two-dimensional depth-averaged models can simulate the entire lifecycle of the three-dimensional landslide dynamics and tsunami propagation efficiently and accurately under the appropriate assumptions. Landslide rheology is defined using viscous fluids, visco-plastic fluids, and granular material to account for the possible landslide source materials. Saturated and unsaturated rheologies are further included to simulate debris flows, debris avalanches, mudflows, and rockslides. The models are obtained by reducing the fully three-dimensional Navier-Stokes equations, with the internal rheological definition of the landslide material, the water body, and appropriate scaling assumptions, to depth-averaged two-dimensional models. The landslide and tsunami models are coupled to include the interaction between the landslide and the water body for tsunami generation. The reduced models are solved numerically with a fast semi-implicit finite-volume, shock-capturing algorithm. The well-balanced, positivity-preserving algorithm accurately accounts for the wet-dry interface transition for the landslide runout, the landslide-water body interface, and the tsunami wave flooding on land. The models are implemented as a General-Purpose computing on Graphics Processing Units (GPGPU) suite of models, either coupled or run independently within the suite. The GPGPU implementation provides up to a 1000-times speedup over a CPU-based serial computation. This enables simulations of multiple scenarios of hazard realizations, which provides a basis for a probabilistic hazard assessment.
The models have been successfully validated against experiments, past studies, and field data for landslides and tsunamis.
The RISC (Reduced Instruction Set Computer) Architecture and Computer Performance Evaluation.
1986-03-01
time where the main emphasis of the evaluation process is put on the software. The model is intended to provide a tool for computer architects to use...program, or 3) Was to be implemented in random logic more effectively than the equivalent sequence of software instructions. Both data and address...definition is the IEEE standard 729-1983 stating Computer Architecture as: "The process of defining a collection of hardware and software components and
Computer Programs For Automated Welding System
NASA Technical Reports Server (NTRS)
Agapakis, John E.
1993-01-01
Computer programs developed for use in controlling automated welding system described in MFS-28578. Together with control computer, computer input and output devices and control sensors and actuators, provide flexible capability for planning and implementation of schemes for automated welding of specific workpieces. Developed according to macro- and task-level programming schemes, which increases productivity and consistency by reducing amount of "teaching" of system by technician. System provides for three-dimensional mathematical modeling of workpieces, work cells, robots, and positioners.
An application of interactive computer graphics technology to the design of dispersal mechanisms
NASA Technical Reports Server (NTRS)
Richter, B. J.; Welch, B. H.
1977-01-01
Interactive computer graphics technology is combined with a general purpose mechanisms computer code to study the operational behavior of three guided bomb dispersal mechanism designs. These studies illustrate the use of computer graphics techniques to discover operational anomalies, to assess the effectiveness of design improvements, to reduce the time and cost of the modeling effort, and to provide the mechanism designer with a visual understanding of the physical operation of such systems.
Cloud computing can simplify HIT infrastructure management.
Glaser, John
2011-08-01
Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.
NASA Astrophysics Data System (ADS)
Alzubaidi, Mohammad; Balasubramanian, Vineeth; Patel, Ameet; Panchanathan, Sethuraman; Black, John A., Jr.
2012-03-01
Inductive learning refers to machine learning algorithms that learn a model from a set of training data instances. Any test instance is then classified by comparing it to the learned model. When the set of training instances lend themselves well to modeling, the use of a model substantially reduces the computation cost of classification. However, some training data sets are complex, and do not lend themselves well to modeling. Transductive learning refers to machine learning algorithms that classify test instances by comparing them to all of the training instances, without creating an explicit model. This can produce better classification performance, but at a much higher computational cost. Medical images vary greatly across human populations, constituting a data set that does not lend itself well to modeling. Our previous work showed that the wide variations seen across training sets of "normal" chest radiographs make it difficult to successfully classify test radiographs with an inductive (modeling) approach, and that a transductive approach leads to much better performance in detecting atypical regions. The problem with the transductive approach is its high computational cost. This paper develops and demonstrates a novel semi-transductive framework that can address the unique challenges of atypicality detection in chest radiographs. The proposed framework combines the superior performance of transductive methods with the reduced computational cost of inductive methods. Our results show that the proposed semi-transductive approach provides both effective and efficient detection of atypical regions within a set of chest radiographs previously labeled by Mayo Clinic expert thoracic radiologists.
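One common way to combine the two paradigms, sketched below, is to trust the cheap inductive model when it is confident and fall back to an expensive transductive comparison (here 1-nearest-neighbour over all training instances) otherwise. This is a generic hybrid illustration under assumed names and toy data, not the paper's actual framework.

```python
def semi_transductive_classify(x, model, train, threshold):
    """Use the inductive model when its confidence clears `threshold`
    (cheap); otherwise compare against every training instance
    transductively (expensive but robust on hard cases)."""
    label, conf = model(x)
    if conf >= threshold:
        return label
    # transductive fallback: 1-NN over the full training set
    best = min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(x, t[0])))
    return best[1]

# Toy inductive model: threshold on the first feature, confidence = margin.
model = lambda x: (("pos" if x[0] > 0.5 else "neg"), abs(x[0] - 0.5) * 2)
train = [([0.1, 0.9], "neg"), ([0.9, 0.1], "pos"), ([0.45, 0.2], "pos")]

y1 = semi_transductive_classify([0.95, 0.50], model, train, threshold=0.5)  # confident
y2 = semi_transductive_classify([0.48, 0.25], model, train, threshold=0.5)  # fallback
```

In the second call the inductive model would have said "neg" with low confidence, but the transductive fallback matches the nearest training instance and returns "pos", which is the correcting behaviour the hybrid is designed for.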
Security model for VM in cloud
NASA Astrophysics Data System (ADS)
Kanaparti, Venkataramana; Naveen K., R.; Rajani, S.; Padmvathamma, M.; Anitha, C.
2013-03-01
Cloud computing is a new approach that has emerged to meet the ever-increasing demand for computing resources and to reduce operational costs and capital expenditure for IT services. As this new way of computation allows data and applications to be stored away from one's own corporate server, it brings more issues in security, such as virtualization security, distributed computing, application security, identity management, access control, and authentication. Even though virtualization forms the basis for cloud computing, it poses many threats in securing the cloud. As most security threats lie at the virtualization layer in the cloud, we propose this new Security Model for Virtual Machine in Cloud (SMVC), in which every process is authenticated by a Trusted-Agent (TA) in the hypervisor as well as in the VM. Our proposed model is designed to withstand attacks by unauthorized processes that pose a threat to applications related to data mining, OLAP systems, and image processing, which require huge resources in the cloud deployed on one or more VMs.
Adapting to life: ocean biogeochemical modelling and adaptive remeshing
NASA Astrophysics Data System (ADS)
Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.
2013-11-01
An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. As an example, state-of-the-art models give values of primary production approximately two orders of magnitude lower than those observed in the ocean's oligotrophic gyres, which cover a third of the Earth's surface. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical column (quasi 1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed mesh simulation and to observations. The simulations capture both the seasonal and inter-annual variations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3, so reducing computational overhead. We then show the potential of this method in two case studies where we change the metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa.
We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high spatial resolution whilst minimising computational cost.
Active technique by suction to control the flow structure over a van model
NASA Astrophysics Data System (ADS)
Harinaldi, Budiarso, Warjito, Kosasih, Engkos A.; Tarakka, Rustan; Simanungkalit, Sabar P.
2012-06-01
Today, research trends in car aerodynamics are driven by the goal of sustainable development. Some car companies aim to develop control solutions that reduce the aerodynamic drag of vehicles, offering the possibility of modifying flow separation to reduce the development of swirling structures around the vehicle. In this study, a family van is modeled with a modified form of Ahmed's body by reversing the orientation of the flow relative to its original form (modified/reversed Ahmed body). This model is equipped with suction on the rear side to comprehensively examine the pressure field modifications that occur. The investigation combines computational and experimental work. The computational simulation uses the k-epsilon turbulence model. The reversed Ahmed body used in the investigation has a slant angle (φ) of 35° at the front part. In the computational work, the meshing type is a tetra/hybrid element with a hex core, and the grid numbers more than 1.7 million elements in order to ensure detailed discretization and more accurate calculation results. The boundary condition is an upstream velocity of 11.1 m/s. The mean free stream in the far upstream region is assumed to be steady and uniform. The suction velocity is set at 1 m/s. Meanwhile, in the experimental work, a reversed Ahmed model is tested in controlled wind tunnel experiments. The main measurement is the aerodynamic drag at the rear of the body of the model, using a strain gage. The results show that applying suction at the rear of the van model reduces the wake and the vortex formed. An aerodynamic drag reduction of close to 24% for the computational approach and 14.8% for the experimental approach has been obtained by introducing suction.
Glass, Nancy; Hanson, Ginger C; Anger, W Kent; Laharnar, Naima; Campbell, Jacquelyn C; Weinstein, Marc; Perrin, Nancy
2017-07-01
The study examines the effectiveness of a workplace violence and harassment prevention and response program with female homecare workers in a consumer-driven model of care. Homecare workers were randomized to either computer-based training (CBT only) or computer-based training with homecare worker peer facilitation (CBT + peer). Participants completed measures on confidence, incidents of violence and harassment, and health and work outcomes at baseline and at 3 and 6 months post-baseline. Homecare workers reported improved confidence to prevent and respond to workplace violence and harassment and a reduction in incidents of workplace violence and harassment in both groups at 6-month follow-up. A decrease in negative health and work outcomes associated with violence and harassment was not reported in either group. CBT alone or with trained peer facilitation with homecare workers can increase confidence and reduce incidents of workplace violence and harassment in a consumer-driven model of care. © 2017 Wiley Periodicals, Inc.
Methods for simulation-based analysis of fluid-structure interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, Matthew Franklin; Payne, Jeffrey L.
2005-10-01
Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
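The POD step that precedes Galerkin projection can be sketched with the method of snapshots: build the small M x M snapshot correlation matrix, extract its leading eigenvector (power iteration here, for brevity), and assemble the dominant mode as a linear combination of snapshots. This is a minimal one-mode sketch on synthetic data; a real ROM would retain several modes and then project the governing equations onto them.

```python
def pod_leading_mode(snapshots, iters=200):
    """Method of snapshots: correlation matrix C_ij = <u_i, u_j> / M, leading
    eigenvector by power iteration, first POD mode = weighted snapshot sum."""
    m = len(snapshots)
    C = [[sum(a * b for a, b in zip(snapshots[i], snapshots[j])) / m
          for j in range(m)] for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):                    # power iteration on C
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
        nrm = sum(x * x for x in w) ** 0.5
        v = [x / nrm for x in w]
    mode = [sum(v[i] * snapshots[i][k] for i in range(m))
            for k in range(len(snapshots[0]))]
    nrm = sum(x * x for x in mode) ** 0.5
    return [x / nrm for x in mode]

# Synthetic snapshots dominated by one coherent structure along the first axis.
snaps = [[3.0, 0.0, 0.0], [2.9, 0.1, 0.0], [3.1, -0.1, 0.0], [0.1, 1.0, 0.0]]
phi = pod_leading_mode(snaps)   # leading mode aligns with [1, 0, 0]
```

Working with the M x M correlation matrix instead of the full state dimension is what makes POD affordable when snapshots are few but each snapshot is large.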
NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling
NASA Technical Reports Server (NTRS)
Rumsey, C. L.; Lee-Rausch, E. M.
2012-01-01
Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.
Multiple Scattering Principal Component-based Radiative Transfer Model (PCRTM) from Far IR to UV-Vis
NASA Astrophysics Data System (ADS)
Liu, X.; Wu, W.; Yang, Q.
2017-12-01
Modern hyperspectral satellite remote sensors such as AIRS, CrIS, IASI, and CLARREO all require accurate and fast radiative transfer models that can deal with multiple scattering by clouds and aerosols to explore their information content. However, performing full radiative transfer calculations using multiple-stream methods such as discrete ordinates (DISORT), adding-doubling (AD), or successive order of scattering (SOS) is very time consuming. We have developed a principal component-based radiative transfer model (PCRTM) to reduce the computational burden by orders of magnitude while maintaining high accuracy. By exploiting spectral correlations, the PCRTM reduces the number of radiative transfer calculations in the frequency domain. It further uses a hybrid stream method to decrease the number of calls to the computationally expensive multiple scattering calculations with high stream numbers. Other fast parameterizations have been used in the infrared spectral region to reduce the computational time to milliseconds for an AIRS forward simulation (2378 spectral channels). The PCRTM has been developed to cover the spectral range from the far IR to the UV-Vis. The PCRTM model has been used for satellite data inversions, proxy data generation, inter-satellite calibrations, spectral fingerprinting, and climate OSSEs. We will show examples of applying the PCRTM to single-field-of-view cloudy retrievals of atmospheric temperature, moisture, trace gases, clouds, and surface parameters. We will also show how the PCRTM is used for the NASA CLARREO project.
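The spectral-correlation idea can be sketched with a toy compression: when spectra are well described by a few principal components, k PC scores stand in for the full channel vector, so the expensive radiative transfer only needs to predict k numbers instead of thousands of channels. The 6-channel "spectra" and orthonormal PCs below are synthetic placeholders, not PCRTM's actual basis.

```python
import math

def compress(spectrum, mean, pcs):
    """Project a spectrum onto k principal components: k scores replace the
    full channel vector (the dimension reduction PCRTM exploits)."""
    centered = [s - m for s, m in zip(spectrum, mean)]
    return [sum(c * p for c, p in zip(centered, pc)) for pc in pcs]

def reconstruct(scores, mean, pcs):
    """Rebuild the full spectrum from the k PC scores."""
    return [m + sum(sc * pc[i] for sc, pc in zip(scores, pcs))
            for i, m in enumerate(mean)]

# Toy 6-channel spectra spanned by 2 orthonormal PCs: 6 numbers -> 2 scores.
mean = [1.0] * 6
pcs = [[1 / math.sqrt(6)] * 6,
       [1 / math.sqrt(2), -1 / math.sqrt(2), 0.0, 0.0, 0.0, 0.0]]
spec = [m + 0.3 * pcs[0][i] + 0.1 * pcs[1][i] for i, m in enumerate(mean)]

scores = compress(spec, mean, pcs)       # two scores instead of six channels
rec = reconstruct(scores, mean, pcs)     # matches spec to rounding error
```

For a 2378-channel AIRS spectrum compressed to a few hundred scores, this is where the orders-of-magnitude saving in forward-model calls comes from.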
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
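The sketching step can be illustrated directly: multiply both the forward operator and the data by a short, wide random matrix S, then solve the k-row sketched least-squares problem instead of the n-row original. This is a generic Gaussian-sketch sketch of the data-reduction idea, not the paper's exact RGA scheme, and the 2-parameter problem below is hypothetical.

```python
import random

def sketch(H, y, k, seed=0):
    """Compress an n-row linear inverse problem to k rows with a Gaussian
    sketching matrix S: solve min ||S(H m - y)|| instead of min ||H m - y||."""
    rng = random.Random(seed)
    n = len(y)
    S = [[rng.gauss(0.0, 1.0) / k ** 0.5 for _ in range(n)] for _ in range(k)]
    SH = [[sum(S[i][t] * H[t][j] for t in range(n)) for j in range(len(H[0]))]
          for i in range(k)]
    Sy = [sum(S[i][t] * y[t] for t in range(n)) for i in range(k)]
    return SH, Sy

# Hypothetical setup: 1000 noise-free observations of a 2-parameter model.
m_true = [2.0, -1.0]
H = [[1.0, t / 1000.0] for t in range(1000)]
y = [r[0] * m_true[0] + r[1] * m_true[1] for r in H]
SH, Sy = sketch(H, y, k=50)            # 1000 rows -> 50 rows

# Solve the 2x2 normal equations of the sketched system.
A = [[sum(r[a] * r[b] for r in SH) for b in range(2)] for a in range(2)]
g = [sum(r[a] * s for r, s in zip(SH, Sy)) for a in range(2)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
m_est = [(A[1][1] * g[0] - A[0][1] * g[1]) / det,
         (A[0][0] * g[1] - A[1][0] * g[0]) / det]
```

Because the data here lie exactly in the range of H, the sketched solve recovers the true parameters despite using only 5% of the rows; with noisy data the sketch preserves the solution up to a controllable random error.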
2013-08-01
The SDM was subjected to forced small (0.5) sinusoidal pitching oscillations, and derivatives were computed from measured model loads, angles of attack, reduced frequency of oscillation, and aircraft geometrical parameters.
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley W.
2009-01-01
Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration [NASA] Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization [MDAO] tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Designing modern aircraft at transonic speeds is a challenging task due to the computation time required for unsteady aeroelastic analysis using a Computational Fluid Dynamics [CFD] code. Design approaches in this speed regime are mainly based on manual trial and error. Because of the time required for unsteady CFD computations in the time domain, this considerably slows down the whole design process. These analyses are usually performed repeatedly to optimize the final design. As a result, there is considerable motivation to be able to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced modeling method and unsteady aerodynamic approximation. The method requires that the unsteady transonic aerodynamics be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient [AIC] matrices in the transonic speed regime. Unsteady CFD computations are needed only for the important columns of an AIC matrix, which correspond to the primary modes for flutter. Order reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem. Transonic flutter can then be found by classic methods, such as rational function approximation, p-k, p, root-locus, etc. Such a methodology could be incorporated into the MDAO tool for design optimization at a reasonable computational cost.
The proposed technique is verified using the Aerostructures Test Wing 2, which was designed, built, and tested at NASA Dryden Flight Research Center. The results from the full-order model and the approximate reduced-order model are analyzed and compared.
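Guyan reduction, one of the order-reduction techniques named above, statically condenses a structural model onto a set of retained "master" degrees of freedom. A minimal numpy sketch on a hypothetical 4-DOF spring-mass chain (the matrices are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical 4-DOF fixed-free spring-mass chain (unit stiffness and mass).
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
M = np.diag([1.0, 1.0, 1.0, 1.0])

master = [0, 3]          # retained DOFs
slave  = [1, 2]          # condensed DOFs

Ksm = K[np.ix_(slave, master)]
Kss = K[np.ix_(slave, slave)]

# Guyan transformation: slave DOFs follow the masters statically.
T = np.vstack([np.eye(len(master)), -np.linalg.solve(Kss, Ksm)])

# Reorder the full matrices to [masters; slaves] before projecting.
P = np.ix_(master + slave, master + slave)
Kr = T.T @ K[P] @ T      # reduced (Schur-complement) stiffness
Mr = T.T @ M[P] @ T      # reduced mass
```

Guyan reduction is exact for static loads applied only at master DOFs, and approximate for dynamics, which is why the abstract pairs it with an improved reduction system.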
NASA Astrophysics Data System (ADS)
Pawłuszek, Kamila; Borkowski, Andrzej
2016-06-01
Since the availability of high-resolution Airborne Laser Scanning (ALS) data, substantial progress has been made in geomorphological research, especially in landslide analysis. First- and second-order derivatives of the Digital Terrain Model (DTM) have become a popular and powerful tool in landslide inventory mapping. Nevertheless, automatic landslide mapping based on sophisticated classifiers, including Support Vector Machines (SVM), Artificial Neural Networks, or Random Forests, is often computationally time consuming. The objective of this research is to explore in depth the topographic information provided by ALS data and to overcome this computational limitation. For this reason, an extended set of topographic features and Principal Component Analysis (PCA) were used to reduce redundant information. The proposed novel approach was tested on a susceptible area affected by more than 50 landslides, located near Rożnów Lake in the Carpathian Mountains, Poland. The initial seven PCA components, carrying 90% of the total variability of the original topographic attributes, were used for SVM classification. Comparing the results with the landslide inventory map, the average user's accuracy (UA), producer's accuracy (PA), and overall accuracy (OA) were calculated for the two models. For the PCA-reduced model, UA, PA, and OA were found to be 72%, 76%, and 72%, respectively; for the non-reduced original topographic model they were 74%, 77%, and 74%. Using the initial seven PCA components instead of the twenty original topographic attributes does not significantly change identification accuracy but reduces computational time.
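The PCA feature-reduction step described above can be sketched in a few lines of numpy. The synthetic data below stands in for the twenty ALS-derived topographic attributes, and the 90%-of-variance cutoff mirrors the study's component choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for 20 correlated topographic attributes (slope, curvature, ...):
# 200 grid cells, features driven by 5 latent factors plus noise.
latent = rng.normal(size=(200, 5))
mixing = rng.normal(size=(5, 20))
X = latent @ mixing + 0.1 * rng.normal(size=(200, 20))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Keep the leading components covering >= 90% of total variance.
k = int(np.searchsorted(np.cumsum(explained), 0.90) + 1)
scores = Xc @ Vt[:k].T   # reduced feature set passed to the SVM classifier
```

The `scores` array replaces the original attribute table as classifier input, which is where the reported training-time savings come from.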
Integrated computer-aided design using minicomputers
NASA Technical Reports Server (NTRS)
Storaasli, O. O.
1980-01-01
Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), a highly interactive software system, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum transfer rate of 4800 bits/sec to a computer. The system is portable and introduces interactive graphics, which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large-area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides its design/drafting and finite element analysis capability, CAD/CAM provides options to produce automatic tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.
Ultra-Scale Computing for Emergency Evacuation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhaduri, Budhendra L; Nutaro, James J; Liu, Cheng
2010-01-01
Emergency evacuations are carried out in anticipation of a disaster, such as hurricane landfall or flooding, and in response to a disaster that strikes without warning. Existing emergency evacuation modeling and simulation tools are primarily designed for evacuation planning and are of limited value in operational support for real-time evacuation management. In order to run on desktop computers, these models reduce data and computational complexity through simple approximations and representations of real network conditions and traffic behaviors, which rarely represent real-world scenarios. With the emergence of high-resolution physiographic, demographic, and socioeconomic data and supercomputing platforms, it is possible to develop micro-simulation based emergency evacuation models that can foster development of novel algorithms for human behavior and traffic assignments, and can simulate evacuation of millions of people over a large geographic area. However, such advances in evacuation modeling and simulation demand computational capacity beyond desktop scales and can be supported by high-performance computing platforms. This paper explores the motivation and feasibility of ultra-scale computing for increasing the speed of high-resolution emergency evacuation simulations.
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization is an effective approach for optimizing surfactant-enhanced aquifer remediation (SEAR) strategies for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model to reduce the computational burden, is key to such research. However, previous work has generally relied on a stand-alone surrogate model and has rarely combined multiple methods to sufficiently improve the surrogate's approximation accuracy. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select the better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high accuracy.
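A weighted ensemble of surrogates can be sketched as follows. This toy combines a polynomial and a Gaussian-RBF surrogate of an "expensive" simulator and weights them by inverse validation RMSE — a simple stand-in for the set pair weights, which the abstract does not specify:

```python
import numpy as np

# Cheap stand-in for the expensive simulation model (a real SEAR solver).
def simulator(x):
    return np.sin(3 * x) + 0.5 * x

x_train = np.linspace(0, 2, 15)
y_train = simulator(x_train)

# Surrogate 1: cubic polynomial fit.
coef = np.polyfit(x_train, y_train, 3)
s1 = lambda x: np.polyval(coef, x)

# Surrogate 2: Gaussian radial basis function interpolant.
def rbf_fit(xc, yc, eps=2.0):
    A = np.exp(-(eps * (xc[:, None] - xc[None, :]))**2)
    w = np.linalg.solve(A, yc)
    return lambda x: np.exp(-(eps * (x[:, None] - xc[None, :]))**2) @ w
s2 = rbf_fit(x_train, y_train)

# Weight the members by inverse validation RMSE (assumed weighting scheme).
x_val = np.linspace(0.05, 1.95, 20)
errs = np.array([np.sqrt(np.mean((s(x_val) - simulator(x_val))**2))
                 for s in (s1, s2)])
w = (1 / errs) / np.sum(1 / errs)
ensemble = lambda x: w[0] * s1(x) + w[1] * s2(x)
```

Because all members predict the same target, the ensemble error is a convex combination of member errors, so the ensemble can never be worse than the worst member on the validation metric.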
Super-resolution using a light inception layer in convolutional neural network
NASA Astrophysics Data System (ADS)
Mou, Qinyang; Guo, Jun
2018-04-01
Recently, several models based on CNN architectures have achieved great results on the Single Image Super-Resolution (SISR) problem. In this paper, we propose an image super-resolution (SR) method using a light inception layer in a convolutional network (LICN). Due to the strong representation ability of our well-designed inception layer, which can learn richer representations with fewer parameters, we can build our model with a shallow architecture that reduces the effect of the vanishing gradients problem and saves computational cost. Our model strikes a balance between computational speed and the quality of the result. Compared with the state of the art, we produce comparable or better results with faster computational speed.
NASA Technical Reports Server (NTRS)
Yanosy, J. L.; Rowell, L. F.
1985-01-01
Efforts to make increasing use of suitable computer programs in the design of hardware have the potential to reduce expenditures. In this context, NASA has evaluated the benefits provided by software tools through an application to the Environmental Control and Life Support (ECLS) system. The present paper is concerned with the benefits obtained by employing simulation tools in the case of the Air Revitalization System (ARS) of a Space Station life support system. Attention is given to the ARS functions and components, a computer program overview, a SAWD (solid amine water desorbed) bed model description, a model validation, and details regarding the simulation benefits.
NASA Astrophysics Data System (ADS)
Joost, William J.
2012-09-01
Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.
Model Order Reduction Algorithm for Estimating the Absorption Spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Beeumen, Roel; Williams-Young, David B.; Kasper, Joseph M.
The ab initio description of the spectral interior of the absorption spectrum poses both a theoretical and computational challenge for modern electronic structure theory. Due to the often spectrally dense character of this domain in the quantum propagator's eigenspectrum for medium-to-large sized systems, traditional approaches based on the partial diagonalization of the propagator often encounter oscillatory and stagnating convergence. Electronic structure methods which solve the molecular response problem through the solution of spectrally shifted linear systems, such as the complex polarization propagator, offer an alternative approach which is agnostic to the underlying spectral density or domain location. This generality comes at a seemingly high computational cost associated with solving a large linear system for each spectral shift in some discretization of the spectral domain of interest. In this work, we present a novel, adaptive solution to this high computational overhead based on model order reduction techniques via interpolation. Model order reduction reduces the computational complexity of mathematical models and is ubiquitous in the simulation of dynamical systems and control theory. The efficiency and effectiveness of the proposed algorithm in the ab initio prediction of X-ray absorption spectra is demonstrated using a test set of challenging water clusters which are spectrally dense in the neighborhood of the oxygen K-edge. On the basis of a single, user-defined tolerance we automatically determine the order of the reduced models and approximate the absorption spectrum up to the given tolerance. We also illustrate that, for the systems studied, the automatically determined model order increases logarithmically with the problem dimension, compared to a linear increase of the number of eigenvalues within the energy window. Furthermore, we observed that the computational cost of the proposed algorithm scales only quadratically with respect to the problem dimension.
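The core economy here — one reduced model reused across all spectral shifts, since the Krylov space K_m(A, b) equals K_m(A − σI, b) for every σ — can be sketched with a plain Arnoldi process. This is an illustrative sketch, not the paper's adaptive interpolation algorithm:

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal basis V and projection H for the Krylov space K_m(A, b)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

rng = np.random.default_rng(2)
n = 200
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

V, H = arnoldi(A, b, 40)
beta = np.linalg.norm(b)

# One reduced model serves every shift sigma: solve a small m x m system
# and lift back, instead of an n x n solve per shift.
def reduced_solve(sigma):
    m = H.shape[0]
    y = np.linalg.solve(H - sigma * np.eye(m), beta * np.eye(m)[:, 0])
    return V @ y
```

Each additional shift then costs only a small dense solve, which is the source of the quadratic (rather than cubic-per-shift) scaling reported above.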
Quantum Gauss-Jordan Elimination and Simulation of Accounting Principles on Quantum Computers
NASA Astrophysics Data System (ADS)
Diep, Do Ngoc; Giang, Do Hoang; Van Minh, Nguyen
2017-06-01
The paper is devoted to a version of Quantum Gauss-Jordan Elimination and its applications. In the first part, we construct the Quantum Gauss-Jordan Elimination (QGJE) algorithm and estimate the complexity of computing the Reduced Row Echelon Form (RREF) of N × N matrices. The main result asserts that QGJE has computation time of order 2^(N/2). The second part is devoted to a new idea of simulating accounting by quantum computing. We first express the actual accounting principles in a purely mathematical language. Then, we simulate the accounting principles on quantum computers. We show that all accounting actions are exhausted by the described basic actions. The main problems of accounting are reduced to systems of linear equations in Leontief's economic model. In this simulation, we use our Quantum Gauss-Jordan Elimination to solve the problems, and the complexity of the quantum computation is a square-root order faster than the classical complexity.
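The classical (non-quantum) Gauss-Jordan elimination to RREF that the quantum algorithm mirrors can be sketched as follows, using exact rational arithmetic to avoid round-off:

```python
from fractions import Fraction

def rref(rows):
    """Classical Gauss-Jordan elimination to reduced row echelon form."""
    A = [[Fraction(v) for v in row] for row in rows]
    nrows, ncols = len(A), len(A[0])
    lead = 0
    for r in range(nrows):
        if lead >= ncols:
            break
        i = r
        while A[i][lead] == 0:           # find a pivot row for this column
            i += 1
            if i == nrows:
                i, lead = r, lead + 1
                if lead == ncols:
                    return A
        A[i], A[r] = A[r], A[i]          # swap pivot row into place
        pivot = A[r][lead]
        A[r] = [v / pivot for v in A[r]] # normalize the pivot row
        for i in range(nrows):           # eliminate the column elsewhere
            if i != r:
                factor = A[i][lead]
                A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        lead += 1
    return A

# Augmented system 2x + y = 5, x + 3y = 5:
R = rref([[2, 1, 5], [1, 3, 5]])   # -> [[1, 0, 2], [0, 1, 1]], i.e. x=2, y=1
```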
NASA Technical Reports Server (NTRS)
Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.
1979-01-01
The Energy Consumption Computer Program was developed to simulate building heating and cooling loads and to compute thermal and electric energy consumption and cost. This article reports on new algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of the computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.
Ocean Models and Proper Orthogonal Decomposition
NASA Astrophysics Data System (ADS)
Salas-de-Leon, D. A.
2007-05-01
Increasing computational power and a better understanding of mathematical and physical systems have resulted in a growing number of ocean models. Long ago, modelers were like a secret organization, recognizing each other by codes and languages that only a select group of people could understand. Access to computational systems was limited: equipment and computer time were expensive and restricted, and the systems required advanced programming languages that not everybody wanted to learn. Nowadays most college freshmen own a personal computer (PC or laptop) and/or have access to more sophisticated computational systems than those available for research in the early 80's. This availability of resources has resulted in major access to all kinds of models. Today computer speed, computer time, and algorithms do not seem to be a problem, even though some models take days to run on small computational systems. Almost every oceanographic institution has its own model; what is more, within the same institution different models for the same phenomena, developed by different researchers, can be found from one office to the next. The results do not differ substantially, since the equations are the same and the solution algorithms are similar. The algorithms, and the grids constructed with them, can be found in textbooks and/or on the internet. Every year more sophisticated models are constructed. The Proper Orthogonal Decomposition is a technique that reduces the number of variables to solve while keeping the model's properties, so it can be a very useful tool for diminishing the processes that must be solved on "small" computational systems, making sophisticated models available to a greater community.
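The Proper Orthogonal Decomposition itself is compactly expressed as an SVD of a snapshot matrix; a minimal sketch using synthetic traveling-wave fields in place of real ocean-model output:

```python
import numpy as np

# Snapshot matrix: each column is the system state (e.g., a field on a grid)
# at one time instant; synthetic traveling waves stand in for model output.
x = np.linspace(0, 2 * np.pi, 128)
t = np.linspace(0, 10, 60)
snapshots = np.array([np.sin(x - ti) + 0.3 * np.cos(2 * (x + ti))
                      for ti in t]).T          # shape (nspace, ntime)

# POD modes are the left singular vectors of the mean-removed snapshots.
mean = snapshots.mean(axis=1, keepdims=True)
Phi, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)

# Keep enough modes to capture 99.9% of the fluctuation energy; a traveling
# sine plus a traveling cosine needs only two sin/cos pairs (4 modes).
r = int(np.searchsorted(energy, 0.999) + 1)
Phi_r = Phi[:, :r]                      # reduced basis
coeffs = Phi_r.T @ (snapshots - mean)   # reduced coordinates, r x ntime
```

Projecting the governing equations onto `Phi_r` is what turns a grid-sized model into an r-variable model solvable on small machines.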
NASA Technical Reports Server (NTRS)
Yang, H. Q.; West, Jeff
2015-01-01
Current reduced-order thermal models for cryogenic propellant tanks are based on correlations built from flat-plate data collected in the 1950s. The use of these correlations suffers from inaccurate geometry representation, inaccurate gravity orientation, an ambiguous length scale, and a lack of detailed validation. The work presented under this task uses the first-principles-based Computational Fluid Dynamics (CFD) technique to compute heat transfer from the tank wall to the cryogenic fluids, and extracts and correlates the equivalent heat transfer coefficient to support a reduced-order thermal model. The CFD tool was first validated against available experimental data and commonly used correlations for natural convection along a vertically heated wall. Good agreement between the present predictions and experimental data was found for flows in laminar as well as turbulent regimes. The convective heat transfer between the tank wall and the cryogenic propellant, and that between the tank wall and the ullage gas, were then simulated. The results showed that commonly used heat transfer correlations for either a vertical or a horizontal plate overpredict the heat transfer rate for the cryogenic tank, in some cases by as much as an order of magnitude. A characteristic length scale has been defined that can collapse the heat transfer coefficients for different fill levels onto a single curve. This curve can be used for reduced-order heat transfer model analysis.
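The kind of flat-plate natural-convection correlation the study re-examines can be sketched as follows. The McAdams-type constants are standard textbook values, and the fluid properties below are water-like placeholders, not cryogenic data:

```python
# Natural convection on a vertical flat plate: Ra -> Nu -> h.
# Placeholder (water-like) properties; NOT cryogenic-propellant data.
g     = 9.81     # m/s^2
beta  = 2.0e-4   # 1/K, thermal expansion coefficient
nu    = 1.0e-6   # m^2/s, kinematic viscosity
alpha = 1.4e-7   # m^2/s, thermal diffusivity
k     = 0.6      # W/(m K), thermal conductivity
L     = 0.1      # m, characteristic length (the ambiguity noted above)
dT    = 5.0      # K, wall-to-fluid temperature difference

Ra = g * beta * dT * L**3 / (nu * alpha)   # Rayleigh number

# McAdams-type flat-plate correlation: laminar below Ra ~ 1e9, else turbulent.
Nu = 0.59 * Ra**0.25 if Ra < 1e9 else 0.10 * Ra**(1.0 / 3.0)
h = Nu * k / L                             # heat transfer coefficient, W/(m^2 K)
```

The paper's point is that applying such plate correlations to a closed tank, with an ill-defined choice of `L`, can overpredict the actual wall heat transfer substantially.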
Uncertainties in obtaining high reliability from stress-strength models
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Matthews, William T.; Vangel, Mark G.
1992-01-01
There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
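For two normal distributions the stress-strength reliability R = P(strength > stress) has a closed form, which a Monte Carlo estimate can cross-check. The moments below are illustrative, not from the study:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)

# Hypothetical normal stress and strength (arbitrary consistent units).
mu_s, sd_s = 40.0, 4.0    # applied stress
mu_S, sd_S = 60.0, 5.0    # material strength

# Closed form for two independent normals:
# R = Phi((mu_S - mu_s) / sqrt(sd_S^2 + sd_s^2))
z = (mu_S - mu_s) / sqrt(sd_S**2 + sd_s**2)
R_exact = 0.5 * (1 + erf(z / sqrt(2)))

# Monte Carlo estimate of P(strength > stress).
n = 200_000
R_mc = np.mean(rng.normal(mu_S, sd_S, n) > rng.normal(mu_s, sd_s, n))
```

The abstract's warning is precisely that `R_exact` lives deep in the distribution tails, so a slightly wrong distributional assumption moves it far more than sampling noise does.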
Evolutionary inference via the Poisson Indel Process.
Bouchard-Côté, Alexandre; Jordan, Michael I
2013-01-22
We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.
A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA
NASA Astrophysics Data System (ADS)
Khodabakhshi, Mohammad
2009-08-01
This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean ele improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved with the two-model approach, introduced in the first of the above-mentioned references, to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.
Reliability modeling of fault-tolerant computer based systems
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1987-01-01
Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the use of digital computer systems makes the task of producing credible reliability assessments a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, its computational power, and its ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes, in the Markov sense, which result from systems based on replicated redundant hardware, and the modeling of factors which can reduce reliability without concomitant depletion of hardware. Advanced fault-handling models are described, and methods of acquiring and measuring parameters for these models are delineated.
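A Markov reliability model of the kind discussed can be sketched for a hypothetical duplex computer with repair and imperfect fault coverage; all rates and the coverage value are illustrative, not from the paper:

```python
import numpy as np

# 3-state Markov model of a duplex computer: 0 = both good, 1 = one good,
# 2 = system failure (absorbing). Illustrative per-hour rates.
lam, mu = 1e-3, 1e-1   # failure and repair rates
c = 0.99               # coverage: probability a fault is handled successfully

# Generator matrix Q (rows sum to zero; state 2 is absorbing).
Q = np.array([
    [-2 * lam, 2 * lam * c, 2 * lam * (1 - c)],
    [mu,       -(mu + lam), lam],
    [0.0,      0.0,         0.0],
])

# Transient solve of dp/dt = p Q by explicit Euler time stepping.
p = np.array([1.0, 0.0, 0.0])
dt, T = 0.1, 1000.0
for _ in range(int(T / dt)):
    p = p + dt * (p @ Q)

reliability = 1.0 - p[2]   # probability the system has not failed by time T
```

Note how the uncovered-fault path (the `2*lam*(1-c)` entry) dominates system failure even though `1-c` is tiny — exactly the coverage sensitivity that makes parameter measurement for fault-handling models so important.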
Introducing Seismic Tomography with Computational Modeling
NASA Astrophysics Data System (ADS)
Neves, R.; Neves, M. L.; Teodoro, V.
2011-12-01
Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics, and computer programming. The corresponding learning environments and pedagogic methodologies should therefore involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge while focusing on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) easy and intuitive creation of mathematical models using standard mathematical notation; 2) simultaneous exploration of images, tables, graphs, and object animations; 3) attribution of mathematical properties expressed in the models to animated objects; and finally 4) computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and focus on learning seismological concepts and processes, taking advantage of basic scientific computation methods and tools.
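At its simplest, straight-ray travel-time tomography — the core computation behind such exercises — reduces to the linear least-squares problem t = G s, where G holds ray path lengths through grid cells and s holds cell slownesses. A toy sketch with a hypothetical 4 × 4 slowness grid:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-D model: a 4 x 4 grid of 1 km cells with one slow anomaly.
n = 4
s_true = np.full((n, n), 0.5)      # background slowness, s/km
s_true[1, 2] = 0.8                 # the "anomaly" to be imaged

# Rows of G hold the path length of each ray in each cell.
G = []
for i in range(n):                 # horizontal rays, one per row of cells
    ray = np.zeros((n, n)); ray[i, :] = 1.0; G.append(ray.ravel())
for j in range(n):                 # vertical rays, one per column of cells
    ray = np.zeros((n, n)); ray[:, j] = 1.0; G.append(ray.ravel())
# Hypothetical slanted rays with random path lengths, added so the
# linear system has full column rank and the inversion is well posed.
G.extend(rng.random((12, n * n)))
G = np.array(G)

t_obs = G @ s_true.ravel()         # synthetic (noise-free) travel times

# Tomographic inversion as linear least squares.
s_est, *_ = np.linalg.lstsq(G, t_obs, rcond=None)
```

With noise-free data and full ray coverage the inversion recovers the model exactly; a classroom exercise can then add noise or remove rays to show how resolution degrades.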
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD, our method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
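The damping-parameter loop of Levenberg-Marquardt — whose per-damping linear solves the paper accelerates with a recycled Krylov basis — can be sketched on a small exponential-fit problem. This uses plain dense solves, not the paper's Krylov method:

```python
import numpy as np

# Fit y = a * exp(-b * x) to noise-free data by Levenberg-Marquardt.
x = np.linspace(0, 4, 50)
a_true, b_true = 2.5, 1.3
y = a_true * np.exp(-b_true * x)

def residual(p):
    return p[0] * np.exp(-p[1] * x) - y

def jacobian(p):
    e = np.exp(-p[1] * x)
    return np.column_stack([e, -p[0] * x * e])

p = np.array([1.0, 0.5])   # initial guess
lam = 1e-2                 # damping parameter
for _ in range(50):
    r, J = residual(p), jacobian(p)
    # Damped normal equations: one linear solve per damping value; this is
    # the solve the paper projects onto a recycled Krylov subspace.
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.sum(residual(p + step)**2) < np.sum(r**2):
        p, lam = p + step, lam * 0.5   # accept step, relax damping
    else:
        lam *= 2.0                     # reject step, increase damping
```

When the parameter vector has thousands of entries instead of two, each damped solve dominates the cost, which is what motivates reusing one Krylov subspace across damping values.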
Dick, Thomas E.; Molkov, Yaroslav I.; Nieman, Gary; Hsieh, Yee-Hsee; Jacono, Frank J.; Doyle, John; Scheff, Jeremy D.; Calvano, Steve E.; Androulakis, Ioannis P.; An, Gary; Vodovotz, Yoram
2012-01-01
Acute inflammation leads to organ failure by engaging catastrophic feedback loops in which stressed tissue evokes an inflammatory response and, in turn, inflammation damages tissue. Manifestations of this maladaptive inflammatory response include cardio-respiratory dysfunction that may be reflected in reduced heart rate and ventilatory pattern variabilities. We have developed signal-processing algorithms that quantify non-linear deterministic characteristics of variability in biologic signals. Now, coalescing under the aegis of the NIH Computational Biology Program and the Society for Complexity in Acute Illness, two research teams performed iterative experiments and computational modeling on inflammation and cardio-pulmonary dysfunction in sepsis as well as on neural control of respiration and ventilatory pattern variability. These teams, with additional collaborators, have recently formed a multi-institutional, interdisciplinary consortium, whose goal is to delineate the fundamental interrelationship between the inflammatory response and physiologic variability. Multi-scale mathematical modeling and complementary physiological experiments will provide insight into autonomic neural mechanisms that may modulate the inflammatory response to sepsis and simultaneously reduce heart rate and ventilatory pattern variabilities associated with sepsis. This approach integrates computational models of neural control of breathing and cardio-respiratory coupling with models that combine inflammation, cardiovascular function, and heart rate variability. The resulting integrated model will provide mechanistic explanations for the phenomena of respiratory sinus-arrhythmia and cardio-ventilatory coupling observed under normal conditions, and the loss of these properties during sepsis. This approach holds the potential of modeling cross-scale physiological interactions to improve both basic knowledge and clinical management of acute inflammatory diseases such as sepsis and trauma. 
PMID:22783197
Dick, Thomas E; Molkov, Yaroslav I; Nieman, Gary; Hsieh, Yee-Hsee; Jacono, Frank J; Doyle, John; Scheff, Jeremy D; Calvano, Steve E; Androulakis, Ioannis P; An, Gary; Vodovotz, Yoram
2012-01-01
Large scale cardiac modeling on the Blue Gene supercomputer.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J
2008-01-01
Multi-scale, multi-physical heart models have not yet achieved a high degree of accuracy in model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data into segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set at 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 nodes to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
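The ORB decomposition step described above can be sketched as follows. The tissue/non-tissue weights (10 and 1) follow the abstract; the point data, part count, and the exact splitting policy are illustrative assumptions, not the Blue Gene implementation:

```python
import numpy as np

def orb(points, weights, n_parts):
    """Optimal recursive bisection (sketch): repeatedly split the heaviest
    part along its longest coordinate axis at the weighted median, so each
    resulting part carries roughly equal computational load."""
    parts = [(points, weights)]
    while len(parts) < n_parts:
        parts.sort(key=lambda pw: pw[1].sum(), reverse=True)
        pts, w = parts.pop(0)                      # split the heaviest part
        axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))  # longest extent
        order = np.argsort(pts[:, axis])
        csum = np.cumsum(w[order])
        cut = np.searchsorted(csum, csum[-1] / 2.0)  # weighted median index
        left, right = order[:cut + 1], order[cut + 1:]
        parts += [(pts[left], w[left]), (pts[right], w[right])]
    return parts

# toy anatomical grid: "tissue" voxels weight 10, background voxels weight 1
rng = np.random.default_rng(0)
pts = rng.random((2000, 3))
wts = np.where(pts[:, 0] < 0.3, 10.0, 1.0)
parts = orb(pts, wts, 8)
loads = [w.sum() for _, w in parts]
print(len(parts), max(loads) / min(loads))  # 8 parts, near-unit load imbalance
```

Cutting at the weighted median rather than the geometric midpoint is what keeps the per-node load balanced even when tissue and non-tissue elements are very unevenly distributed in space.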
POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation
NASA Astrophysics Data System (ADS)
Ştefănescu, R.; Sandu, A.; Navon, I. M.
2015-08-01
This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that reduced order Karush-Kuhn-Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies to construct reduced order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov-Galerkin projections. For the first time POD, tensorial POD, and discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two dimensional shallow water equations. Numerical experiments confirm the theoretical framework for Galerkin projection. In the case of Petrov-Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.
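The POD/Galerkin construction at the core of this approach can be illustrated with a minimal sketch. The linear full-order model, dimensions, and snapshot count below are stand-ins, not the shallow water system used in the paper:

```python
import numpy as np

def pod_basis(snapshots, k):
    """POD basis: leading k left singular vectors of the snapshot matrix."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k]

# toy linear dynamics x' = A x as a stand-in for the full-order model
n, k = 200, 5
rng = np.random.default_rng(1)
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
x = rng.standard_normal(n)
dt, snaps = 0.01, []
for _ in range(100):
    x = x + dt * (A @ x)            # explicit Euler, full-order trajectory
    snaps.append(x.copy())

Phi = pod_basis(np.array(snaps).T, k)   # n x k orthonormal POD basis
Ar = Phi.T @ A @ Phi                    # Galerkin-projected reduced operator
print(Ar.shape)                         # k x k: the reduced model to evolve
```

The reduced forward (and, in the adjoint setting, backward) model then evolves only k coordinates instead of n, which is where the order-of-magnitude savings in the assimilation loop comes from.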
A challenge in PBPK model development is estimating the parameters for absorption, distribution, metabolism, and excretion of the parent compound and metabolites of interest. One approach to reduce the number of parameters has been to simplify pharmacokinetic models by lumping p...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Z.; Bessa, M. A.; Liu, W.K.
A predictive computational theory is presented for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span across multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and a cutting-edge reduced order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, “Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials,” Comput. Methods Appl. Mech. Engrg. 306 (2016) 319–341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters. The interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated in the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model and an application to fatigue prediction for additively manufactured metals.
For the CFRP problem, a speed up estimated to be about 43,000 is achieved by using the SCA method, as opposed to FE2, enabling the solution of an otherwise computationally intractable problem. The second example uses a crystal plasticity constitutive law and computes the fatigue potency of extrinsic microscale features such as voids. This shows that local stress and strain are captured sufficiently well by SCA. This model has been incorporated in a process-structure-properties prediction framework for process design in additive manufacturing.
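The offline clustering that underlies SCA can be illustrated schematically. The synthetic strain-concentration data and the plain k-means routine below are illustrative stand-ins for the actual elastic pre-analysis and clustering used in SCA:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means, standing in for the SCA offline clustering step."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# offline stage sketch: agglomerate 10,000 material points into 16 clusters
# using their (here synthetic) elastic strain-concentration responses
rng = np.random.default_rng(2)
concentration = rng.standard_normal((10000, 6))   # one 6-vector per point
labels = kmeans(concentration, 16)
# the online stage then solves the Lippmann-Schwinger equation cluster-wise,
# i.e. with 16 unknown strain states instead of 10,000
print(len(np.unique(labels)))
```

The key point is that points with similar mechanical response share one unknown in the online solve, which is where the orders-of-magnitude cost reduction over FE2 originates.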
Model implementation for dynamic computation of system cost
NASA Astrophysics Data System (ADS)
Levri, J.; Vaccari, D.
The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink® for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink® model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed, based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into a non-derivative optimization search algorithm to predict parameter combinations that result in reduced objective function values.
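The ESM bookkeeping, and why dynamic simulation changes the answer, can be sketched in a few lines. The equivalency factors and trace values below are hypothetical placeholders, not ALS reference values:

```python
# Hypothetical equivalency factors (mass per unit of each non-mass cost);
# real ESM studies take these from mission infrastructure estimates.
EQ = {"volume": 66.7,    # kg per m^3
      "power": 237.0,    # kg per kW
      "cooling": 60.0,   # kg per kW
      "crewtime": 0.63}  # kg per crew-hour

def esm(mass_kg, volume_m3, power_kw, cooling_kw, crewtime_hr):
    """Equivalent system mass: fold non-mass costs into mass units."""
    return (mass_kg + EQ["volume"] * volume_m3 + EQ["power"] * power_kw
            + EQ["cooling"] * cooling_kw + EQ["crewtime"] * crewtime_hr)

# dynamic vs static sizing: power/cooling hardware must be sized for the
# PEAK demand seen in a simulation trace, not the nominal average
power_trace = [3.2, 4.1, 7.8, 4.0]        # kW over time, hypothetical
static = esm(1000, 10, sum(power_trace) / len(power_trace), 4.0, 500)
dynamic = esm(1000, 10, max(power_trace), 4.0, 500)
print(dynamic > static)  # peak-driven sizing raises the ESM estimate
```

This is precisely the effect the abstract describes: static, nominal sizing underestimates ESM whenever the simulated demand profiles have peaks above their nominal values.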
Computationally Efficient Multiconfigurational Reactive Molecular Dynamics
Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.
2012-01-01
It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms, sometimes also referred to as “multistate” algorithms, model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
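The core multiconfigurational idea, diagonalizing a small Hamiltonian over bonding topologies, can be sketched for two states. The energies and coupling below are arbitrary illustrative numbers, and this omits everything the paper actually addresses (forces, long-range electrostatics, parallelization):

```python
import numpy as np

def ground_state(E1, E2, h12):
    """Two bonding topologies (diabatic states) with energies E1, E2 and
    coupling h12: the multiconfigurational ground state is the lowest
    eigenpair of the 2x2 state Hamiltonian (an EVB-style sketch)."""
    H = np.array([[E1, h12], [h12, E2]])
    w, v = np.linalg.eigh(H)
    return w[0], v[:, 0] ** 2   # ground-state energy and populations c_i^2

E, pops = ground_state(-10.0, -9.0, 1.5)
print(E < -10.0, abs(pops.sum() - 1.0) < 1e-12)
```

The coupling lowers the ground-state energy below either diabatic state, which is how such models describe a smooth chemical transformation between topologies; the per-step cost of evaluating every topology's energy is the redundancy the paper's approximations target.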
Damodara, Vijaya; Chen, Daniel H; Lou, Helen H; Rasel, Kader M A; Richmond, Peyton; Wang, Anan; Li, Xianchang
2017-05-01
Emissions from flares constitute unburned hydrocarbons, carbon monoxide (CO), soot, and other partially burned and altered hydrocarbons along with carbon dioxide (CO2) and water. Soot or visible smoke is of particular concern for flare operators/regulatory agencies. The goal of the study is to develop a computational fluid dynamics (CFD) model capable of predicting flare combustion efficiency (CE) and soot emission. Since detailed combustion mechanisms are too complicated for CFD application, a 50-species reduced mechanism, LU 3.0.1, was developed. LU 3.0.1 is capable of handling C4 hydrocarbons and soot precursor species (C2H2, C2H4, C6H6). The new reduced mechanism LU 3.0.1 was first validated against experimental performance indicators: laminar flame speed, adiabatic flame temperature, and ignition delay. Further, CFD simulations using LU 3.0.1 were run to predict soot emission and CE of air-assisted flare tests conducted in 2010 in Tulsa, Oklahoma, using ANSYS Fluent software. Results of non-premixed probability density function (PDF) model and eddy dissipation concept (EDC) model are discussed. It is also noteworthy that when used in conjunction with the EDC turbulence-chemistry model, LU 3.0.1 can reasonably predict volatile organic compound (VOC) emissions as well. A reduced combustion mechanism containing 50 C1-C4 species and soot precursors has been developed and validated against experimental data. The combustion mechanism is then employed in the computational fluid dynamics (CFD) modeling of soot emission and combustion efficiency (CE) of controlled flares for which experimental soot and CE data are available. The validated CFD modeling tools are useful for oil, gas, and chemical industries to comply with U.S. Environmental Protection Agency's (EPA) mandate to achieve smokeless flaring with a high CE.
Applying Computer Models to Realize Closed-Loop Neonatal Oxygen Therapy.
Morozoff, Edmund; Smyth, John A; Saif, Mehrdad
2017-01-01
Within the context of automating neonatal oxygen therapy, this article describes the transformation of an idea verified by a computer model into a device actuated by a computer model. Computer modeling of an entire neonatal oxygen therapy system can facilitate the development of closed-loop control algorithms by providing a verification platform and speeding up algorithm development. In this article, we present a method of mathematically modeling the system's components: the oxygen transport within the patient, the oxygen blender, the controller, and the pulse oximeter. Furthermore, within the constraints of engineering a product, an idealized model of the neonatal oxygen transport component may be integrated effectively into the control algorithm of a device, referred to as the adaptive model. Manual and closed-loop oxygen therapy performance were defined in this article by 3 criteria in the following order of importance: percent duration of SpO2 spent in normoxemia (target SpO2 ± 2.5%), hypoxemia (less than normoxemia), and hyperoxemia (more than normoxemia); number of 60-second periods <85% SpO2 and >95% SpO2; and number of manual adjustments. Results from a clinical evaluation that compared the performance of 3 closed-loop control algorithms (state machine, proportional-integral-differential, and adaptive model) with manual oxygen therapy on 7 low-birth-weight ventilated preterm babies are presented. Compared with manual therapy, all closed-loop control algorithms significantly increased the patients' duration in normoxemia and reduced hyperoxemia (P < 0.05). The number of manual adjustments was also significantly reduced by all of the closed-loop control algorithms (P < 0.05). Although the performance of the 3 control algorithms was equivalent, it is suggested that the adaptive model, with its ease of use, may have the best utility.
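A proportional-integral-differential controller of the kind compared in the study can be sketched against a toy patient model. All gains, the target band, and the one-pole oxygen-transport response below are illustrative assumptions, not clinical values or the authors' adaptive model:

```python
# Minimal discrete PID loop adjusting FiO2 from an SpO2 error signal.
def simulate(kp=0.02, ki=0.004, kd=0.0, target=0.93, steps=600, dt=1.0):
    fio2, spo2, integ, prev_err = 0.21, 0.85, 0.0, 0.0
    for _ in range(steps):
        err = target - spo2
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        # blender output clamped to physically valid oxygen fractions
        fio2 = min(1.0, max(0.21, 0.21 + kp * err + ki * integ + kd * deriv))
        # toy patient: SpO2 relaxes (30 s time constant) toward a
        # saturating function of FiO2 -- NOT a physiological model
        spo2 += dt / 30.0 * (min(0.99, 0.85 + 1.6 * (fio2 - 0.21)) - spo2)
    return spo2

print(simulate())  # settles near the normoxemia target
```

Even this toy loop shows the basic trade the paper evaluates clinically: the integral term removes the steady offset that a fixed manual FiO2 setting would leave, at the cost of tuning effort that the adaptive-model controller is meant to avoid.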
Shang, Eric K; Nathan, Derek P; Sprinkle, Shanna R; Fairman, Ronald M; Bavaria, Joseph E; Gorman, Robert C; Gorman, Joseph H; Jackson, Benjamin M
2013-09-10
Wall stress calculated using finite element analysis has been used to predict rupture risk of aortic aneurysms. Prior models often assume uniform aortic wall thickness and fusiform geometry. We examined the effects of including local wall thickness, intraluminal thrombus, calcifications, and saccular geometry on peak wall stress (PWS) in finite element analysis of descending thoracic aortic aneurysms. Computed tomographic angiography of descending thoracic aortic aneurysms (n=10 total, 5 fusiform and 5 saccular) underwent 3-dimensional reconstruction with custom algorithms. For each aneurysm, an initial model was constructed with uniform wall thickness. Experimental models explored the addition of variable wall thickness, calcifications, and intraluminal thrombus. Each model was loaded with 120 mm Hg pressure, and von Mises PWS was computed. The mean PWS of uniform wall thickness models was 410 ± 111 kPa. The imposition of variable wall thickness increased PWS (481 ± 126 kPa, P<0.001). Although the addition of calcifications was not statistically significant (506 ± 126 kPa, P=0.07), the addition of intraluminal thrombus to variable wall thickness (359 ± 86 kPa, P ≤ 0.001) reduced PWS. A final model incorporating all features also reduced PWS (368 ± 88 kPa, P<0.001). Saccular geometry did not increase diameter-normalized stress in the final model (77 ± 7 versus 67 ± 12 kPa/cm, P=0.22). Incorporation of local wall thickness can significantly increase PWS in finite element analysis models of thoracic aortic aneurysms. Incorporating variable wall thickness, intraluminal thrombus, and calcifications significantly impacts computed PWS of thoracic aneurysms; sophisticated models may, therefore, be more accurate in assessing rupture risk. Saccular aneurysms did not demonstrate a significantly higher normalized PWS than fusiform aneurysms.
Computational Fluid Dynamics of Whole-Body Aircraft
NASA Astrophysics Data System (ADS)
Agarwal, Ramesh
1999-01-01
The current state of the art in computational aerodynamics for whole-body aircraft flowfield simulations is described. Recent advances in geometry modeling, surface and volume grid generation, and flow simulation algorithms have led to accurate flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics has emerged as a crucial enabling technology for the design and development of flight vehicles. Examples illustrating the current capability for the prediction of transport and fighter aircraft flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.
NASA Technical Reports Server (NTRS)
DeChant, Lawrence Justin
1998-01-01
In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development are reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary layer but provides important two-dimensional information not available using quasi-1-d approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models, which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast so that it may be used either as a subroutine or called by a design optimization routine. Models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M.
These experiments have been performed using a hydraulic/gas flow analog. Results of comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, are overall good. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.
Dual Quaternions as Constraints in 4D-DPM Models for Pose Estimation.
Martinez-Berti, Enrique; Sánchez-Salmerón, Antonio-José; Ricolfe-Viala, Carlos
2017-08-19
The goal of this research work is to improve the accuracy of human pose estimation using the Deformation Part Model (DPM) without increasing computational complexity. First, the proposed method seeks to improve pose estimation accuracy by adding the depth channel to DPM, which was formerly defined based only on red-green-blue (RGB) channels, in order to obtain a four-dimensional DPM (4D-DPM). In addition, computational complexity is controlled by reducing the number of joints modeled, yielding a reduced 4D-DPM. Finally, complete solutions are obtained by solving for the omitted joints using inverse kinematics models. In this context, the main goal of this paper is to analyze the effect on pose estimation timing cost when using dual quaternions to solve the inverse kinematics.
A comparative study of the tail ion distribution with reduced Fokker-Planck models
NASA Astrophysics Data System (ADS)
McDevitt, C. J.; Tang, Xian-Zhu; Guo, Zehua; Berk, H. L.
2014-03-01
A series of reduced models are used to study the fast ion tail in the vicinity of a transition layer between plasmas at disparate temperatures and densities, which is typical of the gas and pusher interface in inertial confinement fusion targets. Emphasis is placed on utilizing progressively more comprehensive models in order to identify the essential physics for computing the fast ion tail at energies comparable to the Gamow peak. The resulting fast ion tail distribution is subsequently used to compute the fusion reactivity as a function of collisionality and temperature. While a significant reduction of the fusion reactivity in the hot spot compared to the nominal Maxwellian case is present, this reduction is found to be partially recovered by an increase of the fusion reactivity in the neighboring cold region.
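The qualitative effect of tail depletion on reactivity can be reproduced with a crude one-dimensional integral. The cross-section form, Gamow-like constant, and depletion model below are simplified stand-ins for the kinetic models in the paper, not the actual Fokker-Planck hierarchy:

```python
import math

def reactivity(T_keV, depletion=1.0, E_dep=None, n=4000, E_max=200.0):
    """Crude reactivity integral with a Gamow-type cross section
    sigma(E) ~ exp(-b/sqrt(E)); b is a DD-like illustrative constant.
    `depletion` scales the ion distribution above E_dep to mimic the
    loss of fast tail ions near a cold interface."""
    b = 31.4                     # keV^0.5, illustrative Gamow constant
    dE = E_max / n
    total = 0.0
    for i in range(1, n + 1):
        E = i * dE
        f = math.sqrt(E) * math.exp(-E / T_keV)   # Maxwellian energy dist.
        if E_dep is not None and E > E_dep:
            f *= depletion                        # depleted fast-ion tail
        total += f * math.exp(-b / math.sqrt(E)) * dE
    return total

full = reactivity(5.0)
depl = reactivity(5.0, depletion=0.5, E_dep=20.0)
print(depl < full)  # tail depletion lowers the integrated reactivity
```

Because the Gamow factor weights the integrand toward energies far above the thermal bulk, even a modest depletion of the tail near the Gamow peak produces a disproportionate drop in reactivity, which is the effect the reduced models in the abstract quantify.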
Active subspace uncertainty quantification for a polydomain ferroelectric phase-field model
NASA Astrophysics Data System (ADS)
Leon, Lider S.; Smith, Ralph C.; Miles, Paul; Oates, William S.
2018-03-01
Quantum-informed ferroelectric phase field models capable of predicting material behavior are necessary for facilitating the development and production of many adaptive structures and intelligent systems. Uncertainty is present in these models, given the quantum scale at which calculations take place. A necessary analysis is to determine how the uncertainty in the response can be attributed to the uncertainty in the model inputs or parameters. A second analysis is to identify active subspaces within the original parameter space, which quantify directions in which the model response varies most dominantly, thus reducing sampling effort and computational cost. In this investigation, we identify an active subspace for a poly-domain ferroelectric phase-field model. Using the active variables as our independent variables, we then construct a surrogate model and perform Bayesian inference. Once we quantify the uncertainties in the active variables, we obtain uncertainties for the original parameters via an inverse mapping. The analysis provides insight into how active subspace methodologies can be used to reduce computational power needed to perform Bayesian inference on model parameters informed by experimental or simulated data.
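The active subspace identification step can be sketched as an eigendecomposition of the gradient outer-product matrix. The ridge-function test model below is illustrative, not the ferroelectric phase-field model:

```python
import numpy as np

def active_subspace(grads, k):
    """Eigendecompose C = E[grad f grad f^T], estimated from gradient
    samples; the dominant eigenvectors span the active directions."""
    C = grads.T @ grads / len(grads)
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1]
    return w[order], V[:, order[:k]]

# toy model f(x) = (w . x)^2: a one-dimensional ridge in 10 parameters,
# so the response varies only along the single direction w_true
rng = np.random.default_rng(3)
w_true = rng.standard_normal(10)
X = rng.standard_normal((500, 10))
grads = (2 * (X @ w_true))[:, None] * w_true[None, :]   # grad of (w.x)^2
eigvals, W1 = active_subspace(grads, 1)
align = abs(W1[:, 0] @ w_true) / np.linalg.norm(w_true)
print(align)  # recovered direction aligns with the true ridge
```

Once the active direction(s) are found, surrogate construction and Bayesian sampling operate in k dimensions instead of the full parameter space, which is the source of the cost reduction the abstract describes.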
Simplified Modeling of Oxidation of Hydrocarbons
NASA Technical Reports Server (NTRS)
Bellan, Josette; Harstad, Kenneth
2008-01-01
A method of simplified computational modeling of oxidation of hydrocarbons is undergoing development. This is one of several developments needed to enable accurate computational simulation of turbulent, chemically reacting flows. At present, accurate computational simulation of such flows is difficult or impossible in most cases because (1) the numbers of grid points needed for adequate spatial resolution of turbulent flows in realistically complex geometries are beyond the capabilities of typical supercomputers now in use and (2) the combustion of typical hydrocarbons proceeds through decomposition into hundreds of molecular species interacting through thousands of reactions. Hence, the combination of detailed reaction-rate models with the fundamental flow equations yields flow models that are computationally prohibitive, and a reduction of at least an order of magnitude in the dimension of the reaction kinetics is a prerequisite for feasible computational simulation of turbulent, chemically reacting flows. In the present method of simplified modeling, all molecular species involved in the oxidation of hydrocarbons are classified as either light or heavy; heavy molecules are those having 3 or more carbon atoms. The light molecules are not subject to meaningful decomposition, and the heavy molecules are considered to decompose into only 13 specified constituent radicals, a few of which are listed in the table. One constructs a reduced-order model, suitable for use in estimating the release of heat and the evolution of temperature in combustion, from a base comprising the 13 constituent radicals plus a total of 26 other species that include the light molecules and related light free radicals. Then rather than following all possible species through their reaction coordinates, one follows only the reduced set of reaction coordinates of the base.
The behavior of the base was examined in test computational simulations of the combustion of heptane in a stirred reactor at various initial pressures ranging from 0.1 to 6 MPa. Most of the simulations were performed for stoichiometric mixtures; some were performed for fuel/oxygen mole ratios of 1/2 and 2.
A reduced-order nonlinear sliding mode observer for vehicle slip angle and tyre forces
NASA Astrophysics Data System (ADS)
Chen, Yuhang; Ji, Yunfeng; Guo, Konghui
2014-12-01
In this paper, a reduced-order sliding mode observer (RO-SMO) is developed for vehicle state estimation. Several improvements are achieved in this paper. First, the reference model accuracy is improved by considering vehicle load transfers and using a precise nonlinear tyre model 'UniTire'. Second, without the reference model accuracy degraded, the computing burden of the state observer is decreased by a reduced-order approach. Third, nonlinear system damping is integrated into the SMO to speed convergence and reduce chattering. The proposed RO-SMO is evaluated through simulation and experiments based on an in-wheel motor electric vehicle. The results show that the proposed observer accurately predicts the vehicle states.
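The sliding-mode observer idea, with a smooth switching function standing in for the paper's nonlinear damping, can be sketched on a toy second-order system. This is not the UniTire-based vehicle model; states, gains, and dynamics are illustrative:

```python
import math

# Reduced-order SMO sketch: measure x1, estimate the unmeasured x2 for
#   x1' = x2,   x2' = -a*x2 + u
# The correction uses tanh(e/phi), a smoothed sign(), which limits the
# chattering that a hard switching term would produce.
def run(a=1.0, L1=5.0, L2=20.0, phi=0.01, dt=1e-3, T=5.0):
    x1, x2 = 0.0, 1.0        # true states
    x1h, x2h = 0.0, 0.0      # observer states
    t = 0.0
    while t < T:
        u = math.sin(t)                   # known input
        e = x1 - x1h                      # measurable output error
        s = math.tanh(e / phi)            # smoothed switching function
        x1, x2 = x1 + dt * x2, x2 + dt * (-a * x2 + u)
        x1h += dt * (x2h + L1 * s)
        x2h += dt * (-a * x2h + u + L2 * s)
        t += dt
    return abs(x2 - x2h)

print(run())  # estimation error of the unmeasured state after 5 s
```

Once the output error reaches the sliding surface, the switching term injects an equivalent signal proportional to the hidden-state error, driving the x2 estimate to the true value; the same mechanism, at higher order, underlies slip-angle estimation from measurable vehicle outputs.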
Improving finite element results in modeling heart valve mechanics.
Earl, Emily; Mohammadi, Hadi
2018-06-01
Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional inverse modeling methods can be computationally expensive. We have developed a new, computationally-efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programing language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment.
Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
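The damping-parameter recycling described in this abstract can be sketched in a few lines. The following is an illustrative NumPy sketch, not the authors' Julia/MADS implementation (all function names here are hypothetical): a Lanczos basis for the Krylov subspace of J^T J is built once, and each damping parameter then requires only a small k-by-k solve.

```python
import numpy as np

def lanczos(A, b, k):
    """Build an orthonormal Krylov basis Q and tridiagonal T for span{b, Ab, ..., A^(k-1) b}."""
    n = b.size
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

def lm_steps_recycled(J, r, lambdas, k):
    """Approximate the LM step (J^T J + lam*I) delta = J^T r for every damping
    parameter lam, reusing one Krylov basis built for the first solve."""
    b = J.T @ r
    Q, T = lanczos(J.T @ J, b, k)
    e1 = np.zeros(k)
    e1[0] = np.linalg.norm(b)
    # each additional lam costs only a small k-by-k (tridiagonal) solve
    return [Q @ np.linalg.solve(T + lam * np.eye(k), e1) for lam in lambdas]
```

When k equals the full problem dimension the projected solve is exact; the savings come from choosing k much smaller than the number of parameters.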
Modeling of power transmission and stress grading for corona protection
NASA Astrophysics Data System (ADS)
Zohdi, T. I.; Abali, B. E.
2017-11-01
Electrical high-voltage (HV) machines are prone to corona discharges, which lead to power losses as well as damage to the insulating layer. Many different techniques are applied for corona protection, and computational methods aid in selecting the best design. In this paper we develop a reduced-order model in 1D that estimates the electric field and temperature distribution of a conductor wrapped with different layers, as is usual for HV machines. Since many assumptions and simplifications are undertaken for this 1D model, we compare its results quantitatively to a direct numerical simulation in 3D. Both models are transient and nonlinear, offering a choice between a quick 1D estimate and a full, more computationally expensive 3D computation. Such tools enable understanding, evaluation, and optimization of corona shielding systems for multilayered coils.
NASA Astrophysics Data System (ADS)
Rochoux, M. C.; Ricci, S.; Lucor, D.; Cuenot, B.; Trouvé, A.
2014-05-01
This paper is the first part in a series of two articles and presents a data-driven wildfire simulator for forecasting wildfire spread scenarios, at a reduced computational cost that is consistent with operational systems. The prototype simulator features the following components: a level-set-based fire propagation solver FIREFLY that adopts a regional-scale modeling viewpoint, treats wildfires as surface propagating fronts, and uses a description of the local rate of fire spread (ROS) as a function of environmental conditions based on Rothermel's model; a series of airborne-like observations of the fire front positions; and a data assimilation algorithm based on an ensemble Kalman filter (EnKF) for parameter estimation. This stochastic algorithm partly accounts for the non-linearities between the input parameters of the semi-empirical ROS model and the fire front position, and is sequentially applied to provide a spatially-uniform correction to wind and biomass fuel parameters as observations become available. A wildfire spread simulator combined with an ensemble-based data assimilation algorithm is therefore a promising approach to reduce uncertainties in the forecast position of the fire front and to introduce a paradigm-shift in the wildfire emergency response. In order to reduce the computational cost of the EnKF algorithm, a surrogate model based on a polynomial chaos (PC) expansion is used in place of the forward model FIREFLY in the resulting hybrid PC-EnKF algorithm. The performance of EnKF and PC-EnKF is assessed on synthetically-generated simple configurations of fire spread to provide valuable information and insight on the benefits of the PC-EnKF approach as well as on a controlled grassland fire experiment. The results indicate that the proposed PC-EnKF algorithm features similar performance to the standard EnKF algorithm, but at a much reduced computational cost. 
In particular, the re-analysis and forecast skills of data assimilation strongly relate to the spatial and temporal variability of the errors in the ROS model parameters.
NASA Astrophysics Data System (ADS)
Rochoux, M. C.; Ricci, S.; Lucor, D.; Cuenot, B.; Trouvé, A.
2014-11-01
This paper is the first part in a series of two articles and presents a data-driven wildfire simulator for forecasting wildfire spread scenarios, at a reduced computational cost that is consistent with operational systems. The prototype simulator features the following components: an Eulerian front propagation solver FIREFLY that adopts a regional-scale modeling viewpoint, treats wildfires as surface propagating fronts, and uses a description of the local rate of fire spread (ROS) as a function of environmental conditions based on Rothermel's model; a series of airborne-like observations of the fire front positions; and a data assimilation (DA) algorithm based on an ensemble Kalman filter (EnKF) for parameter estimation. This stochastic algorithm partly accounts for the nonlinearities between the input parameters of the semi-empirical ROS model and the fire front position, and is sequentially applied to provide a spatially uniform correction to wind and biomass fuel parameters as observations become available. A wildfire spread simulator combined with an ensemble-based DA algorithm is therefore a promising approach to reduce uncertainties in the forecast position of the fire front and to introduce a paradigm-shift in the wildfire emergency response. In order to reduce the computational cost of the EnKF algorithm, a surrogate model based on a polynomial chaos (PC) expansion is used in place of the forward model FIREFLY in the resulting hybrid PC-EnKF algorithm. The performance of EnKF and PC-EnKF is assessed on synthetically generated simple configurations of fire spread to provide valuable information and insight on the benefits of the PC-EnKF approach, as well as on a controlled grassland fire experiment. The results indicate that the proposed PC-EnKF algorithm features similar performance to the standard EnKF algorithm, but at a much reduced computational cost. 
In particular, the re-analysis and forecast skills of DA strongly relate to the spatial and temporal variability of the errors in the ROS model parameters.
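The EnKF parameter-estimation step used in this family of algorithms can be sketched generically. This is an illustrative NumPy sketch with a hypothetical forward model `h`, not the FIREFLY solver: each ensemble member's parameters are corrected through the ensemble cross-covariance between parameters and predicted observations, using the perturbed-observation variant of the analysis step.

```python
import numpy as np

def enkf_param_update(theta_ens, h, y_obs, obs_std, rng):
    """One EnKF analysis step for parameter estimation.
    theta_ens: (Ne, p) parameter ensemble; h maps parameters to predicted observations."""
    Ne = theta_ens.shape[0]
    Y = np.array([h(t) for t in theta_ens])            # (Ne, m) predicted observations
    Th = theta_ens - theta_ens.mean(axis=0)            # parameter anomalies
    Yp = Y - Y.mean(axis=0)                            # predicted-observation anomalies
    C_ty = Th.T @ Yp / (Ne - 1)                        # parameter-observation covariance
    C_yy = Yp.T @ Yp / (Ne - 1) + obs_std**2 * np.eye(Y.shape[1])
    K = C_ty @ np.linalg.inv(C_yy)                     # Kalman gain
    # perturbed observations keep the analysis ensemble spread statistically consistent
    obs_pert = y_obs + obs_std * rng.standard_normal((Ne, y_obs.size))
    return theta_ens + (obs_pert - Y) @ K.T
```

In the hybrid PC-EnKF variant described above, the expensive calls to `h` would be replaced by evaluations of the polynomial chaos surrogate.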
A Test of the Validity of Inviscid Wall-Modeled LES
NASA Astrophysics Data System (ADS)
Redman, Andrew; Craft, Kyle; Aikens, Kurt
2015-11-01
Computational expense is one of the main deterrents to more widespread use of large eddy simulations (LES). As such, it is important to reduce computational costs whenever possible. In this vein, it may be reasonable to treat high Reynolds number flows with turbulent boundary layers as inviscid when using a wall model. This assumption relies on the grid being too coarse to resolve either the viscous length scales in the outer flow or those near walls. We are not aware of other studies that have suggested or examined the validity of this approach. The inviscid wall-modeled LES assumption is tested here for supersonic flow over a flat plate on three different grids. Inviscid and viscous results are compared to those of another wall-modeled LES as well as experimental data; the results appear promising. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively, with the current LES application. Recommendations are presented, as are future areas of research. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.
Fast computation of the electrolyte-concentration transfer function of a lithium-ion cell model
NASA Astrophysics Data System (ADS)
Rodríguez, Albert; Plett, Gregory L.; Trimboli, M. Scott
2017-08-01
One approach to creating physics-based reduced-order models (ROMs) of battery-cell dynamics requires first generating linearized Laplace-domain transfer functions of all cell internal electrochemical variables of interest. Then, the resulting infinite-dimensional transfer functions can be reduced by various means in order to find an approximate low-dimensional model. These methods include Padé approximation or the Discrete-Time Realization algorithm. In a previous article, Lee and colleagues developed a transfer function of the electrolyte concentration for a porous-electrode pseudo-two-dimensional lithium-ion cell model. Their approach used separation of variables and Sturm-Liouville theory to compute an infinite-series solution to the transfer function, which they then truncated to a finite number of terms for reasons of practicality. Here, we instead use a variation-of-parameters approach to arrive at a different representation of the identical solution that does not require a series expansion. The primary benefits of the new approach are speed of computation of the transfer function and the removal of the requirement to approximate the transfer function by truncating the number of terms evaluated. Results show that the speedup of the new method can be more than 3800.
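Of the reduction routes mentioned in this abstract, the Padé approach is easy to illustrate with SciPy's `pade` helper. This is a generic sketch on the exponential series, not the battery-cell transfer functions themselves: a truncated Taylor series is converted into a rational [2/2] approximant.

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to x^4: 1, 1, 1/2!, 1/3!, 1/4!
coeffs = [1.0, 1.0, 1.0 / 2.0, 1.0 / 6.0, 1.0 / 24.0]

# [2/2] Pade approximant: numerator p and denominator q as np.poly1d objects
p, q = pade(coeffs, 2)

approx = p(0.5) / q(0.5)   # rational approximation of exp(0.5)
```

The same idea applied to an infinite-dimensional transfer function yields a low-order rational model suitable for state-space realization.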
Estimating Skin Cancer Risk: Evaluating Mobile Computer-Adaptive Testing.
Djaja, Ngadiman; Janda, Monika; Olsen, Catherine M; Whiteman, David C; Chien, Tsair-Wei
2016-01-22
Response burden is a major detriment to questionnaire completion rates. Computer adaptive testing may offer advantages over non-adaptive testing, including reduction of numbers of items required for precise measurement. Our aim was to compare the efficiency of non-adaptive (NAT) and computer adaptive testing (CAT) facilitated by Partial Credit Model (PCM)-derived calibration to estimate skin cancer risk. We used a random sample from a population-based Australian cohort study of skin cancer risk (N=43,794). All 30 items of the skin cancer risk scale were calibrated with the Rasch PCM. A total of 1000 cases generated following a normal distribution (mean [SD] 0 [1]) were simulated using three Rasch models with three fixed-item (dichotomous, rating scale, and partial credit) scenarios, respectively. We calculated the comparative efficiency and precision of CAT and NAT (shortening of questionnaire length and the count difference number ratio less than 5% using independent t tests). We found that use of CAT led to smaller person standard error of the estimated measure than NAT, with substantially higher efficiency but no loss of precision, reducing response burden by 48%, 66%, and 66% for dichotomous, Rating Scale Model, and PCM models, respectively. CAT-based administrations of the skin cancer risk scale could substantially reduce participant burden without compromising measurement precision. A mobile computer adaptive test was developed to help people efficiently assess their skin cancer risk.
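The item-selection logic behind such a CAT can be sketched for the simplest (dichotomous Rasch) case. This is an illustrative simplification, not the calibrated 30-item instrument: the next item administered is the unasked item with maximum Fisher information p(1-p) at the current ability estimate.

```python
import numpy as np

def rasch_prob(theta, b):
    """Probability of a positive response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def next_item(theta, difficulties, asked):
    """Pick the unasked item with maximum Fisher information I(theta) = p * (1 - p)."""
    p = rasch_prob(theta, np.asarray(difficulties, dtype=float))
    info = p * (1.0 - p)
    info[list(asked)] = -np.inf          # never re-administer an item
    return int(np.argmax(info))
```

Under the Rasch model this rule always selects the remaining item whose difficulty is closest to the current ability estimate, which is why CAT can match NAT precision with far fewer items.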
Robust simulation of buckled structures using reduced order modeling
NASA Astrophysics Data System (ADS)
Wiebe, R.; Perez, R. A.; Spottswood, S. M.
2016-09-01
Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever-present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use, but require no time stepping of, a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties, would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.
Modal Substructuring of Geometrically Nonlinear Finite Element Models with Interface Reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuether, Robert J.; Allen, Matthew S.; Hollkamp, Joseph J.
Substructuring methods have been widely used in structural dynamics to divide large, complicated finite element models into smaller substructures. For linear systems, many methods have been developed to reduce the subcomponents down to a low order set of equations using a special set of component modes, and these are then assembled to approximate the dynamics of a large scale model. In this paper, a substructuring approach is developed for coupling geometrically nonlinear structures, where each subcomponent is drastically reduced to a low order set of nonlinear equations using a truncated set of fixed-interface and characteristic constraint modes. The method used to extract the coefficients of the nonlinear reduced order model (NLROM) is non-intrusive in that it does not require any modification to the commercial FEA code, but computes the NLROM from the results of several nonlinear static analyses. The NLROMs are then assembled to approximate the nonlinear differential equations of the global assembly. The method is demonstrated on the coupling of two geometrically nonlinear plates with simple supports at all edges. The plates are joined at a continuous interface through the rotational degrees-of-freedom (DOF), and the nonlinear normal modes (NNMs) of the assembled equations are computed to validate the models. The proposed substructuring approach reduces a 12,861 DOF nonlinear finite element model down to only 23 DOF, while still accurately reproducing the first three NNMs of the full order model.
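For the linear part, the fixed-interface-plus-constraint-mode reduction described above is the classic Craig-Bampton transformation. A small dense-matrix sketch follows (illustrative only; it omits the nonlinear stiffness coefficients that the NLROM procedure adds, and the function name is hypothetical):

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, interface_dofs, n_modes):
    """Reduce (K, M) onto fixed-interface normal modes plus static constraint modes."""
    n = K.shape[0]
    b = np.array(sorted(interface_dofs))                                  # interface DOFs
    i = np.array([d for d in range(n) if d not in set(interface_dofs)])   # interior DOFs
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    # fixed-interface normal modes: eigenproblem on the interior partition
    _, Phi = eigh(Kii, M[np.ix_(i, i)])
    Phi = Phi[:, :n_modes]
    # static constraint modes: interior response to unit interface displacements
    Psi = -np.linalg.solve(Kii, Kib)
    T = np.zeros((n, n_modes + b.size))
    T[np.ix_(i, np.arange(n_modes))] = Phi
    T[np.ix_(i, np.arange(n_modes, n_modes + b.size))] = Psi
    T[np.ix_(b, np.arange(n_modes, n_modes + b.size))] = np.eye(b.size)
    return T.T @ K @ T, T.T @ M @ T, T
```

Truncating `n_modes` well below the number of interior DOFs is what produces the drastic reductions (e.g., 12,861 DOF down to 23 DOF) reported above.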
Recent Enhancements to the Development of CFD-Based Aeroelastic Reduced-Order Models
NASA Technical Reports Server (NTRS)
Silva, Walter A.
2007-01-01
Recent enhancements to the development of CFD-based unsteady aerodynamic and aeroelastic reduced-order models (ROMs) are presented. These enhancements include the simultaneous application of structural modes as CFD input, static aeroelastic analysis using a ROM, and matched-point solutions using a ROM. The simultaneous application of structural modes as CFD input enables the computation of the unsteady aerodynamic state-space matrices with a single CFD execution, independent of the number of structural modes. The responses obtained from a simultaneous excitation of the CFD-based unsteady aerodynamic system are processed using system identification techniques in order to generate an unsteady aerodynamic state-space ROM. Once the unsteady aerodynamic state-space ROM is generated, a method for computing the static aeroelastic response using this unsteady aerodynamic ROM and a state-space model of the structure is presented. Finally, a method is presented that enables the computation of matched-point solutions using a single ROM that is applicable over a range of dynamic pressures and velocities for a given Mach number. These enhancements represent a significant advancement of unsteady aerodynamic and aeroelastic ROM technology.
Computer Analysis of Spectrum Anomaly in 32-GHz Traveling-Wave Tube for Cassini Mission
NASA Technical Reports Server (NTRS)
Dayton, James A., Jr.; Wilson, Jeffrey D.; Kory, Carol L.
1999-01-01
Computer modeling of the 32-GHz traveling-wave tube (TWT) for the Cassini Mission was conducted to explain the anomaly observed in the spectrum analysis of one of the flight-model tubes. The analysis indicated that the effect, manifested as a weak signal in the neighborhood of 35 GHz, was an intermodulation product of the 32-GHz drive signal with a 66.9-GHz oscillation induced by coupling to the second harmonic signal. The oscillation occurred only at low radio-frequency (RF) drive power levels that are not expected during the Cassini Mission. The conclusion was that the anomaly was caused by a generic defect inadvertently incorporated in the geometric design of the slow-wave circuit and that it would not change as the TWT aged. The most probable effect of aging on tube performance would be a reduction in the electron beam current. The computer modeling indicated that although not likely to occur within the mission lifetime, a reduction in beam current would reduce or eliminate the anomaly but would do so at the cost of reduced RF output power.
Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.
2002-01-01
Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy, and demonstrate its efficiency with numerical experiments.
A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.
Yu, Jun; Wang, Zeng-Fu
2015-05-01
A realistic facial animation system based on a 3-D virtual head and driven by multiple inputs for human-machine interface is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. The combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of the 3-D facial animation. An online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. A tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. The objective and subjective experiments show that the system is suitable for human-machine interaction.
2013-01-01
In this paper, we develop and validate a method to identify computationally efficient site- and patient-specific models of ultrasound thermal therapies from MR thermal images. The models of the specific absorption rate of the transduced energy and the temperature response of the therapy target are identified in the reduced basis of proper orthogonal decomposition of thermal images, acquired in response to a mild thermal test excitation. The method permits dynamic reidentification of the treatment models during the therapy by recursively utilizing newly acquired images. Such adaptation is particularly important during high-temperature therapies, which are known to substantially and rapidly change tissue properties and blood perfusion. The developed theory was validated for the case of focused ultrasound heating of a tissue phantom. The experimental and computational results indicate that the developed approach produces accurate low-dimensional treatment models despite temporal and spatial noises in MR images and slow image acquisition rate. PMID:22531754
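The reduced-basis construction at the heart of this identification approach, POD of the image sequence, can be sketched generically. This is an illustrative NumPy sketch, not the authors' code: snapshots are stacked as columns, and the basis consists of the leading left singular vectors capturing a chosen energy fraction.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """POD via SVD of the mean-subtracted snapshot matrix (rows = pixels, cols = snapshots).
    Returns the leading modes capturing the requested energy fraction."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1   # smallest rank reaching the energy target
    return U[:, :r], s[:r]
```

Projecting newly acquired thermal images onto this basis is what allows the low-dimensional treatment model to be re-identified recursively during the therapy.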
A Computer Model of the Cardiovascular System for Effective Learning.
ERIC Educational Resources Information Center
Rothe, Carl F.
1980-01-01
Presents a model of the cardiovascular system which solves a set of interacting, possibly nonlinear, differential equations. Figures present a schematic diagram of the model and printouts that simulate normal conditions, exercise, hemorrhage, and reduced contractility. The nine interacting equations used to describe the system are described in the…
COMPUTER SIMULATOR (BEST) FOR DESIGNING SULFATE-REDUCING BACTERIA FIELD BIOREACTORS
BEST (bioreactor economics, size and time of operation) is a spreadsheet-based model that is used in conjunction with public domain software, PhreeqcI. BEST is used in the design process of sulfate-reducing bacteria (SRB) field bioreactors to passively treat acid mine drainage (A...
NASA Astrophysics Data System (ADS)
Tripathi, Vijay S.; Yeh, G. T.
1993-06-01
Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.
Computerized power supply analysis: State equation generation and terminal models
NASA Technical Reports Server (NTRS)
Garrett, S. J.
1978-01-01
To aid engineers who design power supply systems, two analysis tools that can be used with the state equation analysis package were developed. These tools include integration routines that start with the description of a power supply in state-equation form and yield analytical results. The first tool is a computer program that works with the SUPER SCEPTRE circuit analysis program and prints the state equations for an electrical network. The state equations developed automatically by this program are used in an algorithm for reducing the number of state variables required to describe an electrical network. In this way a second tool is obtained in which the order of the network is reduced and a simpler terminal model results.
A fast collocation method for a variable-coefficient nonlocal diffusion model
NASA Astrophysics Data System (ADS)
Wang, Che; Wang, Hong
2017-02-01
We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N^3) required by a commonly used direct solver to O(N log N) per iteration and the memory requirement from O(N^2) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N^2) to O(N). Numerical results are presented to show the utility of the fast method.
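O(N log N) matrix-vector products of the kind exploited above typically come from embedding a Toeplitz block in a circulant matrix and applying the FFT. A generic sketch of that kernel follows (illustrative only; the paper's stiffness matrix additionally involves variable coefficients, which this sketch ignores):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix (first column c, first row r, with r[0] == c[0])
    by x in O(N log N), via a circulant embedding of size 2N and the FFT."""
    n = len(x)
    # first column of the circulant matrix that contains the Toeplitz block
    col = np.concatenate([c, [0.0], r[1:][::-1]])
    fx = np.fft.fft(np.concatenate([x, np.zeros(n)]))
    y = np.fft.ifft(np.fft.fft(col) * fx)
    return y[:n].real
```

Because only the first row and column are stored, the memory cost is O(N) rather than the O(N^2) of the dense matrix.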
Confirmation of a realistic reactor model for BNCT dosimetry at the TRIGA Mainz
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziegner, Markus, E-mail: Markus.Ziegner.fl@ait.ac.at; Schmitz, Tobias; Hampel, Gabriele
2014-11-01
Purpose: In order to build up a reliable dose monitoring system for boron neutron capture therapy (BNCT) applications at the TRIGA reactor in Mainz, a computer model for the entire reactor was established, simulating the radiation field by means of the Monte Carlo method. The impact of different source definition techniques was compared and the model was validated by experimental fluence and dose determinations. Methods: The depletion calculation code ORIGEN2 was used to compute the burn-up and relevant material composition of each burned fuel element from the day of first reactor operation to its current core. The material composition of the current core was used in a MCNP5 model of the initial core developed earlier. To perform calculations for the region outside the reactor core, the model was expanded to include the thermal column and compared with the previously established ATTILA model. Subsequently, the computational model is simplified in order to reduce the calculation time. Both simulation models are validated by experiments with different setups using alanine dosimetry and gold activation measurements with two different types of phantoms. Results: The MCNP5 simulated neutron spectrum and source strength are found to be in good agreement with the previous ATTILA model whereas the photon production is much lower. Both MCNP5 simulation models predict all experimental dose values with an accuracy of about 5%. The simulations reveal that a Teflon environment favorably reduces the gamma dose component as compared to a polymethyl methacrylate phantom. Conclusions: A computer model for BNCT dosimetry was established, allowing the prediction of dosimetric quantities without further calibration and within a reasonable computation time for clinical applications. The good agreement between the MCNP5 simulations and experiments demonstrates that the ATTILA model overestimates the gamma dose contribution.
The detailed model can be used for the planning of structural modifications in the thermal column irradiation channel or the use of different irradiation sites than the thermal column, e.g., the beam tubes.
Wellenberg, Ruud H H; Boomsma, Martijn F; van Osch, Jochen A C; Vlassenbroek, Alain; Milles, Julien; Edens, Mireille A; Streekstra, Geert J; Slump, Cornelis H; Maas, Mario
To quantify the combined use of iterative model-based reconstruction (IMR) and orthopaedic metal artefact reduction (O-MAR) in reducing metal artefacts and improving image quality in a total hip arthroplasty phantom. Scans acquired at several dose levels and kVps were reconstructed with filtered back-projection (FBP), iterative reconstruction (iDose) and IMR, with and without O-MAR. Computed tomography (CT) numbers, noise levels, signal-to-noise-ratios and contrast-to-noise-ratios were analysed. Iterative model-based reconstruction results in overall improved image quality compared to iDose and FBP (P < 0.001). Orthopaedic metal artefact reduction is most effective in reducing severe metal artefacts improving CT number accuracy by 50%, 60%, and 63% (P < 0.05) and reducing noise by 1%, 62%, and 85% (P < 0.001) whereas improving signal-to-noise-ratios by 27%, 47%, and 46% (P < 0.001) and contrast-to-noise-ratios by 16%, 25%, and 19% (P < 0.001) with FBP, iDose, and IMR, respectively. The combined use of IMR and O-MAR strongly improves overall image quality and strongly reduces metal artefacts in the CT imaging of a total hip arthroplasty phantom.
Load Balancing Strategies for Multiphase Flows on Structured Grids
NASA Astrophysics Data System (ADS)
Olshefski, Kristopher; Owkes, Mark
2017-11-01
The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are only designed for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated including brute force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared memory load balancing using OpenMP. Each of these strategies are tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.
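The imbalance described above, where interface-containing blocks cost more than bulk blocks, can be countered with a simple greedy heuristic: sort blocks by estimated cost and repeatedly give the heaviest remaining block to the least-loaded processor. A small pure-Python sketch (standing in for the MPI/OpenMP strategies discussed; the function name is illustrative):

```python
def balance(costs, n_procs):
    """Longest-processing-time greedy: assign each block (heaviest first)
    to the currently least-loaded processor."""
    loads = [0.0] * n_procs
    assign = [[] for _ in range(n_procs)]
    for i in sorted(range(len(costs)), key=lambda i: -costs[i]):
        p = min(range(n_procs), key=lambda q: loads[q])   # least-loaded processor
        loads[p] += costs[i]
        assign[p].append(i)
    return assign, loads
```

The heuristic is not optimal, but it is guaranteed to come within a modest factor of the best possible makespan, which is usually enough to recover most of the idle time lost to interface-heavy blocks.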
The potential benefits of photonics in the computing platform
NASA Astrophysics Data System (ADS)
Bautista, Jerry
2005-03-01
The increase in computational requirements for real-time image processing, complex computational fluid dynamics, very large scale data mining in the health industry and on the Internet, and predictive models for financial markets is driving computer architects to consider new paradigms that rely upon very high speed interconnects within and between computing elements. Further challenges arise from demands for reduced power, lower transmission latency, and greater interconnect density. Optical interconnects may solve many of these problems, with the added benefit of extended reach. In addition, photonic interconnects provide relative EMI immunity, which is becoming an increasing issue with the greater dependence on wireless connectivity. However, to be truly functional, the optical interconnect mesh should support arbitration, addressing, etc. completely in the optical domain, with a BER more stringent than "traditional" communication requirements. Outlined are challenges in the advanced computing environment, some possible optical architectures and relevant platform technologies, as well as a rough sizing of these opportunities, which are quite large relative to the more "traditional" optical markets.
A Novel Market-Oriented Dynamic Collaborative Cloud Service Platform
NASA Astrophysics Data System (ADS)
Hassan, Mohammad Mehedi; Huh, Eui-Nam
In today's world, emerging cloud computing (Weiss, 2007) offers a new computing model in which resources such as computing power, storage, online applications and networking infrastructures can be shared as "services" over the Internet. Cloud providers (CPs) are incentivized by the profits to be made by charging consumers for accessing these services. Consumers, such as enterprises, are attracted by the opportunity to reduce or eliminate the costs associated with "in-house" provision of these services.
Computational model of retinal photocoagulation and rupture
NASA Astrophysics Data System (ADS)
Sramek, Christopher; Paulus, Yannis M.; Nomoto, Hiroyuki; Huie, Phil; Palanker, Daniel
2009-02-01
In patterned scanning laser photocoagulation, shorter duration (< 20 ms) pulses help reduce thermal damage beyond the photoreceptor layer, decrease treatment time and minimize pain. However, the safe therapeutic window (defined as the ratio of the rupture threshold power to that of light coagulation) decreases for shorter exposures. To quantify the extent of thermal damage in the retina, and to maximize the therapeutic window, we developed a computational model of retinal photocoagulation and rupture. Model parameters were adjusted to match measured thresholds of vaporization, coagulation, and retinal pigment epithelial (RPE) damage. Computed lesion width agreed with histological measurements over a wide range of pulse durations and powers. Application of a ring-shaped beam profile was predicted to double the therapeutic window for exposures in the range of 1-10 ms.
Concentrator optical characterization using computer mathematical modelling and point source testing
NASA Technical Reports Server (NTRS)
Dennison, E. W.; John, S. L.; Trentelman, G. F.
1984-01-01
The optical characteristics of a paraboloidal solar concentrator are analyzed using the intercept factor curve (a format for image data) to describe the results of a mathematical model and to represent reduced data from experimental testing. This procedure makes it possible not only to test an assembled concentrator, but also to evaluate single optical panels or to conduct non-solar tests of an assembled concentrator. The use of three-dimensional ray-tracing computer programs to calculate the mathematical model is described. These ray-tracing programs can include any type of optical configuration, from simple paraboloids to arrays of spherical facets, and can be adapted to microcomputers or larger computers, which can graphically display real-time comparisons of calculated and measured data.
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Perry, Boyd, III; Florance, James R.; Sanetrik, Mark D.; Wieseman, Carol D.; Stevens, William L.; Funk, Christie J.; Hur, Jiyoung; Christhilf, David M.; Coulson, David A.
2011-01-01
A summary of computational and experimental aeroelastic and aeroservoelastic (ASE) results for the Semi-Span Super-Sonic Transport (S4T) wind-tunnel model is presented. A broad range of analyses and multiple ASE wind-tunnel tests of the S4T have been performed in support of the ASE element in the Supersonics Program, part of NASA's Fundamental Aeronautics Program. The computational results to be presented include linear aeroelastic and ASE analyses, nonlinear aeroelastic analyses using an aeroelastic CFD code, and rapid aeroelastic analyses using CFD-based reduced-order models (ROMs). Experimental results from two closed-loop wind-tunnel tests performed at NASA Langley's Transonic Dynamics Tunnel (TDT) will be presented as well.
From cosmos to connectomes: the evolution of data-intensive science.
Burns, Randal; Vogelstein, Joshua T; Szalay, Alexander S
2014-09-17
The analysis of data requires computation: originally by hand and more recently by computers. Different models of computing are designed and optimized for different kinds of data. In data-intensive science, the scale and complexity of data exceeds the comfort zone of local data stores on scientific workstations. Thus, cloud computing emerges as the preeminent model, utilizing data centers and high-performance clusters, enabling remote users to access and query subsets of the data efficiently. We examine how data-intensive computational systems originally built for cosmology, the Sloan Digital Sky Survey (SDSS), are now being used in connectomics, at the Open Connectome Project. We list lessons learned and outline the top challenges we expect to face. Success in computational connectomics would drastically reduce the time between idea and discovery, as SDSS did in cosmology. Copyright © 2014 Elsevier Inc. All rights reserved.
Acoustic environmental accuracy requirements for response determination
NASA Technical Reports Server (NTRS)
Pettitt, M. R.
1983-01-01
A general purpose computer program was developed for the prediction of vehicle interior noise. This program, named VIN, has both modal and statistical energy analysis capabilities for structural/acoustic interaction analysis. The analytic models and their computer implementation were verified through simple test cases with well-defined experimental results. The model was also applied in a space shuttle payload bay launch acoustics prediction study. The computer program processes large and small problems with equal efficiency because all arrays are dynamically sized by program input variables at run time. A data base is built and easily accessed for design studies. The data base significantly reduces the computational costs of such studies by allowing the reuse of the still-valid calculated parameters of previous iterations.
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing the hydrogeological characteristics of the site. The physical resolution (e.g. the grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: the time associated with computational costs, the statistical convergence of the model predictions, and the physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a model of the overall error based on a joint statistical and numerical analysis and optimizing that error model subject to a given computational constraint. The derived expression for the overall error explicitly accounts for the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified by applying it to several computationally extensive examples. Having this framework at hand helps hydrogeologists achieve the optimum physical and statistical resolutions that minimize the error within a given computational budget. Moreover, the influence of the available computational resources and of the geometric properties of the contaminant source zone on the optimum resolutions is investigated.
We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
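The trade-off described above can be sketched as a toy budget allocation: model the total error as a discretization term plus a Monte Carlo sampling term, and for each candidate grid spacing spend the remaining budget on realizations. All constants below (C_d, C_s, the convergence order p, the dimension d, and the budget) are illustrative assumptions, not values from the study.

```python
# Toy allocation of a fixed computational budget between grid resolution h
# and number of Monte Carlo realizations N. Coarser grids leave budget for
# more realizations (lower statistical error) but raise discretization error.
import math

def total_error(h, N, C_d=1.0, C_s=1.0, p=2):
    """Model error: discretization term C_d*h^p + sampling term C_s/sqrt(N)."""
    return C_d * h**p + C_s / math.sqrt(N)

def optimal_resolution(budget, h_choices, d=2):
    """Brute-force search over grid spacings; budget fixes N once h is chosen."""
    best = None
    for h in h_choices:
        cost_per_run = (1.0 / h) ** d          # cells per realization
        N = max(1, int(budget / cost_per_run)) # realizations we can afford
        err = total_error(h, N)
        if best is None or err < best[2]:
            best = (h, N, err)
    return best

h, N, err = optimal_resolution(budget=1e6, h_choices=[0.2, 0.1, 0.05, 0.02, 0.01])
print(h, N, err)  # an interior optimum: neither the finest nor coarsest grid
```

With these toy constants the optimum is an intermediate grid spacing, mirroring the paper's conclusion that literature-recommended resolutions chosen independently of the realization count can waste budget on either side of the trade-off.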
NASA Astrophysics Data System (ADS)
Ravishankar, Bharani
Conventional space vehicles have thermal protection systems (TPS) that protect an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but it complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS, subjected to thermal and mechanical loads, through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite-element-based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization is applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models from a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of the constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space.
EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that with adaptive sampling the number of designs required to find the optimum was reduced drastically while the accuracy improved. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. A separable Monte Carlo method was employed that allowed separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties and loading conditions of the panel, as well as error in the finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was likewise reduced by employing surrogate models. To estimate the error in the probability-of-failure estimate, a bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel, with multiple failure modes and a large number of uncertainties, using adaptive sampling techniques.
NASA Astrophysics Data System (ADS)
Iacobucci, Joseph V.
The research objective of this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis, which tends to increase the complexity of the decision-making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain-specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission-dependent and mission-independent metrics are considered. Mission-dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission-independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Tech Research Institute (GTRI). Also, the amount of data generated when fully exploring the design space can quickly exceed the typical capacity of the computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used to reduce computer memory requirements. Lastly, a domain-specific language is created to reduce the computational time of executing the system of systems models.
A domain-specific language is a small, usually declarative language that offers expressive power focused on a particular problem domain, establishing an effective means to communicate the semantics of the RAAM framework. These techniques make it possible to include diverse multi-metric models within the RAAM framework in addition to system- and operational-level trades. A canonical example was used to explore the uses of the methodology. The canonical example contains all of the features of a full system of systems architecture analysis study but uses fewer tasks and systems. Using RAAM with the canonical example, it was possible to consider both system- and operational-level trades in the same analysis. Once the methodology had been tested with the canonical example, a Suppression of Enemy Air Defenses (SEAD) capability model was developed. Due to the sensitive nature of analyses on that subject, notional data was developed with trends and properties similar to realistic Suppression of Enemy Air Defenses data. RAAM was shown to be traceable and provided a mechanism for a unified treatment of a variety of metrics. The SEAD capability model demonstrated lower computer runtimes and reduced model creation complexity compared to methods currently in use. To determine the usefulness of the implementation of the methodology on current computing hardware, RAAM was tested with system of systems architecture studies of different sizes. This was necessary since a system of systems may be called upon to accomplish thousands of tasks. It has been clearly demonstrated that RAAM is able to enumerate and evaluate the types of large, complex design spaces usually encountered in capability-based design, oftentimes providing the ability to efficiently search the entire decision space. The core algorithms for the generation and evaluation of alternatives scale linearly with expected problem sizes.
The SEAD capability model outputs prompted the discovery of a new issue: the data storage and manipulation requirements for an analysis. Two strategies were developed to counter large data sizes: the use of portfolio views and top 'n' analysis. This demonstrated the usefulness of the RAAM framework and methodology during Pre-Milestone A capability-based analysis. (Abstract shortened by UMI.)
Solution to the spectral filter problem of residual terrain modelling (RTM)
NASA Astrophysics Data System (ADS)
Rexer, Moritz; Hirt, Christian; Bucha, Blažej; Holmes, Simon
2018-06-01
In physical geodesy, the residual terrain modelling (RTM) technique is frequently used for high-frequency gravity forward modelling. In the RTM technique, a detailed elevation model is high-pass-filtered in the topography domain, which is not equivalent to filtering in the gravity domain. This in-equivalence, denoted as spectral filter problem of the RTM technique, gives rise to two imperfections (errors). The first imperfection is unwanted low-frequency (LF) gravity signals, and the second imperfection is missing high-frequency (HF) signals in the forward-modelled RTM gravity signal. This paper presents new solutions to the RTM spectral filter problem. Our solutions are based on explicit modelling of the two imperfections via corrections. The HF correction is computed using spectral domain gravity forward modelling that delivers the HF gravity signal generated by the long-wavelength RTM reference topography. The LF correction is obtained from pre-computed global RTM gravity grids that are low-pass-filtered using surface or solid spherical harmonics. A numerical case study reveals maximum absolute signal strengths of ˜ 44 mGal (0.5 mGal RMS) for the HF correction and ˜ 33 mGal (0.6 mGal RMS) for the LF correction w.r.t. a degree-2160 reference topography within the data coverage of the SRTM topography model (56°S ≤ φ ≤ 60°N). Application of the LF and HF corrections to pre-computed global gravity models (here the GGMplus gravity maps) demonstrates the efficiency of the new corrections over topographically rugged terrain. Over Switzerland, consideration of the HF and LF corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 4.41 to 3.27 mGal, which translates into ˜ 26% improvement. Over a second test area (Canada), our corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 5.65 to 5.30 mGal (˜ 6% improvement). Particularly over Switzerland, geophysical signals (associated, e.g. 
with valley fillings) were found to stand out more clearly in the RTM-reduced gravity measurements when the HF and LF correction are taken into account. In summary, the new RTM filter corrections can be easily computed and applied to improve the spectral filter characteristics of the popular RTM approach. Benefits are expected, e.g. in the context of the development of future ultra-high-resolution global gravity models, smoothing of observed gravity data in mountainous terrain and geophysical interpretations of RTM-reduced gravity measurements.
An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method
NASA Astrophysics Data System (ADS)
Lu, Dan; Ricciuto, Daniel; Evans, Katherine
2018-03-01
Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires the collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a very large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce computational costs using multifidelity approximations. Since Bayesian data-worth analysis involves a great deal of expectation estimation, the cost saving from MLMC in the assessment can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select the optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the standard MC estimation. But compared to standard MC, MLMC greatly reduces the computational costs.
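A minimal sketch of the MLMC idea follows, assuming a toy quantity of interest whose level-l bias decays as 2^-l. The model, sample counts, and seed are all illustrative placeholders; the paper's estimator operates on two-phase flow simulations, not this stand-in.

```python
# Minimal multilevel Monte Carlo sketch: estimate E[P_L] via the telescoping
# sum E[P_0] + sum_l E[P_l - P_{l-1}], using many cheap coarse samples and
# few expensive fine samples.
import random

def P(level, x):
    """Toy level-l approximation of a QoI: E[X^2]*(1 + 2^-level) for
    X ~ N(0,1), so the bias decays as 2^-level."""
    return x * x * (1.0 + 2.0 ** -level)

def mlmc(n_per_level, rng):
    """MLMC estimator; each correction couples levels l and l-1 through the
    same random input x, so the correction variance shrinks with level."""
    est = 0.0
    for level, n in enumerate(n_per_level):
        s = 0.0
        for _ in range(n):
            x = rng.gauss(0.0, 1.0)  # shared input couples both levels
            if level == 0:
                s += P(0, x)
            else:
                s += P(level, x) - P(level - 1, x)
        est += s / n
    return est

rng = random.Random(7)
print(mlmc([100000, 20000, 4000, 800], rng))  # near E[P_3] = 1 + 2^-3 = 1.125
```

The sample counts decrease geometrically with level, which is the source of the cost saving: most of the variance is removed by the cheap level-0 samples, while only a few hundred samples of the most expensive correction are needed.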
High-resolution LES of the rotating stall in a reduced scale model pump-turbine
NASA Astrophysics Data System (ADS)
Pacot, Olivier; Kato, Chisachi; Avellan, François
2014-03-01
Extending the operating range of modern pump-turbines becomes increasingly important in the course of the integration of renewable energy sources into the existing power grid. However, at partial load condition in pumping mode, the occurrence of rotating stall is critical to the operational safety of the machine and to grid stability. The understanding of the mechanisms behind this flow phenomenon remains vague and incomplete. Past numerical simulations using a RANS approach often led to inconclusive results concerning the physical background. For the first time, the rotating stall is investigated by performing a large-scale LES computation on the HYDRODYNA pump-turbine scale model, featuring approximately 100 million elements. The computations were performed on the PRIMEHPC FX10 of the University of Tokyo using the overset finite element open source code FrontFlow/blue with the dynamic Smagorinsky turbulence model and the no-slip wall condition. The computed internal flow corresponds to operation of the pump-turbine at 76% of the best efficiency point in pumping mode, where previous experimental research showed the presence of four rotating cells. The rotating stall phenomenon is accurately reproduced for a reduced Reynolds number using the LES approach with acceptable computing resources. The results show excellent agreement with available experimental data from the reduced scale model testing at the EPFL Laboratory for Hydraulic Machines. The number of stall cells, as well as the propagation speed, corroborates the experiment.
NASA Astrophysics Data System (ADS)
Musenge, Eustasius; Chirwa, Tobias Freeman; Kahn, Kathleen; Vounatsou, Penelope
2013-06-01
Longitudinal mortality data with few deaths usually have problems of zero-inflation. This paper presents and applies two Bayesian models which cater for zero-inflation and for spatial and temporal random effects. To reduce the computational burden experienced when a large number of geo-locations are treated as a Gaussian field (GF), we transformed the field to a Gaussian Markov random field (GMRF) by triangulation. We then modelled the spatial random effects using stochastic partial differential equations (SPDEs). Inference was done using a computationally efficient alternative to Markov chain Monte Carlo (MCMC) called integrated nested Laplace approximation (INLA), which is suited to GMRFs. The models were applied to data from 71,057 children aged 0 to under 10 years from rural north-east South Africa living in 15,703 households over the years 1992-2010. We found protective effects on HIV/TB mortality of greater birth weight, older age and more antenatal clinic visits during pregnancy (adjusted RR (95% CI)): 0.73(0.53;0.99), 0.18(0.14;0.22) and 0.96(0.94;0.97), respectively. Childhood HIV/TB mortality could therefore be reduced if mothers are better catered for during pregnancy, as this can reduce mother-to-child transmission and contribute to improved birth weights. The INLA and SPDE approaches are computationally good alternatives for modelling large multilevel spatiotemporal GMRF data structures.
Fast multigrid-based computation of the induced electric field for transcranial magnetic stimulation
NASA Astrophysics Data System (ADS)
Laakso, Ilkka; Hirata, Akimasa
2012-12-01
In transcranial magnetic stimulation (TMS), the distribution of the induced electric field, and the affected brain areas, depends on the position of the stimulation coil and the individual geometry of the head and brain. The distribution of the induced electric field in realistic anatomies can be modelled using computational methods. However, existing computational methods for accurately determining the induced electric field in realistic anatomical models have suffered from long computation times, typically in the range of tens of minutes or longer. This paper presents a matrix-free implementation of the finite-element method with a geometric multigrid method that can potentially reduce the computation time to several seconds or less even when using an ordinary computer. The performance of the method is studied by computing the induced electric field in two anatomically realistic models. An idealized two-loop coil is used as the stimulating coil. Multiple computational grid resolutions ranging from 2 to 0.25 mm are used. The results show that, for macroscopic modelling of the electric field in an anatomically realistic model, computational grid resolutions of 1 mm or 2 mm appear to provide good numerical accuracy compared to higher resolutions. The multigrid iteration typically converges in less than ten iterations independent of the grid resolution. Even without parallelization, each iteration takes about 1.0 s or 0.1 s for the 1 and 2 mm resolutions, respectively. This suggests that calculating the electric field with sufficient accuracy in real time is feasible.
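The multigrid behaviour described here (convergence in a roughly resolution-independent number of iterations) can be sketched on a 1-D Poisson problem. This is a generic geometric V-cycle in pure Python, not the paper's matrix-free FEM solver; the grid size, smoother sweeps, and cycle count are illustrative.

```python
# Compact geometric multigrid V-cycle for -u'' = f on [0,1], u(0)=u(1)=0.
def smooth(u, f, h, sweeps=2):
    """Gauss-Seidel sweeps for the 3-point Laplacian."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def restrict(r):
    """Full weighting onto the coarse grid (every other point)."""
    return [0.0] + [0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
                    for i in range(1, (len(r) - 1) // 2)] + [0.0]

def prolong(e):
    """Linear interpolation back to the fine grid."""
    fine = [0.0] * (2 * (len(e) - 1) + 1)
    for i in range(len(e)):
        fine[2 * i] = e[i]
    for i in range(len(e) - 1):
        fine[2 * i + 1] = 0.5 * (e[i] + e[i + 1])
    return fine

def v_cycle(u, f, h):
    if len(u) <= 3:                      # coarsest grid: solve "exactly"
        return smooth(u, f, h, sweeps=50)
    smooth(u, f, h)                      # pre-smoothing
    r = restrict(residual(u, f, h))      # coarse-grid correction
    e = prolong(v_cycle([0.0] * len(r), r, 2 * h))
    u = [ui + ei for ui, ei in zip(u, e)]
    return smooth(u, f, h)               # post-smoothing

n, h = 128, 1.0 / 128
f = [1.0] * (n + 1)                      # -u'' = 1; exact u(x) = x(1-x)/2
u = [0.0] * (n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)
err = max(abs(u[i] - (i * h) * (1 - i * h) / 2) for i in range(n + 1))
print(err)  # tiny after a handful of cycles, largely independent of n
```

Each V-cycle reduces the error by a roughly fixed factor regardless of grid size, which is why the paper's multigrid iteration converges in fewer than ten iterations at every resolution from 2 mm down to 0.25 mm.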
Saa, Pedro A.; Nielsen, Lars K.
2016-01-01
Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using ‘loopless constraints’. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm, Fast-SNP (Fast sparse null-space pursuit), inspired by recent results on sparse null-space pursuit. By finding a reduced feasible ‘loop-law’ matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155
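To make the 'loop-law' matrix concrete: the sketch below, assuming a hypothetical three-reaction internal cycle, computes a null-space basis of the internal stoichiometric matrix; each basis vector is an internal loop that loopless constraints must forbid. Real genome-scale models need the sparse pursuit machinery described above; this exact-arithmetic elimination is only for illustration.

```python
# Toy loop-law illustration: any flux pattern in the null space of the
# internal stoichiometric matrix is a thermodynamically infeasible cycle.
from fractions import Fraction

def nullspace(S):
    """Null-space basis of S via Gauss-Jordan elimination over exact rationals."""
    rows = [[Fraction(x) for x in row] for row in S]
    m, n = len(rows), len(rows[0])
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]          # normalize pivot row
        for i in range(m):
            if i != r and rows[i][c] != 0:                   # eliminate column c
                rows[i] = [a - rows[i][c] * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for fc in free:                                          # one vector per free column
        v = [Fraction(0)] * n
        v[fc] = Fraction(1)
        for i, pc in enumerate(pivots):
            v[pc] = -rows[i][fc]
        basis.append(v)
    return basis

# Hypothetical internal reactions A->B, B->C, C->A (rows: metabolites A, B, C)
S_internal = [[-1,  0,  1],
              [ 1, -1,  0],
              [ 0,  1, -1]]
loops = nullspace(S_internal)
print(loops)  # one basis vector, proportional to [1, 1, 1]: the A->B->C->A cycle
```

A loopless constraint would then require that no feasible flux vector has a strictly positive (or strictly negative) projection onto such a cycle, which is what turns the LP into the harder MILP the abstract describes.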
NASA Astrophysics Data System (ADS)
Hoi, Yiemeng; Ionita, Ciprian N.; Tranquebar, Rekha V.; Hoffmann, Kenneth R.; Woodward, Scott H.; Taulbee, Dale B.; Meng, Hui; Rudin, Stephen
2006-03-01
An asymmetric stent with a low-porosity patch across the intracranial aneurysm neck and high porosity elsewhere is designed to modify the flow so as to induce thrombogenesis and occlusion of the aneurysm, while reducing the possibility of also occluding adjacent perforator vessels. The purposes of this study are to evaluate the flow field induced by an asymmetric stent using both numerical and digital subtraction angiography (DSA) methods and to quantify the flow dynamics of an asymmetric stent in an in vivo aneurysm model. We created a vein-pouch aneurysm model on the canine carotid artery. An asymmetric stent was implanted at the aneurysm, with 25% porosity across the aneurysm neck and 80% porosity elsewhere. The aneurysm geometry, before and after stent implantation, was acquired using cone-beam CT and reconstructed for computational fluid dynamics (CFD) analysis. Both steady-state and pulsatile flow conditions, using the waveforms measured from the aneurysm model, were studied. To reduce computational costs, we modeled the asymmetric stent effect by specifying a pressure drop over the layer across the aneurysm orifice where the low-porosity patch was located. From the CFD results, we found that the asymmetric stent reduced the inflow into the aneurysm by 51% and appeared to create a stasis-like environment that favors thrombus formation. The DSA sequences also showed substantial flow reduction into the aneurysm. Asymmetric stents may be a viable image-guided intervention for treating intracranial aneurysms with the desired flow modification features.
Two-phase strategy of controlling motor coordination determined by task performance optimality.
Shimansky, Yury P; Rand, Miya K
2013-02-01
A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.
De Paris, Renata; Frantz, Fábio A.; Norberto de Souza, Osmar; Ruiz, Duncan D. A.
2013-01-01
Molecular docking simulations of fully flexible protein receptor (FFR) models are coming of age. In our studies, an FFR model is represented by a series of different conformations derived from a molecular dynamics simulation trajectory of the receptor. For each conformation in the FFR model, a docking simulation is executed and analyzed. An important challenge is to perform virtual screening of millions of ligands using an FFR model in a sequential mode, since this can become computationally very demanding. In this paper, we propose a cloud-based web environment, called the web Flexible Receptor Docking Workflow (wFReDoW), which reduces the CPU time of molecular docking simulations of FFR models to small molecules. It is based on a new workflow data pattern called self-adaptive multiple instances (P-SaMI) and on a middleware built on Amazon EC2 instances. P-SaMI reduces the number of molecular docking simulations while the middleware speeds up the docking experiments using a High Performance Computing (HPC) environment on the cloud. The experimental results show a reduction in the total elapsed time of the docking experiments and demonstrate the quality of the new reduced receptor models produced by discarding the non-promising conformations from an FFR model, as ruled by the P-SaMI data pattern. PMID:23691504
Reduced-form air quality modeling for community-scale ...
Transportation plays an important role in modern society, but its impact on air quality has been shown to have significant adverse effects on public health. Numerous reviews (HEI, CDC, WHO), summarizing the findings of hundreds of studies conducted mainly in the last decade, conclude that exposures to traffic emissions near roads are a public health concern. The Community LINE Source Model (C-LINE) is a web-based model designed to inform the community user of local air quality impacts due to roadway vehicles in their region of interest using a simplified modeling approach. Reduced-form air quality modeling is a useful tool for examining what-if scenarios of changes in emissions, such as those due to changes in traffic volume, fleet mix, or vehicle speed. Examining various scenarios of air quality impacts in this way can identify potentially at-risk populations located near roadways, and the effects that a change in traffic activity may have on them. C-LINE computes dispersion of primary mobile source pollutants using meteorological conditions for the region of interest and computes air-quality concentrations corresponding to these selected conditions. C-LINE functionality has been expanded to model emissions from port-related activities (e.g., ships, trucks, and cranes) in a reduced-form modeling system for local-scale near-port air quality analysis. This presentation describes the Community modeling tools C-LINE and C-PORT that are intended to be used by local gove
Reducing Earth Topography Resolution for SMAP Mission Ground Tracks Using K-Means Clustering
NASA Technical Reports Server (NTRS)
Rizvi, Farheen
2013-01-01
The K-means clustering algorithm is used to reduce Earth topography resolution for the SMAP mission ground tracks. As SMAP propagates in orbit, knowledge of the radar antenna footprints on Earth is required for the antenna misalignment calibration. Each antenna footprint contains a latitude and longitude location pair on the Earth surface. There are 400 pairs in one data set for the calibration model. It is computationally expensive to calculate corresponding Earth elevation for these data pairs. Thus, the antenna footprint resolution is reduced. Similar topographical data pairs are grouped together with the K-means clustering algorithm. The resolution is reduced to the mean of each topographical cluster called the cluster centroid. The corresponding Earth elevation for each cluster centroid is assigned to the entire group. Results show that 400 data points are reduced to 60 while still maintaining algorithm performance and computational efficiency. In this work, sensitivity analysis is also performed to show a trade-off between algorithm performance versus computational efficiency as the number of cluster centroids and algorithm iterations are increased.
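The reduction step described here — grouping the footprint locations into clusters and assigning each cluster centroid's elevation to its whole group — can be sketched with a plain Lloyd's-algorithm K-means. The coordinates below are synthetic stand-ins, not SMAP data:

```python
import random

def kmeans(points, k, iters=25, seed=0):
    """Lloyd's algorithm on (lat, lon) pairs: alternate assignment and update."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment: nearest centroid by squared Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda j: (p[0] - centroids[j][0]) ** 2
                              + (p[1] - centroids[j][1]) ** 2,
            )
        # update: move each centroid to the mean of its members
        for j in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == j]
            if members:
                centroids[j] = (
                    sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members),
                )
    return centroids, labels

rng = random.Random(42)
footprints = [(rng.uniform(-85, 85), rng.uniform(-180, 180)) for _ in range(400)]
centroids, labels = kmeans(footprints, k=60)
# one elevation lookup per centroid now stands in for all 400 footprint pairs
```

After clustering, only 60 elevation computations are needed instead of 400, matching the trade-off between accuracy and cost discussed in the abstract.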
Application of Local Discretization Methods in the NASA Finite-Volume General Circulation Model
NASA Technical Reports Server (NTRS)
Yeh, Kao-San; Lin, Shian-Jiann; Rood, Richard B.
2002-01-01
We present the basic ideas of the dynamics system of the finite-volume General Circulation Model developed at NASA Goddard Space Flight Center for climate simulations and other applications in meteorology. The dynamics of this model is designed with emphasis on conservative and monotonic transport, where the property of Lagrangian conservation is used to maintain the physical consistency of the computational fluid for long-term simulations. As the model benefits from the noise-free solutions of monotonic finite-volume transport schemes, the property of Lagrangian conservation also partly compensates for the transport accuracy lost to the diffusion effects introduced by the treatment of monotonicity. By faithfully maintaining the fundamental laws of physics during the computation, this model is able to achieve sufficient accuracy for the global consistency of climate processes. Because the computing algorithms are based on local memory, this model has the advantage of efficiency in parallel computation with distributed memory. Further research is still needed to reduce the diffusion effects of monotonic transport for better accuracy, and to mitigate the limitation due to fast-moving gravity waves for better efficiency.
NASA Astrophysics Data System (ADS)
Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia
2018-03-01
Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their extensive applications are still challenged by the computational expenses. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (compared with a day and a half for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs for watershed routing to either significantly improve their computational efficiency or make kinematic wave routing computationally feasible for high-resolution modeling.
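The MacCormack predictor-corrector scheme at the heart of this work can be illustrated on the simplest hyperbolic problem, linear advection u_t + a·u_x = 0, as a toy stand-in for the kinematic wave routing equations (this is a sketch, not the DHSVM code):

```python
def maccormack_advect(u, a, dx, dt, steps):
    """MacCormack predictor-corrector for u_t + a*u_x = 0 on a periodic grid."""
    n = len(u)
    c = a * dt / dx  # Courant number; the explicit scheme is stable for |c| <= 1
    for _ in range(steps):
        # predictor: forward difference in space
        up = [u[i] - c * (u[(i + 1) % n] - u[i]) for i in range(n)]
        # corrector: backward difference on the predicted field, then average
        u = [0.5 * (u[i] + up[i] - c * (up[i] - up[(i - 1) % n]))
             for i in range(n)]
    return u

# advect a square pulse; the scheme conserves the total mass exactly
u0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(100)]
u1 = maccormack_advect(u0, a=1.0, dx=1.0, dt=0.5, steps=40)
```

The alternation of forward and backward differences gives second-order accuracy from two first-order sweeps, which is the source of the scheme's efficiency relative to the conventional explicit linear scheme mentioned above.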
Program Helps Generate Boundary-Element Mathematical Models
NASA Technical Reports Server (NTRS)
Goldberg, R. K.
1995-01-01
Composite Model Generation-Boundary Element Method (COM-GEN-BEM) computer program significantly reduces time and effort needed to construct boundary-element mathematical models of continuous-fiber composite materials at micro-mechanical (constituent) scale. Generates boundary-element models compatible with BEST-CMS boundary-element code for analysis of the micromechanics of composite materials. Written in PATRAN Command Language (PCL).
Multi-objective reverse logistics model for integrated computer waste management.
Ahluwalia, Poonam Khanijo; Nema, Arvind K
2006-12-01
This study aimed to address the issues involved in the planning and design of a computer waste management system in an integrated manner. A decision-support tool is presented for selecting an optimum configuration of computer waste management facilities (segregation, storage, treatment/processing, reuse/recycle and disposal) and allocation of waste to these facilities. The model is based on an integer linear programming method with the objectives of minimizing environmental risk as well as cost. The issue of uncertainty in the estimated waste quantities from multiple sources is addressed using the Monte Carlo simulation technique. An illustrated example of computer waste management in Delhi, India is presented to demonstrate the usefulness of the proposed model and to study tradeoffs between cost and risk. The results of the example problem show that it is possible to reduce the environmental risk significantly by a marginal increase in the available cost. The proposed model can serve as a powerful tool to address the environmental problems associated with exponentially growing quantities of computer waste which are presently being managed using rudimentary methods of reuse, recovery and disposal by various small-scale vendors.
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, Howard W.
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeatedly sampling and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of about 3.
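The second variance-reduction idea — spending extra samples on the spectral terms that contribute most of the cloud radiative effect — can be sketched with a toy estimator (all numbers below are illustrative, not from the GCM or the CKD scheme):

```python
import random

def mcica_flux(rng, cre, cloud_frac, repeats=1, threshold=50.0):
    """Toy McICA estimate of a domain-average cloud radiative effect: each
    spectral term samples one random sub-column; terms whose effect exceeds
    the threshold are sampled `repeats` times and averaged."""
    total = 0.0
    for effect in cre:
        m = repeats if abs(effect) > threshold else 1
        # each sample asks: is the randomly chosen sub-column cloudy?
        hits = sum(rng.random() < cloud_frac for _ in range(m))
        total += effect * hits / m
    return total

cre = [120.0, -90.0, 70.0] + [5.0] * 12  # a few terms dominate (toy values)
cloud_frac = 0.4
exact = cloud_frac * sum(cre)            # the full-ICA expectation

def rms_error(repeats, trials=2000, seed=0):
    rng = random.Random(seed)
    return (sum((mcica_flux(rng, cre, cloud_frac, repeats) - exact) ** 2
                for _ in range(trials)) / trials) ** 0.5

rms_single, rms_repeat = rms_error(1), rms_error(3)
```

Because the sampling variance of each term scales with the square of its radiative effect, tripling the samples only on the three dominant terms cuts the overall RMS error substantially for a modest increase in total cost, mirroring the 50%-more-samples trade-off reported in the abstract.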
Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake
2017-03-24
Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.
Computing volume potentials for noninvasive imaging of cardiac excitation.
van der Graaf, A W Maurits; Bhagirath, Pranav; van Driel, Vincent J H M; Ramanna, Hemanth; de Hooge, Jacques; de Groot, Natasja M S; Götte, Marco J W
2015-03-01
In noninvasive imaging of cardiac excitation, the use of body surface potentials (BSP) rather than body volume potentials (BVP) has been favored due to enhanced computational efficiency and reduced modeling effort. Nowadays, increased computational power and the availability of open source software enable the calculation of BVP for clinical purposes. In order to illustrate the possible advantages of this approach, the explanatory power of BVP is investigated using a rectangular tank filled with an electrolytic conductor and a patient-specific three-dimensional model. MRI images of the tank and of a patient were obtained in three orthogonal directions using a turbo spin echo MRI sequence. MRI images were segmented in three dimensions using custom-written software. Gmsh software was used for mesh generation. BVP were computed using a transfer matrix and FEniCS software. The solution for 240,000 nodes, corresponding to a resolution of 5 mm throughout the thorax volume, was computed in 3 minutes. The tank experiment revealed that an increased electrode surface renders the position of the 4 V equipotential plane insensitive to mesh cell size and reduces simulated deviations. In the patient-specific model, the impact of assigning a different conductivity to lung tissue on the distribution of volume potentials could be visualized. Generation of high quality volume meshes and computation of BVP with a resolution of 5 mm is feasible using generally available software and hardware. Estimation of BVP may lead to an improved understanding of the genesis of BSP and sources of local inaccuracies. © 2014 Wiley Periodicals, Inc.
Evidence Accumulation and Change Rate Inference in Dynamic Environments.
Radillo, Adrian E; Veliz-Cuba, Alan; Josić, Krešimir; Kilpatrick, Zachary P
2017-06-01
In a constantly changing world, animals must account for environmental volatility when making decisions. To appropriately discount older, irrelevant information, they need to learn the rate at which the environment changes. We develop an ideal observer model capable of inferring the present state of the environment along with its rate of change. Key to this computation is an update of the posterior probability of all possible change point counts. This computation can be challenging, as the number of possibilities grows rapidly with time. However, we show how the computations can be simplified in the continuum limit by a moment closure approximation. The resulting low-dimensional system can be used to infer the environmental state and change rate with accuracy comparable to the ideal observer. The approximate computations can be performed by a neural network model via a rate-correlation-based plasticity rule. We thus show how optimal observers accumulate evidence in changing environments and map this computation to reduced models that perform inference using plausible neural mechanisms.
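The core update — a joint posterior over the current environmental state and its unknown rate of change — can be sketched in discrete form with two states, a small grid of candidate change rates, and Gaussian observations (toy choices throughout; this is not the paper's moment-closure reduction):

```python
import math

HGRID = (0.01, 0.05, 0.1, 0.3)  # candidate change rates (hypothetical grid)
MU = (-1.0, 1.0)                # observation means of the two states
SIGMA = 1.0

def update(post, x):
    """One Bayes step on the joint posterior post[h][s] over the current
    state s and the change rate h: predict (the state may have switched
    with probability h), then correct with the likelihood of observation x."""
    new = {}
    for h in HGRID:
        row = []
        for s in (0, 1):
            prior = (1 - h) * post[h][s] + h * post[h][1 - s]    # predict
            lik = math.exp(-0.5 * ((x - MU[s]) / SIGMA) ** 2)    # correct
            row.append(lik * prior)
        new[h] = row
    z = sum(v for r in new.values() for v in r)
    return {h: [v / z for v in r] for h, r in new.items()}

post = {h: [0.5 / len(HGRID)] * 2 for h in HGRID}  # uniform initial belief
for x in (1.2, 0.8, 1.1):   # a short run of state-1-like observations
    post = update(post, x)
p_state1 = sum(post[h][1] for h in HGRID)
```

Marginalizing the joint posterior over h recovers the state estimate, and over s the change-rate estimate; the combinatorial growth the paper tames with moment closure comes from tracking change-point *counts* rather than this small fixed grid.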
NASA Technical Reports Server (NTRS)
Kline, S. J. (Editor); Cantwell, B. J. (Editor); Lilley, G. M.
1982-01-01
Computational techniques for simulating turbulent flows were explored, together with the results of experimental investigations. Particular attention was devoted to the possibility of defining a universal closure model, applicable for all turbulence situations; however, conclusions were drawn that zonal models, describing localized structures, were the most promising techniques to date. The taxonomy of turbulent flows was summarized, as were algebraic, differential, integral, and partial differential methods for numerical depiction of turbulent flows. Numerous comparisons of theoretically predicted and experimentally obtained data for wall pressure distributions, velocity profiles, turbulent kinetic energy profiles, Reynolds shear stress profiles, and flows around transonic airfoils were presented. Simplifying techniques for reducing the necessary computational time for modeling complex flowfields were surveyed, together with the industrial requirements and applications of computational fluid dynamics techniques.
Understanding Islamist political violence through computational social simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, Jennifer H; Mackerrow, Edward P; Patelli, Paolo G
Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data is not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates the computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.
Reduced cost mission design using surrogate models
NASA Astrophysics Data System (ADS)
Feldhacker, Juliana D.; Jones, Brandon A.; Doostan, Alireza; Hampton, Jerrad
2016-01-01
This paper uses surrogate models to reduce the computational cost associated with spacecraft mission design in three-body dynamical systems. Sampling-based least squares regression is used to project the system response onto a set of orthogonal bases, providing a representation of the ΔV required for rendezvous as a reduced-order surrogate model. Models are presented for mid-field rendezvous of spacecraft in orbits in the Earth-Moon circular restricted three-body problem, including a halo orbit about the Earth-Moon L2 libration point (EML-2) and a distant retrograde orbit (DRO) about the Moon. In each case, the initial position of the spacecraft, the time of flight, and the separation between the chaser and the target vehicles are all considered as design inputs. The results show that sample sizes on the order of 10² are sufficient to produce accurate surrogates, with RMS errors reaching 0.2 m/s for the halo orbit and falling below 0.01 m/s for the DRO. A single function call to the resulting surrogate is up to two orders of magnitude faster than computing the same solution using full fidelity propagators. The expansion coefficients solved for in the surrogates are then used to conduct a global sensitivity analysis of the ΔV on each of the input parameters, which identifies the separation between the spacecraft as the primary contributor to the ΔV cost. Finally, the models are demonstrated to be useful for cheap evaluation of the cost function in constrained optimization problems seeking to minimize the ΔV required for rendezvous. These surrogate models show significant advantages for mission design in three-body systems, in terms of both computational cost and capabilities, over traditional Monte Carlo methods.
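The surrogate construction described here — sampling-based least squares projection onto orthogonal bases — can be sketched in one dimension with Legendre polynomials standing in for the paper's basis and a toy function standing in for the expensive ΔV evaluation:

```python
import random

def legendre(n, x):
    """Legendre polynomial P_n(x) via the three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def solve(A, b):
    """Gaussian elimination with partial pivoting (tiny systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_surrogate(f, degree=4, n_samples=100, seed=1):
    """Sampling-based least squares: project f onto Legendre bases by solving
    the normal equations built from random samples on [-1, 1]."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1, 1) for _ in range(n_samples)]
    A = [[legendre(j, x) for j in range(degree + 1)] for x in xs]
    ata = [[sum(A[i][r] * A[i][c] for i in range(n_samples))
            for c in range(degree + 1)] for r in range(degree + 1)]
    atb = [sum(A[i][r] * f(xs[i]) for i in range(n_samples))
           for r in range(degree + 1)]
    coef = solve(ata, atb)
    return lambda x: sum(c * legendre(j, x) for j, c in enumerate(coef))

# a cheap stand-in for an expensive ΔV propagation (toy function)
surrogate = fit_surrogate(lambda x: x ** 3)
```

Once the expansion coefficients are in hand, each surrogate evaluation is a short polynomial sum rather than a full-fidelity propagation, which is the source of the two-orders-of-magnitude speedup reported above; the same coefficients also feed the global sensitivity analysis.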
Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling
NASA Astrophysics Data System (ADS)
Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.
2018-02-01
A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
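The MLMC idea — a telescoping sum in which cheap coarse-timestep paths do most of the sampling and expensive fine-timestep paths only estimate small corrections — can be sketched on a scalar SDE. The geometric-Brownian-motion drift and volatility below are toy choices, not the dispersion model's velocity field:

```python
import random

def euler_pair(rng, T, n_fine, mu=-0.5, sigma=1.0, x0=1.0):
    """One coupled fine/coarse Euler-Maruyama pair for dX = mu*X dt + sigma*X dW.
    The coarse path (half the steps) reuses the fine path's Brownian
    increments, which is what makes the level corrections low-variance."""
    dt = T / n_fine
    xf = xc = x0
    for _ in range(n_fine // 2):
        dw1 = rng.gauss(0.0, dt ** 0.5)
        dw2 = rng.gauss(0.0, dt ** 0.5)
        xf += mu * xf * dt + sigma * xf * dw1      # two fine steps
        xf += mu * xf * dt + sigma * xf * dw2
        xc += mu * xc * 2 * dt + sigma * xc * (dw1 + dw2)  # one coarse step
    return xf, xc

def mlmc_estimate(n_per_level, T=1.0, seed=7):
    """Telescoping MLMC sum: coarsest-level mean plus averaged
    fine-minus-coarse corrections, with fewer samples on finer levels."""
    rng = random.Random(seed)
    total = 0.0
    for level, n in enumerate(n_per_level):
        n_fine = 2 ** (level + 1)
        acc = 0.0
        for _ in range(n):
            xf, xc = euler_pair(rng, T, n_fine)
            acc += xf if level == 0 else xf - xc
        total += acc / n
    return total

est = mlmc_estimate([4000, 1000, 250])  # analytic E[X_1] = exp(-0.5) for this SDE
```

Because the fine and coarse paths share Brownian increments, the variance of each correction term shrinks with level, so the sample counts can decay geometrically on the expensive levels — the source of MLMC's asymptotic cost advantage over standard Monte Carlo noted in the abstract.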
Flutter Analysis for Turbomachinery Using Volterra Series
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Yao, Weigang
2014-01-01
The objective of this paper is to describe an accurate and efficient reduced order modeling method for aeroelastic (AE) analysis and for determining the flutter boundary. Without losing accuracy, we develop a reduced order model based on the Volterra series to achieve significant savings in computational cost. The aerodynamic force is provided by a high-fidelity solution from the Reynolds-averaged Navier-Stokes (RANS) equations; the structural mode shapes are determined from the finite element analysis. The fluid-structure coupling is then modeled by the state-space formulation with the structural displacement as input and the aerodynamic force as output, which in turn acts as an external force to the aeroelastic displacement equation for providing the structural deformation. NASA's rotor 67 blade is used to study its aeroelastic characteristics under the designated operating condition. First, the CFD results are validated against measured data available for the steady state condition. Then, the accuracy of the developed reduced order model is compared with the full-order solutions. Finally the aeroelastic solutions of the blade are computed and a flutter boundary is identified, suggesting that the rotor, with the material property chosen for the study, is structurally stable at the operating condition and free of flutter.
Occupational stress in human computer interaction.
Smith, M J; Conway, F T; Karsh, B T
1999-04-01
There have been a variety of research approaches that have examined the stress issues related to human computer interaction including laboratory studies, cross-sectional surveys, longitudinal case studies and intervention studies. A critical review of these studies indicates that there are important physiological, biochemical, somatic and psychological indicators of stress that are related to work activities where human computer interaction occurs. Many of the stressors of human computer interaction at work are similar to those stressors that have historically been observed in other automated jobs. These include high workload, high work pressure, diminished job control, inadequate employee training to use new technology, monotonous tasks, poor supervisory relations, and fear for job security. New stressors have emerged that can be tied primarily to human computer interaction. These include technology breakdowns, technology slowdowns, and electronic performance monitoring. The effects of the stress of human computer interaction in the workplace are increased physiological arousal; somatic complaints, especially of the musculoskeletal system; mood disturbances, particularly anxiety, fear and anger; and diminished quality of working life, such as reduced job satisfaction. Interventions to reduce the stress of computer technology have included improved technology implementation approaches and increased employee participation in implementation. Recommendations for ways to reduce the stress of human computer interaction at work are presented. These include proper ergonomic conditions, increased organizational support, improved job content, proper workload to decrease work pressure, and enhanced opportunities for social support. A model approach to the design of human computer interaction at work that focuses on the system "balance" is proposed.
Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems
NASA Astrophysics Data System (ADS)
Sandwell, David; Smith-Konter, Bridget
2018-05-01
We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations embedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
The Lag Model, a Turbulence Model for Wall Bounded Flows Including Separation
NASA Technical Reports Server (NTRS)
Olsen, Michael E.; Coakley, Thomas J.; Kwak, Dochan (Technical Monitor)
2001-01-01
A new class of turbulence model is described for wall bounded, high Reynolds number flows. A specific turbulence model is demonstrated, with results for favorable and adverse pressure gradient flowfields. Separation predictions are as good as or better than those of either the Spalart-Allmaras or SST models, without requiring specification of wall distance, and at similar or reduced computational effort compared with these models.
Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures
NASA Astrophysics Data System (ADS)
Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.
2017-12-01
Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
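The Hellinger distance used here to compare the statistical behaviour of reduced-precision runs has a simple closed form for discrete distributions, e.g. normalised histograms of a model diagnostic (the histogram values below are illustrative):

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (e.g. normalised
    histograms of model output); 0 means identical, 1 means disjoint support."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

# histograms from a double-precision and a reduced-precision run (toy values)
d = hellinger([0.1, 0.4, 0.5], [0.15, 0.35, 0.5])
```

Because chaotic trajectories diverge under any perturbation, pointwise comparison is meaningless; a bounded metric on output *distributions* like this is what lets equally valid but differing simulations be judged statistically equivalent.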
Symplectic multi-particle tracking on GPUs
NASA Astrophysics Data System (ADS)
Liu, Zhicong; Qiang, Ji
2018-05-01
A symplectic multi-particle tracking model is implemented on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model can preserve phase space structure and reduce non-physical effects in long term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation saves more than a factor of two in total computing time compared with the CPU implementation.
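Why symplecticity matters for long-term tracking can be shown with a single particle in a linear focusing channel (x'' = -x). This one-particle toy is not the CUDA code, but it illustrates the phase-space-preservation property the abstract relies on:

```python
def track(n_turns, dt=0.1, symplectic=True):
    """Track a particle in a linear focusing channel (x'' = -x).
    Symplectic Euler preserves phase-space area, so the oscillation stays
    bounded; explicit Euler injects artificial amplitude growth every step."""
    x, p = 1.0, 0.0
    for _ in range(n_turns):
        if symplectic:
            p -= x * dt                       # kick with the current position
            x += p * dt                       # drift with the updated momentum
        else:
            x, p = x + p * dt, p - x * dt     # both updates use old values
    return x, p

def energy(x, p):
    return 0.5 * (x * x + p * p)

e_sym = energy(*track(10_000, symplectic=True))   # stays near 0.5
e_exp = energy(*track(10_000, symplectic=False))  # grows without bound
```

Over 10,000 steps the non-symplectic integrator inflates the oscillation energy by many orders of magnitude, while the symplectic one keeps it bounded — the "non-physical effects in long term simulation" that the tracking model is designed to avoid.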
3D CFD Quantification of the Performance of a Multi-Megawatt Wind Turbine
NASA Astrophysics Data System (ADS)
Laursen, J.; Enevoldsen, P.; Hjort, S.
2007-07-01
This paper presents the results of 3D CFD rotor computations of a Siemens SWT-2.3-93 variable speed wind turbine with 45m blades. In the paper CFD is applied to a rotor at stationary wind conditions without wind shear, using the commercial multi-purpose CFD-solvers ANSYS CFX 10.0 and 11.0. When comparing modelled mechanical effects with findings from other models and measurements, good agreement is obtained. Similarly the computed force distributions compare very well, whereas some discrepancies are found when comparing with an in-house BEM model. By applying the reduced axial velocity method the local angle of attack has been derived from the CFD solutions, and from this knowledge and the computed force distributions, local airfoil profile coefficients have been computed and compared to BEM airfoil coefficients. Finally, the transition model of Langtry and Menter is tested on the rotor, and the results are compared with the results from the fully turbulent setup.
The Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
Experiences in Automated Calibration of a Nickel Equation of State
NASA Astrophysics Data System (ADS)
Carpenter, John H.
2017-06-01
Wide availability of large computers has led to increasing incorporation of computational data, such as from density functional theory molecular dynamics, in the development of equation of state (EOS) models. Once a grid of computational data is available, it is usually left to an expert modeler to model the EOS using traditional techniques. One can envision the possibility of using the increasing computing resources to perform black-box calibration of EOS models, with the goal of reducing the workload on the modeler or enabling non-experts to generate good EOSs with such a tool. Progress towards building such a black-box calibration tool will be explored in the context of developing a new, wide-range EOS for nickel. While some details of the model and data will be shared, the focus will be on what was learned by automatically calibrating the model in a black-box method. Model choices and ensuring physicality will also be discussed. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Gradient Learning Algorithms for Ontology Computing
Gao, Wei; Zhu, Linli
2014-01-01
The gradient learning model has been attracting great attention in view of its promising perspectives for applications in statistics, data dimensionality reduction, and other specific fields. In this paper, we propose a new gradient learning model for ontology similarity measuring and ontology mapping in the multidividing setting. The sample error in this setting is given by virtue of the hypothesis space and the trick of the ontology dividing operator. Finally, two experiments on data from the plant and humanoid robotics fields verify the efficiency of the new computational model for ontology similarity measurement and ontology mapping applications in the multidividing setting. PMID:25530752
NASA Technical Reports Server (NTRS)
Liang, Shoudan
2000-01-01
Our research effort has produced nine publications in peer-reviewed journals, listed at the end of this report. The work reported here is in the following areas: (1) genetic network modeling; (2) an autocatalytic model of pre-biotic evolution; (3) theoretical and computational studies of strongly correlated electron systems; (4) reducing thermal oscillations in the atomic force microscope; (5) the transcription termination mechanism in prokaryotic cells; and (6) the low glutamine usage in thermophiles obtained by studying completely sequenced genomes. We discuss the main accomplishments of these publications.
POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ştefănescu, R., E-mail: rstefane@vt.edu; Sandu, A., E-mail: sandu@cs.vt.edu; Navon, I.M., E-mail: inavon@fsu.edu
2015-08-15
This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that reduced order Karush–Kuhn–Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies to construct reduced order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov–Galerkin projections. For the first time POD, tensorial POD, and the discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two dimensional shallow water equations. Numerical experiments confirm the theoretical framework for Galerkin projection. In the case of Petrov–Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.
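The POD/Galerkin machinery described above can be sketched in a few lines: a reduced basis is extracted from the left singular vectors of a snapshot matrix, and a linear full-order operator is projected onto that basis. All data below are synthetic stand-ins (random snapshots and a random operator), not the shallow water system from the paper.

```python
import numpy as np

# Snapshot matrix: each column is a full-order state at one time instant.
rng = np.random.default_rng(0)
n_full, n_snap = 1000, 50
X = rng.standard_normal((n_full, n_snap))

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 10                      # reduced dimension
Phi = U[:, :r]              # POD modes (orthonormal columns)

# Galerkin projection of a linear full-order operator A: A_r = Phi^T A Phi.
A = rng.standard_normal((n_full, n_full)) / n_full
A_r = Phi.T @ A @ Phi       # r x r reduced operator

# A full-order state is approximated as x ~ Phi @ a, with a = Phi^T x.
x = X[:, 0]
a = Phi.T @ x
print(A_r.shape)
```

Integrating the r-dimensional system in place of the n_full-dimensional one is the source of the order-of-magnitude savings the abstract reports.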
Fast neuromimetic object recognition using FPGA outperforms GPU implementations.
Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph
2013-08-01
Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex-6 ML605 evaluation board with an XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
Computational Planning in Facial Surgery.
Zachow, Stefan
2015-10-01
This article reflects the research of the last two decades in computational planning for cranio-maxillofacial surgery. Model-guided and computer-assisted surgery planning has developed tremendously due to ever increasing computational capabilities. Simulators for education, planning, and training of surgery are often compared with flight simulators, where maneuvers are also trained to reduce a possible risk of failure. Meanwhile, digital patient models can be derived from medical image data with astonishing accuracy and thus can serve for model surgery to derive a surgical template model that represents the envisaged result. Computerized surgical planning approaches, however, are often still explorative, meaning that a surgeon tries to find a therapeutic concept based on his or her expertise using computational tools that mimic real procedures. A future perspective of improved computerized planning is that surgical objectives will be generated algorithmically by employing mathematical modeling, simulation, and optimization techniques; planning systems would thus act as intelligent decision support systems. Surgeons can still use the existing tools to vary the proposed approach, but they mainly focus on how to transfer objectives into reality. Such a development may result in a paradigm shift for future surgery planning.
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, so-called 'brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In the method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
Reliable, Memory Speed Storage for Cluster Computing Frameworks
2014-06-16
specification API that can capture computations in many of today's popular data-parallel computing models, e.g., MapReduce and SQL. We also ported the Hadoop ...today's big data workloads: • Immutable data: Data is immutable once written, since dominant underlying storage systems, such as HDFS [3], only support...network transfers, so reads can be data-local. • Program size vs. data size: In big data processing, the same operation is repeatedly applied on massive
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui
2017-09-03
Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles in the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel of parallel plates. Based on the comparisons with analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model performs very well for a wide range of flow problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curran, L.
1988-03-03
Interest has been building in recent months over the imminent arrival of a new class of supercomputer, called the 'supercomputer on a desk' or the single-user model. Most observers expected the first such product to come from either of two startups, Ardent Computer Corp. or Stellar Computer Inc. But a surprise entry has shown up. Apollo Computer Inc. is launching a new workstation this week that racks up an impressive list of industry firsts as it puts supercomputer power at the disposal of a single user. The new Series 10000 from the Chelmsford, Mass., company is built around a reduced-instruction-set architecture that the company calls Prism, for parallel reduced-instruction-set multiprocessor. This article describes the 10000 and Prism.
Computational Fluid Dynamics Demonstration of Rigid Bodies in Motion
NASA Technical Reports Server (NTRS)
Camarena, Ernesto; Vu, Bruce T.
2011-01-01
The Design Analysis Branch (NE-M1) at the Kennedy Space Center has not had the ability to accurately couple Rigid Body Dynamics (RBD) and Computational Fluid Dynamics (CFD). OVERFLOW-D is a flow solver that has been developed by NASA with the capability to analyze and simulate dynamic motions with up to six degrees of freedom (6-DOF). Two simulations were prepared over the course of the internship to demonstrate 6-DOF motion of rigid bodies under aerodynamic loading. The geometries in the simulations were based on a conceptual Space Launch System (SLS). The first simulation that was prepared and computed was the motion of a Solid Rocket Booster (SRB) as it separates from its core stage. To reduce computational time during the development of the simulation, only half of the physical domain with respect to the symmetry plane was simulated; then a full solution was prepared and computed. The second simulation was a model of the SLS as it departs from a launch pad under a 20-knot crosswind. This simulation was reduced to two dimensions (2D) to reduce both preparation and computation time. By allowing 2-DOF for translations and 1-DOF for rotation, the simulation predicted unrealistic rotation; the simulation was then constrained to allow only translations.
An Eye Model for Computational Dosimetry Using A Multi-Scale Voxel Phantom
NASA Astrophysics Data System (ADS)
Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek
2014-06-01
The lens of the eye is a radiosensitive tissue, with cataract formation being the major concern. Recently reduced recommended dose limits for the lens of the eye have made understanding the dose to this tissue increasingly important. Due to memory limitations, the voxel resolution of computational phantoms used for radiation dose calculations is too coarse to accurately represent the dimensions of the eye. A revised eye model is constructed using physiological data for the dimensions of the radiosensitive tissues and is then transformed into a high-resolution voxel model. This eye model is combined with an existing set of whole-body models to form a multi-scale voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of radiation transport through the structures of the eye. Two alternative methods of including a high-resolution eye model within an existing whole-body model are developed, and the accuracy and performance of each method are compared against existing computational phantoms.
Lardy, Matthew A; Lebrun, Laurie; Bullard, Drew; Kissinger, Charles; Gobbi, Alberto
2012-05-25
In modern-day drug discovery campaigns, computational chemists have to be concerned not only with improving the potency of molecules but also with reducing any off-target ADMET activity. There is a plethora of antitargets that computational chemists may have to consider. Fortunately, many antitargets have crystal structures deposited in the PDB. These structures are immediately useful to our Autocorrelator: an automated model generator that optimizes variables for building computational models. This paper describes the use of the Autocorrelator to construct high-quality docking models for cytochrome P450 2C9 (CYP2C9) from two publicly available crystal structures. Both models result in strong correlation coefficients (R² > 0.66) between the predicted and experimentally determined log(IC₅₀) values. Results from the two models overlap well with each other, converging on the same scoring function, deprotonated charge state, and predicted binding orientation for our collection of molecules.
Computational biology in the cloud: methods and new insights from computing at scale.
Kasson, Peter M
2013-01-01
The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, and experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and provide easy reproducibility by making the datasets and computational methods easily available.
Reduced-Order Models Based on Linear and Nonlinear Aerodynamic Impulse Responses
NASA Technical Reports Server (NTRS)
Silva, Walter A.
1999-01-01
This paper discusses a method for the identification and application of reduced-order models based on linear and nonlinear aerodynamic impulse responses. The Volterra theory of nonlinear systems and an appropriate kernel identification technique are described. Insight into the nature of kernels is provided by applying the method to the nonlinear Riccati equation in a non-aerodynamic application. The method is then applied to a nonlinear aerodynamic model of an RAE 2822 supercritical airfoil undergoing plunge motions using the CFL3D Navier-Stokes flow solver with the Spalart-Allmaras turbulence model. Results demonstrate the computational efficiency of the technique.
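As a minimal illustration of the impulse-response idea, the sketch below predicts the output of a linear system by convolving an identified first-order Volterra kernel with an arbitrary input. The kernel and input are made up for illustration, and the higher-order kernels that capture nonlinearity are omitted.

```python
import numpy as np

# First-order (linear) Volterra kernel h identified from an impulse response;
# the ROM predicts the response to any input u by discrete convolution:
#   y[n] ~ sum_k h[k] * u[n - k]
dt = 0.01
t = np.arange(0.0, 1.0, dt)
h = np.exp(-5.0 * t) * dt          # impulse response of a stable first-order system

u = np.sin(2 * np.pi * 2 * t)      # arbitrary input (e.g., a plunge motion history)
y = np.convolve(u, h)[: len(t)]    # reduced-order prediction of the output
print(y.shape)
```

Once the kernel is identified from a single CFD impulse-response run, responses to new inputs cost only a convolution, which is where the efficiency gain arises.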
Computational Modeling Using OpenSim to Simulate a Squat Exercise Motion
NASA Technical Reports Server (NTRS)
Gallo, C. A.; Thompson, W. K.; Lewandowski, B. E.; Humphreys, B. T.; Funk, J. H.; Funk, N. H.; Weaver, A. S.; Perusek, G. P.; Sheehan, C. C.; Mulugeta, L.
2015-01-01
Long duration space travel to destinations such as Mars or an asteroid will expose astronauts to extended periods of reduced gravity. Astronauts will use an exercise regime for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Since the area available in the spacecraft for an exercise device is limited and gravity is not present to aid loading, compact resistance exercise device prototypes are being developed. Since it is difficult to rigorously test these proposed devices in space flight, computational modeling provides an estimation of the muscle forces, joint torques and joint loads during exercise to gain insight on the efficacy to protect the musculoskeletal health of astronauts.
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e., carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment of Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent on the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50%, and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup plateaus.
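The reported scaling behavior can be checked with a few lines that convert wall-clock times into speedup and parallel efficiency relative to a baseline core count. The timings below are hypothetical, chosen only to mimic near-linear scaling up to 64 cores followed by a plateau.

```python
# Speedup and parallel efficiency from wall-clock times; timings are
# hypothetical (cores -> hours per simulated model year), not measured data.
timings = {16: 10.0, 32: 5.4, 64: 2.9, 128: 2.8}
base_cores = 16

for cores, hours in timings.items():
    speedup = timings[base_cores] / hours          # relative to 16 cores
    efficiency = speedup / (cores / base_cores)    # 1.0 = perfectly linear
    print(f"{cores:4d} cores: speedup {speedup:5.2f}, efficiency {efficiency:4.2f}")
```

With these numbers, 64 cores yield roughly 3.4x the 16-core throughput (efficiency near 0.86), while 128 cores add almost nothing, mirroring the latency-bound regime described above.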
In-silico wear prediction for knee replacements--methodology and corroboration.
Strickland, M A; Taylor, M
2009-07-22
The capability to predict in-vivo wear of knee replacements is a valuable pre-clinical analysis tool for implant designers. Traditionally, time-consuming experimental tests provided the principal means of investigating wear. Today, computational models offer an alternative. However, the validity of these models has not been demonstrated across a range of designs and test conditions, and several different formulas are in contention for estimating wear rates, limiting confidence in the predictive power of these in-silico models. This study collates and retrospectively simulates a wide range of experimental wear tests using fast rigid-body computational models with extant wear prediction algorithms, to assess the performance of current in-silico wear prediction tools. The number of tests corroborated gives a broader, more general assessment of the performance of these wear-prediction tools, and provides better estimates of the wear 'constants' used in computational models. High-speed rigid-body modelling allows a range of alternative algorithms to be evaluated. Whilst most cross-shear (CS)-based models perform comparably, the 'A/A+B' wear model appears to offer the best predictive power amongst existing wear algorithms. However, the range and variability of experimental data leaves considerable uncertainty in the results. More experimental data with reduced variability and more detailed reporting of studies will be necessary to corroborate these models with greater confidence. With simulation times reduced to only a few minutes, these models are ideally suited to large-volume 'design of experiment' or probabilistic studies (which are essential if pre-clinical assessment tools are to begin addressing the degree of variation observed clinically and in explanted components).
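For readers unfamiliar with these wear algorithms, the sketch below evaluates a generic Archard-type law per contact element, with one simple cross-shear (CS) weighting. The wear factor, the weighting form, and all inputs are illustrative assumptions, not the fitted constants or the 'A/A+B' formula from the study.

```python
import numpy as np

# Archard-type wear estimate per contact element, V = k * F * s, optionally
# weighted by a cross-shear ratio CS in [0, 0.5]. All values are illustrative.
k = 1.0e-9                                 # wear factor, mm^3 / (N * mm)
pressure = np.array([5.0, 12.0, 8.0])      # contact pressure, MPa, per element
area = np.array([2.0, 1.5, 3.0])           # contact area, mm^2
sliding = np.array([30.0, 45.0, 20.0])     # sliding distance per gait cycle, mm
cs = np.array([0.1, 0.3, 0.05])            # cross-shear ratio per element

load = pressure * area                     # normal load, N (MPa * mm^2 = N)
wear_archard = k * load * sliding          # mm^3 per cycle, classic Archard
wear_cs = wear_archard * (cs / 0.5)        # one hypothetical CS weighting
print(wear_archard.sum(), wear_cs.sum())
```

Summing over elements and cycles gives the volumetric wear rate that such models compare against gravimetric measurements from simulator tests.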
NASA Astrophysics Data System (ADS)
Newman, Gregory A.
2014-01-01
Many geoscientific applications exploit electrostatic and electromagnetic fields to interrogate and map subsurface electrical resistivity—an important geophysical attribute for characterizing mineral, energy, and water resources. In complex three-dimensional geologies, where many of these resources remain to be found, resistivity mapping requires large-scale modeling and imaging capabilities, as well as the ability to treat significant data volumes, which can easily overwhelm single-core and modest multicore computing hardware. Treating such problems requires large-scale parallel computational resources, necessary for reducing the time to solution to a time frame acceptable to the exploration process. The recognition that significant parallel computing processes must be brought to bear on these problems gives rise to choices that must be made in parallel computing hardware and software. In this review, some of these choices are presented, along with the resulting trade-offs. We also discuss future trends in high-performance computing and the anticipated impact on electromagnetic (EM) geophysics. Topics discussed in this review article include a survey of parallel computing platforms, ranging from graphics processing units to multicore CPUs with a fast interconnect, along with parallel solvers and associated solver libraries effective for inductive EM modeling and imaging.
ERIC Educational Resources Information Center
Plavnick, Joshua B.
2012-01-01
Video modeling is an effective and efficient methodology for teaching new skills to individuals with autism. New technology may enhance video modeling as smartphones or tablet computers allow for portable video displays. However, the reduced screen size may decrease the likelihood of attending to the video model for some children. The present…
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)]
User settings on dive computers: reliability in aiding conservative diving.
Sayer, Martin D J; Azzopardi, Elaine; Sieber, Arne
2016-06-01
Divers can make adjustments to diving computers when they may need or want to dive more conservatively (e.g., diving with a persistent (patent) foramen ovale). Information describing the effects of these alterations or how they compare to other methods, such as using enriched air nitrox (EANx) with air dive planning tools, is lacking. Seven models of dive computer from four manufacturers (Mares, Suunto, Oceanic and UWATEC) were subjected to single square-wave compression profiles (maximum depth: 20 or 40 metres' sea water, msw), single multi-level profiles (maximum depth: 30 msw; stops at 15 and 6 msw), and multi-dive series (two dives to 30 msw followed by one to 20 msw). Adjustable settings were employed for each dive profile; some modified profiles were compared against stand-alone use of EANx. Dives were shorter or indicated longer decompression obligations when conservative settings were applied. However, some computers in default settings produced more conservative dives than others that had been modified. Some computer-generated penalties were greater than when using EANx alone, particularly at partial pressures of oxygen (PO₂) below 1.40 bar. Some computers 'locked out' during the multi-dive series; others would continue to support decompression with, in some cases, automatically-reduced levels of conservatism. Changing reduced gradient bubble model values on Suunto computers produced few differences. The range of possible adjustments and the non-standard computer response to them complicates the ability to provide accurate guidance to divers wanting to dive more conservatively. The use of EANx alone may not always generate satisfactory levels of conservatism.
Considering dominance in reduced single-step genomic evaluations.
Ertl, J; Edel, C; Pimentel, E C G; Emmerling, R; Götz, K-U
2018-06-01
Single-step models including dominance can pose an enormous computational task and can even be prohibitive for practical application. In this study, we try to answer the question whether a reduced single-step model is able to estimate breeding values of bulls and breeding values, dominance deviations and total genetic values of cows with acceptable quality. Genetic values and phenotypes were simulated (500 repetitions) for a small Fleckvieh pedigree consisting of 371 bulls (180 thereof genotyped) and 553 cows (40 thereof genotyped). This pedigree was virtually extended by 2,407 non-genotyped daughters. Genetic values were estimated with the single-step model and with different reduced single-step models. Including more relatives of genotyped cows in the reduced single-step model resulted in better agreement of results with the single-step model. Accuracies of genetic values were largest with the single-step model and smallest with the reduced single-step model when only the genotyped cows were modelled. The results indicate that a reduced single-step model is suitable for estimating breeding values of bulls and breeding values, dominance deviations and total genetic values of cows with acceptable quality.
Effect of roof strength in injury mitigation during pole impact.
Friedman, Keith; Hutchinson, John; Mihora, Dennis; Kumar, Sri; Frieder, Russell; Sances, Anthony
2007-01-01
Motor vehicle accidents involving pole impacts often result in serious head and neck injuries to occupants. Pole impacts are typically associated with rollover and side collisions. During such events, the roof structure is often deformed into the occupant survival space. A strengthened roof structure would reduce roof deformation and accordingly provide better protection to occupants. The present study examines the effect of reinforced (strengthened) roofs using an experimental crash study and computer model simulation. The experimental study includes the production cab structure of a pickup truck. The cab structure was loaded using an actual telephone pole under controlled laboratory conditions and was subjected to two separate load conditions, at the A-pillar and at the door frame. The contact force and deformation were measured using a force gauge and a potentiometer, respectively. A computer finite element model was created to simulate the experimental studies. The results of the finite element model matched the experimental data well for the two load conditions. The validated finite element model was then used to simulate a reinforced roof structure. The reinforced roof significantly reduced the structural deformations compared to those observed in the production roof: the peak deformation was reduced by approximately 75% and the peak velocity by approximately 50%. Such a reduction in the deformation of the roof structure helps to maintain a safe occupant survival space.
NASA Astrophysics Data System (ADS)
Pitton, Giuseppe; Quaini, Annalisa; Rozza, Gianluigi
2017-09-01
We focus on reducing the computational costs associated with the hydrodynamic stability of solutions of the incompressible Navier-Stokes equations for a Newtonian and viscous fluid in contraction-expansion channels. In particular, we are interested in studying steady bifurcations, occurring when non-unique stable solutions appear as physical and/or geometric control parameters are varied. The formulation of the stability problem requires solving an eigenvalue problem for a partial differential operator. An alternative to this approach is the direct simulation of the flow to characterize the asymptotic behavior of the solution. Both approaches can be extremely expensive in terms of computational time. We propose to apply Reduced Order Modeling (ROM) techniques to reduce the demanding computational costs associated with the detection of a type of steady bifurcations in fluid dynamics. The application that motivated the present study is the onset of asymmetries (i.e., symmetry breaking bifurcation) in blood flow through a regurgitant mitral valve, depending on the Reynolds number and the regurgitant mitral valve orifice shape.
Real-time characterization of partially observed epidemics using surrogate models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Safta, Cosmin; Ray, Jaideep; Lefantzi, Sophia
We present a statistical method, predicated on the use of surrogate models, for the 'real-time' characterization of partially observed epidemics. Observations consist of counts of symptomatic patients, diagnosed with the disease, that may be available in the early epoch of an ongoing outbreak. Characterization, in this context, refers to estimation of epidemiological parameters that can be used to provide short-term forecasts of the ongoing epidemic, as well as to provide gross information on the dynamics of the etiologic agent in the affected population, e.g., the time-dependent infection rate. The characterization problem is formulated as a Bayesian inverse problem, and epidemiological parameters are estimated as distributions using a Markov chain Monte Carlo (MCMC) method, thus quantifying the uncertainty in the estimates. In some cases, the inverse problem can be computationally expensive, primarily due to the epidemic simulator used inside the inversion algorithm. We present a method, based on replacing the epidemiological model with computationally inexpensive surrogates, that can reduce the computational time to minutes, without a significant loss of accuracy. The surrogates are created by projecting the output of an epidemiological model on a set of polynomial chaos bases; thereafter, computations involving the surrogate model reduce to evaluations of a polynomial. We find that the epidemic characterizations obtained with the surrogate models are very close to those obtained with the original model. We also find that the number of projections required to construct a surrogate model is O(10)-O(10²) less than the number of samples required by the MCMC to construct a stationary posterior distribution; thus, depending upon the epidemiological models in question, it may be possible to omit the offline creation and caching of surrogate models prior to their use in an inverse problem. The technique is demonstrated on synthetic data as well as observations from the 1918 influenza pandemic collected at Camp Custer, Michigan.
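The projection idea above can be sketched minimally, with a toy one-parameter response function standing in for the expensive epidemic simulator (the model, polynomial degree, and quadrature choices are all illustrative):

```python
import numpy as np

# Toy "epidemic model": an expensive simulator output (e.g. case count at a
# fixed day) as a function of one uncertain parameter theta in [0, 1].
def expensive_model(theta):
    return 1000.0 / (1.0 + np.exp(-8.0 * (theta - 0.5)))

# Project the model output onto Legendre polynomials (a polynomial chaos
# basis for a uniform prior), using Gauss-Legendre quadrature on [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(8)
theta_nodes = 0.5 * (nodes + 1.0)          # map quadrature nodes to [0, 1]
y = expensive_model(theta_nodes)           # 8 expensive evaluations, done once

degree = 5
coeffs = []
for k in range(degree + 1):
    Pk = np.polynomial.legendre.Legendre.basis(k)(nodes)
    norm = 2.0 / (2 * k + 1)               # ||P_k||^2 on [-1, 1]
    coeffs.append(np.sum(weights * y * Pk) / norm)

def surrogate(theta):
    # Inside an MCMC loop this cheap polynomial replaces the simulator.
    x = 2.0 * theta - 1.0
    return sum(c * np.polynomial.legendre.Legendre.basis(k)(x)
               for k, c in enumerate(coeffs))

print(abs(surrogate(0.3) - expensive_model(0.3)))
```

The point of the construction is the cost asymmetry: the simulator is called only at the quadrature nodes, while the MCMC's many thousands of likelihood evaluations hit only the polynomial.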
A Comment on Early Student Blunders on Computer-Based Adaptive Tests
ERIC Educational Resources Information Center
Green, Bert F.
2011-01-01
This article refutes a recent claim that computer-based tests produce biased scores for very proficient test takers who make mistakes on one or two initial items and that the "bias" can be reduced by using a four-parameter IRT model. Because the same effect occurs with pattern scores on nonadaptive tests, the effect results from IRT scoring, not…
ERIC Educational Resources Information Center
Lamb, Richard L.; Firestone, Jonah B.
2017-01-01
Conflicting explanations and unrelated information in science classrooms increase cognitive load and decrease efficiency in learning. This reduced efficiency ultimately limits one's ability to solve reasoning problems in the science. In reasoning, it is the ability of students to sift through and identify critical pieces of information that is of…
NASA Technical Reports Server (NTRS)
Lin, Shian-Jiann; Atlas, Robert (Technical Monitor)
2002-01-01
The Data Assimilation Office (DAO) has been developing a new generation of ultra-high resolution General Circulation Model (GCM) that is suitable for 4-D data assimilation, numerical weather predictions, and climate simulations. These three applications have conflicting requirements. For 4-D data assimilation and weather predictions, it is highly desirable to run the model at the highest possible spatial resolution (e.g., 55 km or finer) so as to be able to resolve and predict socially and economically important weather phenomena such as tropical cyclones, hurricanes, and severe winter storms. For climate change applications, the model simulations need to be carried out for decades, if not centuries. To reduce uncertainty in climate change assessments, the next generation model would also need to be run at a fine enough spatial resolution that can at least marginally simulate the effects of intense tropical cyclones. Scientific problems (e.g., parameterization of subgrid scale moist processes) aside, all three areas of application require the model's computational performance to be dramatically improved as compared to the previous generation. In this talk, I will present the current and future developments of the "finite-volume dynamical core" at the Data Assimilation Office. This dynamical core applies modern monotonicity-preserving algorithms and is genuinely conservative by construction, not by an ad hoc fixer. The "discretization" of the conservation laws is purely local, which is clearly advantageous for resolving sharp gradient flow features. In addition, the local nature of the finite-volume discretization also has a significant advantage on distributed memory parallel computers. Together with a unique vertically Lagrangian control volume discretization that essentially reduces the dimension of the computational problem from three to two, the finite-volume dynamical core is very efficient, particularly at high resolutions.
I will also present the computational design of the dynamical core using a hybrid distributed-shared memory programming paradigm that is portable to virtually any of today's high-end parallel super-computing clusters.
Petri Net controller synthesis based on decomposed manufacturing models.
Dideban, Abbas; Zeraatkar, Hashem
2018-06-01
Applying supervisory control theory to real systems in modeling frameworks such as Petri nets (PN) has become challenging in recent years because of the large number of states in the automata models and the presence of uncontrollable events. The uncontrollable events give rise to forbidden states, which can be removed by enforcing a set of linear constraints. Although many methods have been proposed to reduce these constraints, enforcing them on a large-scale system remains difficult and complicated. This paper proposes a new method for controller synthesis based on PN modeling. In this approach, the original PN model is decomposed into smaller models, which reduces the computational cost significantly. Using this method, it is easy to reduce the constraints and enforce them on a Petri net model. Results on the PN models demonstrate effective controller synthesis for large-scale systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Performance of a reduced-order FSI model for flow-induced vocal fold vibration
NASA Astrophysics Data System (ADS)
Chang, Siyuan; Luo, Haoxiang; Luo's lab Team
2016-11-01
Vocal fold vibration during speech production involves a three-dimensional unsteady glottal jet flow and three-dimensional nonlinear tissue mechanics. A full 3D fluid-structure interaction (FSI) model is computationally expensive even though it provides the most accurate information about the system. On the other hand, an efficient reduced-order FSI model is useful for fast simulation and analysis of the vocal fold dynamics, which is often needed in procedures such as optimization and parameter estimation. In this work, we study the performance of a reduced-order model as compared with the corresponding full 3D model in terms of its accuracy in predicting the vibration frequency and deformation mode. In the reduced-order model, we use a 1D flow model coupled with a 3D tissue model. Two different hyperelastic tissue behaviors are assumed. In addition, the vocal fold thickness and subglottal pressure are varied for systematic comparison. The result shows that the reduced-order model provides predictions consistent with the full 3D model across different tissue material assumptions and subglottal pressures. However, the vocal fold thickness has the greatest effect on the model accuracy, especially when the vocal fold is thin. Supported by the NSF.
Scalable parallel distance field construction for large-scale applications
Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; ...
2015-10-01
Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
Sector and Sphere: the design and implementation of a high-performance data cloud
Gu, Yunhong; Grossman, Robert L.
2009-01-01
Cloud computing has demonstrated that processing very large datasets over commodity clusters can be done simply, given the right programming model and infrastructure. In this paper, we describe the design and implementation of the Sector storage cloud and the Sphere compute cloud. By contrast with the existing storage and compute clouds, Sector can manage data not only within a data centre, but also across geographically distributed data centres. Similarly, the Sphere compute cloud supports user-defined functions (UDFs) over data both within and across data centres. As a special case, MapReduce-style programming can be implemented in Sphere by using a Map UDF followed by a Reduce UDF. We describe some experimental studies comparing Sector/Sphere and Hadoop using the Terasort benchmark. In these studies, Sector is approximately twice as fast as Hadoop. Sector/Sphere is open source. PMID:19451100
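The MapReduce-as-two-UDFs composition mentioned above can be sketched in plain Python; these are stand-in functions to show the pattern, not the Sector/Sphere API:

```python
from collections import defaultdict

# MapReduce expressed as two user-defined functions (UDFs), mirroring how
# the abstract describes composing a Map UDF with a Reduce UDF in Sphere.
def map_udf(record):                 # e.g. word count: emit (key, value) pairs
    for word in record.split():
        yield word, 1

def reduce_udf(key, values):         # fold all values seen for one key
    return key, sum(values)

records = ["a b a", "b c"]
groups = defaultdict(list)           # shuffle phase: group values by key
for rec in records:
    for k, v in map_udf(rec):
        groups[k].append(v)
result = dict(reduce_udf(k, vs) for k, vs in groups.items())
print(result)  # {'a': 2, 'b': 2, 'c': 1}
```

In Sphere the grouping and data movement between the two UDFs is handled by the storage layer across data centres; the user supplies only the two functions.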
Scalable Parallel Distance Field Construction for Large-Scale Applications.
Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H
2015-10-01
Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications.
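A serial miniature of the distance-field computation, using SciPy's exact Euclidean distance transform as a stand-in for the paper's parallel distance tree (the grid, surface, and sizes are illustrative):

```python
import numpy as np
from scipy import ndimage

# 3D distance field: at every voxel, the distance to a surface of interest
# (here the boundary of a sphere of radius 0.5 centered at the origin).
n = 32
spacing = 2.0 / (n - 1)
axis = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")

inside = X**2 + Y**2 + Z**2 < 0.25            # voxels inside the sphere
# distance_transform_edt gives, for each nonzero voxel, the distance to the
# nearest zero voxel; multiplying by the grid spacing converts voxel units.
dist = ndimage.distance_transform_edt(~inside) * spacing

# e.g. the corner (-1,-1,-1) lies about sqrt(3) - 0.5 ≈ 1.23 from the sphere.
print(dist[0, 0, 0])
```

The scalability problem the paper addresses is exactly that such a transform, done naively, requires global knowledge of the surface; the parallel distance tree localizes that computation on distributed-memory machines.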
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey
Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that allows the scheduling to be performed in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that reduce the computational complexity significantly. The experimental evaluation shows that the integrated approach takes considerably less computational effort than the previous approach.
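For context, the single round-robin tournament that forms the building block of such schedules can be generated by the classic "circle method"; this is a baseline construction, not the paper's constraint programming model:

```python
# Circle method for a single round-robin: fix one team, rotate the rest.
# With n teams (n even) this yields n-1 rounds in which every team plays
# every other team exactly once.
def round_robin(teams):
    teams = list(teams)
    if len(teams) % 2:
        teams.append(None)  # odd team count: add a dummy "bye" slot
    n = len(teams)
    rounds = []
    for _ in range(n - 1):
        pairs = [(teams[i], teams[n - 1 - i]) for i in range(n // 2)
                 if teams[i] is not None and teams[n - 1 - i] is not None]
        rounds.append(pairs)
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]  # rotate all but first
    return rounds

schedule = round_robin(range(6))
print(len(schedule))  # 5 rounds of 3 matches each
```

The hard part the paper tackles is everything this construction ignores: divisional phases, venue and fairness constraints, and symmetry breaking inside one constraint model.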
NASA Technical Reports Server (NTRS)
Goussis, D. A.; Lam, S. H.; Gnoffo, P. A.
1990-01-01
The Computational Singular Perturbation (CSP) method is employed (1) in the modeling of a homogeneous isothermal reacting system and (2) in the numerical simulation of the chemical reactions in a hypersonic flowfield. Reduced and simplified mechanisms are constructed. The solutions obtained on the basis of these approximate mechanisms are shown to be in very good agreement with the exact solution based on the full mechanism. Physically meaningful approximations are derived. It is demonstrated that the deduction of these approximations from CSP is independent of the complexity of the problem and requires no intuition or experience in chemical kinetics.
NASA Technical Reports Server (NTRS)
Johnson, D. R.; Uccellini, L. W.
1983-01-01
In connection with the employment of the sigma coordinates introduced by Phillips (1957), problems can arise regarding an accurate finite-difference computation of the pressure gradient force. Over steeply sloped terrain, the calculation of the sigma-coordinate pressure gradient force involves computing the difference between two large terms of opposite sign which results in large truncation error. To reduce the truncation error, several finite-difference methods have been designed and implemented. The present investigation has the objective to provide another method of computing the sigma-coordinate pressure gradient force. Phillips' method is applied for the elimination of a hydrostatic component to a flux formulation. The new technique is compared with four other methods for computing the pressure gradient force. The work is motivated by the desire to use an isentropic and sigma-coordinate hybrid model for experiments designed to study flow near mountainous terrain.
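The truncation-error mechanism described above, a small quantity computed as the difference of two large, nearly equal terms, can be shown in miniature (the numbers are illustrative, not atmospheric values):

```python
import numpy as np

# Catastrophic cancellation: subtracting two large, nearly equal terms.
# In reduced precision the small difference is lost entirely, which is the
# failure mode of the sigma-coordinate pressure gradient force over steep
# terrain that the paper's reformulation avoids.
big, small = 1.0e7, 0.25

a32 = np.float32(big) + np.float32(small)
print(float(a32 - np.float32(big)))   # 0.0 -- float32 (~7 digits) loses the 0.25

a64 = big + small
print(a64 - big)                      # 0.25 -- float64 still resolves it
```

Subtracting a hydrostatic reference state before forming the difference, as in Phillips' method, shrinks both terms so that the residual carries the signal rather than the roundoff.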
Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Nikbay, Melike; Heeg, Jennifer
2017-01-01
This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel from NATO Science and Technology Organization, with the Task Group number AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
Refinement, Reduction, and Replacement of Animal Toxicity Tests by Computational Methods.
Ford, Kevin A
2016-12-01
Widespread public and scientific interest in promoting the care and well-being of animals used for toxicity testing has given rise to improvements in animal welfare practices and views over time, as well as laws and regulations that support means to reduce, refine, and replace animal use (known as the 3Rs) in certain toxicity studies. One way these regulations continue to achieve their aim is by promoting the research, development, and application of alternative testing approaches to characterize potential toxicities either without animals or with minimal use. An important example of an alternative approach is the use of computational toxicology models. Along with the potential capacity to reduce or replace the use of animals for the assessment of particular toxicological endpoints, computational models offer several advantages compared to in vitro and in vivo approaches, including cost-effectiveness, rapid availability of results, and the ability to fully standardize procedures. Pharmaceutical research incorporating the use of computational models has increased steadily over the past 15 years, likely driven by the motivation of companies to screen out toxic compounds in the early stages of development. Models are currently available to aid in the prediction of several important toxicological endpoints, including mutagenicity, carcinogenicity, eye irritation, hepatotoxicity, and skin sensitization, albeit with varying degrees of success. This review serves to introduce the concepts of computational toxicology and evaluate their role in the safety assessment of compounds, while also highlighting the application of in silico methods in the support of the goal and vision of the 3Rs. © The Author 2016. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-01-01
Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. 
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-11-02
We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
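A minimal scatter-search loop illustrating the methodology named above, on a toy two-parameter calibration problem (the objective, set sizes, and combination rule here are illustrative, not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(p):  # toy "model calibration": recover two parameters
    return (p[0] - 1.2) ** 2 + (p[1] + 0.7) ** 2

lb, ub = np.array([-5.0, -5.0]), np.array([5.0, 5.0])

# 1. Diversification: scattered initial solutions over the bounds.
pop = rng.uniform(lb, ub, size=(20, 2))
# 2. Reference set: keep a small set of the best diverse solutions.
ref_set = pop[np.argsort([objective(p) for p in pop])[:5]]

for _ in range(200):
    # 3. Combination: form a new trial point along a line through two
    #    reference solutions (w < 0 or w > 1 extrapolates beyond them).
    i, j = rng.choice(len(ref_set), size=2, replace=False)
    w = rng.uniform(-0.5, 1.5)
    child = np.clip(ref_set[i] + w * (ref_set[j] - ref_set[i]), lb, ub)
    # 4. Reference-set update: replace the worst member if the child improves.
    worst = int(np.argmax([objective(p) for p in ref_set]))
    if objective(child) < objective(ref_set[worst]):
        ref_set[worst] = child

best = min(ref_set, key=objective)
print(best, objective(best))
```

Real scatter-search implementations add a local improvement step (e.g. a gradient-free local solver applied to promising children), which is where the hybrid stochastic-deterministic speedups discussed in the abstract come from.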
Linear static structural and vibration analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.
1993-01-01
Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively-parallel computers hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e. models for High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
NASA Astrophysics Data System (ADS)
Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir
2016-12-01
The paper presents two different approaches to reduce the time of computer calculation of reachability sets. The first approach uses different data structures for storing the reachability sets in the computer memory for calculation in single-threaded mode. The second approach is based on using parallel algorithms with reference to the data structures from the first approach. Within the framework of this paper, a parallel algorithm for approximate reachability set calculation on a computer with SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing technology and also show how computing time depends on the data structure used.
Towards feasible and effective predictive wavefront control for adaptive optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poyneer, L A; Veran, J
We have recently proposed Predictive Fourier Control, a computationally efficient and adaptive algorithm for predictive wavefront control that assumes frozen flow turbulence. We summarize refinements to the state-space model that allow operation with arbitrary computational delays and reduce the computational cost of solving for new control. We present initial atmospheric characterization using observations with Gemini North's Altair AO system. These observations, taken over 1 year, indicate that frozen flow exists, contains substantial power, and is strongly detected 94% of the time.
A locally p-adaptive approach for Large Eddy Simulation of compressible flows in a DG framework
NASA Astrophysics Data System (ADS)
Tugnoli, Matteo; Abbà, Antonella; Bonaventura, Luca; Restelli, Marco
2017-11-01
We investigate the possibility of reducing the computational burden of LES models by employing local polynomial degree adaptivity in the framework of a high-order DG method. A novel degree adaptation technique specifically designed to be effective for LES applications is proposed, and its effectiveness is compared to that of other criteria already employed in the literature. The resulting locally adaptive approach achieves significant reductions in the computational cost of representative LES computations.
Distributed intrusion detection system based on grid security model
NASA Astrophysics Data System (ADS)
Su, Jie; Liu, Yahui
2008-03-01
Grid computing has developed rapidly with the development of network technology, and it can solve the problem of large-scale complex computing by sharing large-scale computing resources. In a grid environment, we can realize a distributed and load-balanced intrusion detection system. This paper first discusses the security mechanism in grid computing and the function of PKI/CA in the grid security system, then gives the application of grid computing characteristics in a distributed intrusion detection system (IDS) based on the Artificial Immune System. Finally, it gives a distributed intrusion detection system based on the grid security system that can reduce the processing delay and assure the detection rates.
A Worst-Case Approach for On-Line Flutter Prediction
NASA Technical Reports Server (NTRS)
Lind, Rick C.; Brenner, Martin J.
1998-01-01
Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative to the changing dynamics throughout the flight. A simulation clearly demonstrates this method can improve the efficiency of flight testing by accurately predicting the flutter margin to improve safety while reducing the necessary flight time.
NASA Astrophysics Data System (ADS)
Huang, Xiaomeng; Tang, Qiang; Tseng, Yuheng; Hu, Yong; Baker, Allison H.; Bryan, Frank O.; Dennis, John; Fu, Haohuan; Yang, Guangwen
2016-11-01
In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate the high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.
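The communication argument above rests on Chebyshev-type iteration needing no global inner products per step, unlike conjugate gradient methods, at the price of requiring eigenvalue bounds. A textbook-style (Saad-type) sketch on a toy SPD operator, not the CESM solver itself:

```python
import numpy as np

def chebyshev_solve(A, b, lmin, lmax, iters=80):
    """Chebyshev iteration for SPD A with eigenvalues in [lmin, lmax]."""
    theta = 0.5 * (lmax + lmin)     # spectrum center
    delta = 0.5 * (lmax - lmin)     # spectrum half-width
    sigma = theta / delta
    rho_old = 1.0 / sigma
    x = np.zeros_like(b)
    r = b - A @ x
    d = r / theta
    for _ in range(iters):
        x = x + d
        r = r - A @ d               # only matvecs: no global dot products,
        rho = 1.0 / (2.0 * sigma - rho_old)   # so no global reductions
        d = rho * rho_old * d + (2.0 * rho / delta) * r
        rho_old = rho
    return x

A = np.diag(np.linspace(1.0, 10.0, 50))  # toy SPD stand-in for the operator
b = np.ones(50)
x = chebyshev_solve(A, b, 1.0, 10.0)
print(np.linalg.norm(b - A @ x))
```

On a distributed machine every CG iteration requires global reductions for its dot products; the Chebyshev recurrence replaces them with fixed scalar coefficients, which is what cuts the communication cost at high core counts.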
A multiscale climate emulator for long-term morphodynamics (MUSCLE-morpho)
NASA Astrophysics Data System (ADS)
Antolínez, José Antonio A.; Méndez, Fernando J.; Camus, Paula; Vitousek, Sean; González, E. Mauricio; Ruggiero, Peter; Barnard, Patrick
2016-01-01
Interest in understanding long-term coastal morphodynamics has recently increased as climate change impacts become perceptible and accelerated. Multiscale, behavior-oriented and process-based models, or hybrids of the two, are typically applied with deterministic approaches which require considerable computational effort. In order to reduce the computational cost of modeling large spatial and temporal scales, input reduction and morphological acceleration techniques have been developed. Here we introduce a general framework for reducing dimensionality of wave-driver inputs to morphodynamic models. The proposed framework seeks to account for dependencies with global atmospheric circulation fields and deals simultaneously with seasonality, interannual variability, long-term trends, and autocorrelation of wave height, wave period, and wave direction. The model is also able to reproduce future wave climate time series accounting for possible changes in the global climate system. An application of long-term shoreline evolution is presented by comparing the performance of the real and the simulated wave climate using a one-line model. This article was corrected on 2 FEB 2016. See the end of the full text for details.
Mathematical Description of Complex Chemical Kinetics and Application to CFD Modeling Codes
NASA Technical Reports Server (NTRS)
Bittker, D. A.
1993-01-01
A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.
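A one-step global mechanism of the kind described, a single Arrhenius fuel-consumption rate coupled to heat release, can be sketched as follows; all rate constants are assumed for illustration only:

```python
import numpy as np

# Illustrative one-step global mechanism: fuel mass fraction Y is consumed
# at an Arrhenius rate, and the released heat raises the temperature T.
A, Ea, R = 5.0e8, 1.2e5, 8.314      # assumed pre-exponential (1/s), Ea (J/mol)
q_over_cp = 2000.0                  # assumed heat release / heat capacity (K)
Y, T, dt = 1.0, 1200.0, 1.0e-6
for _ in range(20000):
    w = A * Y * np.exp(-Ea / (R * T))   # global reaction rate (1/s)
    dY = min(dt * w, Y)                 # explicit Euler step, clipped at burnout
    Y -= dY
    T += q_over_cp * dY                 # heat release tied to fuel consumed
```

The thermal runaway this produces mimics the heat-release behavior a reduced mechanism must capture; a real reduced model would track several species and validated rate data.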
Uniform rovibrational collisional N2 bin model for DSMC, with application to atmospheric entry flows
NASA Astrophysics Data System (ADS)
Torres, E.; Bondar, Ye. A.; Magin, T. E.
2016-11-01
A state-to-state model for internal energy exchange and molecular dissociation allows for high-fidelity DSMC simulations. Elementary reaction cross sections for the N2 (v, J)+ N system were previously extracted from a quantum-chemical database originally compiled at NASA Ames Research Center. Because of the high computational cost of simulating the full range of inelastic collision processes (approximately 23 million reactions), a coarse-grain model, called the Uniform RoVibrational Collisional (URVC) bin model, can be used instead. This reduces the original 9390 rovibrational levels of N2 to 10 energy bins. In the present work, this reduced model is used to simulate a 2D flow configuration, which more closely reproduces the conditions of high-speed entry into Earth's atmosphere. For this purpose, the URVC bin model had to be adapted for integration into the "Rarefied Gas Dynamics Analysis System" (RGDAS), a separate high-performance DSMC code capable of handling complex geometries and parallel computations. RGDAS was developed at the Institute of Theoretical and Applied Mechanics in Novosibirsk, Russia for use by the European Space Agency (ESA) and shares many features with the well-known SMILE code developed by the same group. We show that the reduced mechanism developed previously can be implemented in RGDAS, and the results exhibit nonequilibrium effects consistent with those observed in previous 1D simulations.
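The coarse-graining step can be sketched as follows: levels are grouped into uniform energy bins, and each bin is assigned a total degeneracy and a degeneracy-weighted average energy. The level list here is a random stand-in for the NASA Ames rovibrational database:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in level list: energies (eV) and degeneracies for N2(v, J) levels.
E = np.sort(rng.uniform(0.0, 9.75, 9390))    # hypothetical level energies
g = rng.integers(1, 10, 9390).astype(float)  # hypothetical degeneracies

nbins = 10
edges = np.linspace(E.min(), E.max() + 1e-12, nbins + 1)
idx = np.digitize(E, edges) - 1              # uniform-energy bin of each level
gbin = np.array([g[idx == k].sum() for k in range(nbins)])
Ebin = np.array([np.average(E[idx == k], weights=g[idx == k])
                 for k in range(nbins)])     # degeneracy-weighted bin energy
```

The reduced model then evolves 10 bin populations instead of 9390 level populations.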
Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment
NASA Astrophysics Data System (ADS)
Nigg, D. W.; Wheeler, F. J.
1981-01-01
A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level is computed, using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out the top surface of the roof and that it is nearly as accurate as and much less costly than multi-dimensional techniques.
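The point-kernel idea used for the capture-gamma dose reduces to exponential attenuation with a buildup factor over the source-receiver distance. A sketch with assumed (not PDX-specific) values:

```python
import numpy as np

# Point-kernel sketch for an uncollided gamma flux with exponential
# attenuation and a crude linear buildup factor (all values assumed).
S = 1.0e10                                # source strength (photons/s)
mu = 0.006                                # attenuation coefficient (1/cm), illustrative
r = np.array([1.0e3, 5.0e3, 1.0e4])       # source-receiver distances (cm)
B = 1.0 + mu * r                          # assumed linear buildup factor
flux = S * B * np.exp(-mu * r) / (4 * np.pi * r ** 2)
```

A production calculation would sum such kernels over the source distribution and fold the flux with flux-to-dose conversion factors.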
NASA Astrophysics Data System (ADS)
Zhiying, Chen; Ping, Zhou
2017-11-01
Considering the computational precision and efficiency of robust optimization for a complex mechanical assembly relationship such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly-system-level approximation model of the overall parameters and blade-tip clearance, and then a set of samples of the design parameters and the objective response mean and/or standard deviation is generated by using the system approximation model and the design-of-experiment method. Finally, a new response surface approximation model is constructed from those samples and used for the robust optimization process. The results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way for the robust optimization design of turbine blade-tip radial running clearance.
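The response-surface construction step amounts to a least-squares polynomial fit over design-of-experiment samples. A sketch with a hypothetical two-parameter quadratic surface (the coefficients and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical DoE samples: 2 design parameters -> clearance mean response.
X = rng.uniform(-1.0, 1.0, (50, 2))
y = (1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 0] * X[:, 1]
     + 0.1 * X[:, 0] ** 2 + rng.normal(0.0, 0.01, 50))

def basis(X):
    """Quadratic response-surface basis: [1, x1, x2, x1*x2, x1^2, x2^2]."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
predict = lambda Xq: basis(Xq) @ coef     # cheap surrogate for optimization
```

The fitted `predict` is then what the robust optimizer queries in place of the expensive assembly simulation.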
Kim, Chang-Hyun; Song, Kwang-Soup; Trayanova, Natalia A; Lim, Ki Moo
2018-05-01
An intra-aortic balloon pump (IABP) is normally contraindicated in significant aortic regurgitation (AR): it causes or aggravates pre-existing AR, while performing well in the event of mitral regurgitation (MR). Indirect parameters, such as the mean systolic pressure, the product of heart rate and peak systolic pressure, and pressure-volume, are used to quantify the effect of IABP on ventricular workload. However, to date, no studies have directly quantified the reduction in workload with IABP. The goal of this study is to examine the effect of IABP therapy on ventricular mechanics under valvular insufficiency by using a computational model of the heart. For this purpose, the 3D electromechanical model of the failing ventricles used in previous studies was coupled with a lumped-parameter model of valvular regurgitation and the IABP-treated vascular system. IABP therapy was hindered, in terms of reducing myocardial tension generation and contractile ATP consumption, by valvular regurgitation, particularly in the AR condition. The IABP worsened the ventricular expansion induced by the regurgitated blood volume during diastole under the AR condition. The IABP reduced the LV stroke work in the AR, MR, and no-regurgitation conditions; it therefore helped the ventricle to pump blood and reduced the ventricular workload. In conclusion, the IABP partially performed its role in the MR condition, but it was hindered by AR and worsened the cardiovascular responses that follow AR. This study thus computationally demonstrates the reason for the clinical contraindication of IABP in AR patients.
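The lumped-parameter vascular side of such a coupled model can be illustrated with a minimal 2-element Windkessel: arterial pressure driven by inflow through a compliance and a peripheral resistance. Parameter values below are illustrative, not taken from the study:

```python
# 2-element Windkessel sketch: dP/dt = (Q_in - P/R) / C,
# with peripheral resistance R and arterial compliance C (assumed values).
R, C = 1.0, 1.5                 # mmHg*s/mL, mL/mmHg (illustrative)
dt, T_end = 1.0e-3, 20.0
P = 80.0                        # initial arterial pressure (mmHg)
Q = 90.0                        # constant ventricular inflow (mL/s), illustrative
for _ in range(int(T_end / dt)):
    P += dt * (Q - P / R) / C   # explicit Euler update of arterial pressure
```

With constant inflow, the pressure relaxes to the steady state P = Q*R; in the full model, Q would come from the 3D electromechanical ventricles and the IABP would modulate the vascular load.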
Recent advances in QM/MM free energy calculations using reference potentials.
Duarte, Fernanda; Amrein, Beat A; Blaha-Nelson, David; Kamerlin, Shina C L
2015-05-01
Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. The use of physically based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. As was already demonstrated 40 years ago, the use of simplified models still allows one to obtain cutting-edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014. Published by Elsevier B.V.
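The reference-potential idea evaluates the ensemble on a cheap low-level surface and corrects to the high level with a one-step free energy perturbation (Zwanzig) over the sampled energy gaps. A numerical sketch with synthetic Gaussian gaps, for which the exact answer is known:

```python
import numpy as np

rng = np.random.default_rng(3)
beta = 1.0 / 2.5                 # 1/kT in (kJ/mol)^-1, roughly 300 K
# Hypothetical energy gaps E_high - E_low (kJ/mol), sampled on the cheap
# low-level reference potential (here drawn as Gaussian for illustration).
dE = rng.normal(4.0, 1.0, 100000)
# One-step perturbation from reference to target level:
#   dA = -kT * ln < exp(-beta * dE) >_low
dA = -np.log(np.mean(np.exp(-beta * dE))) / beta
```

For Gaussian gaps the exact result is mu - beta*sigma^2/2 = 3.8 kJ/mol, so the estimate can be checked directly; in practice the gap distribution comes from low-level MD snapshots re-evaluated with the high-level QM/MM Hamiltonian.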
Influence of Boussinesq coefficient on depth-averaged modelling of rapid flows
NASA Astrophysics Data System (ADS)
Yang, Fan; Liang, Dongfang; Xiao, Yang
2018-04-01
The traditional Alternating Direction Implicit (ADI) scheme has been proven to be incapable of modelling trans-critical flows. Its inherent lack of shock-capturing capability often results in spurious oscillations and computational instabilities. However, the ADI scheme is still widely adopted in flood modelling software, and various special treatments have been designed to stabilise the computation. Modification of the Boussinesq coefficient to adjust the amount of fluid inertia is a numerical treatment that allows the ADI scheme to be applicable to rapid flows. This study comprehensively examines the impact of this numerical treatment over a range of flow conditions. A shock-capturing TVD-MacCormack model is used to provide reference results. For unsteady flows over a frictionless bed, such as idealised dam-break floods, the results suggest that an increase in the value of the Boussinesq coefficient reduces the amplitude of the spurious oscillations. The opposite is observed for steady rapid flows over a frictional bed. Finally, a two-dimensional urban flooding phenomenon is presented, involving unsteady flow over a frictional bed. The results show that increasing the value of the Boussinesq coefficient can significantly reduce the numerical oscillations and reduce the predicted area of inundation. In order to stabilise the ADI computations, the Boussinesq coefficient could be judiciously raised or lowered depending on whether the rapid flow is steady or unsteady and whether the bed is frictional or frictionless. An increase in the Boussinesq coefficient generally leads to overprediction of the propagating speed of the flood wave over a frictionless bed, but the opposite is true when bed friction is significant.
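The Boussinesq coefficient scales the advective part of the depth-averaged momentum flux, F = beta*q^2/h + g*h^2/2, so raising beta increases the effective fluid inertia. A minimal sketch of that term:

```python
# Depth-averaged momentum flux per unit width for depth h (m) and unit-width
# discharge q (m^2/s); beta is the Boussinesq (momentum correction) coefficient.
g = 9.81

def momentum_flux(h, q, beta=1.0):
    return beta * q ** 2 / h + 0.5 * g * h ** 2

h, q = 2.0, 3.0                          # illustrative flow state
f_std = momentum_flux(h, q, beta=1.0)    # standard value, uniform velocity profile
f_adj = momentum_flux(h, q, beta=1.05)   # slightly increased inertia
```

Adjusting beta in this flux is the numerical treatment the paper examines for stabilising ADI computations of rapid flows.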
More efficient optimization of long-term water supply portfolios
NASA Astrophysics Data System (ADS)
Kirsch, Brian R.; Characklis, Gregory W.; Dillard, Karen E. M.; Kelley, C. T.
2009-03-01
The use of temporary transfers, such as options and leases, has grown as utilities attempt to meet increases in demand while reducing dependence on the expansion of costly infrastructure capacity (e.g., reservoirs). Earlier work has been done to construct optimal portfolios comprising firm capacity and transfers, using decision rules that determine the timing and volume of transfers. However, such work has only focused on the short-term (e.g., 1-year scenarios), which limits the utility of these planning efforts. Developing multiyear portfolios can lead to the exploration of a wider range of alternatives but also increases the computational burden. This work utilizes a coupled hydrologic-economic model to simulate the long-term performance of a city's water supply portfolio. This stochastic model is linked with an optimization search algorithm that is designed to handle the high-frequency, low-amplitude noise inherent in many simulations, particularly those involving expected values. This noise is detrimental to the accuracy and precision of the optimized solution and has traditionally been controlled by investing greater computational effort in the simulation. However, the increased computational effort can be substantial. This work describes the integration of a variance reduction technique (control variate method) within the simulation/optimization as a means of more efficiently identifying minimum cost portfolios. Random variation in model output (i.e., noise) is moderated using knowledge of random variations in stochastic input variables (e.g., reservoir inflows, demand), thereby reducing the computing time by 50% or more. Using these efficiency gains, water supply portfolios are evaluated over a 10-year period in order to assess their ability to reduce costs and adapt to demand growth, while still meeting reliability goals. As a part of the evaluation, several multiyear option contract structures are explored and compared.
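The control variate method corrects the sample mean of a noisy output using a correlated input whose expectation is known analytically. A sketch with a hypothetical linear inflow-cost relationship (all distributions assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
# Hypothetical stochastic input (reservoir inflow, lognormal) and a noisy
# cost response; the inflow serves as the control variate because its mean
# is known analytically.
inflow = rng.lognormal(0.0, 0.5, n)
cost = 100.0 - 20.0 * inflow + rng.normal(0.0, 5.0, n)

mu_c = np.exp(0.5 ** 2 / 2.0)                     # known E[inflow]
b = np.cov(cost, inflow)[0, 1] / np.var(inflow)   # optimal CV coefficient
cv_est = cost.mean() - b * (inflow.mean() - mu_c)

var_plain = np.var(cost) / n                      # plain MC estimator variance
var_cv = np.var(cost - b * inflow) / n            # control-variate variance
```

Because cost and inflow are strongly correlated here, the estimator variance drops well below half the plain Monte Carlo value, mirroring the 50%-plus computing-time reduction reported in the abstract.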
A study of modelling simplifications in ground vibration predictions for railway traffic at grade
NASA Astrophysics Data System (ADS)
Germonpré, M.; Degrande, G.; Lombaert, G.
2017-10-01
Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.
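A classic simplified track model underlying such studies is an infinite Euler beam on a Winkler foundation, whose driving-point receptance has a closed form below the cut-on frequency. The rail parameters below are assumed, not taken from the case study:

```python
import numpy as np

# Driving-point receptance of an infinite Euler beam (the rail) on a Winkler
# foundation: w0/P = beta / (2*s) with beta = (s / 4EI)^(1/4) and the dynamic
# foundation term s = k - m*omega^2 (undamped, below cut-on). Assumed values.
EI = 6.4e6          # rail bending stiffness (N m^2)
m = 60.0            # rail mass per unit length (kg/m)
k = 5.0e7           # foundation stiffness per unit length (N/m^2)
f = np.linspace(1.0, 100.0, 200)          # frequencies (Hz), below cut-on
s = k - m * (2 * np.pi * f) ** 2
beta = (s / (4 * EI)) ** 0.25
H = beta / (2 * s)                        # receptance (m/N) at the load point
```

The receptance grows toward the cut-on frequency sqrt(k/m)/(2*pi), here about 145 Hz; the paper's periodic and 2.5D models generalize this kind of transfer function to realistic track-soil geometries.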
Multiplexed Predictive Control of a Large Commercial Turbofan Engine
NASA Technical Reports Server (NTRS)
Richter, Hanz; Singaraju, Anil; Litt, Jonathan S.
2008-01-01
Model predictive control is a strategy well-suited to handle the highly complex, nonlinear, uncertain, and constrained dynamics involved in aircraft engine control problems. However, it has thus far been infeasible to implement model predictive control in engine control applications, because of the combination of model complexity and the time allotted for the control update calculation. In this paper, a multiplexed implementation is proposed that dramatically reduces the computational burden of the quadratic programming optimization that must be solved online as part of the model-predictive-control algorithm. Actuator updates are calculated sequentially and cyclically in a multiplexed implementation, as opposed to the simultaneous optimization taking place in conventional model predictive control. Theoretical aspects are discussed based on a nominal model, and actual computational savings are demonstrated using a realistic commercial engine model.
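The multiplexed idea, updating one actuator per control step while freezing the rest, can be sketched as cyclic coordinate minimization of the quadratic-programming cost. The QP data below are made up; they are not an engine model:

```python
import numpy as np

# Multiplexed update sketch: instead of solving the full QP over all
# actuator moves at once, each step optimizes a single actuator (one
# coordinate) with the others held fixed, cycling through them.
H = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])        # assumed QP Hessian (SPD)
g = np.array([-8.0, -6.0, -4.0])       # assumed QP gradient term
lo, hi = -2.0, 2.0                     # actuator limits

u = np.zeros(3)
for step in range(300):                # cyclic: one actuator per step
    i = step % 3
    # unconstrained minimizer in coordinate i, clipped to the limits
    u[i] = np.clip(-(g[i] + H[i] @ u - H[i, i] * u[i]) / H[i, i], lo, hi)
```

For a convex QP this projected coordinate cycling converges to the constrained optimum, which is the sense in which the multiplexed scheme trades per-step work for more (cheaper) steps.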
Chiu, Chia-Yi; Köhn, Hans-Friedrich
2016-09-01
The asymptotic classification theory of cognitive diagnosis (ACTCD) provided the theoretical foundation for using clustering methods that do not rely on a parametric statistical model for assigning examinees to proficiency classes. Like general diagnostic classification models, clustering methods can be useful in situations where the true diagnostic classification model (DCM) underlying the data is unknown and possibly misspecified, or the items of a test conform to a mix of multiple DCMs. Clustering methods can also be an option when fitting advanced and complex DCMs encounters computational difficulties. These can range from the use of excessive CPU times to plain computational infeasibility. However, the propositions of the ACTCD have only been proven for the Deterministic Input Noisy Output "AND" gate (DINA) model and the Deterministic Input Noisy Output "OR" gate (DINO) model. For other DCMs, there does not exist a theoretical justification to use clustering for assigning examinees to proficiency classes. But if clustering is to be used legitimately, then the ACTCD must cover a larger number of DCMs than just the DINA model and the DINO model. Thus, the purpose of this article is to prove the theoretical propositions of the ACTCD for two other important DCMs, the Reduced Reparameterized Unified Model and the General Diagnostic Model.
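The clustering route can be sketched on synthetic DINA-like data: examinees are summarized by attribute-wise sum scores and then grouped with a plain k-means. The Q-matrix, response model, and parameters below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical Q-matrix mapping 10 items to 2 attributes, and latent profiles.
Q = rng.integers(0, 2, (10, 2))
truth = rng.integers(0, 2, (200, 2))
# DINA-like responses: mastery of all required attributes raises P(correct)
# from an assumed guessing rate 0.2 to 0.8.
p_correct = 0.2 + 0.6 * (truth @ Q.T == Q.sum(1))
Y = (rng.random((200, 10)) < p_correct).astype(float)
W = Y @ Q / np.maximum(Q.sum(0), 1)        # attribute-wise sum-score summary

# plain k-means over the sum-score space, 4 proficiency classes
k = 4
centers = W[rng.choice(200, k, replace=False)]
for _ in range(20):
    d = ((W[:, None, :] - centers[None]) ** 2).sum(-1)
    lab = d.argmin(1)
    for j in range(k):
        if (lab == j).any():
            centers[j] = W[lab == j].mean(0)
```

The ACTCD supplies the asymptotic justification that such clusters recover the true proficiency classes; this sketch only illustrates the mechanics.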
Systematic comparison of the behaviors produced by computational models of epileptic neocortex.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warlaumont, A. S.; Lee, H. C.; Benayoun, M.
2010-12-01
Two existing models of brain dynamics in epilepsy, one detailed (i.e., realistic) and one abstract (i.e., simplified) are compared in terms of behavioral range and match to in vitro mouse recordings. A new method is introduced for comparing across computational models that may have very different forms. First, high-level metrics were extracted from model and in vitro output time series. A principal components analysis was then performed over these metrics to obtain a reduced set of derived features. These features define a low-dimensional behavior space in which quantitative measures of behavioral range and degree of match to real data can be obtained. The detailed and abstract models and the mouse recordings overlapped considerably in behavior space. Both the range of behaviors and similarity to mouse data were similar between the detailed and abstract models. When no high-level metrics were used and principal components analysis was computed over raw time series, the models overlapped minimally with the mouse recordings. The method introduced here is suitable for comparing across different kinds of model data and across real brain recordings. It appears that, despite differences in form and computational expense, detailed and abstract models do not necessarily differ in their behaviors.
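The metric-then-PCA pipeline can be sketched directly with an SVD; the metric matrix here is synthetic stand-in data, not the paper's extracted metrics:

```python
import numpy as np

rng = np.random.default_rng(5)
# Rows: model runs / recordings; columns: high-level metrics extracted from
# the time series (synthetic stand-ins, with one deliberately correlated pair).
metrics = rng.normal(0.0, 1.0, (30, 6))
metrics[:, 3] = 2.0 * metrics[:, 0] + 0.1 * rng.normal(size=30)

Xc = metrics - metrics.mean(axis=0)          # center each metric
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                       # 2-D "behavior space" coordinates
explained = s ** 2 / np.sum(s ** 2)          # variance fraction per component
```

Overlap between model families can then be assessed by comparing their point clouds in the `scores` plane.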
Adaptive surrogate model based multiobjective optimization for coastal aquifer management
NASA Astrophysics Data System (ADS)
Song, Jian; Yang, Yun; Wu, Jianfeng; Wu, Jichun; Sun, Xiaomin; Lin, Jin
2018-06-01
In this study, a novel surrogate model assisted multiobjective memetic algorithm (SMOMA) is developed for optimal pumping strategies of large-scale coastal groundwater problems. The proposed SMOMA integrates an efficient data-driven surrogate model with an improved non-dominated sorted genetic algorithm-II (NSGAII) that employs a local search operator to accelerate its convergence in optimization. The surrogate model, based on the Kernel Extreme Learning Machine (KELM), is developed and evaluated as an approximate simulator to generate the patterns of regional groundwater flow and salinity levels in coastal aquifers, reducing a huge computational burden. The KELM model is adaptively trained during the evolutionary search to satisfy the desired fidelity level of the surrogate, so that it inhibits error accumulation in forecasting and correctly converges to the true Pareto-optimal front. The proposed methodology is then applied to a large-scale coastal aquifer management problem in Baldwin County, Alabama. Objectives of minimizing the saltwater mass increase and maximizing the total pumping rate in the coastal aquifers are considered. The optimal solutions achieved by the proposed adaptive surrogate model are compared against those obtained from a one-shot surrogate model and from the original simulation model. The adaptive surrogate model not only improves the prediction accuracy of the Pareto-optimal solutions relative to the one-shot surrogate model, but also maintains quality equivalent to that of NSGAII coupled with the original simulation model, while retaining the advantage of surrogate models in reducing the computational burden, with time savings of up to 94%. This study shows that the proposed methodology is a computationally efficient and promising tool for the multiobjective optimization of coastal aquifer management.
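A KELM surrogate is, in essence, kernel ridge regression: the output weights solve (K + I/C) beta = y for a kernel Gram matrix K and regularization constant C. A sketch with a hypothetical pumping-to-salinity response (kernel width, C, and the response function are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical training set: 3 pumping-rate parameters -> salinity response.
X = rng.uniform(0.0, 1.0, (80, 3))
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]

def rbf(A, B, gamma=2.0):
    """Gaussian kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

C = 1.0e3                                          # regularization constant
K = rbf(X, X)
beta = np.linalg.solve(K + np.eye(len(X)) / C, y)  # KELM output weights
surrogate = lambda Xq: rbf(Xq, X) @ beta           # fast approximate simulator
```

In the adaptive scheme of the paper, new simulator evaluations found during the evolutionary search would be appended to `X`, `y` and the weights re-solved.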
Seth, Ajay; Matias, Ricardo; Veloso, António P.; Delp, Scott L.
2016-01-01
The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual’s anthropometry. We compared the model to “gold standard” bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761
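The first two degrees of freedom constrain a scapular point to an ellipsoidal thoracic surface. A sketch of that parametrization, with assumed semi-axes (not the model's calibrated anthropometry):

```python
import numpy as np

# Point on an ellipsoidal thoracic surface for elevation/abduction angles
# (rad); semi-axes a, b, c are assumed illustrative values in meters.
a, b, c = 0.10, 0.14, 0.09

def scapula_point(elev, abd):
    return np.array([a * np.cos(elev) * np.sin(abd),
                     b * np.sin(elev),
                     c * np.cos(elev) * np.cos(abd)])

p = scapula_point(0.3, 0.5)
# by construction, p satisfies (x/a)^2 + (y/b)^2 + (z/c)^2 = 1
```

The remaining two degrees of freedom (upward and internal rotation) would orient the scapula body about this surface point.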
NASA Technical Reports Server (NTRS)
Drozda, Tomasz G.; Quinlan, Jesse R.; Pisciuneri, Patrick H.; Yilmaz, S. Levent
2012-01-01
Significant progress has been made in the development of subgrid scale (SGS) closures based on a filtered density function (FDF) for large eddy simulations (LES) of turbulent reacting flows. The FDF is the counterpart of the probability density function (PDF) method, which has proven effective in Reynolds averaged simulations (RAS). However, while systematic progress is being made advancing the FDF models for relatively simple flows and lab-scale flames, the application of these methods in complex geometries and high speed, wall-bounded flows with shocks remains a challenge. The key difficulties are the significant computational cost associated with solving the FDF transport equation and numerically stiff finite rate chemistry. For LES/FDF methods to make a more significant impact in practical applications a pragmatic approach must be taken that significantly reduces the computational cost while maintaining high modeling fidelity. An example of one such ongoing effort is at the NASA Langley Research Center, where the first generation FDF models, namely the scalar filtered mass density function (SFMDF) are being implemented into VULCAN, a production-quality RAS and LES solver widely used for design of high speed propulsion flowpaths. 
This effort leverages internal and external collaborations to reduce the overall computational cost of high fidelity simulations in VULCAN by: implementing high order methods that allow reduction in the total number of computational cells without loss in accuracy; implementing first generation of high fidelity scalar PDF/FDF models applicable to high-speed compressible flows; coupling RAS/PDF and LES/FDF into a hybrid framework to efficiently and accurately model the effects of combustion in the vicinity of the walls; developing efficient Lagrangian particle tracking algorithms to support robust solutions of the FDF equations for high speed flows; and utilizing finite rate chemistry parametrization, such as flamelet models, to reduce the number of transported reactive species and remove numerical stiffness. This paper briefly introduces the SFMDF model (highlighting key benefits and challenges), and discusses particle tracking for flows with shocks, the hybrid coupled RAS/PDF and LES/FDF model, flamelet generated manifolds (FGM) model, and the Irregularly Portioned Lagrangian Monte Carlo Finite Difference (IPLMCFD) methodology for scalable simulation of high-speed reacting compressible flows.
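The flamelet parametrization mentioned above replaces transported reactive species with table lookups in a reduced coordinate such as mixture fraction. The table below is an illustrative stand-in, not VULCAN's chemistry:

```python
import numpy as np

# Flamelet-style lookup sketch: the solver transports a mixture fraction Z
# and reads thermochemical state from a precomputed 1-D table (here a crude
# piecewise-linear temperature profile with an assumed stoichiometric Zst).
Z_tab = np.linspace(0.0, 1.0, 101)
Zst = 0.3                                  # assumed stoichiometric mixture fraction
T_tab = 300.0 + 1500.0 * np.minimum(Z_tab / Zst, (1.0 - Z_tab) / (1.0 - Zst))

def lookup_T(Z):
    return np.interp(Z, Z_tab, T_tab)      # table lookup replaces stiff chemistry
```

Real flamelet-generated manifolds add progress-variable and enthalpy dimensions, but the cost structure is the same: interpolation instead of integrating stiff finite-rate kinetics.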
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing an appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell).
These accelerating approaches for PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated against a benchmark solution of the discrete-sectional method. The simulation results show that the comprehensive approach attains a very favorable improvement in cost without sacrificing computational accuracy.
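The acceptance-rejection (thinning) step that the abstract above describes can be sketched in a few lines. This is a minimal equal-weight toy with a user-supplied constant majorant `k_max`; the paper's differential weighting, coagulation-rule matrix, and GPU parallelism are all omitted, and the function and parameter names are illustrative, not the authors'.

```python
import random

def coagulate(volumes, kernel, k_max, t_end, seed=0):
    """Acceptance-rejection Monte Carlo coagulation.

    Candidate pairs are generated at the (cheap) majorant rate and thinned
    by accepting each with probability kernel(v_i, v_j) / k_max, which is
    statistically exact as long as k_max bounds the true kernel.
    """
    rng = random.Random(seed)
    v = list(volumes)
    t = 0.0
    while len(v) > 1:
        n = len(v)
        pair_rate = 0.5 * n * (n - 1) * k_max    # candidate-event rate
        t += rng.expovariate(pair_rate)          # waiting time to next candidate
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)           # single draw, no double loop
        if rng.random() < kernel(v[i], v[j]) / k_max:   # thinning step
            v[i] += v[j]                         # merge particle j into i
            v.pop(j)
    return v

# constant-kernel test case: total volume is conserved exactly
final_volumes = coagulate([1.0] * 200, lambda a, b: 1.0, 1.0, t_end=0.01)
```

Note that rejected candidates still advance the clock at the majorant rate, which is the standard way thinning keeps the event-time statistics exact.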
Laboratory modeling and analysis of aircraft-lightning interactions
NASA Technical Reports Server (NTRS)
Turner, C. D.; Trost, T. F.
1982-01-01
Modeling studies of the interaction of a delta-wing aircraft with direct lightning strikes were carried out using an approximate scale model of an F-106B. The model, which is three feet in length, is subjected to direct injection of fast current pulses supplied by wires, which simulate the lightning channel and are attached at various locations on the model. Measurements are made of the resulting transient electromagnetic fields using time-derivative sensors. The sensor outputs are sampled and digitized by computer. The noise level is reduced by averaging the sensor output from ten input pulses at each sample time. Computer analysis of the measured fields includes Fourier transformation and the computation of transfer functions for the model. Prony analysis is also used to determine the natural frequencies of the model. Comparisons of model natural frequencies extracted by Prony analysis with those from in-flight direct-strike data usually show lower damping in the in-flight case. This is indicative of either a lightning channel with a higher impedance than the wires on the model, only one attachment point, or short streamers instead of a long channel.
Global/local methods for probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.
1993-01-01
A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc., and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models, which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate significant computational savings with minimal loss in accuracy.
Computation of Turbulent Wake Flows in Variable Pressure Gradient
NASA Technical Reports Server (NTRS)
Duquesne, N.; Carlson, J. R.; Rumsey, C. L.; Gatski, T. B.
1999-01-01
Transport aircraft performance is strongly influenced by the effectiveness of high-lift systems. Developing wakes generated by the airfoil elements are subjected to strong pressure gradients and can thicken very rapidly, limiting maximum lift. This paper focuses on the effects of various pressure gradients on developing symmetric wakes and on the ability of a linear eddy viscosity model and a non-linear explicit algebraic stress model to accurately predict their downstream evolution. In order to reduce the uncertainties arising from numerical issues when assessing the performance of turbulence models, three different numerical codes with the same turbulence models are used. Results are compared to available experimental data to assess the accuracy of the computational results.
Phenotypic models of evolution and development: geometry as destiny.
François, Paul; Siggia, Eric D
2012-12-01
Quantitative models of development that consider all relevant genes typically are difficult to fit to embryonic data alone and have many redundant parameters. Computational evolution supplies models of phenotype with relatively few variables and parameters that allow the patterning dynamics to be reduced to a geometrical picture for how the state of a cell moves. The clock and wavefront model, which defines the phenotype of somitogenesis, can be represented as a sequence of two discrete dynamical transitions (bifurcations). The expression-time to space map for Hox genes and the posterior dominance rule are phenotypes that naturally follow from computational evolution without considering the genetics of Hox regulation.
Ferrucci, Filomena; Salza, Pasquale; Sarro, Federica
2017-06-29
The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, the parallel GAs solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time. Moreover, we are interested in understanding which PGA model can be most effective among the global, grid, and island models. We empirically assessed the performance of these three parallel models with respect to a sequential GA on a software engineering problem, evaluating the execution time and the achieved speedup. We also analysed the behaviour of the parallel models in relation to the overhead produced by the use of Hadoop MapReduce and the GAs' computational effort, which gives a more machine-independent measure of these algorithms. We exploited three problem instances to differentiate the computation load and three cluster configurations based on 2, 4, and 8 parallel nodes. Moreover, we estimated the costs of the execution of the experimentation on a potential cloud infrastructure, based on the pricing of the major commercial cloud providers. The empirical study revealed that the use of PGA based on the island model outperforms the other parallel models and the sequential GA for all the considered instances and clusters. Using 2, 4, and 8 nodes, the island model achieves an average speedup over the three datasets of 1.8, 3.4, and 7.0 times, respectively. Hadoop MapReduce has a set of different constraints that need to be considered during the design and the implementation of parallel algorithms. 
The overhead of data store (i.e., HDFS) accesses, communication, and latency requires solutions that reduce data store operations. For this reason, the island model is more suitable for PGAs than the global and grid models, also in terms of costs when executed on a commercial cloud provider.
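The island model discussed above can be illustrated with a small in-process sketch: several subpopulations evolve independently and periodically exchange their best individuals around a ring. This toy runs in a single process on a OneMax fitness function; it is not the authors' Hadoop MapReduce implementation, and all parameter names and defaults are illustrative.

```python
import random

def island_ga(fitness, n_islands=4, pop_size=20, genome_len=16,
              generations=50, migrate_every=10, seed=0):
    """Toy island-model GA on bitstrings with ring migration of elites."""
    rng = random.Random(seed)
    islands = [[[rng.randint(0, 1) for _ in range(genome_len)]
                for _ in range(pop_size)] for _ in range(n_islands)]

    def evolve(pop):
        pop = sorted(pop, key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)     # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(genome_len)] ^= 1  # single-bit mutation
            children.append(child)
        return survivors + children

    for gen in range(1, generations + 1):
        islands = [evolve(pop) for pop in islands]
        if gen % migrate_every == 0:               # ring migration of elites
            elites = [max(pop, key=fitness) for pop in islands]
            for k in range(n_islands):
                islands[k][-1] = list(elites[(k - 1) % n_islands])
    return max((max(pop, key=fitness) for pop in islands), key=fitness)

best = island_ga(sum)   # OneMax: fitness is the number of 1 bits
```

Migration is the only inter-island communication, which is why the pattern maps well onto batch frameworks where cross-node traffic is expensive.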
Progress of High Efficiency Centrifugal Compressor Simulations Using TURBO
NASA Technical Reports Server (NTRS)
Kulkarni, Sameer; Beach, Timothy A.
2017-01-01
Three-dimensional, time-accurate, and phase-lagged computational fluid dynamics (CFD) simulations of the High Efficiency Centrifugal Compressor (HECC) stage were generated using the TURBO solver. Changes to the TURBO Parallel Version 4 source code were made in order to properly model the no-slip boundary condition along the spinning hub region for centrifugal impellers. A startup procedure was developed to generate a converged flow field in TURBO. This procedure initialized computations on a coarsened mesh generated by the Turbomachinery Gridding System (TGS) and relied on a method of systematically increasing wheel speed and backpressure. Baseline design-speed TURBO results generally overpredicted total pressure ratio, adiabatic efficiency, and the choking flow rate of the HECC stage as compared with the design-intent CFD results of Code Leo. Including diffuser fillet geometry in the TURBO computation resulted in a 0.6 percent reduction in the choking flow rate and led to a better match with design-intent CFD. Diffuser fillets reduced annulus cross-sectional area but also reduced corner separation, and thus blockage, in the diffuser passage. It was found that the TURBO computations are somewhat insensitive to inlet total pressure changing from the TURBO default inlet pressure of 14.7 pounds per square inch (101.35 kilopascals) down to 11.0 pounds per square inch (75.83 kilopascals), the inlet pressure of the component test. Off-design tip clearance was modeled in TURBO in two computations: one in which the blade tip geometry was trimmed by 12 mils (0.3048 millimeters), and another in which the hub flow path was moved to reflect a 12-mil axial shift in the impeller hub, creating a step at the hub. The one-dimensional results of these two computations indicate non-negligible differences between the two modeling approaches.
Markov Jump-Linear Performance Models for Recoverable Flight Control Computers
NASA Technical Reports Server (NTRS)
Zhang, Hong; Gray, W. Steven; Gonzalez, Oscar R.
2004-01-01
Single event upsets in digital flight control hardware induced by atmospheric neutrons can reduce system performance and possibly introduce a safety hazard. One method currently under investigation to help mitigate the effects of these upsets is NASA Langley's Recoverable Computer System. In this paper, a Markov jump-linear model is developed for a recoverable flight control system, which will be validated using data from future experiments with simulated and real neutron environments. The method of tracking error analysis and the plan for the experiments are also described.
On the use of programmable hardware and reduced numerical precision in earth-system modeling.
Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N
2015-09-01
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
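The precision-reduction experiment described above can be emulated in software by rounding the significand of every state variable after each update. The sketch below uses the single-scale Lorenz '96 model rather than the paper's two-scale version, and rounds floats in Python as a stand-in for an FPGA number format; the step size, forcing, and bit counts are illustrative choices, not the authors' settings.

```python
import math

def round_significand(x, bits):
    """Emulate a reduced-precision float by rounding the significand of x
    to `bits` bits (a software stand-in for an FPGA number format)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return math.ldexp(round(m * scale) / scale, e)

def lorenz96_step(x, forcing, dt, bits=None):
    """One forward-Euler step of the single-scale Lorenz '96 model; if
    `bits` is set, every updated variable is rounded to that precision."""
    n = len(x)
    dx = [(x[(i + 1) % n] - x[i - 2]) * x[i - 1] - x[i] + forcing
          for i in range(n)]
    new = [x[i] + dt * dx[i] for i in range(n)]
    if bits is not None:
        new = [round_significand(v, bits) for v in new]
    return new

x_full = [8.0] * 40
x_full[0] = 8.008                        # perturb off the unstable fixed point
x_low = list(x_full)
for _ in range(200):
    x_full = lorenz96_step(x_full, 8.0, 0.01)
    x_low = lorenz96_step(x_low, 8.0, 0.01, bits=10)
drift = max(abs(a - b) for a, b in zip(x_full, x_low))   # rounding-induced error
```

Comparing `drift` against the model's natural error growth over short runs is the kind of check the authors use to pick a safe precision level for long simulations.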
Modeling Interactions in Small Groups
ERIC Educational Resources Information Center
Heise, David R.
2013-01-01
A new theory of interaction within small groups posits that group members initiate actions when tension mounts between the affective meanings of their situational identities and impressions produced by recent events. Actors choose partners and behaviors so as to reduce the tensions. A computer model based on this theory, incorporating reciprocal…
Dynamic Simulation of Crime Perpetration and Reporting to Examine Community Intervention Strategies
ERIC Educational Resources Information Center
Yonas, Michael A.; Burke, Jessica G.; Brown, Shawn T.; Borrebach, Jeffrey D.; Garland, Richard; Burke, Donald S.; Grefenstette, John J.
2013-01-01
Objective: To develop a conceptual computational agent-based model (ABM) to explore community-wide versus spatially focused crime reporting interventions to reduce community crime perpetrated by youth. Method: Agents within the model represent individual residents and interact on a two-dimensional grid representing an abstract nonempirically…
NASA Technical Reports Server (NTRS)
Beutner, Thomas John
1993-01-01
Porous wall wind tunnels have been used for several decades and have proven effective in reducing wall interference effects in both low speed and transonic testing. They allow for testing through Mach 1, reduce blockage effects and reduce shock wave reflections in the test section. Their usefulness in developing computational fluid dynamics (CFD) codes has been limited, however, by the difficulties associated with modelling the effect of a porous wall in CFD codes. Previous approaches to modelling porous wall effects have depended either upon a simplified linear boundary condition, which has proven inadequate, or upon detailed measurements of the normal velocity near the wall, which require extensive wind tunnel time. The current work was initiated in an effort to find a simple, accurate method of modelling a porous wall boundary condition in CFD codes. The development of such a method would allow data from porous wall wind tunnels to be used more readily in validating CFD codes. This would be beneficial when transonic validations are desired, or when large models are used to achieve high Reynolds numbers in testing. A computational and experimental study was undertaken to investigate a new method of modelling solid and porous wall boundary conditions in CFD codes. The method utilized experimental measurements at the walls to develop a flow field solution based on the method of singularities. This flow field solution was then imposed as a pressure boundary condition in a CFD simulation of the internal flow field. The effectiveness of this method in describing the effect of porosity changes on the wall was investigated. Also, the effectiveness of this method when only sparse experimental measurements were available has been investigated. The current work demonstrated this approach for low speed flows and compared the results with experimental data obtained from a heavily instrumented variable porosity test section. 
The approach developed was simple, computationally inexpensive, and did not require extensive or intrusive measurements of the boundary conditions during the wind tunnel test. It may be applied to both solid and porous wall wind tunnel tests.
Cycle-averaged dynamics of a periodically driven, closed-loop circulation model
NASA Technical Reports Server (NTRS)
Heldt, T.; Chang, J. L.; Chen, J. J. S.; Verghese, G. C.; Mark, R. G.
2005-01-01
Time-varying elastance models have been used extensively in the past to simulate the pulsatile nature of cardiovascular waveforms. Frequently, however, one is interested in dynamics that occur over longer time scales, in which case a detailed simulation of each cardiac contraction becomes computationally burdensome. In this paper, we apply circuit-averaging techniques to a periodically driven, closed-loop, three-compartment recirculation model. The resultant cycle-averaged model is linear and time invariant, and greatly reduces the computational burden. It is also amenable to systematic order reduction methods that lead to further efficiencies. Despite its simplicity, the averaged model captures the dynamics relevant to the representation of a range of cardiovascular reflex mechanisms.
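The cycle-averaging idea can be shown on an even smaller system than the paper's three-compartment model: a one-compartment windkessel driven by a periodic ejection pulse versus the same circuit driven by the pulse's cycle average. All parameter values here are hypothetical; the point is only that, after transients, the averaged drive reproduces the cycle-mean pressure at a fraction of the temporal resolution the pulsatile waveform needs.

```python
def windkessel(flow, R=1.0, C=1.0, dt=1e-3, t_end=20.0):
    """Forward-Euler integration of a one-compartment windkessel,
    C dP/dt = Q(t) - P/R; returns the pressure trace."""
    p, trace = 0.0, []
    for k in range(int(t_end / dt)):
        t = k * dt
        p += dt * (flow(t) - p / R) / C
        trace.append(p)
    return trace

period, duty, amp = 1.0, 0.3, 2.0
pulse = lambda t: amp if (t % period) < duty else 0.0   # pulsatile ejection
mean_flow = amp * duty                                  # its cycle average

p_pulse = windkessel(pulse)
p_avg = windkessel(lambda t: mean_flow)
cycle_mean = sum(p_pulse[-1000:]) / 1000   # mean over the final full cycle
# in periodic steady state, cycle_mean and p_avg[-1] agree (both near Q*R)
```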
Efficient Computation Of Behavior Of Aircraft Tires
NASA Technical Reports Server (NTRS)
Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.
1989-01-01
NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of technique reducing substantially number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry and combinations exhibited by response of tire identified.
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012
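The MapReduce pattern underlying the framework above can be shown as a toy in-memory map/shuffle/reduce over gridded observations. This sketch stands in for the distributed Hadoop/HBase pipeline the paper builds; the record fields and cell labels are hypothetical.

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (cell, value) key-value pairs from raw observation records."""
    for rec in records:
        yield rec["cell"], rec["value"]

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework would across nodes."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each cell's values to a mean (one reducer per key)."""
    return {cell: sum(vals) / len(vals) for cell, vals in groups.items()}

obs = [{"cell": "A1", "value": 2.0},
       {"cell": "A1", "value": 4.0},
       {"cell": "B2", "value": 1.0}]
means = reduce_phase(shuffle(map_phase(obs)))
```

In a real deployment the map and reduce functions are what the geoscientist writes; the framework supplies the shuffle, distribution, and fault tolerance.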
The QSE-Reduced Nuclear Reaction Network for Silicon Burning
NASA Astrophysics Data System (ADS)
Hix, W. Raphael; Parete-Koon, Suzanne T.; Freiburghaus, Christian; Thielemann, Friedrich-Karl
2007-09-01
Iron and neighboring nuclei are formed in massive stars shortly before core collapse and during their supernova outbursts, as well as during thermonuclear supernovae. Complete and incomplete silicon burning are responsible for the production of a wide range of nuclei with atomic mass numbers from 28 to 64. Because of the large number of nuclei involved, accurate modeling of silicon burning is computationally expensive. However, examination of the physics of silicon burning has revealed that the nuclear evolution is dominated by large groups of nuclei in mutual equilibrium. We present a new hybrid equilibrium-network scheme which takes advantage of this quasi-equilibrium in order to reduce the number of independent variables calculated. This allows accurate prediction of the nuclear abundance evolution, deleptonization, and energy generation at a greatly reduced computational cost when compared to a conventional nuclear reaction network. During silicon burning, the resultant QSE-reduced network is approximately an order of magnitude faster than the full network it replaces and requires the tracking of less than a third as many abundance variables, without significant loss of accuracy. These reductions in computational cost and the number of species evolved make QSE-reduced networks well suited for inclusion within hydrodynamic simulations, particularly in multidimensional applications.
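The quasi-equilibrium reduction above can be illustrated on a deliberately tiny system rather than a silicon-burning network: a fast exchange A ⇌ B drained slowly by B → C. The full network must resolve the fast rates, while the reduced model tracks only the group abundance G = A + B with the internal split held at its equilibrium ratio, and so tolerates a much larger time step. All species and rate constants here are hypothetical.

```python
def full_network(a, b, c, kf, kb, ks, dt, steps):
    """Explicit integration of all three species: fast exchange A <-> B plus
    a slow drain B -> C (the 'full network', which needs a tiny time step)."""
    for _ in range(steps):
        r_ab = kf * a - kb * b        # fast exchange flux
        r_bc = ks * b                 # slow drain
        a += dt * (-r_ab)
        b += dt * (r_ab - r_bc)
        c += dt * r_bc
    return a, b, c

def qse_reduced(g, c, kf, kb, ks, dt, steps):
    """QSE-style reduction: track only G = A + B, assuming the A:B split
    stays at its equilibrium ratio b/a = kf/kb inside the group."""
    frac_b = kf / (kf + kb)           # equilibrium fraction of G held as B
    for _ in range(steps):
        r_bc = ks * frac_b * g
        g += dt * (-r_bc)
        c += dt * r_bc
    return g, c

# fast pair (kf = kb = 100) drained slowly (ks = 1), integrated to t = 2;
# the reduced model uses a 10x larger step yet matches the group abundance
a, b, c = full_network(1.0, 0.0, 0.0, 100.0, 100.0, 1.0, 1e-4, 20000)
g, c_red = qse_reduced(1.0, 0.0, 100.0, 100.0, 1.0, 1e-3, 2000)
```

One group variable replaces two species here; in the silicon-burning case the same idea collapses large equilibrium clusters, which is where the order-of-magnitude savings come from.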
Junwei Ma; Han Yuan; Sunderam, Sridhar; Besio, Walter; Lei Ding
2017-07-01
Neural activity inside the human brain generates electrical signals that can be detected on the scalp. Electroencephalography (EEG) is one of the most widely used techniques for helping physicians and researchers diagnose and understand various brain diseases. By its nature, the EEG signal has very high temporal resolution but poor spatial resolution. To achieve higher spatial resolution, a novel tri-polar concentric ring electrode (TCRE) has been developed to directly measure the surface Laplacian (SL). The objective of the present study is to accurately calculate the SL for the TCRE based on a realistic-geometry head model. A locally dense mesh was proposed to represent the head surface, where the locally dense regions match the small structural components of the TCRE; the remaining areas were left coarser to reduce computational load. We conducted computer simulations to evaluate the performance of the proposed mesh and to quantify numerical errors relative to a low-density model. Finally, with the achieved accuracy, we present the computed forward lead field of the SL for the TCRE, for the first time in a realistic-geometry head model, and demonstrate that it has better spatial resolution than the SL computed from classic EEG recordings.
Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Joseph E.; Brown, Judith Alice
2018-06-15
In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity, in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. The algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely isotropic plate with a hole subjected to tension, and (2) a transversely isotropic tube with two side holes subjected to torsion.
Discrete-time model reduction in limited frequency ranges
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A mathematical formulation for model reduction of discrete-time systems such that the reduced-order model represents the system in a particular frequency range is discussed. The algorithm transforms the full-order system into balanced coordinates using frequency-weighted discrete controllability and observability Gramians. In this form a criterion is derived to guide truncation of states based on their contribution to the frequency range of interest. Minimization of the criterion is accomplished without need for numerical optimization. Balancing requires the computation of discrete frequency-weighted Gramians; closed-form solutions for their computation are developed. Numerical examples are discussed to demonstrate the algorithm.
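The Gramian-based balancing step that the abstract builds on can be sketched for a tiny discrete-time system. This toy omits the paper's frequency weighting and uses plain (unweighted) Gramians obtained by fixed-point iteration of the discrete Lyapunov equation; the example matrices are arbitrary illustrative choices.

```python
def matmul(X, Y):
    """Dense matrix product for small list-of-lists matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def dlyap(A, W, iters=400):
    """Fixed-point solve of P = A P A' + W; the iteration converges
    whenever A is stable in the discrete-time sense."""
    n = len(A)
    P = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        APAt = matmul(matmul(A, P), transpose(A))
        P = [[APAt[i][j] + W[i][j] for j in range(n)] for i in range(n)]
    return P

# stable 2-state discrete-time system (eigenvalues 0.8 and 0.3)
A = [[0.8, 0.1], [0.0, 0.3]]
B = [[1.0], [0.1]]
C = [[1.0, 0.0]]

P = dlyap(A, matmul(B, transpose(B)))                 # controllability Gramian
Q = dlyap(transpose(A), matmul(transpose(C), C))      # observability Gramian

# Hankel singular values = sqrt(eigenvalues of P Q); for 2x2, via trace/det
M = matmul(P, Q)
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
disc = max(tr * tr / 4.0 - det, 0.0) ** 0.5
hsv = [max(tr / 2.0 + disc, 0.0) ** 0.5, max(tr / 2.0 - disc, 0.0) ** 0.5]
# a large gap between hsv[0] and hsv[1] marks states safe to truncate
```

The paper's contribution is to weight these Gramians so that the singular values reflect only the frequency band of interest rather than all frequencies equally.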
NASA Technical Reports Server (NTRS)
Levison, W. H.; Baron, S.
1984-01-01
Preliminary results in the application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues are discussed in the context of an air to air target tracking task. The closed loop model is described briefly. Then, problem simplifications that are employed to reduce computational costs are discussed. Finally, model results showing sensitivity of performance to various assumptions concerning the simulator and/or the pilot are presented.
Finite Element Model to Reduce Fire and Blast Vulnerability
2013-01-01
Figures include the scapula, clavicle, and arm models attached to the larger model, and the full-body finite element model of the lower limbs. Anatomical surfaces of the scapula and clavicle were obtained and... ulna and hand bones. For the arms, hands, scapula and clavicle, the materials were made rigid and joints were created using computational constraints.
Experimental and Computational Analysis of Polyglutamine-Mediated Cytotoxicity
Tang, Matthew Y.; Proctor, Carole J.; Woulfe, John; Gray, Douglas A.
2010-01-01
Expanded polyglutamine (polyQ) proteins are known to be the causative agents of a number of human neurodegenerative diseases, but the molecular basis of their cytotoxicity is still poorly understood. PolyQ tracts may impede the activity of the proteasome, and evidence from single-cell imaging suggests that the sequestration of polyQ into inclusion bodies can reduce the proteasomal burden and promote cell survival, at least in the short term. The presence of misfolded protein also leads to activation of stress kinases such as p38MAPK, which can be cytotoxic. The relationships of these systems are not well understood. We have used fluorescent reporter systems imaged in living cells, and stochastic computer modeling, to explore the relationships of polyQ, p38MAPK activation, generation of reactive oxygen species (ROS), proteasome inhibition, and inclusion body formation. In cells expressing a polyQ protein, inclusion body formation was preceded by proteasome inhibition, but cytotoxicity was greatly reduced by administration of a p38MAPK inhibitor. Computer simulations suggested that without the generation of ROS, the proteasome inhibition and activation of p38MAPK would have significantly reduced toxicity. Our data suggest a vicious cycle of stress kinase activation and proteasome inhibition that is ultimately lethal to cells. There was close agreement between experimental data and the predictions of a stochastic computer model, supporting a central role for proteasome inhibition and p38MAPK activation in inclusion body formation and ROS-mediated cell death. PMID:20885783
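Stochastic models of the kind used above are commonly built on the Gillespie algorithm. The sketch below is a hypothetical three-reaction toy (synthesis of misfolded monomer M, proteasomal degradation, and irreversible sequestration of monomer pairs into an inclusion body I), not the authors' published model; all rate constants and reactions are illustrative.

```python
import random

def gillespie_polyq(t_end, seed=0, k_prod=5.0, k_deg=0.5, k_agg=0.01):
    """Minimal Gillespie simulation of a hypothetical system:
    0 -> M (synthesis), M -> 0 (proteasomal degradation),
    M + M -> inclusion (irreversible sequestration)."""
    rng = random.Random(seed)
    t, M, I = 0.0, 0, 0
    while t < t_end:
        rates = [k_prod, k_deg * M, k_agg * M * max(M - 1, 0)]
        total = sum(rates)                 # always > 0 because k_prod > 0
        t += rng.expovariate(total)        # exponential waiting time
        pick = rng.random() * total        # choose which reaction fires
        if pick < rates[0]:
            M += 1                         # synthesis of misfolded monomer
        elif pick < rates[0] + rates[1]:
            M -= 1                         # degradation by the proteasome
        else:
            M -= 2                         # two monomers join the inclusion
            I += 2
    return M, I

m_free, m_incl = gillespie_polyq(t_end=100.0)
```

Extending such a skeleton with ROS generation and kinase activation species is how the feedback loops in the abstract would be encoded.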
Refining new-physics searches in B→Dτν with lattice QCD.
Bailey, Jon A; Bazavov, A; Bernard, C; Bouchard, C M; Detar, C; Du, Daping; El-Khadra, A X; Foley, J; Freeland, E D; Gámiz, E; Gottlieb, Steven; Heller, U M; Kim, Jongjeong; Kronfeld, A S; Laiho, J; Levkova, L; Mackenzie, P B; Meurice, Y; Neil, E T; Oktay, M B; Qiu, Si-Wei; Simone, J N; Sugar, R; Toussaint, D; Van de Water, R S; Zhou, Ran
2012-08-17
The semileptonic decay channel B→Dτν is sensitive to the presence of a scalar current, such as that mediated by a charged-Higgs boson. Recently, the BABAR experiment reported the first observation of the exclusive semileptonic decay B→Dτ⁻ν, finding an approximately 2σ disagreement with the standard-model prediction for the ratio R(D)=BR(B→Dτν)/BR(B→Dℓν), where ℓ = e,μ. We compute this ratio of branching fractions using hadronic form factors computed in unquenched lattice QCD and obtain R(D)=0.316(12)(7), where the errors are statistical and total systematic, respectively. This result is the first standard-model calculation of R(D) from ab initio full QCD. Its error is smaller than that of previous estimates, primarily due to the reduced uncertainty in the scalar form factor f₀(q²). Our determination of R(D) is approximately 1σ higher than previous estimates and, thus, reduces the tension with experiment. We also compute R(D) in models with electrically charged scalar exchange, such as the type-II two-Higgs-doublet model. Once again, our result is consistent with, but approximately 1σ higher than, previous estimates for phenomenologically relevant values of the scalar coupling in the type-II model. As a by-product of our calculation, we also present the standard-model prediction for the longitudinal-polarization ratio P_L(D)=0.325(4)(3).
Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.
2017-12-01
Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.
Reduced order modeling of head related transfer functions for virtual acoustic displays
NASA Astrophysics Data System (ADS)
Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley
2003-04-01
The purpose of this work is to improve the computational efficiency of virtual acoustic applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's singular value decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from minus 90 deg to plus 90 deg, in 10 deg increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
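The realization step named in the abstract can be sketched in a few lines. The function below is a minimal, hypothetical illustration of Kung's SVD-based method applied to a scalar impulse response; the Hankel dimensions and the first-order test response are illustrative, not the paper's HRIR data:

```python
import numpy as np

def kung_realization(h, order, n_rows=20, n_cols=20):
    """Discrete-time state-space model (A, B, C) fitted to an impulse
    response h via Kung's SVD-based realization. h[k] for k >= 1 are
    the Markov parameters; the Hankel matrix is factored and truncated
    to the requested model order."""
    H = np.array([[h[i + j + 1] for j in range(n_cols)] for i in range(n_rows)])
    U, s, Vt = np.linalg.svd(H)
    Ur = U[:, :order]
    Sr = np.diag(np.sqrt(s[:order]))
    Vr = Vt[:order, :]
    O = Ur @ Sr                           # extended observability matrix
    Ctl = Sr @ Vr                         # extended controllability matrix
    A = np.linalg.pinv(O[:-1]) @ O[1:]    # shift-invariance of O gives A
    B = Ctl[:, :1]                        # first block column
    C = O[:1, :]                          # first block row
    return A, B, C

# First-order test response h[k] = 0.8**(k-1); an order-1 model recovers it.
h = [0.0] + [0.8 ** k for k in range(45)]
A, B, C = kung_realization(h, order=1)
```

For this rank-one test the recovered pole is 0.8 and the product C @ B reproduces the first Markov parameter, confirming the reduced model reproduces the impulse response.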
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems; 2) geophysical inversion routines which can be used to characterize physical systems; and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes.
Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
Dimensional psychiatry: mental disorders as dysfunctions of basic learning mechanisms.
Heinz, Andreas; Schlagenhauf, Florian; Beck, Anne; Wackerhagen, Carolin
2016-08-01
It has been questioned that the more than 300 mental disorders currently listed in international disease classification systems all have a distinct neurobiological correlate. Here, we support the idea that basic dimensions of mental dysfunctions, such as alterations in reinforcement learning, can be identified, which interact with individual vulnerability and psychosocial stress factors and, thus, contribute to syndromes of distress across traditional nosological boundaries. We further suggest that computational modeling of learning behavior can help to identify specific alterations in reinforcement-based decision-making and their associated neurobiological correlates. For example, attribution of salience to drug-related cues associated with dopamine dysfunction in addiction can increase habitual decision-making via promotion of Pavlovian-to-instrumental transfer as indicated by computational modeling of the effect of Pavlovian-conditioned stimuli (here affectively positive or alcohol-related cues) on instrumental approach and avoidance behavior. In schizophrenia, reward prediction errors can be modeled computationally and associated with functional brain activation, thus revealing reduced encoding of such learning signals in the ventral striatum and compensatory activation in the frontal cortex. 
With respect to negative mood states, it has been shown that both reduced functional activation of the ventral striatum elicited by reward-predicting stimuli and stress-associated activation of the hypothalamic-pituitary-adrenal axis in interaction with reduced serotonin transporter availability and increased amygdala activation by aversive cues contribute to clinical depression; altogether these observations support the notion that basic learning mechanisms, such as Pavlovian and instrumental conditioning and Pavlovian-to-instrumental transfer, represent a basic dimension of mental disorders that can be mechanistically characterized using computational modeling and associated with specific clinical syndromes across established nosological boundaries. Instead of pursuing a narrow focus on single disorders defined by clinical tradition, we suggest that neurobiological research should focus on such basic dimensions, which can be studied in and compared among several mental disorders.
Adaptation of hidden Markov models for recognizing speech of reduced frame rate.
Lee, Lee-Min; Jean, Fu-Rong
2013-12-01
The frame rate of the observation sequence in distributed speech recognition applications may be reduced to suit a resource-limited front-end device. In order to use models trained on full-frame-rate data in the recognition of reduced frame-rate (RFR) data, we propose a method for adapting the transition probabilities of hidden Markov models (HMMs) to match the frame rate of the observation. Experiments on the recognition of clean and noisy connected digits are conducted to evaluate the proposed method. Experimental results show that the proposed method can effectively compensate for the frame-rate mismatch between the training and the test data. Using our adapted model to recognize the RFR speech data, one can significantly reduce the computation time and achieve the same level of accuracy as that of a method that restores the frame rate using data interpolation.
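One simple way to realize such an adaptation, shown here as a hypothetical sketch rather than the authors' exact rule, is to raise the full-rate transition matrix to the decimation factor, since one reduced-rate step spans several full-rate steps of the underlying Markov chain:

```python
import numpy as np

def adapt_transitions(A, decimation):
    """Adapt an HMM transition matrix to reduced-frame-rate data by
    raising it to the integer decimation factor, so that one RFR
    observation step spans `decimation` full-rate state transitions
    (a simplified sketch of the idea in the abstract)."""
    return np.linalg.matrix_power(np.asarray(A), decimation)

A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
A2 = adapt_transitions(A, 2)   # matched to half the original frame rate
```

The adapted matrix remains row-stochastic, so it is a valid transition matrix for the decimated observation sequence.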
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU with hybrid parallelization, in which each of the multiple simulations is run simultaneously and the computational tasks within each simulation are parallelized, is about 16 times faster than a sequential simulation on a CPU. We also improved the memory access patterns and reduced the memory footprint in order to optimize the computations on the GPU, and implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation, we performed SSA simulations on different model sizes and compared the computation times to those of sequential simulations on a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
PMID:25762936
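The per-realization kernel of the SSA can be sketched in a few lines. The reaction, rates, and realization count below are illustrative, and the GPU mapping described in the abstract is replaced by a serial loop over realizations:

```python
import numpy as np

def ssa_decay(x0, c, t_end, rng):
    """Gillespie's direct-method SSA for a single decay reaction
    X -> 0 with propensity c*X: draw an exponential waiting time from
    the total propensity, then fire the (only) reaction."""
    t, x = 0.0, x0
    while x > 0:
        t += rng.exponential(1.0 / (c * x))  # waiting time to next event
        if t > t_end:
            break
        x -= 1
    return x

# Many independent realizations give the statistics; the sample mean
# should approach the analytic expectation x0 * exp(-c * t_end) ~ 36.8.
rng = np.random.default_rng(0)
mean_final = np.mean([ssa_decay(100, 1.0, 1.0, rng) for _ in range(2000)])
```

On a GPU each realization would run in its own thread with an independent random stream; the serial loop here plays that role for illustration.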
NASA Astrophysics Data System (ADS)
Cheviakov, Alexei F.
2017-11-01
An efficient systematic procedure is provided for symbolic computation of Lie groups of equivalence transformations and generalized equivalence transformations of systems of differential equations that contain arbitrary elements (arbitrary functions and/or arbitrary constant parameters), using the software package GeM for Maple. Application of equivalence transformations to the reduction of the number of arbitrary elements in a given system of equations is discussed, and several examples are considered. The first computational example of generalized equivalence transformations where the transformation of the dependent variable involves an arbitrary constitutive function is presented. As a detailed physical example, a three-parameter family of nonlinear wave equations describing finite anti-plane shear displacements of an incompressible hyperelastic fiber-reinforced medium is considered. Equivalence transformations are computed and employed to radically simplify the model for an arbitrary fiber direction, invertibly reducing the model to a simple form that corresponds to a special fiber direction, and involves no arbitrary elements. The presented computation algorithm is applicable to wide classes of systems of differential equations containing arbitrary elements.
A Novel Method for Reducing Rotor Blade-Vortex Interaction
NASA Technical Reports Server (NTRS)
Glinka, A. T.
2000-01-01
One of the major hindrances to expansion of the rotorcraft market is the high-amplitude noise they produce, especially during low-speed descent, where blade-vortex interactions frequently occur. In an attempt to reduce the noise levels caused by blade-vortex interactions, the flip-tip rotor blade concept was devised. The flip-tip rotor increases the miss distance between the shed vortices and the rotor blades, reducing BVI noise. The distance is increased by rotating an outboard portion of the rotor tip either up or down depending on the flight condition. The proposed plan for the grant consisted of a computational simulation of the rotor aerodynamics and its wake geometry to determine the effectiveness of the concept, coupled with a series of wind tunnel experiments exploring the value of the device and validating the computer model. The computational model did in fact show that the miss distance could be increased, giving a measure of the effectiveness of the flip-tip rotor. However, the wind tunnel experiments could not be conducted. Increased outside demand for the 7' x 10' wind tunnel at NASA Ames and low priority at Ames for this project forced numerous postponements of the tests, eventually pushing the tests beyond the life of the grant. A design for the rotor blades to be tested in the wind tunnel was completed, and an analysis of the strength of the model blades based on predicted loads, including dynamic forces, was done.
Cross-correlation least-squares reverse time migration in the pseudo-time domain
NASA Astrophysics Data System (ADS)
Li, Qingyang; Huang, Jianping; Li, Zhenchun
2017-08-01
The least-squares reverse time migration (LSRTM) method, with its higher image resolution and amplitude fidelity, is becoming increasingly popular. However, LSRTM is not widely used in field land data processing because of its sensitivity to the initial migration velocity model, its large computational cost, and the mismatch of amplitudes between the synthetic and observed data. To overcome these shortcomings of conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in the pseudo-time domain (PTCLSRTM). Our algorithm not only reduces the depth/velocity ambiguities, but also reduces the effect of velocity error on the imaging results, relaxing the accuracy requirements on the migration velocity model of least-squares migration (LSM). The pseudo-time domain algorithm eliminates the irregular wavelength sampling in the vertical direction, so it can reduce the number of vertical grid points and the memory required during computation, which makes our method more computationally efficient than the standard implementation. Moreover, for field data applications, matching the recorded amplitudes is a very difficult task because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the requirement for strong amplitude matching in LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is only sensitive to the similarity between the predicted and the observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability to complex models.
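The amplitude-insensitive objective mentioned above can be illustrated with a zero-lag normalized cross-correlation. This is a sketch of the general idea only; the paper's functional is formulated in the pseudo-time domain and summed over shots:

```python
import numpy as np

def ncc_objective(pred, obs):
    """Zero-lag normalized cross-correlation between predicted and
    observed traces. Each trace is normalized to unit energy, so the
    objective measures waveform similarity and is insensitive to
    absolute amplitude."""
    p = pred / np.linalg.norm(pred)
    o = obs / np.linalg.norm(obs)
    return float(np.dot(p, o))

t = np.linspace(0.0, 1.0, 200)
obs = np.sin(25.0 * t) * np.exp(-3.0 * t)   # toy damped trace
```

Scaling the predicted trace by any positive factor leaves the objective unchanged, which is exactly the property that relaxes the amplitude-matching requirement for field data.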
NASA Astrophysics Data System (ADS)
Jaber, Khalid Mohammad; Alia, Osama Moh'd.; Shuaib, Mohammed Mahmod
2018-03-01
Finding the optimal parameters that can reproduce experimental data (such as the velocity-density relation and the specific flow rate) is a very important component of the validation and calibration of microscopic crowd dynamic models. Heavy computational demand during parameter search is a known limitation of a previously developed model known as the Harmony Search-Based Social Force Model (HS-SFM). In this paper, a parallel-based mechanism is proposed to reduce the computational time and memory resource utilisation required to find these parameters. More specifically, two MATLAB-based multicore techniques (parfor and create independent jobs) using shared memory are developed by taking advantage of the multithreading capabilities of parallel computing, resulting in a new framework called the Parallel Harmony Search-Based Social Force Model (P-HS-SFM). The experimental results show that the parfor-based P-HS-SFM achieved a better computational time of about 26 h, an efficiency improvement of approximately 54%, and a speedup factor of 2.196 times in comparison with the sequential HS-SFM. The performance of the P-HS-SFM using the create independent jobs approach is comparable to parfor, with a computational time of 26.8 h, an efficiency improvement of about 30%, and a speedup of 2.137 times.
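The reported figures are mutually consistent if one assumes four workers; the sketch below shows the standard speedup/efficiency arithmetic (the worker count is an assumption, not stated in the abstract):

```python
def speedup_and_efficiency(t_sequential, t_parallel, workers):
    """Parallel speedup S = T_seq / T_par and efficiency E = S / workers."""
    s = t_sequential / t_parallel
    return s, s / workers

# Reported parfor numbers: ~26 h parallel runtime; the sequential time is
# implied by the reported speedup of 2.196. With 4 assumed workers the
# efficiency works out to about 54.9%, matching the ~54% figure.
s, e = speedup_and_efficiency(26.0 * 2.196, 26.0, 4)
```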
Boundary-layer computational model for predicting the flow and heat transfer in sudden expansions
NASA Technical Reports Server (NTRS)
Lewis, J. P.; Pletcher, R. H.
1986-01-01
Fully developed turbulent and laminar flows through symmetric planar and axisymmetric expansions with heat transfer were modeled using a finite-difference discretization of the boundary-layer equations. By using the boundary-layer equations in place of the Navier-Stokes equations to model separated flow, computational effort was reduced, permitting turbulence modeling studies to be carried out economically. For laminar flow, the reattachment length was well predicted for Reynolds numbers as low as 20, and the details of the trapped eddy were well predicted for Reynolds numbers above 200. For turbulent flows, the Boussinesq assumption was used to express the Reynolds stresses in terms of a turbulent viscosity. Near-wall algebraic turbulence models based on Prandtl's mixing-length model and on the maximum Reynolds shear stress were compared.
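A mixing-length eddy viscosity of the kind compared in the study can be sketched as follows; this is an illustrative form using the von Karman constant, and does not reproduce the paper's exact near-wall treatment:

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def mixing_length_viscosity(y, u):
    """Boussinesq eddy viscosity from Prandtl's mixing length near a
    wall: nu_t = (KAPPA*y)**2 * |du/dy|, so the Reynolds shear stress
    is modeled as -u'v' = nu_t * du/dy."""
    dudy = np.gradient(u, y)              # finite-difference velocity gradient
    return (KAPPA * y) ** 2 * np.abs(dudy)

y = np.linspace(0.0, 0.1, 11)   # wall-normal coordinate, m (wall at y = 0)
u = 10.0 * y                    # linear velocity profile, m/s
nu_t = mixing_length_viscosity(y, u)
```

For the linear profile the eddy viscosity grows quadratically away from the wall and vanishes at the wall itself, as the mixing-length argument requires.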
Application of Interface Technology in Progressive Failure Analysis of Composite Panels
NASA Technical Reports Server (NTRS)
Sleight, D. W.; Lotts, C. G.
2002-01-01
A progressive failure analysis capability using interface technology is presented. The capability has been implemented in the COMET-AR finite element analysis code developed at the NASA Langley Research Center and is demonstrated on composite panels. The composite panels are analyzed for damage initiation and propagation from initial loading to final failure using a progressive failure analysis capability that includes both geometric and material nonlinearities. Progressive failure analyses are performed on conventional models and interface technology models of the composite panels. Analytical results and the computational effort of the analyses are compared for the conventional models and interface technology models. The analytical results predicted with the interface technology models are in good correlation with the analytical results using the conventional models, while significantly reducing the computational effort.
NASA Astrophysics Data System (ADS)
Foo, Kam Keong
A two-dimensional dual-mode scramjet flowpath is developed and evaluated using the ANSYS Fluent density-based flow solver with various computational grids. Results are obtained for fuel-off, fuel-on non-reacting, and fuel-on reacting cases at different equivalence ratios. A one-step global chemical kinetics hydrogen-air model is used in conjunction with the eddy-dissipation model. Coarse, medium and fine computational grids are used to evaluate grid sensitivity and to investigate a lack of grid independence. Different grid adaptation strategies are performed on the coarse grid in an attempt to emulate the solutions obtained from the finer grids. The goal of this study is to investigate the feasibility of using various mesh adaptation criteria to significantly decrease computational efforts for high-speed reacting flows.
Improving Weather Forecasts Through Reduced Precision Data Assimilation
NASA Astrophysics Data System (ADS)
Hatfield, Samuel; Düben, Peter; Palmer, Tim
2017-04-01
We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day extra of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. 
Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829 [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18
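The effect of reduced precision on a Lorenz '96 step can be sketched with a software emulation via float16. This is a crude stand-in for the dedicated emulator used in the study, and a forward-Euler step replaces the actual time scheme and the ensemble filter:

```python
import numpy as np

def l96_tendency(x, forcing=8.0):
    """Lorenz '96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def euler_step(x, dt=0.01, dtype=np.float64):
    """One forward-Euler step with the state and increment rounded to
    `dtype`, emulating reduced-precision arithmetic in software."""
    x = x.astype(dtype)
    dx = l96_tendency(x).astype(dtype)
    return (x + dtype(dt) * dx).astype(dtype)

x0 = np.full(40, 8.0)
x0[0] = 8.008                      # slight perturbation of one variable
diff = np.max(np.abs(euler_step(x0, dtype=np.float16).astype(np.float64)
                     - euler_step(x0)))
```

The half-precision step differs from the double-precision step by a small but nonzero rounding error, which is the kind of perturbation the assimilation system must tolerate in exchange for cheaper arithmetic.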
NASA Astrophysics Data System (ADS)
Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.
2015-12-01
For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs, divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal the consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of a fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit relative to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and the surrogate models require that the solution of interest be a smooth function of the parameters of interest.
How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and of surface water and groundwater modeling.
Accelerating Climate and Weather Simulations through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark
2011-01-01
Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.
Reliable low precision simulations in land surface models
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.
2017-12-01
Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
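The splitting strategy described above can be illustrated with a toy integration: in half precision, a small increment to a large state systematically rounds to zero, but splitting the state into a fixed high-precision reference and a small low-precision anomaly retains it (the numbers below are illustrative):

```python
import numpy as np

def integrate_naive(state, increment, n, dtype=np.float16):
    """All-low-precision integration: each tiny increment is rounded
    away against the large state, so the state never changes."""
    s = dtype(state)
    for _ in range(n):
        s = dtype(s + dtype(increment))
    return float(s)

def integrate_split(state, increment, n, dtype=np.float16):
    """Split the state into a high-precision reference and a small
    low-precision anomaly; increments are the same order as the
    anomaly, so they survive the rounding."""
    ref = np.float64(state)      # small high-precision part of the model
    anomaly = dtype(0.0)         # low-precision part carries the change
    for _ in range(n):
        anomaly = dtype(anomaly + dtype(increment))
    return float(ref) + float(anomaly)
```

With a state of 300.0 and 100 increments of 0.001, the naive half-precision integration returns exactly 300.0 (every increment rounds to zero), while the split integration correctly accumulates roughly 0.1 of change, which is the behavior the abstract reports for the deep soil heat diffusion model.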
Azmi, Nur Liyana; Ding, Ziyun; Xu, Rui; Bull, Anthony M J
2018-01-01
The anterior cruciate ligament (ACL) provides resistance to tibial internal rotation torque and anterior shear at the knee. ACL deficiency results in knee instability. Optimisation of muscle contraction through functional electrical stimulation (FES) offers the prospect of mitigating the destabilising effects of ACL deficiency. The hypothesis of this study is that activation of the biceps femoris long head (BFLH) reduces the tibial internal rotation torque and the anterior shear force at the knee. Gait data of twelve healthy subjects were measured with and without the application of FES and taken as inputs to a computational musculoskeletal model. The model was used to investigate the optimum levels of BFLH activation during FES gait in reducing the anterior shear force to zero. This study found that FES significantly reduced the tibial internal rotation torque at the knee during the stance phase of gait (p = 0.0322) and the computational musculoskeletal modelling revealed that a mean BFLH activation of 20.8% (±8.4%) could reduce the anterior shear force to zero. At the time frame when the anterior shear force was zero, the internal rotation torque was reduced by 0.023 ± 0.0167 Nm/BW, with a mean 188% reduction across subjects (p = 0.0002). In conclusion, activation of the BFLH is able to reduce the tibial internal rotation torque and the anterior shear force at the knee in healthy control subjects. This should be tested on ACL-deficient subjects to consider its effect in mitigating instability due to ligament deficiency. In future clinical practice, activating the BFLH may be used to protect ACL reconstructions during post-operative rehabilitation, assist with residual instabilities post reconstruction, and reduce the need for ACL reconstruction surgery in some cases. PMID:29304102
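The search for the activation level that nulls the anterior shear can be sketched as a one-dimensional root find. The linear shear model below is a hypothetical stand-in for one evaluation of the musculoskeletal model, calibrated so the zero crossing lands near the reported 20.8% mean activation:

```python
def activation_for_zero_shear(shear_fn, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection for the BFLH activation at which the model-predicted
    anterior shear force crosses zero. Assumes shear decreases
    monotonically with activation on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shear_fn(mid) > 0.0:
            lo = mid          # still anterior shear: need more activation
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy linear stand-in: shear(a) = 0.5 - 2.4*a (arbitrary units), whose
# zero crossing sits near the paper's mean activation of ~20.8%.
a_star = activation_for_zero_shear(lambda a: 0.5 - 2.4 * a)
```

In the actual study each evaluation of `shear_fn` would be a full musculoskeletal model run driven by the measured gait data.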
Inexact hardware for modelling weather & climate
NASA Astrophysics Data System (ADS)
Düben, Peter D.; McNamara, Hugh; Palmer, Tim
2014-05-01
The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance, potentially accuracy, and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud resolving atmospheric modelling. The impact of both hardware-induced faults and low-precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that neither approach to inexact calculation substantially affects the quality of the model simulations, provided the inexactness is restricted to act only on smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
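A minimal sketch of the two-step idea, with toy stand-ins for the simple and complex circuit models (the scores and part values below are invented; the real system uses characterized part libraries and nonlinear circuit models):

```python
import itertools

def cheap_score(xs):
    # Fast low-complexity model: here, an upper bound on the accurate score.
    return sum(xs)

def expensive_score(xs):
    # "Complex nonlinear model": always <= cheap_score here, so pruning is safe.
    return sum(xs) - 0.1 * max(xs) ** 2

parts = [[1, 3], [2, 5], [0, 4]]                 # candidate parts per circuit slot
candidates = list(itertools.product(*parts))

# Step 1: branch-and-bound style pruning with the cheap model. Any candidate
# whose optimistic bound falls below a known-achievable value is discarded.
incumbent = max(candidates, key=cheap_score)
incumbent_value = expensive_score(incumbent)     # a feasible lower bound
survivors = [c for c in candidates if cheap_score(c) >= incumbent_value]

# Step 2: fine-grained search of the reduced space with the expensive model.
best = max(survivors, key=expensive_score)
print(best, len(survivors), len(candidates))
```

The pruning is only sound because the cheap model bounds the expensive one from above; the hierarchical architecture in the abstract makes an analogous guarantee for its branch-and-bound step.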
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E
2016-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
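The key construction, writing the joint density as a product of conditionals that each involve only a few nearest neighbours, can be illustrated with a hypothetical 1-D example (not the authors' code). With an exponential covariance the process is Markov, so a single preceding neighbour already reproduces the full conditional exactly:

```python
import math

def expcov(s, t):
    # Exponential (Ornstein-Uhlenbeck) covariance between two 1-D locations.
    return math.exp(-abs(s - t))

locs = [0.0, 0.3, 1.1, 1.5, 2.0, 2.7]    # ordered toy locations
y    = [0.2, -0.1, 0.5, 0.4, -0.3, 0.1]  # toy observations

def cond_coeffs(i, nbrs):
    # Solve C[nbrs, nbrs] b = C[nbrs, i] by Gaussian elimination (tiny system).
    A = [[expcov(locs[a], locs[b]) for b in nbrs] for a in nbrs]
    c = [expcov(locs[a], locs[i]) for a in nbrs]
    k = len(nbrs)
    for col in range(k):
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for cc in range(col, k):
                A[r][cc] -= f * A[col][cc]
            c[r] -= f * c[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (c[r] - sum(A[r][cc] * b[cc] for cc in range(r + 1, k))) / A[r][r]
    return b

def nngp_loglik(m):
    # Joint log-density as a product of conditionals, each conditioned on at
    # most m nearest *preceding* neighbours; this sparsity gives linear scaling.
    ll = 0.0
    for i in range(len(locs)):
        nbrs = sorted(range(i), key=lambda j: abs(locs[j] - locs[i]))[:m]
        if nbrs:
            b = cond_coeffs(i, nbrs)
            mu = sum(bj * y[j] for bj, j in zip(b, nbrs))
            var = expcov(locs[i], locs[i]) - sum(
                bj * expcov(locs[j], locs[i]) for bj, j in zip(b, nbrs))
        else:
            mu, var = 0.0, expcov(locs[i], locs[i])
        ll += -0.5 * (math.log(2 * math.pi * var) + (y[i] - mu) ** 2 / var)
    return ll

print(nngp_loglik(1), nngp_loglik(5))
```

In 2-D there is no exact Markov shortcut, but the same construction with a modest number of neighbours gives a close, and still linear-cost, approximation.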
Zarzycki, Colin M.; Reed, Kevin A.; Bacmeister, Julio T.; ...
2016-02-25
This article discusses the sensitivity of tropical cyclone climatology to surface coupling strategy in high-resolution configurations of the Community Earth System Model. Using two supported model setups, we demonstrate that the choice of grid on which the lowest model level wind stress and surface fluxes are computed may lead to differences in cyclone strength in multi-decadal climate simulations, particularly for the most intense cyclones. Using a deterministic framework, we show that when these surface quantities are calculated on an ocean grid that is coarser than the atmosphere, the computed frictional stress is misaligned with wind vectors in individual atmospheric grid cells. This reduces the effective surface drag, and results in more intense cyclones when compared to a model configuration where the ocean and atmosphere are of equivalent resolution. Our results demonstrate that the choice of computation grid for atmosphere–ocean interactions is non-negligible when considering climate extremes at high horizontal resolution, especially when model components are on highly disparate grids.
Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models
NASA Astrophysics Data System (ADS)
Altuntas, Alper; Baugh, John
2017-07-01
Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.
2012-02-01
Reduced-Order Modeling and Control Design for Low Energy Building Ventilation and Space Conditioning Systems. This section focuses on the modeling and control of airflow in buildings.
Mahjani, Behrang; Toor, Salman; Nettelblad, Carl; Holmgren, Sverker
2017-01-01
In quantitative trait locus (QTL) mapping, the significance of putative QTL is often determined using permutation testing. The computational needs to calculate the significance level are immense: 10^4 up to 10^8 or even more permutations can be needed. We have previously introduced the PruneDIRECT algorithm for multiple QTL scans with epistatic interactions. This algorithm has specific strengths for permutation testing. Here, we present a flexible, parallel computing framework for identifying multiple interacting QTL using the PruneDIRECT algorithm, which uses the map-reduce model as implemented in Hadoop. The framework is implemented in R, a widely used software tool among geneticists. This enables users to rearrange algorithmic steps to adapt genetic models, search algorithms, and parallelization steps to their needs in a flexible way. Our work underlines the maturity of accessing distributed parallel computing for computationally demanding bioinformatics applications through building workflows within existing scientific environments. We investigate the PruneDIRECT algorithm, comparing its performance to exhaustive search and the DIRECT algorithm using our framework on a public cloud resource. We find that PruneDIRECT is vastly superior for permutation testing, and performed 2×10^5 permutations for a 2D QTL problem in 15 hours using 100 cloud processes. We show that our framework scales out almost linearly for a 3D QTL search.
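The map-reduce structure of permutation testing is easy to sketch. The marker, phenotypes, and statistic below are invented toy data, and the "workers" are plain function calls rather than Hadoop tasks:

```python
import random
from functools import reduce

random.seed(0)
n = 40
geno = [i % 2 for i in range(n)]                           # toy marker, two groups
pheno = [random.gauss(0.0, 1.0) + 0.9 * g for g in geno]   # marker has a real effect

def stat(y, g):
    # Simple association statistic: absolute group-mean difference.
    a = [v for v, k in zip(y, g) if k == 1]
    b = [v for v, k in zip(y, g) if k == 0]
    return abs(sum(a) / len(a) - sum(b) / len(b))

observed = stat(pheno, geno)

def map_chunk(seed, nperm):
    # "Map": one worker's share of the permutation null distribution.
    rng = random.Random(seed)
    out = []
    for _ in range(nperm):
        p = pheno[:]
        rng.shuffle(p)
        out.append(stat(p, geno))
    return out

# "Reduce": concatenate the per-worker results into one null distribution.
null = reduce(lambda a, b: a + b, (map_chunk(s, 250) for s in range(4)))
pval = sum(s >= observed for s in null) / len(null)
print(observed, pval)
```

Each chunk is independent given its seed, which is what lets the real framework fan the permutations out across cloud processes and merge only the resulting statistics.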
Reduced-order modelling for high-pressure transient flow of hydrogen-natural gas mixture
NASA Astrophysics Data System (ADS)
Agaie, Baba G.; Khan, Ilyas; Alshomrani, Ali Saleh; Alqahtani, Aisha M.
2017-05-01
In this paper, the transient flow of hydrogen compressed-natural gas (HCNG) mixture, also referred to as hydrogen-natural gas mixture, in a pipeline is numerically computed using the reduced-order modelling technique. The study of transient conditions is important because pipeline flows are normally in the unsteady state due to the sudden opening and closure of control valves, yet most existing studies only analyse the flow under steady-state conditions. The mathematical model consists of a set of non-linear conservation forms of partial differential equations. The objective of this paper is to improve the accuracy in the prediction of the HCNG transient flow parameters using Reduced-Order Modelling (ROM). The ROM technique has been successfully used in single-gas and aerodynamic flow problems, but it has not previously been applied to gas mixtures. The study is based on the velocity change created by the operation of the valves upstream and downstream of the pipeline. Results on the flow characteristics, namely the pressure, density, celerity and mass flux, are based on variations of the mixing ratio and valve reaction and actuation time; the computational time advantage of the ROM is also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Zhenhua; Rose, Adam Z.; Prager, Fynnwin
The state of the art approach to economic consequence analysis (ECA) is computable general equilibrium (CGE) modeling. However, such models contain thousands of equations and cannot readily be incorporated into computerized systems used by policy analysts to yield estimates of economic impacts of various types of transportation system failures due to natural hazards, human related attacks or technological accidents. This paper presents a reduced-form approach to simplify the analytical content of CGE models to make them more transparent and enhance their utilization potential. The reduced-form CGE analysis is conducted by first running simulations one hundred times, varying key parameters, such as magnitude of the initial shock, duration, location, remediation, and resilience, according to a Latin Hypercube sampling procedure. Statistical analysis is then applied to the "synthetic data" results in the form of both ordinary least squares and quantile regression. The analysis yields linear equations that are incorporated into a computerized system and utilized along with Monte Carlo simulation methods for propagating uncertainties in economic consequences. Although our demonstration and discussion focuses on aviation system disruptions caused by terrorist attacks, the approach can be applied to a broad range of threat scenarios.
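The reduced-form construction (sample the parameter space with a Latin hypercube, run the full model at each point, then regress) can be sketched with a one-parameter toy in place of a real CGE model:

```python
import random

random.seed(1)

def latin_hypercube(n):
    # 1-D Latin hypercube: one uniform draw from each of n equal strata.
    pts = [(i + random.random()) / n for i in range(n)]
    random.shuffle(pts)
    return pts

def cge_model(shock):
    # Hypothetical stand-in for a full CGE run: economic loss vs. shock size.
    return 2.0 * shock + 0.05 * random.gauss(0.0, 1.0)

shocks = latin_hypercube(100)
losses = [cge_model(s) for s in shocks]

# Reduced form: ordinary least squares fit of loss ~ a + b * shock, yielding a
# linear equation cheap enough to embed in a computerized screening tool.
n = len(shocks)
sx, sy = sum(shocks), sum(losses)
sxx = sum(x * x for x in shocks)
sxy = sum(x * y for x, y in zip(shocks, losses))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n
print(a, b)
```

The real analysis varies several parameters at once and adds quantile regression, but the payoff is the same: a handful of fitted coefficients stand in for thousands of equilibrium equations.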
Performance of a reduced-order FSI model for flow-induced vocal fold vibration
NASA Astrophysics Data System (ADS)
Luo, Haoxiang; Chang, Siyuan; Chen, Ye; Rousseau, Bernard; PhonoSim Team
2017-11-01
Vocal fold vibration during speech production involves a three-dimensional unsteady glottal jet flow and three-dimensional nonlinear tissue mechanics. A full 3D fluid-structure interaction (FSI) model is computationally expensive even though it provides the most accurate information about the system. On the other hand, an efficient reduced-order FSI model is useful for fast simulation and analysis of the vocal fold dynamics, which can be applied in procedures such as optimization and parameter estimation. In this work, we study performance of a reduced-order model as compared with the corresponding full 3D model in terms of its accuracy in predicting the vibration frequency and deformation mode. In the reduced-order model, we use a 1D flow model coupled with a 3D tissue model that is the same as in the full 3D model. Two different hyperelastic tissue behaviors are assumed. In addition, the vocal fold thickness and subglottal pressure are varied for systematic comparison. The result shows that the reduced-order model provides predictions consistent with the full 3D model across different tissue material assumptions and subglottal pressures. However, the vocal fold thickness has the most effect on the model accuracy, especially when the vocal fold is thin.
Computationally effective solution of the inverse problem in time-of-flight spectroscopy.
Kamran, Faisal; Abildgaard, Otto H A; Subash, Arman A; Andersen, Peter E; Andersson-Engels, Stefan; Khoptyar, Dmitry
2015-03-09
Photon time-of-flight (PTOF) spectroscopy enables the estimation of absorption and reduced scattering coefficients of turbid media by measuring the propagation time of short light pulses through a turbid medium. The present investigation provides a comparison of the assessed absorption and reduced scattering coefficients from PTOF measurements of intralipid 20% and India ink-based optical phantoms covering a wide range of optical properties relevant for biological tissues and dairy products. Three different models are used to obtain the optical properties by fitting to measured temporal profiles: the Liemert-Kienle model (LKM), the diffusion model (DM) and a white Monte-Carlo (WMC) simulation-based algorithm. For the infinite space geometry, a very good agreement is found between the LKM and WMC, while the results obtained by the DM differ, indicating that the LKM can provide accurate estimation of the optical parameters beyond the limits of the diffusion approximation in a computationally effective and accurate manner. This result increases the potential range of applications for PTOF spectroscopy within industrial and biomedical applications.
Taieb-Maimon, Meirav; Cwikel, Julie; Shapira, Bracha; Orenstein, Ido
2012-03-01
An intervention study was conducted to examine the effectiveness of an innovative self-modeling photo-training method for reducing musculoskeletal risk among office workers using computers. Sixty workers were randomly assigned to either: 1) a control group; 2) an office training group that received personal, ergonomic training and workstation adjustments or 3) a photo-training group that received both office training and an automatic frequent-feedback system that displayed on the computer screen a photo of the worker's current sitting posture together with the correct posture photo taken earlier during office training. Musculoskeletal risk was evaluated using the Rapid Upper Limb Assessment (RULA) method before, during and after the six weeks intervention. Both training methods provided effective short-term posture improvement; however, sustained improvement was only attained with the photo-training method. Both interventions had a greater effect on older workers and on workers suffering more musculoskeletal pain. The photo-training method had a greater positive effect on women than on men. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Nenov, Artur; Mukamel, Shaul; Garavelli, Marco; Rivalta, Ivan
2015-08-11
First-principles simulations of two-dimensional electronic spectroscopy in the ultraviolet region (2DUV) require computationally demanding multiconfigurational approaches that can resolve doubly excited and charge transfer states, the spectroscopic fingerprints of coupled UV-active chromophores. Here, we propose an efficient approach to reduce the computational cost of accurate simulations of 2DUV spectra of benzene, phenol, and their dimer (i.e., the minimal models for studying electronic coupling of UV-chromophores in proteins). We first establish the multiconfigurational recipe with the highest accuracy by comparison with experimental data, providing reference gas-phase transition energies and dipole moments that can be used to construct exciton Hamiltonians involving high-lying excited states. We show that by reducing the active spaces and the number of configuration state functions within restricted active space schemes, the computational cost can be significantly decreased without loss of accuracy in predicting 2DUV spectra. The proposed recipe has been successfully tested on a realistic model proteic system in water. Accounting for line broadening due to thermal and solvent-induced fluctuations allows for direct comparison with experiments.
An M-estimator for reduced-rank system identification.
Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T
2017-01-15
High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.
Computational Process Modeling for Additive Manufacturing
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2014-01-01
Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.
Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion
NASA Astrophysics Data System (ADS)
Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.
2017-01-01
We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional Controlled-Source Electromagnetic inverse problem. In this code a special emphasis has been put on representing the operations by block matrices for conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse level parallel computing, using the OpenMP framework, is used primarily due to its simplicity in implementation and the accessibility of shared memory multi-core computing machines to almost anyone. We demonstrate how the coarseness of the modeling grid in comparison to source (computational receivers) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design where the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.
Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry
NASA Astrophysics Data System (ADS)
Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek
2014-09-01
Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too large to accurately represent the dimensions of small features such as the eye. Recently reduced recommended dose limits for the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.
Sharing Research Models: Using Software Engineering Practices for Facilitation
Bryant, Stephanie P.; Solano, Eric; Cantor, Susanna; Cooley, Philip C.; Wagener, Diane K.
2011-01-01
Increasingly, researchers are turning to computational models to understand the interplay of important variables on systems’ behaviors. Although researchers may develop models that meet the needs of their investigation, application limitations—such as nonintuitive user interface features and data input specifications—may limit the sharing of these tools with other research groups. By removing these barriers, other research groups that perform related work can leverage these work products to expedite their own investigations. The use of software engineering practices can enable managed application production and shared research artifacts among multiple research groups by promoting consistent models, reducing redundant effort, encouraging rigorous peer review, and facilitating research collaborations that are supported by a common toolset. This report discusses three established software engineering practices— the iterative software development process, object-oriented methodology, and Unified Modeling Language—and the applicability of these practices to computational model development. Our efforts to modify the MIDAS TranStat application to make it more user-friendly are presented as an example of how computational models that are based on research and developed using software engineering practices can benefit a broader audience of researchers. PMID:21687780
Computational Modeling of Inflammation and Wound Healing
Ziraldo, Cordelia; Mi, Qi; An, Gary; Vodovotz, Yoram
2013-01-01
Objective Inflammation is both central to proper wound healing and a key driver of chronic tissue injury via a positive-feedback loop incited by incidental cell damage. We seek to derive actionable insights into the role of inflammation in wound healing in order to improve outcomes for individual patients. Approach To date, dynamic computational models have been used to study the time evolution of inflammation in wound healing. Emerging clinical data on histo-pathological and macroscopic images of evolving wounds, as well as noninvasive measures of blood flow, suggested the need for tissue-realistic, agent-based, and hybrid mechanistic computational simulations of inflammation and wound healing. Innovation We developed a computational modeling system, Simple Platform for Agent-based Representation of Knowledge, to facilitate the construction of tissue-realistic models. Results A hybrid equation–agent-based model (ABM) of pressure ulcer formation in both spinal cord-injured and -uninjured patients was used to identify control points that reduce stress caused by tissue ischemia/reperfusion. An ABM of arterial restenosis revealed new dynamics of cell migration during neointimal hyperplasia that match histological features, but contradict the currently prevailing mechanistic hypothesis. ABMs of vocal fold inflammation were used to predict inflammatory trajectories in individuals, possibly allowing for personalized treatment. Conclusions The intertwined inflammatory and wound healing responses can be modeled computationally to make predictions in individuals, simulate therapies, and gain mechanistic insights. PMID:24527362
NASA Technical Reports Server (NTRS)
Gorospe, George E., Jr.; Daigle, Matthew J.; Sankararaman, Shankar; Kulkarni, Chetan S.; Ng, Eley
2017-01-01
Prognostic methods enable operators and maintainers to predict the future performance for critical systems. However, these methods can be computationally expensive and may need to be performed each time new information about the system becomes available. In light of these computational requirements, we have investigated the application of graphics processing units (GPUs) as a computational platform for real-time prognostics. Recent advances in GPU technology have reduced cost and increased the computational capability of these highly parallel processing units, making them more attractive for the deployment of prognostic software. We present a survey of model-based prognostic algorithms with considerations for leveraging the parallel architecture of the GPU and a case study of GPU-accelerated battery prognostics with computational performance results.
Advanced Computational Methods for Thermal Radiative Heat Transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.
2016-10-01
Participating media radiation (PMR) in weapon safety calculations for abnormal thermal environments is too costly to compute routinely. This cost may be substantially reduced by applying reduced order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.
ERIC Educational Resources Information Center
Lai, K. Robert; Lan, Chung Hsien
2006-01-01
This work presents a novel method for modeling collaborative learning as multi-issue agent negotiation using fuzzy constraints. Agent negotiation is an iterative process, through which, the proposed method aggregates student marks to reduce personal bias. In the framework, students define individual fuzzy membership functions based on their…
Recursive renormalization group theory based subgrid modeling
NASA Technical Reports Server (NTRS)
Zhou, YE
1991-01-01
This work aims to advance the knowledge and understanding of turbulence theory. Specific problems include studies of subgrid models to understand the effects of unresolved small scale dynamics on the large scale motion which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulation.
“SNP Snappy”: A Strategy for Fast Genome-Wide Association Studies Fitting a Full Mixed Model
Meyer, Karin; Tier, Bruce
2012-01-01
A strategy to reduce computational demands of genome-wide association studies fitting a mixed model is presented. Improvements are achieved by utilizing a large proportion of calculations that remain constant across the multiple analyses for individual markers involved, with estimates obtained without inverting large matrices. PMID:22021386
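The core trick, reusing the work that is constant across markers, can be sketched with a single-marker generalized-least-squares scan: the phenotypic covariance V is identical for every SNP, so its Cholesky factor is computed once and each marker then costs only a triangular solve. The data, variance ratios, and model form below are simulated illustrations, not the SNP Snappy implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 300, 50                              # individuals, markers
A = rng.standard_normal((n, n))
K = A @ A.T / n                             # toy relationship matrix
V = 0.5 * K + 0.5 * np.eye(n)               # phenotypic covariance (fixed ratios)
y = rng.standard_normal(n)
snps = rng.integers(0, 3, size=(n, m)).astype(float)  # genotypes 0/1/2

L = np.linalg.cholesky(V)                   # factor V once -- constant across markers
y_w = np.linalg.solve(L, y)                 # whiten the response once
betas = np.empty(m)
se2 = np.empty(m)
for j in range(m):
    x_w = np.linalg.solve(L, snps[:, j])    # only per-marker cost: one solve
    xx = x_w @ x_w
    betas[j] = (x_w @ y_w) / xx             # GLS estimate of the SNP effect
    se2[j] = 1.0 / xx                       # its sampling variance (unit residual var)
```

Because x' V^{-1} y = (L^{-1}x)'(L^{-1}y), no explicit inverse of the large covariance matrix is ever formed, matching the "without inverting large matrices" claim.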
NASA Astrophysics Data System (ADS)
Liu, Y.; Zheng, L.; Pau, G. S. H.
2016-12-01
A careful assessment of the risk associated with geologic CO2 storage is critical to the deployment of large-scale storage projects. While numerical modeling is an indispensable tool for risk assessment, there is an increasing need to consider and address uncertainties in the numerical models. However, uncertainty analyses have been significantly hindered by the computational complexity of the model. As a remedy, reduced-order models (ROM), which serve as computationally efficient surrogates for high-fidelity models (HFM), have been employed. The ROM is constructed at the expense of an initial set of HFM simulations, and afterwards can be relied upon to predict the model output values at minimal cost. The ROM presented here is part of the National Risk Assessment Program (NRAP) and is intended to predict the water quality change in groundwater in response to hypothetical CO2 and brine leakage. The HFM from which the ROM is derived is a multiphase flow and reactive transport model, with a 3-D heterogeneous flow field and complex chemical reactions including aqueous complexation, mineral dissolution/precipitation, adsorption/desorption via surface complexation, and cation exchange. Reduced-order modeling techniques based on polynomial basis expansion, such as polynomial chaos expansion (PCE), are widely used in the literature. However, the accuracy of such ROMs can be affected by the sparse structure of the coefficients of the expansion. Failing to identify vanishing polynomial coefficients introduces unnecessary sampling errors, the accumulation of which deteriorates the accuracy of the ROMs. To address this issue, we treat the PCE as a sparse Bayesian learning (SBL) problem, and the sparsity is obtained by detecting and including only the non-zero PCE coefficients one at a time by iteratively selecting the most contributing coefficients.
The computational complexity due to predicting the entire 3-D concentration fields is further mitigated by a dimension reduction procedure, proper orthogonal decomposition (POD). Our numerical results show that utilizing the sparse structure and POD significantly enhances the accuracy and efficiency of the ROMs, laying the basis for further analyses that necessitate a large number of model simulations.
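The "most contributing coefficient first" selection described above can be illustrated with a greedy sparse fit of a Legendre polynomial-chaos basis (Legendre matches uniform inputs). The target function, basis size, and three-term sparsity are made up for the demo; the paper's SBL machinery is replaced here by a plain matching-pursuit loop to show the idea.

```python
import numpy as np

rng = np.random.default_rng(2)
xs = rng.uniform(-1.0, 1.0, 200)            # uniform inputs -> Legendre PCE basis
# Truly sparse target: constant + L1 + L2 terms only (L2(x) = (3x^2 - 1)/2).
f = 0.5 + 2.0 * xs - 1.5 * (0.5 * (3.0 * xs**2 - 1.0))

P = 10                                       # candidate polynomial orders 0..9
Psi = np.polynomial.legendre.legvander(xs, P - 1)   # (200, 10) design matrix

active, resid = [], f.copy()
for _ in range(3):                           # add one basis term per pass, greedily
    scores = np.abs(Psi.T @ resid)           # contribution of each candidate term
    scores[active] = -np.inf                 # skip terms already selected
    active.append(int(np.argmax(scores)))
    coef, *_ = np.linalg.lstsq(Psi[:, active], f, rcond=None)
    resid = f - Psi[:, active] @ coef        # refit on the active set, update residual
```

Because the vanishing coefficients are never estimated, they contribute no sampling error, which is the accuracy argument the abstract makes for the sparse PCE.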
Validation of a reduced-order jet model for subsonic and underexpanded hydrogen jets
Li, Xuefang; Hecht, Ethan S.; Christopher, David M.
2016-01-01
Much effort has been made to model hydrogen releases from leaks during potential failures of hydrogen storage systems. A reduced-order jet model can be used to quickly characterize these flows, with low computational cost. Notional nozzle models are often used to avoid modeling the complex shock structures produced by the underexpanded jets by determining an "effective" source to produce the observed downstream trends. In our work, the mean hydrogen concentration fields were measured in a series of subsonic and underexpanded jets using a planar laser Rayleigh scattering system. Furthermore, we compared the experimental data to a reduced-order jet model for subsonic flows and a notional nozzle model coupled to the jet model for underexpanded jets. The values of some key model parameters were determined by comparisons with the experimental data. Finally, the coupled model was also validated against hydrogen concentration measurements for 100 and 200 bar hydrogen jets, with the predictions agreeing well with data in the literature.
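A minimal sketch of the kind of reduced-order jet model validated here is hyperbolic centerline decay with a self-similar Gaussian radial profile. The decay constant and spreading rate below are textbook-style placeholder values, not the fitted parameters from this paper, and the "effective" diameter stands in for the notional nozzle output.

```python
import numpy as np

# Assumed reduced-order round-jet model: centerline mass fraction decays as
# K * d_eff / x (capped at 1 near the source) and the radial profile is Gaussian
# with a half-width that grows linearly downstream. K and spread are placeholders.
def jet_concentration(x, r, d_eff, K=5.4, spread=0.11):
    """Mean mass fraction at axial distance x and radius r for source diameter d_eff."""
    y_cl = np.minimum(1.0, K * d_eff / np.maximum(x, 1e-12))  # centerline decay
    b = spread * x                                            # jet half-width
    return y_cl * np.exp(-(r / np.maximum(b, 1e-12)) ** 2)    # Gaussian radial profile

# Example: concentration 50 source diameters downstream, on and off axis.
d = 0.001                                    # 1 mm effective nozzle diameter
y_axis = jet_concentration(50 * d, 0.0, d)   # on the centerline
y_off = jet_concentration(50 * d, 0.002, d)  # 2 mm off axis
```

Evaluating two closed-form expressions per point is what makes such models cheap enough for rapid leak characterization, in contrast to resolving the shock structure directly.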
Fitting neuron models to spike trains.
Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
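The vectorized-fitting idea can be sketched without the Brian toolbox itself: simulate many candidate leaky integrate-and-fire (LIF) parameter sets simultaneously as one array, score each against a recorded spike train, and keep the best. The current waveform, parameter grid, and spike-count score below are ad hoc stand-ins for the paper's coincidence-based criterion.

```python
import numpy as np

dt, T = 0.1, 500.0                          # ms
t = np.arange(0.0, T, dt)
I = 1.5 + 0.5 * np.sin(2.0 * np.pi * t / 100.0)  # injected current (arbitrary units)

def lif_spikes(tau, thresh):
    """Simulate LIF candidates in parallel; tau and thresh are equal-length arrays."""
    v = np.zeros_like(tau)
    counts = np.zeros_like(tau)
    for Ik in I:                            # time loop; candidates are vectorized
        v += dt * (Ik - v) / tau            # membrane dynamics for all candidates
        fired = v >= thresh
        counts += fired
        v[fired] = 0.0                      # reset every candidate that spiked
    return counts

# "Recorded" neuron: tau = 20 ms, threshold = 1.0 (ground truth for the demo).
target = lif_spikes(np.array([20.0]), np.array([1.0]))[0]

# Fit by brute-force grid search over (tau, threshold), all simulated at once.
taus, threshs = np.meshgrid(np.linspace(5.0, 40.0, 36), np.linspace(0.6, 1.4, 33))
counts = lif_spikes(taus.ravel(), threshs.ravel())
best = np.argmin(np.abs(counts - target))   # crude fit criterion: match spike count
```

The inner loop touches every candidate with the same elementwise operations, so the same code parallelizes across thousands of parameter sets, which is the efficiency argument made in the abstract.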