Sample records for simplified computational model

  1. Comparison between a typical and a simplified model for blast load-induced structural response

    NASA Astrophysics Data System (ADS)

    Abd-Elhamed, A.; Mahmoud, S.

    2017-02-01

    Explosive blasts continue to cause severe damage and casualties in both civil and military environments, so there is a pressing need to understand the behavior of structural elements under such extremely short-duration dynamic loads. Because the typical blast pressure profile model is complex, the simplified triangular model of the blast load profile is used here to analyze structural response and thereby reduce the modelling and computational effort. This simplified model retains only the positive phase and ignores the suction phase that characterizes the typical profile. The closed-form solution of the equation of motion, with the blast load as a forcing term modelled by either the typical or the simplified profile, has been derived. The two approaches are compared using simulated response analyses of a building structure under an applied blast load, and the error of the simplified model relative to the typical one is computed. In general, both models can reproduce the dynamic blast-induced response of building structures. However, despite its simplicity, the simplified model shows remarkably different response behavior from the typical one, and its predictions of the dynamic system response are not satisfactory because of the larger errors obtained relative to the typical model.
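    The simplified load profile described above can be sketched as a single-degree-of-freedom simulation: a triangular positive-phase pulse driving a damped oscillator, integrated with RK4. All parameter values below (mass, stiffness, peak pressure, pulse duration) are illustrative assumptions, not taken from the paper.

```python
import math

def blast_response(m=1.0e4, k=4.0e6, zeta=0.05, p0=5.0e4, td=0.01,
                   dt=1.0e-4, t_end=0.2):
    """Displacement history of a damped SDOF structure under a
    simplified triangular blast pulse (positive phase only).
    All parameter values are illustrative, not from the paper."""
    c = 2.0 * zeta * math.sqrt(k * m)              # damping coefficient

    def force(t):                                  # triangular pulse
        return p0 * (1.0 - t / td) if 0.0 <= t <= td else 0.0

    def acc(x, v, t):                              # equation of motion
        return (force(t) - c * v - k * x) / m

    x, v, t, history = 0.0, 0.0, 0.0, []
    while t < t_end:                               # classic RK4 step
        k1x, k1v = v, acc(x, v, t)
        k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x,
                                           v + 0.5 * dt * k1v, t + 0.5 * dt)
        k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x,
                                           v + 0.5 * dt * k2v, t + 0.5 * dt)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x, v + dt * k3v, t + dt)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
        history.append((t, x))
    return history

hist = blast_response()
peak = max(abs(x) for _, x in hist)
```

    Replacing `force` with a profile that includes a negative (suction) phase would reproduce the "typical" model for comparison.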

  2. A transfer function type of simplified electrochemical model with modified boundary conditions and Padé approximation for Li-ion battery: Part 1. lithium concentration estimation

    NASA Astrophysics Data System (ADS)

    Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi

    2017-06-01

    To guarantee safety, high efficiency and long lifetime for a lithium-ion battery, an advanced battery management system requires a physically meaningful yet computationally efficient battery model. The pseudo-two-dimensional (P2D) electrochemical model can provide physical information about the lithium concentration and potential distributions across the cell dimension. However, the extensive computational burden caused by the temporal and spatial discretization limits its real-time application. In this research, we propose a new simplified electrochemical model (SEM) by modifying the boundary conditions for the electrolyte diffusion equations, which significantly facilitates the analytical solving process. Then, to obtain a reduced-order transfer function, the Padé approximation method is adopted to simplify the derived transcendental impedance solution. The proposed model with the reduced-order transfer function is compact to compute yet preserves physical meaning through parameters such as the solid/electrolyte diffusion coefficients (Ds & De) and the particle radius. Simulation shows that the proposed simplified model maintains high accuracy for electrolyte-phase concentration (Ce) predictions, with modeling errors of 0.8% and 0.24%, respectively, relative to the rigorous model under 1C-rate pulse charge/discharge and urban dynamometer driving schedule (UDDS) profiles. Meanwhile, the simplified model yields a significantly reduced computational burden, which benefits its real-time application.
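    The Padé reduction step can be illustrated on a toy target. The sketch below builds a [1/1] Padé approximant from the first three Taylor coefficients, using the transport delay exp(-s) as a stand-in for the paper's transcendental electrolyte-diffusion transfer function, which is not reproduced here.

```python
from fractions import Fraction

def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant (a0 + a1*s)/(1 + b1*s) matching the
    Taylor series c0 + c1*s + c2*s**2 through second order."""
    b1 = -Fraction(c2) / Fraction(c1)      # kills the s**2 residual
    a0 = Fraction(c0)
    a1 = Fraction(c1) + Fraction(c0) * b1
    return a0, a1, b1

# Taylor coefficients of exp(-s): 1 - s + s**2/2
a0, a1, b1 = pade_1_1(1, -1, Fraction(1, 2))
```

    For exp(-s) this yields (1 - s/2)/(1 + s/2), the classic first-order approximation of a transport delay; the same moment-matching procedure, carried to higher order, produces reduced-order rational transfer functions of the kind used in the paper.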

  3. iGen: An automated generator of simplified models with provable error bounds.

    NASA Astrophysics Data System (ADS)

    Tang, D.; Dobbie, S.

    2009-04-01

    Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high-resolution model. The resulting simplified models have provable bounds on error compared to the high-resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud-resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led on to work, currently underway, to analyse a cloud-resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.

  4. The Application of a Massively Parallel Computer to the Simulation of Electrical Wave Propagation Phenomena in the Heart Muscle Using Simplified Models

    NASA Technical Reports Server (NTRS)

    Karpoukhin, Mikhii G.; Kogan, Boris Y.; Karplus, Walter J.

    1995-01-01

    The simulation of heart arrhythmia and fibrillation is an important and challenging task. Solving these problems with sophisticated mathematical models is beyond the capabilities of modern supercomputers. To overcome these difficulties, it is proposed to break the whole simulation problem into two tightly coupled stages: generation of the action potential using sophisticated models, and propagation of the action potential using simplified models. The well-known simplified models are compared and modified to bring the rate of depolarization and the action potential duration restitution closer to reality. A modified method of lines is used to parallelize the computational process. The conditions for the appearance of 2D spiral waves after the application of a premature beat, and the subsequent travel of the spiral wave inside the simulated tissue, are studied.

  5. A simplified solar cell array modelling program

    NASA Technical Reports Server (NTRS)

    Hughes, R. D.

    1982-01-01

    As part of the energy conversion/self-sufficiency efforts of DSN engineering, it was necessary to have a simplified computer model of a solar photovoltaic (PV) system. This article describes the analysis and simplifications employed in the development of a PV cell array computer model. The analysis of the incident solar radiation, the steady-state cell temperature and the current-voltage characteristics of a cell array are discussed. A sample cell array was modelled and the results are presented.
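    A common simplification of this kind is the ideal single-diode cell equation I = Iph - I0*(exp(V/(n*Vt)) - 1), with series and shunt resistance neglected. The sketch below traces an I-V curve and locates the maximum power point; the cell parameters are illustrative assumptions, not values from the article.

```python
import math

def iv_curve(i_ph=3.0, i_0=1e-9, n=1.3, t_cell=318.0, points=500):
    """I-V curve of an idealized PV cell from the single-diode equation
    I = Iph - I0*(exp(V/(n*Vt)) - 1); series/shunt resistance neglected.
    Parameter values are illustrative only."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = k * t_cell / q                        # thermal voltage
    v_oc = n * vt * math.log(i_ph / i_0 + 1)   # open-circuit voltage
    curve = []
    for j in range(points + 1):
        v = v_oc * j / points
        i = i_ph - i_0 * (math.exp(v / (n * vt)) - 1.0)
        curve.append((v, i))
    return curve

curve = iv_curve()
v_mp, i_mp = max(curve, key=lambda p: p[0] * p[1])   # maximum power point
```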

  6. Computer models for economic and silvicultural decisions

    Treesearch

    Rosalie J. Ingram

    1989-01-01

    Computer systems can help simplify decisionmaking to manage forest ecosystems. We now have computer models to help make forest management decisions by predicting changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.

  7. SUITABILITY OF USING IN VITRO AND COMPUTATIONALLY ESTIMATED PARAMETERS IN SIMPLIFIED PHARMACOKINETIC MODELS

    EPA Science Inventory

    A challenge in PBPK model development is estimating the parameters for absorption, distribution, metabolism, and excretion of the parent compound and metabolites of interest. One approach to reduce the number of parameters has been to simplify pharmacokinetic models by lumping p...

  8. Improvement on a simplified model for protein folding simulation.

    PubMed

    Zhang, Ming; Chen, Changjun; He, Yi; Xiao, Yi

    2005-11-01

    Improvements were made to a simplified protein model, the Ramachandran model, to achieve better computer simulation of protein folding. To check the validity of these improvements, we chose the ultrafast-folding protein Engrailed Homeodomain as an example and explored several aspects of its folding. The Engrailed Homeodomain is a mainly alpha-helical protein of 61 residues from Drosophila melanogaster. We found that the simplified model of the Engrailed Homeodomain can fold into a global minimum state with a tertiary structure in good agreement with its native structure.

  9. Cloud computing can simplify HIT infrastructure management.

    PubMed

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  10. A user-oriented and computerized model for estimating vehicle ride quality

    NASA Technical Reports Server (NTRS)

    Leatherwood, J. D.; Barker, L. M.

    1984-01-01

    A simplified empirical model and computer program for estimating passenger ride comfort within air and surface transportation systems are described. The model is based on subjective ratings from more than 3000 persons who were exposed to controlled combinations of noise and vibration in the passenger ride quality apparatus. The model can transform the individual elements of a vehicle's noise and vibration environment into subjective discomfort units and then combine those units into a single discomfort index typifying passenger acceptance of the environment. The computational procedures required to obtain discomfort estimates are discussed, and a user-oriented ride comfort computer program is described. Examples illustrating application of the simplified model to helicopter and automobile ride environments are presented.

  11. Mathematical Description of Complex Chemical Kinetics and Application to CFD Modeling Codes

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1993-01-01

    A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.
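    The reduction idea can be caricatured in a few lines: the sketch below integrates a single global Arrhenius step with lumped heat release, the limiting case of the simplified mechanisms discussed above. All constants are illustrative assumptions, not a calibrated mechanism from the paper.

```python
import math

def one_step_burn(y_f=0.05, t0=1200.0, a=1.0e10, ea=1.5e5,
                  q=4.0e7, cp=1200.0, dt=1.0e-6, steps=20000):
    """One-step global mechanism: fuel mass fraction Yf is consumed at an
    Arrhenius rate and releases heat q per kg of fuel burned.  Explicit
    integration; all constants are illustrative assumptions."""
    r_gas = 8.314                    # J/(mol K)
    y, t = y_f, t0
    for _ in range(steps):
        rate = a * y * math.exp(-ea / (r_gas * t))
        burned = min(y, dt * rate)   # never consume more fuel than exists
        y -= burned
        t += burned * q / cp         # temperature rise from heat release
    return y, t

y_end, t_end = one_step_burn()
```

    Because heat release is tied directly to fuel consumed, the final temperature equals the initial value plus the adiabatic rise Yf*q/cp, which is the kind of integral quantity a reduced mechanism must preserve even when it drops species detail.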

  13. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  14. Two tradeoffs between economy and reliability in loss of load probability constrained unit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Wang, Mingqiang; Ning, Xingyao

    2018-02-01

    Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
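    For readers unfamiliar with why LOLP is the expensive ingredient here, a minimal version builds a capacity-outage probability table by convolving two-state unit outage distributions; the unit data below are invented for illustration.

```python
def lolp(units, load):
    """Loss-of-load probability from a capacity-outage probability table.
    `units` is a list of (capacity_mw, forced_outage_rate) pairs; the
    table is built by convolving the two-state outage distribution of
    each unit.  The numbers used below are illustrative only."""
    dist = {0.0: 1.0}                       # P(outage capacity = c)
    for cap, forced_out in units:
        new = {}
        for out_cap, p in dist.items():
            new[out_cap] = new.get(out_cap, 0.0) + p * (1.0 - forced_out)
            new[out_cap + cap] = new.get(out_cap + cap, 0.0) + p * forced_out
        dist = new
    total = sum(cap for cap, _ in units)
    # loss of load occurs when available capacity falls below the load
    return sum(p for out, p in dist.items() if total - out < load)

units = [(200, 0.05), (200, 0.05), (100, 0.02)]
p = lolp(units, load=350)
```

    The table doubles in size with every unit added, which is exactly the exponential growth that motivates the simplified LOLP formulations discussed above.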

  15. A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.

    A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low-Earth-orbit objects relies mainly on ground-based radar, and because of the limited capability of existing radar facilities, a large number of ground-based radars will need to be built in the next few years to meet current space surveillance demands. How to optimize the layout of the ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional approach simulates detection for every candidate station using cataloged data, compares the simulation results combinatorially, and selects the best result as the station layout scheme. Each simulation is time consuming and the combinatorial analysis is computationally complex; as the number of stations increases, the complexity of the optimization problem grows exponentially and the problem becomes unsolvable by the traditional method, and no better way to solve it has been available until now. In this paper, the target detection procedure is simplified. First, the space coverage of a ground-based radar is simplified by building a projection model of radar coverage at different orbit altitudes; then a simplified model of objects crossing the radar coverage is established according to the characteristics of orbital motion. These two simplifications greatly reduce the computational complexity of target detection, and simulation results confirm their correctness.
    In addition, the detection areas of the ground-based radar network can be computed easily with the simplified model, and the network layout can then be optimized with an artificial intelligence algorithm, which further reduces the computational complexity. Compared with the traditional method, the proposed method greatly improves computational efficiency.
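    The coverage-projection simplification can be illustrated with spherical-Earth geometry: for a radar with a minimum elevation mask, the Earth-central half-angle of the region at a given orbital altitude within which objects are visible follows from the standard visibility relation. The formula is standard; the altitude and elevation-mask values below are arbitrary examples, not the paper's.

```python
import math

RE = 6371.0   # mean Earth radius, km

def coverage_half_angle(alt_km, min_elev_deg):
    """Earth-central half-angle of the region, at orbital altitude alt_km,
    inside which a satellite is above the radar's minimum elevation mask
    (spherical Earth, no refraction, radar range not limited)."""
    el = math.radians(min_elev_deg)
    return math.acos(RE / (RE + alt_km) * math.cos(el)) - el

lam = coverage_half_angle(800.0, 5.0)       # radians
footprint_deg = math.degrees(lam)
```

    Higher orbits yield wider projected coverage circles, which is why a single projection model per altitude shell can replace per-object detection simulation.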

  16. Research study on stabilization and control: Modern sampled-data control theory. Continuous and discrete describing function analysis of the LST system. [with emphasis on the control moment gyroscope control loop

    NASA Technical Reports Server (NTRS)

    Kuo, B. C.; Singh, G.

    1974-01-01

    The dynamics of the Large Space Telescope (LST) control system were studied in order to arrive at a simplified model for computer simulation without loss of accuracy. The frictional nonlinearity of the Control Moment Gyroscope (CMG) Control Loop was analyzed in a model to obtain data for the following: (1) a continuous describing function for the gimbal friction nonlinearity; (2) a describing function of the CMG nonlinearity using an analytical torque equation; and (3) the discrete describing function and function plots for CMG functional linearity. Preliminary computer simulations are shown for the simplified LST system, first without, and then with analytical torque expressions. Transfer functions of the sampled-data LST system are also described. A final computer simulation is presented which uses elements of the simplified sampled-data LST system with analytical CMG frictional torque expressions.

  17. Improved heat transfer modeling of the eye for electromagnetic wave exposures.

    PubMed

    Hirata, Akimasa

    2007-05-01

    This study proposed an improved heat transfer model of the eye for exposure to electromagnetic (EM) waves. Particular attention was paid to the difference from the simplified heat transfer model commonly used in this field. From our computational results, the temperature elevation in the eye calculated with the simplified heat transfer model was largely influenced by the EM absorption outside the eyeball, but not when we used our improved model.

  18. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    NASA Astrophysics Data System (ADS)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
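    The simplification the authors describe, replacing a Monte Carlo average with a deterministic average over the contact-angle distribution, can be sketched as follows. The rate expression is a stand-in (a lumped rate-area-time product `jt` combined with the standard CNT geometric factor), not the paper's calibrated parameterization; both the quadrature and a Monte Carlo version are shown so their agreement can be checked.

```python
import math
import random

def geom_factor(theta):
    """CNT geometric factor f(theta) for heterogeneous nucleation."""
    c = math.cos(theta)
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

def frozen_fraction_quadrature(mu, sigma, jt, n=2000):
    """Deterministic replacement for the Monte Carlo step: average the
    per-site survival probability exp(-jt * f(theta)) over a Gaussian
    contact-angle distribution on a uniform grid spanning mu +/- 3 sigma.
    `jt` lumps homogeneous rate x surface area x time (assumed value)."""
    total, weight = 0.0, 0.0
    for i in range(n):
        theta = mu + sigma * (6.0 * (i + 0.5) / n - 3.0)
        p = math.exp(-((theta - mu) ** 2) / (2.0 * sigma ** 2))
        total += p * math.exp(-jt * geom_factor(theta))
        weight += p
    return 1.0 - total / weight

def frozen_fraction_monte_carlo(mu, sigma, jt, n=200000, seed=1):
    """Original Monte-Carlo-style estimate, for comparison."""
    rng = random.Random(seed)
    frozen = 0
    for _ in range(n):
        theta = rng.gauss(mu, sigma)
        if rng.random() > math.exp(-jt * geom_factor(theta)):
            frozen += 1
    return frozen / n

fq = frozen_fraction_quadrature(math.radians(60), math.radians(10), 50.0)
fm = frozen_fraction_monte_carlo(math.radians(60), math.radians(10), 50.0)
```

    The two estimates agree to within Monte Carlo noise, but the quadrature needs thousands of function evaluations instead of hundreds of thousands of random draws, which is the computational advantage the paper exploits.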

  19. Computational reacting gas dynamics

    NASA Technical Reports Server (NTRS)

    Lam, S. H.

    1993-01-01

    In the study of high-speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solution, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified under three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'intractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).

  20. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE PAGES

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; ...

    2017-09-20

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  2. Fluid-line math model

    NASA Technical Reports Server (NTRS)

    Kandelman, A.; Nelson, D. J.

    1977-01-01

    A simplified mathematical model simulates large hydraulic systems on either analog or digital computers. Models of pumps, servoactuators, reservoirs, accumulators, and valves are connected to generate systems containing six hundred elements.

  3. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, remains time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider the feature space which limits the length of data, and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
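    The dynamic-programming likelihood computation mentioned above is, for hidden Markov models, the forward algorithm; truncating the observation sequence is exactly the length-limiting feature map the paper studies. The toy model below is invented for illustration.

```python
import math

def forward_log_likelihood(obs, pi, trans, emit):
    """Forward-algorithm log-likelihood of a discrete-output HMM,
    with per-step rescaling to avoid numerical underflow."""
    n = len(pi)
    alpha = [pi[s] * emit[s][obs[0]] for s in range(n)]
    c = sum(alpha)
    log_lik = math.log(c)
    alpha = [a / c for a in alpha]
    for t in range(1, len(obs)):            # dynamic-programming recursion
        alpha = [emit[s][obs[t]] *
                 sum(alpha[r] * trans[r][s] for r in range(n))
                 for s in range(n)]
        c = sum(alpha)
        log_lik += math.log(c)
        alpha = [a / c for a in alpha]
    return log_lik

# Two-state toy model; truncating `obs` to a prefix is the simplifying
# feature map whose effect on parameter learning the paper analyzes.
pi = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]
obs = [0, 0, 1, 1, 0, 1, 0, 0]
full = forward_log_likelihood(obs, pi, trans, emit)
truncated = forward_log_likelihood(obs[:4], pi, trans, emit)
```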

  4. Simplified Models for the Study of Postbuckled Hat-Stiffened Composite Panels

    NASA Technical Reports Server (NTRS)

    Vescovini, Riccardo; Davila, Carlos G.; Bisagni, Chiara

    2012-01-01

    The postbuckling response and failure of multistringer stiffened panels is analyzed using models with three levels of approximation. The first model uses a relatively coarse mesh to capture the global postbuckling response of a five-stringer panel. The second model can predict the nonlinear response as well as the debonding and crippling failure mechanisms in a single stringer compression specimen (SSCS). The third model consists of a simplified version of the SSCS that is designed to minimize the computational effort. The simplified model is well-suited to perform sensitivity analyses for studying the phenomena that lead to structural collapse. In particular, the simplified model is used to obtain a deeper understanding of the role played by geometric and material modeling parameters such as mesh size, inter-laminar strength, fracture toughness, and fracture mode mixity. Finally, a global/local damage analysis method is proposed in which a detailed local model is used to scan the global model to identify the locations that are most critical for damage tolerance.

  5. [Simplification of the crop water shortage index and its application in drought remote sensing monitoring].

    PubMed

    Liu, Anlin; Li, Xingmin; He, Yanbo; Deng, Fengdong

    2004-02-01

    Based on the principle of energy balance, the method for calculating latent evaporation was simplified, and hence the construction of the drought remote sensing monitoring model based on the crop water shortage index was also simplified. Since the modified model involves fewer parameters and requires less computation, it is better suited to routine operational services. After collecting the relevant meteorological elements and NOAA/AVHRR image data, the new model was applied to monitor the spring drought in Guanzhong, Shaanxi Province. The results showed that the monitoring results from the new model, which also accounts for ground coverage conditions and meteorological elements such as wind speed and water vapour pressure, were much better than those from the vegetation water supply index model. In terms of computation time, service effects and monitoring results, the simplified crop water shortage index model is more suitable for practical use. In addition, the reasons for the abnormal results of CWSI > 1 in some regions in the case studies are also discussed in this paper.

  6. Determination of the Ephemeris Accuracy for AJISAI, LAGEOS and ETALON Satellites, Obtained with A Simplified Numerical Motion Model Using the ILRS Coordinates

    NASA Astrophysics Data System (ADS)

    Kara, I. V.

    This paper describes a simplified numerical model of passive artificial Earth satellite (AES) motion. The model accuracy is determined using the International Laser Ranging Service (ILRS) high-precision coordinates, which are freely available at http://ilrs.gsfc.nasa.gov. The differential equations of AES motion are solved by the Everhart numerical method of 17th and 19th orders with automatic correction of the integration step. Comparison between the AES coordinates computed with the motion model and the ILRS coordinates made it possible to determine the accuracy of the obtained ephemerides. As a result, the discrepancy of the computed Etalon-1 ephemerides from the ILRS data is about 10'' for a one-year ephemeris.

  7. Software Simplifies the Sharing of Numerical Models

    NASA Technical Reports Server (NTRS)

    2014-01-01

    To ease the sharing of climate models with university students, Goddard Space Flight Center awarded SBIR funding to Reston, Virginia-based Parabon Computation Inc., a company that specializes in cloud computing. The firm developed a software program capable of running climate models over the Internet, and also created an online environment for people to collaborate on developing such models.

  8. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified input-output model, which then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.

  9. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDEs) on multiprocessors. Multithreading is used as a means of exploiting concurrency at the processor level in order to tolerate the synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask the overheads of dynamically balancing processor workloads with the computations required for the actual numerical solution of the PDEs. Multithreading can also simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data-parallel adaptive PDE computations. Unfortunately, multithreading does not always reduce program complexity, often complicates code re-use, and increases overall software complexity.
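    The overlap the abstract describes can be illustrated with an ordinary work-queue pattern, where idle threads pull the next unit of work so that load balances dynamically instead of through a global synchronization step (a schematic sketch, not an actual adaptive PDE solver):

```python
import threading, queue

def worker(tasks, results):
    """Each thread pulls the next unit of work as soon as it is idle, so the
    load balances dynamically without a global synchronization pause.
    A schematic of the masking idea, not an actual adaptive PDE solver."""
    while True:
        item = tasks.get()
        if item is None:          # sentinel: no more work
            break
        results.put(item * item)  # stand-in for local numerical work

tasks, results = queue.Queue(), queue.Queue()
for i in range(8):
    tasks.put(i)
threads = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(2)]
for _ in threads:
    tasks.put(None)               # one sentinel per worker
for t in threads:
    t.start()
for t in threads:
    t.join()
out = sorted(results.get() for _ in range(8))
```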

  10. The influence of wind-tunnel walls on discrete frequency noise

    NASA Technical Reports Server (NTRS)

    Mosher, M.

    1984-01-01

    This paper describes an analytical model that can be used to examine the effects of wind-tunnel walls on discrete frequency noise. First, a complete physical model of an acoustic source in a wind tunnel is described, and a simplified version is then developed. This simplified model retains the important physical processes involved, yet it is more amenable to analysis. Second, the simplified physical model is formulated as a mathematical problem. An inhomogeneous partial differential equation with mixed boundary conditions is set up and then transformed into an integral equation. The integral equation has been solved with a panel program on a computer. Preliminary results from a simple model problem will be shown and compared with the approximate analytic solution.

  11. Single frequency GPS measurements in real-time artificial satellite orbit determination

    NASA Astrophysics Data System (ADS)

    Chiaradia, A. P. M.; Kuga, H. K.; Prado, A. F. B. A.

    2003-07-01

    A simplified and compact algorithm with low computational cost, providing an accuracy of around tens of meters, is developed in this work for real-time, on-board artificial satellite orbit determination. The state estimation method is the extended Kalman filter. Cowell's method is used to propagate the state vector, through a simple fourth-order Runge-Kutta numerical integrator with fixed step size. The modeled forces are due to the geopotential up to order and degree 50 of the JGM-2 model. To time-update the state error covariance matrix, a simplified force model is considered: in computing the state transition matrix, only the effect of J2 (Earth flattening) is included, and it is treated analytically, which dramatically reduces the processing time. In the measurement model, single frequency GPS pseudoranges are used, considering the effects of ionospheric delay, clock offsets of the GPS and user satellites, and relativistic effects. To validate the model, real data from the Topex/Poseidon satellite are used and the results are compared with the Topex/Poseidon Precision Orbit Ephemeris (POE) generated by NASA/JPL, for several test cases. It is concluded that this compact algorithm achieves accuracies of tens of meters with such a simplified force model, an analytical approach for computing the transition matrix, and a cheap GPS receiver providing single frequency pseudorange measurements.
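    The propagation step described (Cowell's method driven by a fixed-step fourth-order Runge-Kutta integrator) can be sketched as follows for a point-mass gravity field; the paper's force model additionally includes the JGM-2 geopotential up to degree and order 50, which is omitted here:

```python
import numpy as np

MU_EARTH = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def two_body_accel(r):
    """Point-mass gravity only; the paper's force model adds geopotential
    terms up to degree/order 50 of JGM-2, omitted in this sketch."""
    return -MU_EARTH * r / np.linalg.norm(r) ** 3

def rk4_step(state, dt):
    """One fixed-step RK4 step of Cowell's method for state = [r, v]."""
    def deriv(s):
        r, v = s[:3], s[3:]
        return np.concatenate([v, two_body_accel(r)])
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Propagate a circular low-Earth orbit for one 10 s step
r0 = np.array([7000.0, 0.0, 0.0])                        # km
v0 = np.array([0.0, np.sqrt(MU_EARTH / 7000.0), 0.0])    # km/s, circular speed
state = rk4_step(np.concatenate([r0, v0]), 10.0)
```

For a circular orbit the radius should be conserved, which gives a quick sanity check on the integrator.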

  12. A neuronal network model with simplified tonotopicity for tinnitus generation and its relief by sound therapy.

    PubMed

    Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S

    2013-01-01

    Tinnitus is the perception of sound in the ears or in the head when no external source is present. Sound therapy is one of the most effective techniques proposed for tinnitus treatment. In order to investigate the mechanisms of tinnitus generation and the clinical effects of sound therapy, we have previously proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with a simplified tonotopicity of the auditory system as a more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can reproduce the generation of an oscillation and its cessation by external input. This suggests that the present framework is promising for modeling tinnitus generation and the effects of sound therapy.

  13. Investigation of Climate Change Impact on Water Resources for an Alpine Basin in Northern Italy: Implications for Evapotranspiration Modeling Complexity

    PubMed Central

    Ravazzani, Giovanni; Ghilardi, Matteo; Mendlik, Thomas; Gobiet, Andreas; Corbari, Chiara; Mancini, Marco

    2014-01-01

    Assessing the future effects of climate change on water availability requires an understanding of how precipitation and evapotranspiration rates will respond to changes in atmospheric forcing. The use of simplified hydrological models is required because of the lack of meteorological forcings with the high spatial and temporal resolutions required to model hydrological processes in mountain river basins, and because of the necessity of reducing computational costs. The main objective of this study was to quantify the differences, when simulating the impact of climate change, between a simplified hydrological model, which uses only precipitation and temperature to compute the hydrological balance, and an enhanced version of the model, which solves the energy balance to compute the actual evapotranspiration. For the meteorological forcing of the future scenario, at-site bias-corrected time series based on two regional climate models were used. A quantile-based error-correction approach was used to downscale the regional climate model simulations to a point scale and to reduce their error characteristics. The study shows that a simple temperature-based approach for computing the evapotranspiration is sufficiently accurate for performing hydrological impact investigations of climate change for the Alpine river basin studied. PMID:25285917

  14. Investigation of climate change impact on water resources for an Alpine basin in northern Italy: implications for evapotranspiration modeling complexity.

    PubMed

    Ravazzani, Giovanni; Ghilardi, Matteo; Mendlik, Thomas; Gobiet, Andreas; Corbari, Chiara; Mancini, Marco

    2014-01-01

    Assessing the future effects of climate change on water availability requires an understanding of how precipitation and evapotranspiration rates will respond to changes in atmospheric forcing. The use of simplified hydrological models is required because of the lack of meteorological forcings with the high spatial and temporal resolutions required to model hydrological processes in mountain river basins, and because of the necessity of reducing computational costs. The main objective of this study was to quantify the differences, when simulating the impact of climate change, between a simplified hydrological model, which uses only precipitation and temperature to compute the hydrological balance, and an enhanced version of the model, which solves the energy balance to compute the actual evapotranspiration. For the meteorological forcing of the future scenario, at-site bias-corrected time series based on two regional climate models were used. A quantile-based error-correction approach was used to downscale the regional climate model simulations to a point scale and to reduce their error characteristics. The study shows that a simple temperature-based approach for computing the evapotranspiration is sufficiently accurate for performing hydrological impact investigations of climate change for the Alpine river basin studied.
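    The quantile-based error correction used in this study is a form of empirical quantile mapping; a minimal sketch of the general technique (not the study's exact implementation, which works per month/season and treats the tails more carefully):

```python
import numpy as np

def quantile_map(model_series, obs_ref, model_ref):
    """Empirical quantile mapping: correct each model value by matching its
    quantile in the model reference period to the observed distribution.
    A minimal sketch of quantile-based error correction."""
    model_series = np.asarray(model_series, dtype=float)
    # Quantile of each value within the model reference climatology
    q = np.searchsorted(np.sort(model_ref), model_series) / len(model_ref)
    q = np.clip(q, 0.0, 1.0)
    # Map that quantile onto the observed climatology
    return np.quantile(obs_ref, q)

# Toy example: a model that runs 2 units too warm everywhere
obs = np.random.default_rng(0).normal(10.0, 3.0, 1000)
mod = obs + 2.0
corrected = quantile_map(mod[:5], obs, mod)
```

After mapping, the corrected series reproduces the observed distribution (here, the 2-unit bias is removed).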

  15. Effect of Anatomically Realistic Full-Head Model on Activation of Cortical Neurons in Subdural Cortical Stimulation—A Computational Study

    NASA Astrophysics Data System (ADS)

    Seo, Hyeon; Kim, Donghyeon; Jun, Sung Chan

    2016-06-01

    Electrical brain stimulation (EBS) is an emerging therapy for the treatment of neurological disorders, and computational modeling studies of EBS have been used to determine the optimal parameters for highly cost-effective electrotherapy. Recent notable growth in computing capability has enabled researchers to consider an anatomically realistic head model that represents the full head and complex geometry of the brain rather than the previous simplified partial head model (extruded slab) that represents only the precentral gyrus. In this work, subdural cortical stimulation (SuCS) was found to offer a better understanding of the differential activation of cortical neurons in the anatomically realistic full-head model than in the simplified partial-head models. We observed that layer 3 pyramidal neurons had comparable stimulation thresholds in both head models, while layer 5 pyramidal neurons showed a notable discrepancy between the models; in particular, layer 5 pyramidal neurons demonstrated asymmetry in the thresholds and action potential initiation sites in the anatomically realistic full-head model. Overall, the anatomically realistic full-head model may offer a better understanding of layer 5 pyramidal neuronal responses. Accordingly, the effects of using the realistic full-head model in SuCS are compelling in computational modeling studies, even though this modeling requires substantially more effort.

  16. A simplified computational memory model from information processing.

    PubMed

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper proposes a computational model for memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices on the basis of biology and graph theory, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view.

  17. Pececillo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Neil; Jibben, Zechariah; Brady, Peter

    2017-06-28

    Pececillo is a proxy-app for the open source Truchas metal processing code (LA-CC-15-097). It implements many of the physics models used in Truchas: free-surface, incompressible Navier-Stokes fluid dynamics (e.g., water waves); heat transport, material phase change, and view factor thermal radiation; species advection-diffusion; quasi-static, elastic/plastic solid mechanics with contact; and electromagnetics (Maxwell's equations). The models are simplified versions that retain the fundamental computational complexity of the Truchas models while omitting many non-essential features and modeling capabilities. The purpose is to expose Truchas algorithms in a greatly simplified context where computer science problems related to parallel performance on advanced architectures can be more easily investigated. While Pececillo is capable of performing simulations representative of typical Truchas metal casting, welding, and additive manufacturing simulations, it lacks many of the modeling capabilities needed for real applications.

  18. Definition of ground test for verification of large space structure control

    NASA Technical Reports Server (NTRS)

    Doane, G. B., III; Glaese, J. R.; Tollison, D. K.; Howsman, T. G.; Curtis, S. (Editor); Banks, B.

    1984-01-01

    Control theory and design, dynamic system modelling, and simulation of test scenarios are the main ideas discussed. The overall effort is the achievement at Marshall Space Flight Center of a successful ground test experiment of a large space structure. A simplified planar model for ground test verification was developed, and the elimination from that model of the uncontrollable rigid-body modes was examined. Hardware and software issues of computation speed were also studied.

  19. RANS modeling of scalar dispersion from localized sources within a simplified urban-area model

    NASA Astrophysics Data System (ADS)

    Rossi, Riccardo; Capra, Stefano; Iaccarino, Gianluca

    2011-11-01

    The dispersion of a passive scalar downstream of a localized source within a simplified urban-like geometry is examined by means of RANS scalar flux models. The computations are conducted under conditions of neutral stability and for three different incoming wind directions (0°, 45°, 90°) at a roughness Reynolds number of Ret = 391. A Reynolds stress transport model is used to close the flow governing equations, whereas both the standard eddy-diffusivity closure and algebraic flux models are employed to close the transport equation for the passive scalar. The comparison with a DNS database shows improved reliability of algebraic scalar flux models in predicting both the mean concentration and the plume structure. Since algebraic flux models do not substantially increase the computational effort, the results indicate that the use of a tensorial diffusivity can be a promising tool for dispersion simulations in the urban environment.

  20. Dissipation models for central difference schemes

    NASA Astrophysics Data System (ADS)

    Eliasson, Peter

    1992-12-01

    In this paper different flux limiters are used to construct dissipation models. The flux limiters are usually of Total Variation Diminishing (TVD) type and are applied to the characteristic variables of the hyperbolic Euler equations in one, two or three dimensions. A number of simplified dissipation models with a reduced number of limiters are considered to reduce the computational effort. The most simplified methods use only one limiter; the dissipation model by Jameson belongs to this class, since the Jameson pressure switch can be regarded as a limiter, though not a TVD one. Other one-limiter models with TVD limiters are also investigated. Models in between the most simplified one-limiter models and the full model, with limiters on all the characteristics, are considered, in which different dissipation models are applied to the linear and non-linear characteristics. In this paper the theory by Yee is extended to a general explicit Runge-Kutta type of scheme.
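    As an illustration of the limiter building block, here is a minimal sketch of one standard TVD limiter (minmod) applied to one-sided differences of a scalar field — an illustrative assumption, not Eliasson's specific models:

```python
def minmod(a, b):
    """Minmod limiter: returns the smaller-magnitude slope when the two
    one-sided differences agree in sign, and zero at extrema (TVD)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Minmod-limited slopes of a scalar field u at interior points; the
    building block a one-limiter dissipation model would difference again."""
    return [0.0] + [minmod(u[i + 1] - u[i], u[i] - u[i - 1])
                    for i in range(1, len(u) - 1)] + [0.0]
```

Note how the limiter switches off at the local extremum of `[0, 1, 0]`, which is what prevents the scheme from creating new oscillations.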

  1. Computer vision-based method for classification of wheat grains using artificial neural network.

    PubMed

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum wheat is presented. The images of 100 bread and 100 durum wheat grains are taken with a high-resolution camera and subjected to pre-processing. The main visual features, comprising four dimensions, three colors and five textures, are acquired using image-processing techniques (IPTs). A total of 21 visual features are derived from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are used as input parameters of the ANN model. The ANN is modelled with four different input data subsets to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. The seven input parameters that most affect the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN model are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10⁻⁶ by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
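    The classifier's forward pass can be sketched as a one-hidden-layer perceptron mapping a feature vector to a binary score; the weights below are random placeholders, not the trained model:

```python
import numpy as np

def mlp_forward(x, w1, b1, w2, b2):
    """Forward pass of a one-hidden-layer perceptron of the kind described:
    visual-feature vector in, a bread/durum score in (0, 1) out.
    Weights here are random placeholders, not the trained classifier."""
    h = np.tanh(x @ w1 + b1)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # sigmoid output

rng = np.random.default_rng(1)
x = rng.normal(size=7)        # the 7 selected visual features (illustrative)
w1, b1 = rng.normal(size=(7, 4)), np.zeros(4)
w2, b2 = rng.normal(size=4), 0.0
score = mlp_forward(x, w1, b1, w2, b2)  # threshold at 0.5 to pick a class
```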

  2. Experimental Determination and Thermodynamic Modeling of Electrical Conductivity of SRS Waste Tank Supernate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, J.; Reboul, S.

    2015-06-01

    SRS High Level Waste Tank Farm personnel rely on conductivity probes for detection of incipient overflow conditions in waste tanks. Minimal information is available concerning the sensitivity that must be achieved to assure liquid detection. Overly sensitive electronics result in numerous nuisance alarms for these safety-related instruments. In order to determine the minimum sensitivity required of the probe, Tank Farm Engineering personnel need adequate conductivity data to improve the existing designs. Few or no measurements of liquid waste conductivity exist; however, the liquid phase of the waste consists of inorganic electrolytes for which the conductivity may be calculated. Savannah River Remediation (SRR) Tank Farm Facility Engineering requested SRNL to determine the conductivity of the supernate resident in SRS waste Tank 40 both experimentally and computationally. In addition, SRNL was requested to develop a correlation, if possible, that would be generally applicable to liquid waste resident in SRS waste tanks. A waste sample from Tank 40 was analyzed for composition and electrical conductivity as shown in Table 4-6, Table 4-7, and Table 4-9. The conductivity of the undiluted Tank 40 sample was 0.087 S/cm. The accuracy of OLI Analyzer™ was determined using available literature data. Overall, 95% of the computed estimates of electrical conductivity are within ±15% of literature values for component concentrations from 0 to 15 M and temperatures from 0 to 125 °C. Though the computational results are generally in good agreement with the measured data, a small portion of the literature data deviates by as much as ±76%. A simplified model was created that can readily be used to estimate the electrical conductivity of waste solutions in computer spreadsheets. Estimates from this simplified approach deviate by up to 140% from measured values; generally, the model can be applied to estimate the conductivity within a factor of two. The comparison of the simplified model to pure component literature data suggests that the simplified model will tend to underestimate the electrical conductivity. Comparison of the computed Tank 40 conductivity with the measured conductivity shows good agreement, within the range of deviation identified from the pure component literature data.
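    A spreadsheet-ready simplified conductivity model of the kind described typically sums ionic contributions; a sketch under the (strong) assumption of ideal dilute-solution behaviour, with illustrative rather than report-specific coefficients:

```python
def solution_conductivity(concentrations, molar_conductivities):
    """Crude electrolyte conductivity estimate: sum over components of
    concentration (mol/L) times molar conductivity (S·cm²/mol), divided by
    1000 cm³/L to give S/cm. An idealized sketch of the kind of
    spreadsheet-ready simplified model described; like it, this ignores
    ion interactions and tends to underestimate at high ionic strength."""
    return sum(c * lam for c, lam in
               zip(concentrations, molar_conductivities)) / 1000.0

# Illustrative: 0.1 M NaOH with a limiting molar conductivity of ~248 S·cm²/mol
kappa = solution_conductivity([0.1], [248.0])  # S/cm
```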

  3. A Simplified Guidance for Target Missiles Used in Ballistic Missile Defence Evaluation

    NASA Astrophysics Data System (ADS)

    Prabhakar, N.; Kumar, I. D.; Tata, S. K.; Vaithiyanathan, V.

    2013-01-01

    A simplified guidance scheme for the target missiles used in Ballistic Missile Defence is presented in this paper. The proposed method has two major components, a Ground Guidance Computation (GGC) and an In-Flight Guidance Computation. The GGC, which runs on the ground, uses a missile model to generate an attitude history in the pitch plane and computes the launch azimuth of the missile to compensate for the effect of Earth rotation. The vehicle follows the pre-launch computed attitude (theta) history in the pitch plane and also applies a course correction in the azimuth plane based on its deviation from the pre-launch computed azimuth plane. This scheme requires few computations and counters in-flight disturbances such as wind and gusts quite efficiently. The simulation results show that the proposed method provides satisfactory performance and robustness.

  4. Simplified Modeling of Oxidation of Hydrocarbons

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Harstad, Kenneth

    2008-01-01

    A method of simplified computational modeling of the oxidation of hydrocarbons is undergoing development. This is one of several developments needed to enable accurate computational simulation of turbulent, chemically reacting flows. At present, accurate computational simulation of such flows is difficult or impossible in most cases because (1) the numbers of grid points needed for adequate spatial resolution of turbulent flows in realistically complex geometries are beyond the capabilities of typical supercomputers now in use and (2) the combustion of typical hydrocarbons proceeds through decomposition into hundreds of molecular species interacting through thousands of reactions. Hence, the combination of detailed reaction-rate models with the fundamental flow equations yields flow models that are computationally prohibitive, and a reduction of at least an order of magnitude in the dimension of the reaction kinetics is one of the prerequisites for feasibility of computational simulation of turbulent, chemically reacting flows. In the present method of simplified modeling, all molecular species involved in the oxidation of hydrocarbons are classified as either light or heavy; heavy molecules are those having 3 or more carbon atoms. The light molecules are not subject to meaningful decomposition, and the heavy molecules are considered to decompose into only 13 specified constituent radicals, a few of which are listed in the table. One constructs a reduced-order model, suitable for use in estimating the release of heat and the evolution of temperature in combustion, from a base comprising the 13 constituent radicals plus a total of 26 other species that include the light molecules and related light free radicals. Then, rather than following all possible species through their reaction coordinates, one follows only the reduced set of reaction coordinates of the base.
    The behavior of the base was examined in test computational simulations of the combustion of heptane in a stirred reactor at various initial pressures ranging from 0.1 to 6 MPa. Most of the simulations were performed for stoichiometric mixtures; some were performed for fuel/oxygen mole ratios of 1/2 and 2.

  5. A Simplified Biosphere Model for Global Climate Studies.

    NASA Astrophysics Data System (ADS)

    Xue, Y.; Sellers, P. J.; Kinter, J. L.; Shukla, J.

    1991-03-01

    The Simple Biosphere Model (SiB) as described in Sellers et al. is a biophysically based model of land surface-atmosphere interaction. For some general circulation model (GCM) climate studies, further simplifications are desirable to obtain greater computational efficiency and, more importantly, to consolidate the parametric representation. Three major reductions in the complexity of SiB have been achieved in the present study. First, the diurnal variation of surface albedo is computed in SiB by means of a comprehensive yet complex calculation; since the diurnal cycle is quite regular for each vegetation type, this calculation can be simplified considerably. Second, the effect of root zone soil moisture on stomatal resistance is substantial, but the computation in SiB is complicated and expensive; we have developed approximations which simulate the effects of reduced soil moisture more simply, keeping the essence of the biophysical concepts used in SiB. Third, the surface stress and the fluxes of heat and moisture between the top of the vegetation canopy and an atmospheric reference level had been parameterized in an off-line version of SiB based upon the studies by Businger et al. and Paulson; we have developed a linear relationship between the Richardson number and the aerodynamic resistance. Finally, the second vegetation layer of the original model does not appear explicitly after simplification. Compared to the model of Sellers et al., we have reduced the number of input parameters from 44 to 21. A comparison of results using the reduced-parameter biosphere model with those from the original formulation in a GCM and a zero-dimensional model shows the simplified version to reproduce the original results quite closely. After simplification, the computational requirement of SiB was reduced by about 55%.

  6. Hybrid simplified spherical harmonics with diffusion equation for light propagation in tissues.

    PubMed

    Chen, Xueli; Sun, Fangfang; Yang, Defu; Ren, Shenghan; Zhang, Qian; Liang, Jimin

    2015-08-21

    Aiming at the limitations of the simplified spherical harmonics approximation (SPN) and the diffusion equation (DE) in describing light propagation in tissues, a hybrid simplified spherical harmonics with diffusion equation (HSDE) based diffuse light transport model is proposed. In the HSDE model, the living body is first segmented into several major organs, and the organs are then divided into high-scattering tissues and other tissues. DE and SPN are employed to describe the light propagation in these two kinds of tissues respectively, and are finally coupled using the established boundary coupling condition. The HSDE model makes full use of the advantages of SPN and DE while avoiding their disadvantages, so that it provides a good balance between accuracy and computation time. Using the finite element method, the HSDE is solved for the light flux density map on the body surface. The accuracy and efficiency of the HSDE are validated with simulations based on both regular geometries and a digital mouse model. The results reveal that comparable accuracy and much less computation time are achieved compared with the SPN model, as well as much better accuracy compared with the DE model.

  7. A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE.

    PubMed

    Al-Dweri, Feras M O; Lallena, Antonio M; Vilches, Manuel

    2004-06-21

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell Gamma Knife. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3 degrees with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x² + y²)^(1/2) and their polar angle θ, on one side, and between tan⁻¹(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry: in the region of maximal dose, the relative differences between the two calculations are within 3% for the 18 and 14 mm helmets, and within 10% for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor of 15) in the computational time.
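    The finding that only polar angles below 3 degrees matter suggests the mathematical-collimator idea: sample source directions directly within that narrow cone rather than tracking the full channel. A hedged sketch (uniform-in-solid-angle sampling; PENELOPE's actual source and geometry handling are far more detailed):

```python
import math, random

def sample_source_direction(max_polar_deg=3.0):
    """Sample an emission direction uniformly in solid angle within the
    narrow cone the simulations found relevant (polar angle < 3 degrees
    from the beam axis). A sketch of the 'mathematical collimator' idea,
    not PENELOPE itself."""
    theta_max = math.radians(max_polar_deg)
    # Uniform over solid angle: cos(theta) uniform on [cos(theta_max), 1]
    cos_theta = 1.0 - random.random() * (1.0 - math.cos(theta_max))
    theta = math.acos(min(1.0, cos_theta))
    phi = 2.0 * math.pi * random.random()
    return theta, phi
```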

  8. Simplified aerosol modeling for variational data assimilation

    NASA Astrophysics Data System (ADS)

    Huneeus, N.; Boucher, O.; Chevallier, F.

    2009-11-01

    We have developed a simplified aerosol model, together with its tangent linear and adjoint versions, with the ultimate aim of optimizing global aerosol and aerosol precursor emissions using variational data assimilation. The model was derived from the general circulation model LMDz; it groups the 24 aerosol species simulated in LMDz into 4 species, namely gaseous precursors, fine mode aerosols, coarse mode desert dust and coarse mode sea salt. The emissions have been kept as in the original model. Modifications, however, were introduced in the computation of aerosol optical depth and in the processes of sedimentation, dry and wet deposition and sulphur chemistry, to ensure consistency with the new set of species and their composition. The simplified model successfully reproduces the main features of the aerosol distribution in LMDz. The largest differences in aerosol load are observed for fine mode aerosols and gaseous precursors. Differences between the original and simplified models are mainly associated with the new deposition and sedimentation velocities, consistent with the definition of species in the simplified model, and with the simplification of the sulphur chemistry. Furthermore, the simulated aerosol optical depth remains within the variability of monthly AERONET observations for all aerosol types and all sites throughout most of the year; the largest differences are observed over sites with a strong desert dust influence. In terms of daily aerosol variability, the model is less able to reproduce the observed variability from the AERONET data, with larger discrepancies at stations affected by industrial aerosols. The simplified model, however, closely follows the daily simulation from LMDz. Sensitivity analyses with the tangent linear version show that the simplified sulphur chemistry is the dominant process responsible for the strong non-linearity of the model.
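    Tangent linear versions like the one described are conventionally verified with a ratio test against finite differences; a toy scalar stand-in (not the aerosol model itself):

```python
def model(x):
    """Toy nonlinear stand-in for the simplified aerosol model."""
    return 0.5 * x * x

def tangent_linear(x, dx):
    """Hand-derived tangent linear of `model`: derivative times perturbation."""
    return x * dx

# Standard tangent-linear check: the ratio of the finite difference to the
# tangent-linear prediction should approach 1 as the perturbation shrinks.
x, dx = 2.0, 1e-6
ratio = (model(x + dx) - model(x)) / tangent_linear(x, dx)
```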

  9. Mathematical Modeling of Diverse Phenomena

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1979-01-01

    Tensor calculus is applied to the formulation of mathematical models of diverse phenomena. Aeronautics, fluid dynamics, and cosmology are among the areas of application. The feasibility of combining tensor methods and computer capability to formulate problems is demonstrated. The techniques described are an attempt to simplify the formulation of mathematical models by reducing the modeling process to a series of routine operations, which can be performed either manually or by computer.

  10. A simplified computational memory model from information processing

    PubMed Central

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, a meta-memory is defined to represent a neuron or brain cortex on the basis of biology and graph theory; an intra-modular network is then developed with the modeling algorithm by mapping nodes and edges, and the bi-modular network is delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with the memory phenomena from an information-processing view. PMID:27876847

  11. A Simple Explanation of Complexation

    ERIC Educational Resources Information Center

    Elliott, J. Richard

    2010-01-01

    The topics of solution thermodynamics, activity coefficients, and complex formation are introduced through computational exercises and sample applications. The presentation is designed to be accessible to freshmen in a chemical engineering computations course. The MOSCED model is simplified to explain complex formation in terms of hydrogen…

  12. Finite element strategies to satisfy clinical and engineering requirements in the field of percutaneous valves.

    PubMed

    Capelli, Claudio; Biglino, Giovanni; Petrini, Lorenza; Migliavacca, Francesco; Cosentino, Daria; Bonhoeffer, Philipp; Taylor, Andrew M; Schievano, Silvia

    2012-12-01

    Finite element (FE) modelling can be a very resourceful tool in the field of cardiovascular devices. To ensure result reliability, FE models must be validated experimentally against physical data. Their clinical application (e.g., patients' suitability, morphological evaluation) also requires fast simulation process and access to results, while engineering applications need highly accurate results. This study shows how FE models with different mesh discretisations can suit clinical and engineering requirements for studying a novel device designed for percutaneous valve implantation. Following sensitivity analysis and experimental characterisation of the materials, the stent-graft was first studied in a simplified geometry (i.e., compliant cylinder) and validated against in vitro data, and then in a patient-specific implantation site (i.e., distensible right ventricular outflow tract). Different meshing strategies using solid, beam and shell elements were tested. Results showed excellent agreement between computational and experimental data in the simplified implantation site. Beam elements were found to be convenient for clinical applications, providing reliable results in less than one hour in a patient-specific anatomical model. Solid elements remain the FE choice for engineering applications, albeit more computationally expensive (>100 times). This work also showed how information on device mechanical behaviour differs when acquired in a simplified model as opposed to a patient-specific model.

  13. Simplified predictive models for CO2 sequestration performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Srikanta; Ganesh, Priya; Schuetter, Jared

    CO2 sequestration in deep saline formations is increasingly being considered as a viable strategy for the mitigation of greenhouse gas emissions from anthropogenic sources. In this context, detailed numerical simulation based models are routinely used to understand key processes and parameters affecting pressure propagation and buoyant plume migration following CO2 injection into the subsurface. As these models are data and computation intensive, the development of computationally-efficient alternatives to conventional numerical simulators has become an active area of research. Such simplified models can be valuable assets during preliminary CO2 injection project screening, serve as a key element of probabilistic system assessment modeling tools, and assist regulators in quickly evaluating geological storage projects. We present three strategies for the development and validation of simplified modeling approaches for CO2 sequestration in deep saline formations: (1) simplified physics-based modeling, (2) statistical-learning-based modeling, and (3) reduced-order-method-based modeling. In the first category, a set of full-physics compositional simulations is used to develop correlations for dimensionless injectivity as a function of the slope of the CO2 fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Furthermore, the dimensionless average pressure buildup after the onset of boundary effects can be correlated to dimensionless time, CO2 plume footprint, and storativity contrast between the reservoir and caprock.
In the second category, statistical “proxy models” are developed using the simulation domain described previously with two approaches: (a) classical Box-Behnken experimental design with a quadratic response surface, and (b) maximin Latin Hypercube sampling (LHS) based design with a multidimensional kriging metamodel fit. For roughly the same number of simulations, the LHS-based metamodel yields a more robust predictive model, as verified by a k-fold cross-validation approach (with data split into training and test sets) as well as by validation with an independent dataset. In the third category, a reduced-order modeling procedure is utilized that combines proper orthogonal decomposition (POD) for reducing problem dimensionality with trajectory-piecewise linearization (TPWL) in order to represent the system response at new control settings from a limited number of training runs. Significant savings in computational time are observed with reasonable accuracy from the POD-TPWL reduced-order model for both vertical and horizontal well problems, which could be important in the context of history matching, uncertainty quantification and optimization problems. The simplified physics and statistical learning based models are also validated using an uncertainty analysis framework. Reference cumulative distribution functions (CDFs) of key model outcomes (i.e., plume radius and reservoir pressure buildup) generated using a 97-run full-physics simulation are successfully validated against the CDFs from 10,000-sample probabilistic simulations using the simplified models. The main contribution of this research project is the development and validation of a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formations.
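
    The POD step mentioned above extracts a low-rank basis from full-physics "snapshots" via the singular value decomposition. A minimal sketch with synthetic rank-3 snapshot data (not the CO2 simulation outputs):

```python
import numpy as np

# Proper orthogonal decomposition (POD): build a snapshot matrix from
# training runs, take its SVD, and keep the modes capturing most energy.

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
# Synthetic snapshots built from 3 underlying spatial modes (rank 3).
modes = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x), np.sin(3 * np.pi * x)])
snapshots = rng.normal(size=(30, 3)) @ modes      # 30 training snapshots
S = snapshots.T                                   # columns = snapshots

U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1       # modes for 99.9% energy
Phi = U[:, :r]                                    # reduced POD basis

# Project a state onto r coefficients and reconstruct it.
new = snapshots[0]
recon = Phi @ (Phi.T @ new)
rel_err = np.linalg.norm(new - recon) / np.linalg.norm(new)
print(r, rel_err)                                  # few modes, tiny error
```

    In a POD-TPWL workflow the reduced coefficients, not the full grid state, are then propagated with linearizations around stored training trajectories.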

  14. Assessment of Geometry and In-Flow Effects on Contra-Rotating Open Rotor Broadband Noise Predictions

    NASA Technical Reports Server (NTRS)

    Zawodny, Nikolas S.; Nark, Douglas M.; Boyd, D. Douglas, Jr.

    2015-01-01

    Application of previously formulated semi-analytical models for the prediction of broadband noise due to turbulent rotor wake interactions and rotor blade trailing edges is performed on the historical baseline F31/A31 contra-rotating open rotor configuration. Simplified two-dimensional blade element analysis is performed on cambered NACA 4-digit airfoil profiles, which are meant to serve as substitutes for the actual rotor blade sectional geometries. Rotor in-flow effects such as induced axial and tangential velocities are incorporated into the noise prediction models based on supporting computational fluid dynamics (CFD) results and simplified in-flow velocity models. Emphasis is placed on the development of simplified rotor in-flow models for the purpose of performing accurate noise predictions independent of CFD information. The broadband predictions are found to compare favorably with experimental acoustic results.
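
    The blade-element analysis above substitutes cambered NACA 4-digit profiles for the actual rotor sections. The 4-digit mean camber line has a simple closed form; the sketch below uses NACA 4412 values (m = 0.04, p = 0.4) purely for illustration, not the F31/A31 geometry:

```python
# NACA 4-digit mean camber line y_c(x), chordwise x in [0, 1]:
#   y_c = (m/p^2) (2px - x^2)              for x <  p
#   y_c = (m/(1-p)^2) ((1-2p) + 2px - x^2) for x >= p
# where m = max camber and p = its chordwise location.

def naca4_camber(x, m=0.04, p=0.4):
    """Camber ordinate at station x for a NACA 4-digit section."""
    if x < p:
        return (m / p**2) * (2 * p * x - x**2)
    return (m / (1 - p)**2) * ((1 - 2 * p) + 2 * p * x - x**2)

# Maximum camber m occurs exactly at x = p:
print(naca4_camber(0.4))   # → 0.04
print(naca4_camber(0.1))   # → 0.0175
```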

  15. Reduced and simplified chemical kinetics for air dissociation using Computational Singular Perturbation

    NASA Technical Reports Server (NTRS)

    Goussis, D. A.; Lam, S. H.; Gnoffo, P. A.

    1990-01-01

    The Computational Singular Perturbation (CSP) method is employed (1) in the modeling of a homogeneous isothermal reacting system and (2) in the numerical simulation of the chemical reactions in a hypersonic flowfield. Reduced and simplified mechanisms are constructed. The solutions obtained on the basis of these approximate mechanisms are shown to be in very good agreement with the exact solution based on the full mechanism. Physically meaningful approximations are derived. It is demonstrated that the deduction of these approximations from CSP is independent of the complexity of the problem and requires no intuition or experience in chemical kinetics.
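
    The core idea behind CSP is eigen-analysis of the chemical Jacobian: modes with very short time scales are quickly exhausted and can be replaced by algebraic relations, leaving a reduced mechanism on the slow manifold. A toy linear illustration (a made-up two-species system, not the air-dissociation mechanism):

```python
import numpy as np

# Toy linear kinetics dy/dt = J y with one fast and one slow mode.
# Eigenvalues of this J are exactly -1 and -1001, i.e. time scales
# three decades apart -- the separation CSP exploits.
J = np.array([[-1000.0,  999.0],
              [    1.0,   -2.0]])

lam, V = np.linalg.eig(J)
order = np.argsort(np.abs(lam))        # slow mode first
tau = 1.0 / np.abs(lam[order])         # modal time scales
ratio = tau[0] / tau[1]                # stiffness / separation ratio
print(tau, ratio)

# CSP-style reduction: for t >> tau_fast the fast-mode amplitude is
# negligible and the state evolves along the slow eigenvector alone,
# giving a one-dimensional simplified mechanism.
```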

  16. Patient-Specific Simulation of Cardiac Blood Flow From High-Resolution Computed Tomography.

    PubMed

    Lantz, Jonas; Henriksson, Lilian; Persson, Anders; Karlsson, Matts; Ebbers, Tino

    2016-12-01

    Cardiac hemodynamics can be computed from medical imaging data, and results could potentially aid in cardiac diagnosis and treatment optimization. However, simulations are often based on simplified geometries, ignoring features such as papillary muscles and trabeculae due to their complex shape, limitations in image acquisitions, and challenges in computational modeling. This severely hampers the use of computational fluid dynamics in clinical practice. The overall aim of this study was to develop a novel numerical framework that incorporated these geometrical features. The model included the left atrium, ventricle, ascending aorta, and heart valves. The framework used image registration to obtain patient-specific wall motion, automatic remeshing to handle topological changes due to the complex trabeculae motion, and a fast interpolation routine to obtain intermediate meshes during the simulations. Velocity fields and residence time were evaluated, and they indicated that papillary muscles and trabeculae strongly interacted with the blood, which could not be observed in a simplified model. The framework resulted in a model with outstanding geometrical detail, demonstrating the feasibility as well as the importance of a framework that is capable of simulating blood flow in physiologically realistic hearts.

  17. Simplified and refined structural modeling for economical flutter analysis and design

    NASA Technical Reports Server (NTRS)

    Ricketts, R. H.; Sobieszczanski, J.

    1977-01-01

    A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analysis commonly encountered in structural resizing to meet flutter requirements. One model, termed a refined model (RM), represents a high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called a simplified model (SM), has a relatively much smaller number of elements and degrees-of-freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations to make the stiffness and mass of the SM elements equivalent to the corresponding substructures of RM. The structural data are automatically transferred between the two models. The bulk of analysis is performed on the SM with periodical verifications carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and acceleration of the job turn-around.
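
    One standard way to make a simplified model's stiffness equivalent to a refined substructure, in the spirit of the RM-to-SM derivation above, is static (Guyan) condensation: internal degrees of freedom are eliminated and their stiffness is folded into the retained ones. A small sketch on a hypothetical three-spring chain (not the arrow-wing model):

```python
import numpy as np

# Static condensation: partition DOFs into retained (r) and internal (i):
#   K_red = K_rr - K_ri @ inv(K_ii) @ K_ir

k = 1000.0   # spring stiffness
# Refined model: fixed-free chain of three equal springs, 3 DOFs.
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

r = [2]          # keep only the tip DOF in the simplified model
i = [0, 1]       # condense the interior DOFs
K_rr = K[np.ix_(r, r)]
K_ri = K[np.ix_(r, i)]
K_ir = K[np.ix_(i, r)]
K_ii = K[np.ix_(i, i)]
K_red = K_rr - K_ri @ np.linalg.inv(K_ii) @ K_ir

# Three equal springs in series have exact static tip stiffness k/3.
print(K_red)   # ~[[333.33]]
```

    The condensed matrix reproduces the refined model's static behavior exactly at the retained DOFs, which is why the bulk of repetitive analysis can run on the small model with only periodic verification on the refined one.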

  18. Simplified path integral for supersymmetric quantum mechanics and type-A trace anomalies

    NASA Astrophysics Data System (ADS)

    Bastianelli, Fiorenzo; Corradini, Olindo; Iacconi, Laura

    2018-05-01

    Particles in a curved space are classically described by a nonlinear sigma model action that can be quantized through path integrals. The latter require a precise regularization to deal with the derivative interactions arising from the nonlinear kinetic term. Recently, for maximally symmetric spaces, simplified path integrals have been developed: they allow one to trade the nonlinear kinetic term for a purely quadratic kinetic term (linear sigma model). This happens at the expense of introducing a suitable effective scalar potential, which contains the information on the curvature of the space. The simplified path integral provides an appreciable gain in the efficiency of perturbative calculations. Here we extend the construction to models with N = 1 supersymmetry on the worldline, which are applicable to the first quantized description of a Dirac fermion. As an application we use the simplified worldline path integral to compute the type-A trace anomaly of a Dirac fermion in d dimensions up to d = 16.

  19. Personalized mitral valve closure computation and uncertainty analysis from 3D echocardiography.

    PubMed

    Grbic, Sasa; Easley, Thomas F; Mansi, Tommaso; Bloodworth, Charles H; Pierce, Eric L; Voigt, Ingmar; Neumann, Dominik; Krebs, Julian; Yuh, David D; Jensen, Morten O; Comaniciu, Dorin; Yoganathan, Ajit P

    2017-01-01

    Intervention planning is essential for successful Mitral Valve (MV) repair procedures. Finite-element models (FEM) of the MV could be used to achieve this goal, but the translation to the clinical domain is challenging. Many input parameters for the FEM models, such as tissue properties, are not known. In addition, only simplified MV geometry models can be extracted from non-invasive modalities such as echocardiography imaging, lacking major anatomical details such as the complex chordae topology. A traditional approach for FEM computation is to use a simplified model (also known as parachute model) of the chordae topology, which connects the papillary muscle tips to the free-edges and select basal points. Building on the existing parachute model a new and comprehensive MV model was developed that utilizes a novel chordae representation capable of approximating regional connectivity. In addition, a fully automated personalization approach was developed for the chordae rest length, removing the need for tedious manual parameter selection. Based on the MV model extracted during mid-diastole (open MV) the MV geometric configuration at peak systole (closed MV) was computed according to the FEM model. In this work the focus was placed on validating MV closure computation. The method is evaluated on ten in vitro ovine cases, where in addition to echocardiography imaging, high-resolution μCT imaging is available for accurate validation. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. A simplified model of the source channel of the Leksell GammaKnife® tested with PENELOPE

    NASA Astrophysics Data System (ADS)

    Al-Dweri, Feras M. O.; Lallena, Antonio M.; Vilches, Manuel

    2004-06-01

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife®. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3° with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x^2 + y^2)^(1/2) and their polar angle θ, on one side, and between tan^-1(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3%, for the 18 and 14 mm helmets, and 10%, for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor 15) in the computational time.
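
    The efficiency gain of a "mathematical collimator" comes from sampling only the narrow cone of directions that matters, instead of simulating the full channel. A hedged sketch of that idea (my own cone-sampling routine, not the paper's PENELOPE setup):

```python
import math, random

random.seed(1)
theta_max = math.radians(3.0)   # only polar angles < 3 deg are relevant

def sample_collimated():
    """Sample a direction uniformly over the solid angle of the 3-deg cone."""
    # cos(theta) uniform in [cos(theta_max), 1] is uniform on the spherical cap
    cos_t = 1.0 - random.random() * (1.0 - math.cos(theta_max))
    phi = 2.0 * math.pi * random.random()
    sin_t = math.sqrt(1.0 - cos_t**2)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

# Fraction of an isotropic source inside the cone: (1 - cos(theta_max)) / 2.
accept = 0.5 * (1.0 - math.cos(theta_max))
print(f"accepted solid-angle fraction: {accept:.2e}")   # ~6.9e-04

dirs = [sample_collimated() for _ in range(1000)]
# Every sampled direction lies within the cone, so no source photons
# are wasted on directions the helmet would absorb anyway.
```

    Sampling directly inside the cone avoids generating the roughly 99.93% of isotropic emissions that never reach the collimators, which is consistent with the order-of-magnitude speedup reported above.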

  1. A comparison study of one-and two-dimensional hydraulic models for river environments.

    DOT National Transportation Integrated Search

    2017-05-01

    Computer models are used every day to analyze river systems for a wide variety of reasons vital to the public interest. For decades most hydraulic engineers have been limited to models that simplify the fluid mechanics to the unidirectional case....

  2. MODELS-3 INSTALLATION PROCEDURES FOR A PERSONAL COMPUTER WITH A NT OPERATING SYSTEM (MODELS-3 VERSION 4.1)

    EPA Science Inventory

    Models-3 is a flexible system designed to simplify the development and use of air quality models and other environmental decision support tools. It is designed for applications ranging from regulatory and policy analysis to understanding the complex interactions of atmospheric...

  3. A 4DCT imaging-based breathing lung model with relative hysteresis

    PubMed Central

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-01-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811

  4. A 4DCT imaging-based breathing lung model with relative hysteresis

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-12-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.

  5. Simplified model of mean double step (MDS) in human body movement

    NASA Astrophysics Data System (ADS)

    Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; Mugarra González, C. Fernando

    In this paper we present a simplified and useful model of human body movement based on the full gait cycle description, called the Mean Double Step (MDS). It enables the parameterization and simplification of human movement. Furthermore, it allows a description of the gait cycle by providing standardized estimators to transform the gait cycle into a periodic movement process. Moreover, the methods of simplifying and compressing the MDS model are demonstrated. The simplification is achieved by reducing the number of bars of the spectrum and/or by reducing the number of samples describing the MDS, lowering both the computational burden and the data storage requirements. Our MDS model, which is applicable to the gait cycle method for examining patients, is non-invasive and provides the additional advantage of featuring a functional characterization of the relative or absolute movement of any part of the body.
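
    Because the MDS turns the gait cycle into a periodic process, "reducing the number of bars of the spectrum" amounts to truncating a Fourier series. A sketch on a synthetic periodic step signal (hypothetical data, not the paper's measurements):

```python
import numpy as np

# Compress a periodic gait-like signal by keeping only its first few
# spectral bars (harmonics) and reconstructing from them.

N = 256
t = np.arange(N) / N
rng = np.random.default_rng(3)
# Fundamental plus two harmonics plus a little measurement noise.
sig = (np.sin(2 * np.pi * t) + 0.4 * np.sin(4 * np.pi * t + 0.5)
       + 0.2 * np.sin(6 * np.pi * t + 1.0) + 0.01 * rng.normal(size=N))

spec = np.fft.rfft(sig)          # N//2 + 1 = 129 spectral bars
keep = 5                         # bars retained after simplification
compressed = np.zeros_like(spec)
compressed[:keep] = spec[:keep]
recon = np.fft.irfft(compressed, n=N)

err = np.linalg.norm(sig - recon) / np.linalg.norm(sig)
print(f"relative error with {keep} of {len(spec)} bars: {err:.3f}")
```

    Here 5 of 129 bars reproduce the signal to within the noise level, which is the kind of storage/computation saving the MDS compression targets.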

  6. Computing Dynamics Of A Robot Of 6+n Degrees Of Freedom

    NASA Technical Reports Server (NTRS)

    Quiocho, Leslie J.; Bailey, Robert W.

    1995-01-01

    Improved formulation speeds and simplifies computation of dynamics of robot arm of n rotational degrees of freedom mounted on platform having three translational and three rotational degrees of freedom. Intended for use in dynamical modeling of robotic manipulators attached to such moving bases as spacecraft, aircraft, vessel, or land vehicle. Such modeling important part of simulation and control of robotic motions.

  7. Prediction of pressure drop in fluid tuned mounts using analytical and computational techniques

    NASA Technical Reports Server (NTRS)

    Lasher, William C.; Khalilollahi, Amir; Mischler, John; Uhric, Tom

    1993-01-01

    A simplified model for predicting pressure drop in fluid tuned isolator mounts was developed. The model is based on an exact solution to the Navier-Stokes equations and was made more general through the use of empirical coefficients. The values of these coefficients were determined by numerical simulation of the flow using the commercial computational fluid dynamics (CFD) package FIDAP.

  8. Enhancement of the Computer Lumber Grading Program to Support Polygonal Defects

    Treesearch

    Powsiri Klinkhachorn; R. Kathari; D. Yost; Philip A. Araman

    1993-01-01

    Computer grading of hardwood lumber promises to avoid regrading of the same lumber because of disagreements between the buyer and the seller. However, the first generation of computer programs for hardwood lumber grading simplify the process by modeling defects on the board as rectangles. This speeds up the grading process but can inadvertently put a board into a lower...

  9. Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser

    NASA Technical Reports Server (NTRS)

    Monson, D. J.

    1977-01-01

    The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.
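
    The two-mode simplification described above assigns each lumped vibrational group its own relaxation equation of Landau-Teller type, de/dt = (e_eq - e)/τ. A minimal sketch with made-up energies and relaxation times (not the paper's CO2 kinetics data):

```python
import math

def relax(e0, e_eq, tau, dt, nsteps):
    """Integrate one lumped vibrational mode with the exact exponential update
    for de/dt = (e_eq - e) / tau over nsteps steps of size dt."""
    e = e0
    for _ in range(nsteps):
        e = e_eq + (e - e_eq) * math.exp(-dt / tau)
    return e

# Two lumped modes with different relaxation times, integrated to t = 1e-4 s:
e1 = relax(e0=1.0, e_eq=0.2, tau=1e-5, dt=1e-6, nsteps=100)   # fast mode
e2 = relax(e0=1.0, e_eq=0.5, tau=1e-4, dt=1e-6, nsteps=100)   # slow mode
print(e1, e2)   # fast mode has equilibrated; slow mode is only ~1/e relaxed
```

    In a gasdynamic laser nozzle it is exactly this lag of the slowly relaxing mode behind equilibrium that produces the population inversion, so errors in the lumped relaxation rates feed directly into the computed gain.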

  10. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth

    PubMed Central

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

    State observer is an essential component in computerized control loops for greenhouse-crop systems. However, the current accomplishments of observer modeling for greenhouse-crop systems mainly focus on mass/energy balance, ignoring physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically made based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an example, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them to build growth state observers together with 8 environmental parameters. Support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracies of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA. 
The current study can enable state observers to reflect crop requirements and make them feasible for applications with simplified shapes, which is significant for developing intelligent greenhouse control systems for modern greenhouse production. PMID:28848565
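
    The CCA step used above finds the directions in the environmental and physiological variable sets that are maximally correlated; the weights on the leading direction identify the dominant parameters. A minimal numpy implementation on synthetic data (the 4 "environmental" and 3 "physiological" variables here are hypothetical, not the study's measurements):

```python
import numpy as np

# Canonical correlation analysis via SVD of the whitened cross-covariance:
# singular values of Sxx^(-1/2) Sxy Syy^(-1/2) are the canonical correlations.

rng = np.random.default_rng(5)
n = 200
X = rng.normal(size=(n, 4))                      # "environmental" inputs
Y = rng.normal(size=(n, 3))                      # "physiological" responses
Y[:, 0] = X[:, 0] + 0.2 * rng.normal(size=n)     # one strongly shared factor

Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)

def inv_sqrt(S):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

M = inv_sqrt(Xc.T @ Xc) @ (Xc.T @ Yc) @ inv_sqrt(Yc.T @ Yc)
U, rho, Vt = np.linalg.svd(M)
a = inv_sqrt(Xc.T @ Xc) @ U[:, 0]   # canonical weights on the X variables

print(rho[0])                        # leading canonical correlation (high)
print(np.argmax(np.abs(a)))          # dominant input parameter → column 0
```

    Dropping the inputs with small canonical weights yields a simplified observer with fewer parameters, mirroring the simplified-versus-unsimplified comparison in the study.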

  11. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth.

    PubMed

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

    State observer is an essential component in computerized control loops for greenhouse-crop systems. However, the current accomplishments of observer modeling for greenhouse-crop systems mainly focus on mass/energy balance, ignoring physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically made based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an example, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them to build growth state observers together with 8 environmental parameters. Support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracies of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA. 
The current study can enable state observers to reflect crop requirements and make them feasible for applications with simplified shapes, which is significant for developing intelligent greenhouse control systems for modern greenhouse production.

  12. Quantum annealing versus classical machine learning applied to a simplified computational biology problem

    PubMed Central

    Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.

    2018-01-01

    Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to predict binding specificity. Using simplified datasets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified datasets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems. PMID:29652405
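
    Annealing approaches, quantum or simulated, cast the learning task as minimizing an Ising/QUBO energy over binary variables. A sketch of the classical simulated-annealing baseline on a made-up 3-variable QUBO (not the transcription-factor dataset):

```python
import math, random

# Simulated annealing on a toy QUBO: minimize E(s) = s^T Q s, s in {0,1}^n.
random.seed(7)
Q = [[-2.0,  1.2,  0.0],
     [ 1.2, -3.0,  1.5],
     [ 0.0,  1.5, -0.8]]
n = len(Q)

def energy(s):
    return sum(Q[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

s = [random.randint(0, 1) for _ in range(n)]
best, best_E = s[:], energy(s)
T = 2.0
for _ in range(2000):
    i = random.randrange(n)
    cand = s[:]
    cand[i] ^= 1                                   # flip one bit
    dE = energy(cand) - energy(s)
    if dE < 0 or random.random() < math.exp(-dE / T):
        s = cand                                   # Metropolis acceptance
        if energy(s) < best_E:
            best, best_E = s[:], energy(s)
    T *= 0.995                                     # geometric cooling

print(best, best_E)   # ground state of this Q is [0, 1, 0] at energy -3
```

    In the study, the QUBO couplings encode the training objective over DNA-sequence features; here they are arbitrary, chosen only so the 8-state landscape has a unique minimum the annealer should find.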

  13. Patient-Specific Computational Modeling of Human Phonation

    NASA Astrophysics Data System (ADS)

    Xue, Qian; Zheng, Xudong; University of Maine Team

    2013-11-01

    Phonation is a common biological process resulting from the complex nonlinear coupling between glottal aerodynamics and vocal fold vibrations. In the past, simplified symmetric straight geometric models were commonly employed for experimental and computational studies. The shapes of the larynx lumen and vocal folds are, however, highly three-dimensional, and the complex realistic geometry produces profound impacts on both the glottal flow and the vocal fold vibrations. To elucidate the effect of geometric complexity on voice production and improve the fundamental understanding of human phonation, a full flow-structure interaction simulation is carried out on a patient-specific larynx model. To the best of our knowledge, this is the first patient-specific flow-structure interaction study of human phonation. The simulation results compare well with established human data. The effects of realistic geometry on glottal flow and vocal fold dynamics are investigated. It is found that both the glottal flow and the vocal fold dynamics differ substantially from those of the previous simplified models. This study also paves the way toward the development of computer models for voice disease diagnosis and surgical planning. The project described was supported by Grant Number R01DC007125 from the National Institute on Deafness and Other Communication Disorders (NIDCD).

  14. Multiscale methods for gore curvature calculations from FSI modeling of spacecraft parachutes

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Kolesar, Ryan; Boswell, Cody; Kanai, Taro; Montel, Kenneth

    2014-12-01

    There are now some sophisticated and powerful methods for computer modeling of parachutes. These methods are capable of addressing some of the most formidable computational challenges encountered in parachute modeling, including fluid-structure interaction (FSI) between the parachute and air flow, design complexities such as those seen in spacecraft parachutes, and operational complexities such as use in clusters and disreefing. One should be able to extract from a reliable full-scale parachute modeling any data or analysis needed. In some cases, however, the parachute engineers may want to perform quickly an extended or repetitive analysis with methods based on simplified models. Some of the data needed by a simplified model can very effectively be extracted from a full-scale computer modeling that serves as a pilot. A good example of such data is the circumferential curvature of a parachute gore, where a gore is the slice of the parachute canopy between two radial reinforcement cables running from the parachute vent to the skirt. We present the multiscale methods we devised for gore curvature calculation from FSI modeling of spacecraft parachutes. The methods include those based on the multiscale sequentially-coupled FSI technique and using NURBS meshes. We show how the methods work for the fully-open and two reefed stages of the Orion spacecraft main and drogue parachutes.
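
    A gore's circumferential curvature can be estimated from discrete surface points without any of the full FSI machinery: three neighboring points define a circumcircle whose inverse radius is the local curvature. A hedged sketch of that geometric kernel (illustrative only; the paper's multiscale/NURBS procedure is not reproduced here):

```python
import math

# Discrete curvature from three neighboring points via the circumradius:
#   R = a*b*c / (4 * area)  =>  kappa = 1/R = 4*area / (a*b*c)
# where a, b, c are the triangle side lengths.

def curvature(p1, p2, p3):
    """Curvature of the circle through three points (any dimension)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    s = 0.5 * (a + b + c)
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))  # Heron
    return 4.0 * area / (a * b * c)

# Sanity check: points sampled on a circle of radius 2 give curvature 1/2.
kappa = curvature((2.0, 0.0), (0.0, 2.0), (-2.0, 0.0))
print(kappa)   # ≈ 0.5
```

    Applied along a circumferential slice of the canopy extracted from the pilot FSI computation, estimates like this feed the simplified models the paper targets.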

  15. An Integrated Approach to Teaching Students the Use of Computers in Science.

    ERIC Educational Resources Information Center

    Hood, B. James

    1991-01-01

    Reported is an approach to teaching the use of Macintosh computers to sixth, seventh, and eighth grade students within the context of a simplified model of scientific research including proposal, data collection and analyses, and presentation of findings. Word processing, graphing, statistical, painting, and poster software were sequentially…

  16. Application of computer graphics in the design of custom orthopedic implants.

    PubMed

    Bechtold, J E

    1986-10-01

    The implementation of newly developed computer modelling techniques, computer graphics displays, and software has greatly aided the orthopedic design engineer and physician in creating a custom implant with good anatomic conformity in a short turnaround time. Further advances in computerized design and manufacturing will continue to simplify the development of custom prostheses and enlarge their niche in the joint replacement market.

  17. Computational Modeling | Photovoltaic Research | NREL

    Science.gov Websites

    performance of single- and multijunction cells and modules. We anticipate the upcoming completion of our next software package for a simplified electronic design of single- and multicrystalline silicon solar cells

  18. Computational study of single-expansion-ramp nozzles with external burning

    NASA Astrophysics Data System (ADS)

    Yungster, Shaye; Trefny, Charles J.

    1992-04-01

    A computational investigation of the effects of external burning on the performance of single expansion ramp nozzles (SERN) operating at transonic speeds is presented. The study focuses on the effects of external heat addition and introduces a simplified injection and mixing model based on a control volume analysis. This simplified model permits parametric and scaling studies that would have been impossible to conduct with a detailed CFD analysis. The CFD model is validated by comparing the computed pressure distribution and thrust forces, for several nozzle configurations, with experimental data. Specific impulse calculations are also presented which indicate that external burning performance can be superior to other methods of thrust augmentation at transonic speeds. The effects of injection fuel pressure and nozzle pressure ratio on the performance of SERN nozzles with external burning are described. The results show trends similar to those reported in the experimental study, and provide additional information that complements the experimental data, improving our understanding of external burning flowfields. A study of the effect of scale is also presented. The results indicate that combustion kinetics do not make the flowfield sensitive to scale.

  19. Computational study of single-expansion-ramp nozzles with external burning

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Trefny, Charles J.

    1992-01-01

    A computational investigation of the effects of external burning on the performance of single expansion ramp nozzles (SERN) operating at transonic speeds is presented. The study focuses on the effects of external heat addition and introduces a simplified injection and mixing model based on a control volume analysis. This simplified model permits parametric and scaling studies that would have been impossible to conduct with a detailed CFD analysis. The CFD model is validated by comparing the computed pressure distribution and thrust forces, for several nozzle configurations, with experimental data. Specific impulse calculations are also presented which indicate that external burning performance can be superior to other methods of thrust augmentation at transonic speeds. The effects of injection fuel pressure and nozzle pressure ratio on the performance of SERN nozzles with external burning are described. The results show trends similar to those reported in the experimental study, and provide additional information that complements the experimental data, improving our understanding of external burning flowfields. A study of the effect of scale is also presented. The results indicate that combustion kinetics do not make the flowfield sensitive to scale.

  20. A transfer function type of simplified electrochemical model with modified boundary conditions and Padé approximation for Li-ion battery: Part 2. Modeling and parameter estimation

    NASA Astrophysics Data System (ADS)

    Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi

    2017-06-01

    The electrochemistry-based battery model can provide physics-meaningful knowledge about the lithium-ion battery system, but at the cost of an extensive computation burden. To motivate the development of reduced-order battery models, three major contributions are made throughout this paper: (1) a transfer function type of simplified electrochemical model is proposed to address the current-voltage relationship with the Padé approximation method and modified boundary conditions for the electrolyte diffusion equations. The model performance has been verified under pulse charge/discharge and dynamic stress test (DST) profiles, with a standard deviation of less than 0.021 V and a runtime 50 times faster. (2) the parametric relationship between the equivalent circuit model and the simplified electrochemical model has been established, which will deepen the comprehension of the two models with more in-depth physical significance and provide new methods for electrochemical model parameter estimation. (3) four simplified electrochemical model parameters: equivalent resistance Req, effective diffusion coefficient in electrolyte phase Deeff, electrolyte phase volume fraction ε and open circuit voltage (OCV), have been identified by the recursive least squares (RLS) algorithm with the modified DST profiles under 45, 25 and 0 °C. The simulation results indicate that the proposed model coupled with the RLS algorithm can achieve high accuracy for electrochemical parameter identification in dynamic scenarios.
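    The recursive least squares step in contribution (3) can be sketched generically. The code below identifies the parameters θ of a linear regression y = φᵀθ online, which is the core of any RLS scheme; the actual battery regressors and the Padé-approximated transfer function are not reproduced here:

    ```python
    import numpy as np

    def rls_update(theta, P, phi, y, lam=1.0):
        """One recursive least squares step with forgetting factor lam."""
        phi = phi.reshape(-1, 1)
        K = P @ phi / (lam + phi.T @ P @ phi)      # gain vector
        theta = theta + (K * (y - phi.T @ theta)).ravel()
        P = (P - K @ phi.T @ P) / lam              # covariance update
        return theta, P

    rng = np.random.default_rng(0)
    true_theta = np.array([0.8, -0.3])             # parameters to recover
    theta = np.zeros(2)
    P = np.eye(2) * 1000.0                         # large initial covariance
    for _ in range(500):
        phi = rng.normal(size=2)                   # regressor sample
        y = phi @ true_theta + rng.normal(scale=0.01)  # noisy measurement
        theta, P = rls_update(theta, P, phi, y)
    # theta ≈ [0.8, -0.3]
    ```

    In the battery setting, φ would be built from measured current/voltage histories and θ would map to Req, Deeff, ε and OCV.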

  1. Computational split-field finite-difference time-domain evaluation of simplified tilt-angle models for parallel-aligned liquid-crystal devices

    NASA Astrophysics Data System (ADS)

    Márquez, Andrés; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Álvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2018-03-01

    Simplified analytical models with predictive capability enable simpler and faster optimization of the performance in applications of complex photonic devices. We recently demonstrated the most simplified analytical model still showing predictive capability for parallel-aligned liquid crystal on silicon (PA-LCoS) devices, which provides the voltage-dependent retardance for a very wide range of incidence angles and any wavelength in the visible. We further show that the proposed model is not only phenomenological but also physically meaningful, since two of its parameters provide the correct values for important internal properties of these devices related to the birefringence, cell gap, and director profile. Therefore, the proposed model can be used as a means to inspect internal physical properties of the cell. As an innovation, we also show the applicability of the split-field finite-difference time-domain (SF-FDTD) technique for phase-shift and retardance evaluation of PA-LCoS devices under oblique incidence. As a simplified model for PA-LCoS devices, we also consider the exact description of homogeneous birefringent slabs. However, we show that, despite its higher degree of simplification, the proposed model is more robust, providing unambiguous and physically meaningful solutions when fitting its parameters.

  2. A Geostationary Earth Orbit Satellite Model Using Easy Java Simulation

    ERIC Educational Resources Information Center

    Wee, Loo Kang; Goh, Giam Hwee

    2013-01-01

    We develop an Easy Java Simulation (EJS) model for students to visualize geostationary orbits near Earth, modelled using a Java 3D implementation of the EJS 3D library. The simplified physics model is described and simulated using a simple constant angular velocity equation. We discuss four computer model design ideas: (1) a simple and realistic…

  3. Fast intersection detection algorithm for PC-based robot off-line programming

    NASA Astrophysics Data System (ADS)

    Fedrowitz, Christian H.

    1994-11-01

    This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG model). The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this the Simplex algorithm, known from linear optimization, is used: it computes a point common to two convex polyhedra, and the polyhedra intersect if such a point exists. For the simplified geometric model of Ropsus the algorithm likewise runs in linear time, so in conjunction with the first step the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resultant intersection polyhedron using the dual transformation.
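    The second step described, testing two convex polyhedra for a common point with the Simplex algorithm, is a linear-programming feasibility problem. A sketch using SciPy's `linprog` on halfspace representations A x ≤ b (an illustration of the idea, not the original Ropsus implementation):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def polyhedra_intersect(A1, b1, A2, b2):
        """True if {x : A1 x <= b1} and {x : A2 x <= b2} share a point."""
        A = np.vstack([A1, A2])
        b = np.concatenate([b1, b2])
        # Pure feasibility problem: minimize the zero objective over the stacked
        # constraints; bounds=(None, None) lifts linprog's default x >= 0 bounds.
        res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b, bounds=(None, None))
        return res.status == 0   # status 0: feasible point found; 2: infeasible

    # Axis-aligned unit cube [0,1]^3 in halfspace form A x <= b
    A_box = np.vstack([np.eye(3), -np.eye(3)])
    b_unit = np.concatenate([np.ones(3), np.zeros(3)])
    b_far = np.concatenate([3.0 * np.ones(3), -2.0 * np.ones(3)])  # cube [2,3]^3

    polyhedra_intersect(A_box, b_unit, A_box, b_unit)   # True  (identical cubes)
    polyhedra_intersect(A_box, b_unit, A_box, b_far)    # False (disjoint cubes)
    ```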

  4. An Unified Multiscale Framework for Planar, Surface, and Curve Skeletonization.

    PubMed

    Jalba, Andrei C; Sobiecki, Andre; Telea, Alexandru C

    2016-01-01

    Computing skeletons of 2D shapes, and medial surface and curve skeletons of 3D shapes, is a challenging task. In particular, there is no unified framework that detects all types of skeletons using a single model, and also produces a multiscale representation which allows one to progressively simplify, or regularize, all skeleton types. In this paper, we present such a framework. We model skeleton detection and regularization by a conservative mass transport process from a shape's boundary to its surface skeleton, next to its curve skeleton, and finally to the shape center. The resulting density field can be thresholded to obtain a multiscale representation of progressively simplified surface, or curve, skeletons. We detail a numerical implementation of our framework which is demonstrably stable and has high computational efficiency. We demonstrate our framework on several complex 2D and 3D shapes.

  5. Molecular dynamics of conformational substates for a simplified protein model

    NASA Astrophysics Data System (ADS)

    Grubmüller, Helmut; Tavan, Paul

    1994-09-01

    Extended molecular dynamics simulations covering a total of 0.232 μs have been carried out on a simplified protein model. Despite its simplified structure, that model exhibits properties similar to those of more realistic protein models. In particular, the model was found to undergo transitions between conformational substates at a time scale of several hundred picoseconds. The computed trajectories turned out to be sufficiently long to permit a statistical analysis of that conformational dynamics. To check whether effective descriptions neglecting memory effects can reproduce the observed conformational dynamics, two stochastic models were studied. A one-dimensional Langevin effective potential model derived by elimination of subpicosecond dynamical processes could not describe the observed conformational transition rates. In contrast, a simple Markov model describing the transitions between but neglecting dynamical processes within conformational substates reproduced the observed distribution of first passage times. These findings suggest that protein dynamics generally do not exhibit memory effects at time scales above a few hundred picoseconds, but they confirm the existence of memory effects at the picosecond time scale.

  6. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  7. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  8. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    NASA Astrophysics Data System (ADS)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  9. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE PAGES

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
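    The global sensitivity analysis step mentioned in these records typically means variance-based Sobol indices. A minimal Monte Carlo sketch on a toy function (not the scramjet model) using the standard Saltelli-style estimator:

    ```python
    import numpy as np

    def first_order_sobol(f, dim, n=20000, seed=0):
        """Monte Carlo estimate of first-order Sobol indices for f on [0,1]^dim."""
        rng = np.random.default_rng(seed)
        A = rng.random((n, dim))
        B = rng.random((n, dim))
        fA, fB = f(A), f(B)
        total_var = np.var(np.concatenate([fA, fB]))
        S = np.empty(dim)
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                       # resample only input i
            S[i] = np.mean(fB * (f(ABi) - fA)) / total_var
        return S

    # Toy model f = 2*x1 + x2: analytic indices are S1 = 0.8, S2 = 0.2
    f = lambda X: 2.0 * X[:, 0] + X[:, 1]
    S = first_order_sobol(f, dim=2)   # ≈ [0.8, 0.2]
    ```

    Inputs with small indices contribute little output variance and can be frozen, which is exactly the stochastic-dimension reduction the abstracts describe.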

  10. An exact solution of a simplified two-phase plume model. [for solid propellant rocket

    NASA Technical Reports Server (NTRS)

    Wang, S.-Y.; Roberts, B. B.

    1974-01-01

    An exact solution of a simplified two-phase, gas-particle, rocket exhaust plume model is presented. It may be used to make an upper-bound estimate of the heat flux and pressure loads due to particle impingement on objects in the rocket exhaust plume. By including correction factors to be determined experimentally, the present technique will provide realistic data concerning the heat and aerodynamic loads on these objects for design purposes. Excellent agreement in trend between the best available computer solution and the present exact solution is shown.

  11. A novel technique for presurgical nasoalveolar molding using computer-aided reverse engineering and rapid prototyping.

    PubMed

    Yu, Quan; Gong, Xin; Wang, Guo-Min; Yu, Zhe-Yuan; Qian, Yu-Fen; Shen, Gang

    2011-01-01

    To establish a new method of presurgical nasoalveolar molding (NAM) using computer-aided reverse engineering and rapid prototyping technique in infants with unilateral cleft lip and palate (UCLP). Five infants (2 males and 3 females with mean age of 1.2 w) with complete UCLP were recruited. All patients were subjected to NAM before the cleft lip repair. The upper denture casts were recorded using a three-dimensional laser scanner within 2 weeks after birth in UCLP infants. A digital model was constructed and analyzed to simulate the NAM procedure with reverse engineering software. The digital geometrical data were exported to print the solid model with rapid prototyping system. The whole set of appliances was fabricated based on these solid models. Laser scanning and digital model construction simplified the NAM procedure and estimated the treatment objective. The appliances were fabricated based on the rapid prototyping technique, and for each patient, the complete set of appliances could be obtained at one time. By the end of presurgical NAM treatment, the cleft was narrowed, and the malformation of nasoalveolar segments was aligned normally. We have developed a novel technique of presurgical NAM based on a computer-aided design. The accurate digital denture model of UCLP infants could be obtained with laser scanning. The treatment design and appliance fabrication could be simplified with a computer-aided reverse engineering and rapid prototyping technique.

  12. A survey of numerical models for wind prediction

    NASA Technical Reports Server (NTRS)

    Schonfeld, D.

    1980-01-01

    A literature review is presented of the work done in the numerical modeling of wind flows. Pertinent computational techniques are described, as well as the necessary assumptions used to simplify the governing equations. A steady state model is outlined, based on the data obtained at the Deep Space Communications complex at Goldstone, California.

  13. Application of finite element substructuring to composite micromechanics. M.S. Thesis - Akron Univ., May 1984

    NASA Technical Reports Server (NTRS)

    Caruso, J. J.

    1984-01-01

    Finite element substructuring is used to predict unidirectional fiber composite hygral (moisture), thermal, and mechanical properties. COSMIC NASTRAN and MSC/NASTRAN are used to perform the finite element analysis. The results obtained from the finite element model are compared with those obtained from the simplified composite micromechanics equations. Unidirectional composite structures made of boron/HM-epoxy, S-glass/IMHS-epoxy and AS/IMHS-epoxy are studied. The finite element analysis is performed using three-dimensional isoparametric brick elements and two distinct models. The first model consists of a single cell (one fiber surrounded by matrix) forming a square. The second model uses the single cell and substructuring to form a nine-cell square array. To compare computer time and results with the nine-cell superelement model, another nine-cell model is constructed using conventional mesh generation techniques. An independent computer program consisting of the simplified micromechanics equations is developed to predict the hygral, thermal, and mechanical properties for this comparison. The results indicate that advanced techniques can be used advantageously for fiber composite micromechanics.
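    The simplified composite micromechanics equations referenced here include rule-of-mixtures estimates for the longitudinal and transverse moduli. A minimal sketch with illustrative constituent values (assumed, not taken from the thesis):

    ```python
    def rule_of_mixtures(Ef, Em, Vf):
        """Voigt (longitudinal) and Reuss (transverse) modulus estimates."""
        Vm = 1.0 - Vf
        E11 = Vf * Ef + Vm * Em              # longitudinal: iso-strain (Voigt bound)
        E22 = 1.0 / (Vf / Ef + Vm / Em)      # transverse: iso-stress (Reuss bound)
        return E11, E22

    # Illustrative values only: boron fiber ~400 GPa, epoxy matrix ~3.5 GPa, Vf = 0.6
    E11, E22 = rule_of_mixtures(Ef=400.0, Em=3.5, Vf=0.6)
    # E11 ≈ 241.4 GPa, E22 ≈ 8.6 GPa
    ```

    The finite element models in the thesis serve to check such closed-form estimates against detailed fiber/matrix stress fields.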

  14. Computational modeling of the pressurization process in a NASP vehicle propellant tank experimental simulation

    NASA Technical Reports Server (NTRS)

    Sasmal, G. P.; Hochstein, J. I.; Wendl, M. C.; Hardy, T. L.

    1991-01-01

    A multidimensional computational model of the pressurization process in a slush hydrogen propellant storage tank was developed and its accuracy evaluated by comparison to experimental data measured for a 5 ft diameter spherical tank. The fluid mechanic, thermodynamic, and heat transfer processes within the ullage are represented by a finite-volume model. The model was shown to be in reasonable agreement with the experimental data. A parameter study was undertaken to examine the dependence of the pressurization process on initial ullage temperature distribution and pressurant mass flow rate. It is shown that for a given heat flux rate at the ullage boundary, the pressurization process is nearly independent of initial temperature distribution. Significant differences were identified between the ullage temperature and velocity fields predicted for pressurization of slush and those predicted for pressurization of liquid hydrogen. A simplified model of the pressurization process was constructed in search of a dimensionless characterization of the pressurization process. It is shown that the relationship derived from this simplified model collapses all of the pressure history data generated during this study into a single curve.
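    The simplified pressurization model described can be illustrated by an even cruder zero-dimensional sketch: an isothermal ideal-gas ullage of fixed volume with constant pressurant inflow. This is an assumption-laden toy (all values illustrative), not the paper's finite-volume model or its dimensionless correlation:

    ```python
    R_H2 = 4124.0  # specific gas constant of hydrogen gas, J/(kg*K), assumed pressurant

    def ullage_pressure_history(m0, T, V, mdot, t_end, dt=0.1):
        """Lumped isothermal ideal-gas ullage: P = m*R*T/V as pressurant accumulates."""
        m, history = m0, []
        for _ in range(int(round(t_end / dt))):
            m += mdot * dt                     # pressurant mass added this step
            history.append(m * R_H2 * T / V)   # ideal gas law, fixed ullage volume
        return history

    # 2 m^3 ullage at 30 K, 0.5 kg initial gas, 10 g/s pressurant for 10 s
    P = ullage_pressure_history(m0=0.5, T=30.0, V=2.0, mdot=0.01, t_end=10.0)
    # Pressure rises from ~31 kPa toward ~37 kPa as mass accumulates
    ```

    The real process is neither isothermal nor zero-dimensional, which is precisely why the multidimensional model above was needed.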

  15. Progress in Earth System Modeling since the ENIAC Calculation

    NASA Astrophysics Data System (ADS)

    Fung, I.

    2009-05-01

    The success of the first numerical weather prediction experiment on the ENIAC computer in 1950 hinged on the expansion of the meteorological observing network, which led to theoretical advances in atmospheric dynamics and subsequently to the implementation of the simplified equations on the computer. This paper briefly reviews the progress in Earth System Modeling and climate observations, and suggests a strategy to sustain and expand the observations needed to advance climate science and prediction.

  16. Optical properties of light absorbing carbon aggregates mixed with sulfate: assessment of different model geometries for climate forcing calculations.

    PubMed

    Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa; Ebert, Martin

    2012-04-23

    Light scattering by light absorbing carbon (LAC) aggregates encapsulated into sulfate shells is computed by use of the discrete dipole method. Computations are performed for a UV, visible, and IR wavelength, different particle sizes, and volume fractions. Reference computations are compared to three classes of simplified model particles that have been proposed for climate modeling purposes. None of these models matches the reference results sufficiently well. Remarkably, the more realistic core-shell geometries fall behind homogeneous mixture models. An extended model based on a core-shell-shell geometry is proposed and tested. Good agreement is found for total optical cross sections and the asymmetry parameter. © 2012 Optical Society of America

  17. Simplifying the Reuse and Interoperability of Geoscience Data Sets and Models with Semantic Metadata that is Human-Readable and Machine-actionable

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2017-12-01

    Standardized, deep descriptions of digital resources (e.g. data sets, computational models, software tools and publications) make it possible to develop user-friendly software systems that assist scientists with the discovery and appropriate use of these resources. Semantic metadata makes it possible for machines to take actions on behalf of humans, such as automatically identifying the resources needed to solve a given problem, retrieving them and then automatically connecting them (despite their heterogeneity) into a functioning workflow. Standardized model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. A carefully constructed, unambiguous, rules-based schema that addresses this problem, called the Geoscience Standard Names ontology, will be presented; it utilizes Semantic Web best practices and technologies and has been designed to work across science domains and to be readable by both humans and machines.
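    A sketch of what such machine-actionable "deep description" metadata might look like; the model name and field names below are hypothetical assumptions for illustration, not the actual Geoscience Standard Names schema:

    ```python
    # Hypothetical example of "deep description" model metadata. Every field name
    # here is an illustrative assumption, not the Geoscience Standard Names ontology.
    model_metadata = {
        "model_name": "ExampleSnowmeltModel",
        "governing_equations": ["mass balance of snowpack water equivalent"],
        "simplifying_assumptions": ["degree-day melt", "no blowing-snow transport"],
        "numerical_method": "explicit forward Euler",
        "grid": {"type": "uniform rectilinear", "spacing_m": 100.0},
        "time_stepping": {"scheme": "fixed step", "dt_s": 3600.0},
        "variables": {
            "input": ["air temperature", "precipitation rate"],
            "output": ["snowmelt rate"],
        },
        "configuration_parameters": {"degree_day_factor_mm_per_degC_day": 3.0},
    }

    def variables_of(meta, role):
        """Machine-actionable lookup: which variables a model consumes or produces."""
        return meta["variables"][role]

    variables_of(model_metadata, "input")   # ['air temperature', 'precipitation rate']
    ```

    With metadata like this, a workflow engine can match one model's outputs to another's inputs automatically, which is the coupling scenario the abstract describes.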

  18. A Simplified Approach for Simultaneous Measurements of Wavefront Velocity and Curvature in the Heart Using Activation Times.

    PubMed

    Mazeh, Nachaat; Haines, David E; Kay, Matthew W; Roth, Bradley J

    2013-12-01

    The velocity and curvature of a wave front are important factors governing the propagation of electrical activity through cardiac tissue, particularly during heart arrhythmias of clinical importance such as fibrillation. Presently, no simple computational model exists to determine these values simultaneously. The proposed model uses the arrival times at four or five sites to determine the wave front speed (v), direction (θ), and radius of curvature (ROC) (r0). If the arrival times are measured, then v, θ, and r0 can be found from the differences in arrival times and the distances between these sites. During isotropic conduction, we found good correlation between measured values of the ROC r0 and the distance from the unipolar stimulus (r = 0.9043 and p < 0.0001). The conduction velocity (m/s) was correlated (r = 0.998, p < 0.0001) using our method (mean = 0.2403, SD = 0.0533) and an empirical method (mean = 0.2352, SD = 0.0560). The model was applied to a condition of anisotropy and a complex case of reentry with a high-voltage extra stimulus. Again, the results show good correlation between our simplified approach and established methods for multiple wave front morphologies. In conclusion, insignificant measurement errors were observed between this simplified approach and a more computationally demanding one. Accuracy was maintained provided that ε (ε = b/r0, the ratio of recording-site spacing to the wave front's ROC) was between 0.001 and 0.5. The present simplified model can be applied to a variety of clinical conditions to predict the behavior of planar, elliptical, and reentrant wave fronts. It may be used to study the genesis and propagation of rotors in human arrhythmias and could lead to rotor mapping using low-density endocardial recording electrodes.
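    The paper's four-to-five-site formulas are not reproduced in the abstract, but the underlying idea, recovering speed and direction from differences in activation times, can be sketched as a least-squares plane-wave fit (curvature omitted; a plane wave is the r0 → ∞ limit):

    ```python
    import numpy as np

    def plane_wave_fit(sites, times):
        """Least-squares fit of t = t0 + x·g; speed = 1/|g|, direction = angle of g."""
        A = np.column_stack([np.ones(len(times)), sites])
        coeff, *_ = np.linalg.lstsq(A, times, rcond=None)
        g = coeff[1:]                        # slowness vector (gradient of arrival time)
        speed = 1.0 / np.linalg.norm(g)
        theta = np.arctan2(g[1], g[0])       # propagation direction in radians
        return speed, theta

    # Synthetic plane wave travelling along +x at 0.5 m/s, sampled at four sites (metres)
    sites = np.array([[0.0, 0.0], [0.001, 0.0], [0.0, 0.001], [0.001, 0.001]])
    times = sites[:, 0] / 0.5                # arrival time = x / v
    speed, theta = plane_wave_fit(sites, times)   # speed ≈ 0.5 m/s, theta ≈ 0 rad
    ```

    Estimating r0 as well requires fitting a curved (circular) front to the same arrival times, which is where the paper's ε = b/r0 accuracy condition enters.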

  19. Universal Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Fitzsimons, Joseph; Kashefi, Elham

    2012-02-01

    Blind Quantum Computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's inputs, outputs and computation remain private. Recently we proposed a universal unconditionally secure BQC scheme, based on the conceptual framework of the measurement-based quantum computing model, where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Here we present a refinement of the scheme which vastly expands the class of quantum circuits which can be directly implemented as a blind computation, by introducing a new class of resource states which we term dotted-complete graph states and expanding the set of single qubit states the client is required to prepare. These two modifications significantly simplify the overall protocol and remove the previously present restriction that only nearest-neighbor circuits could be implemented as blind computations directly. As an added benefit, the refined protocol admits a substantially more intuitive and simplified verification mechanism, allowing the correctness of a blind computation to be verified with arbitrarily small probability of error.

  20. Energy-state formulation of lumped volume dynamic equations with application to a simplified free piston Stirling engine

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Lorenzo, C. F.

    1979-01-01

    Lumped volume dynamic equations are derived using an energy state formulation. This technique requires that kinetic and potential energy state functions be written for the physical system being investigated. To account for losses in the system, a Rayleigh dissipation function is formed. Using these functions, a Lagrangian is formed and using Lagrange's equation, the equations of motion for the system are derived. The results of the application of this technique to a lumped volume are used to derive a model for the free piston Stirling engine. The model was simplified and programmed on an analog computer. Results are given comparing the model response with experimental data.

  2. Incompressible Navier-Stokes Computations with Heat Transfer

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan; Rogers, Stuart; Kutler, Paul (Technical Monitor)

    1994-01-01

    The existing pseudocompressibility method for the incompressible Navier-Stokes equations is extended to heat transfer problems by including the energy equation. The solution method is based on the pseudocompressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. Current computations use the one-equation Baldwin-Barth turbulence model, which is derived from a simplified form of the standard k-epsilon model equations. Both forced and natural convection problems are examined. Numerical results for turbulent reattaching flow behind a backward-facing step will be compared against experimental measurements for the forced convection case. The validity of the Boussinesq approximation for simplifying the buoyancy force term will be investigated. The natural convective flow structure generated by heat transfer in a vertical rectangular cavity will be studied, and the numerical results will be compared with experimental measurements by Morrison and Tran.

  3. Multigrid methods for numerical simulation of laminar diffusion flames

    NASA Technical Reports Server (NTRS)

    Liu, C.; Liu, Z.; Mccormick, S.

    1993-01-01

    This paper documents the results of a computational study of multigrid methods for numerical simulation of 2D diffusion flames. The focus is on a simplified combustion model, which assumes a single-step, infinitely fast, irreversible chemical reaction with five species (C3H8, O2, N2, CO2 and H2O). A fully implicit second-order hybrid scheme is developed on a staggered grid, which is stretched in the streamwise coordinate direction. A full approximation multigrid scheme (FAS) based on line distributive relaxation is developed as a fast solver for the algebraic equations arising at each time step. Convergence of the process for the simplified model problem is more than two orders of magnitude faster than other iterative methods, and the computational results show good grid convergence, with second-order accuracy, as well as qualitative agreement with the results of other researchers.
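    The solver described above is the nonlinear FAS variant; as a hedged illustration of the underlying idea, the sketch below implements the linear two-grid correction cycle for a 1D Poisson model problem with a weighted-Jacobi smoother (grid size, smoothing counts, and the injection restriction are illustrative choices, not the paper's scheme):

```python
import numpy as np

def residual(u, f, h):
    # r = f - A u for the 1D operator -u'' with homogeneous Dirichlet ends
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps, omega=2/3):
    # weighted-Jacobi smoother
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1] = (1 - omega)*u[1:-1] + omega*0.5*(u[:-2] + u[2:] + h**2*f[1:-1])
        u = u_new
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h, 2)                    # pre-smoothing
    rc = residual(u, f, h)[::2].copy()        # restrict residual to coarse grid
    hc, nc = 2*h, rc.size - 1
    # direct solve of the coarse correction equation A_c e_c = r_c
    A = (2*np.eye(nc - 1) - np.eye(nc - 1, k=1) - np.eye(nc - 1, k=-1)) / hc**2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.zeros_like(u)                      # linear prolongation to fine grid
    e[::2] = ec
    e[1::2] = 0.5*(ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, 2)             # post-smoothing

n, h = 64, 1.0/64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)              # -u'' = f has solution sin(pi x)
u = np.zeros(n + 1)
r0 = np.linalg.norm(residual(u, f, h))
for _ in range(4):
    u = two_grid(u, f, h)
print(np.linalg.norm(residual(u, f, h)) / r0) # residual reduction after 4 cycles
```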

  4. A New Browser-based, Ontology-driven Tool for Generating Standardized, Deep Descriptions of Geoscience Models

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.; Kelbert, A.; Rudan, S.; Stoica, M.

    2016-12-01

    Standardized metadata for models is the key to reliable and greatly simplified coupling in model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System). This model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions about the physics, the governing equations and the numerical methods used to solve them, the discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. While having this kind of standardized metadata for each model in a repository opens up a wide range of exciting possibilities, it is difficult to collect this information, and a carefully conceived "data model" or schema is needed to store it. Automated harvesting and scraping methods can provide some useful information, but they often result in metadata that is inaccurate or incomplete, which is not sufficient to enable the desired capabilities. To address this problem, we have developed a browser-based tool called the MCM Tool (Model Component Metadata) which runs on notebooks, tablets and smart phones. The tool was partially inspired by the TurboTax software, which greatly simplifies the necessary task of preparing tax documents. It allows a model developer or advanced user to provide a standardized, deep description of a computational geoscience model, including hydrologic models. Under the hood, the tool uses a new ontology for models built on the CSDMS Standard Names, expressed as a collection of RDF (Resource Description Framework) files. This ontology is based on core concepts such as variables, objects, quantities, operations, processes and assumptions. The purpose of this talk is to present details of the new ontology and then to demonstrate the MCM Tool for several hydrologic models.
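    As a toy illustration of triple-style model metadata (plain Python tuples rather than real RDF, and with made-up subject, predicate, and variable names, not actual CSDMS Standard Names):

```python
# Hypothetical metadata triples in the spirit of an RDF model description;
# every name below is illustrative, not taken from the CSDMS ontology.
triples = {
    ("hydro_model", "has_assumption", "Boussinesq approximation"),
    ("hydro_model", "has_time_step", "adaptive"),
    ("hydro_model", "has_output_variable", "channel_water__depth"),
    ("hydro_model", "has_output_variable", "land_surface_water__runoff_volume_flux"),
}

def objects(subject, predicate):
    """Return all objects matching the pattern (subject, predicate, ?)."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(sorted(objects("hydro_model", "has_output_variable")))
```

    A real RDF store adds namespaces, typed literals, and SPARQL querying, but the (subject, predicate, object) pattern-matching idea is the same.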

  5. Critical evaluation of Jet-A spray combustion using propane chemical kinetics in gas turbine combustion simulated by KIVA-2

    NASA Technical Reports Server (NTRS)

    Nguyen, H. L.; Ying, S.-J.

    1990-01-01

    Jet-A spray combustion has been evaluated in gas turbine combustion with the use of propane chemical kinetics as a first approximation for the chemical reactions. Here, the numerical solutions are obtained by using the KIVA-2 computer code. The KIVA-2 code is the most developed of the available multidimensional combustion computer programs for application to the in-cylinder combustion dynamics of internal combustion engines. The released version of KIVA-2 assumes that 12 chemical species are present; the code uses an Arrhenius kinetic-controlled combustion model governed by a four-step global chemical reaction and six equilibrium reactions. The researchers' efforts involve the addition of Jet-A thermophysical properties and the implementation of detailed reaction mechanisms for propane oxidation. Three detailed reaction mechanism models are considered. The first model consists of 131 reactions and 45 species. This is considered the full mechanism, which was developed through the study of the chemical kinetics of propane combustion in an enclosed chamber. The full mechanism is evaluated by comparing calculated ignition delay times with available shock tube data. However, these detailed reactions occupy too much computer memory and CPU time, so the full mechanism serves only as a benchmark case against which to evaluate simplified models. Two possible simplified models were tested in the existing computer code KIVA-2 for the same conditions as used with the full mechanism. One model is obtained through a sensitivity analysis using LSENS, the general kinetics and sensitivity analysis program code of D. A. Bittker and K. Radhakrishnan. This model consists of 45 chemical reactions and 27 species. The other model is based on the work published by C. K. Westbrook and F. L. Dryer.

  6. Quantum annealing versus classical machine learning applied to a simplified computational biology problem

    NASA Astrophysics Data System (ADS)

    Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.

    2018-03-01

    Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to classify and rank binding affinities. Using simplified data sets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified data sets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems.
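    For intuition, here is simulated annealing (one of the classical baselines named above) over binary ±1 weights in its smallest form; the three-feature data set and energy function are toys, not the paper's transcription-factor binding data:

```python
import itertools, math, random

random.seed(1)

# Toy data set: 3 binary features, label = sign(x1 + x2 - x3); the
# perfect weight vector (1, 1, -1) exists in the +/-1 search space.
X = [list(x) for x in itertools.product([-1, 1], repeat=3)]
y = [1 if x[0] + x[1] - x[2] > 0 else -1 for x in X]

def errors(w):
    # number of misclassified samples for weight vector w in {-1,+1}^3
    return sum(1 for x, t in zip(X, y)
               if (1 if sum(wi*xi for wi, xi in zip(w, x)) > 0 else -1) != t)

w = [1, 1, 1]
best_w, best_e = w[:], errors(w)
T = 2.0
for step in range(200):
    i = random.randrange(3)
    cand = w[:]
    cand[i] = -cand[i]                        # flip one "spin"
    dE = errors(cand) - errors(w)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        w = cand                              # Metropolis acceptance
    if errors(w) < best_e:
        best_w, best_e = w[:], errors(w)
    T *= 0.98                                 # geometric cooling schedule
print(best_w, best_e)
```

    A quantum annealer replaces the thermal flips with quantum fluctuations over an Ising energy, but the classification-as-energy-minimization framing is the same.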

  7. Modeling the Proton Radiation Belt With Van Allen Probes Relativistic Electron-Proton Telescope Data

    NASA Technical Reports Server (NTRS)

    Kanekal, S. G.; Li, X.; Baker, D. N.; Selesnick, R. S.; Hoxie, V. C.

    2018-01-01

    An empirical model of the proton radiation belt is constructed from data taken during 2013-2017 by the Relativistic Electron-Proton Telescopes on the Van Allen Probes satellites. The model intensity is a function of time, kinetic energy in the range 18-600 megaelectronvolts, equatorial pitch angle, and L shell of proton guiding centers. Data are selected, on the basis of energy deposits in each of the nine silicon detectors, to reduce background caused by hard proton energy spectra at low L. Instrument response functions are computed by Monte Carlo integration, using simulated proton paths through a simplified structural model, to account for energy loss in shielding material for protons outside the nominal field of view. Overlap of energy channels, their wide angular response, and changing satellite orientation require the model dependencies on all three independent variables be determined simultaneously. This is done by least squares minimization with a customized steepest descent algorithm. Model uncertainty accounts for statistical data error and systematic error in the simulated instrument response. A proton energy spectrum is also computed from data taken during the 8 January 2014 solar event, to illustrate methods for the simpler case of an isotropic and homogeneous model distribution. Radiation belt and solar proton results are compared to intensities computed with a simplified, on-axis response that can provide a good approximation under limited circumstances.
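    A hedged miniature of the Monte Carlo response idea: estimating what fraction of an isotropic flux arrives within a detector's nominal field of view (a 30-degree cone chosen arbitrarily here) and checking the estimate against the closed-form solid-angle fraction:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
half_angle = np.radians(30.0)        # illustrative acceptance half-angle

# Isotropic directions: cos(theta) is uniform on [-1, 1].
cos_theta = rng.uniform(-1.0, 1.0, n)

# Monte Carlo estimate of the in-cone fraction vs. the exact value
est = np.mean(cos_theta > np.cos(half_angle))
exact = (1.0 - np.cos(half_angle)) / 2.0     # Omega / (4 pi)
print(est, exact)
```

    The real response calculation additionally tracks energy loss along each simulated proton path through the shielding model; this sketch keeps only the geometric part.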

  9. Computational Simulation of Acoustic Modes in Rocket Combustors

    NASA Technical Reports Server (NTRS)

    Harper, Brent (Technical Monitor); Merkle, C. L.; Sankaran, V.; Ellis, M.

    2004-01-01

    A combination of computational fluid dynamic analysis and analytical solutions is being used to characterize the dominant modes in liquid rocket engines in conjunction with laboratory experiments. The analytical solutions are based on simplified geometries and flow conditions and are used for careful validation of the numerical formulation. The validated computational model is then extended to realistic geometries and flow conditions to test the effects of various parameters on chamber modes, to guide and interpret companion laboratory experiments in simplified combustors, and to scale the measurements to engine operating conditions. In turn, the experiments are used to validate and improve the model. The present paper gives an overview of the numerical and analytical techniques along with comparisons illustrating the accuracy of the computations as a function of grid resolution. A representative parametric study of the effect of combustor mean flow Mach number and combustor aspect ratio on the chamber modes is then presented for both transverse and longitudinal modes. The results show that higher mean flow Mach numbers drive the modes to lower frequencies. Estimates of transverse wave mechanics in a high aspect ratio combustor are then contrasted with longitudinal modes in a long and narrow combustor to provide understanding of potential experimental simulations.
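    The reported trend (higher mean flow Mach numbers drive the modes to lower frequencies) matches the classical closed-closed duct result f_n = n c (1 - M^2) / (2L) for uniform mean flow; the numbers below are illustrative, not engine data:

```python
def longitudinal_mode_hz(n, c, length, mach):
    """Closed-closed duct mode with uniform mean flow: f_n = n*c*(1 - M^2)/(2L)."""
    return n * c * (1.0 - mach**2) / (2.0 * length)

c = 1200.0    # assumed hot-gas sound speed, m/s (illustrative)
L = 0.5       # assumed chamber length, m (illustrative)
f_static = longitudinal_mode_hz(1, c, L, 0.0)
f_flow = longitudinal_mode_hz(1, c, L, 0.3)
print(f_static, f_flow)   # the mean flow lowers the mode frequency
```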

  10. A simplified fractional order impedance model and parameter identification method for lithium-ion batteries

    PubMed Central

    Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing

    2017-01-01

    Identification of internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on the least square genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of the electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the method of least square genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model and recursive least squares method, small tracing voltage fluctuations were observed. The maximum battery voltage tracing error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least square genetic algorithm to estimate the internal parameters of lithium-ion batteries. PMID:28212405
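    A hedged sketch of least-squares parameter identification with a simple genetic algorithm; the fitted model here is a plain two-parameter exponential with invented values, not the paper's fractional-order impedance model:

```python
import math, random

random.seed(3)

# Synthetic relaxation data from a hypothetical model v(t) = a * exp(-b t)
a_true, b_true = 2.0, 0.5
ts = [0.2 * i for i in range(50)]
data = [a_true * math.exp(-b_true * t) for t in ts]

def sse(p):
    # least-squares objective (sum of squared errors)
    a, b = p
    return sum((a * math.exp(-b * t) - d) ** 2 for t, d in zip(ts, data))

pop = [(random.uniform(0, 5), random.uniform(0, 2)) for _ in range(60)]
for gen in range(100):
    pop.sort(key=sse)
    parents = pop[:20]                        # elitist selection on fitness
    children = []
    while len(children) < 40:
        p1, p2 = random.sample(parents, 2)
        w = random.random()                   # blend crossover
        child = (w*p1[0] + (1-w)*p2[0], w*p1[1] + (1-w)*p2[1])
        child = (child[0] + random.gauss(0, 0.05),   # Gaussian mutation
                 child[1] + random.gauss(0, 0.02))
        children.append(child)
    pop = parents + children

best = min(pop, key=sse)
print(best)
```

    The paper's method adds the fractional-order dynamics and time-domain tracking; only the selection/crossover/mutation skeleton is shown here.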

  11. Multi-Scale Computational Models for Electrical Brain Stimulation

    PubMed Central

    Seo, Hyeon; Jun, Sung C.

    2017-01-01

    Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have pursued computational modeling studies for a decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we review recent multi-scale modeling studies here, focusing on approaches that couple a simplified or high-resolution volume conductor head model with multi-compartmental models of cortical neurons and construct realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476

  12. Large deflections and vibrations of a tip pulled beam with variable transversal section

    NASA Astrophysics Data System (ADS)

    Kurka, P.; Izuka, J.; Gonzalez, P.; Teixeira, L. H.

    2016-10-01

    The use of long flexible probes in outdoors exploration vehicles, as opposed to short and rigid arms, is a convenient way to grant easier access to regions of scientific interest such as terrain slopes and cliff sides. Longer and taller arms can also provide information from a wider exploration horizon. The drawback of employing long and flexible exploration probes is the fact that its vibration is not easily controlled in real time operation by means of a simple analytic linear dynamic model. The numerical model required to describe the dynamics of a very long and flexible structure is often very large and of slow computational convergence. The present work proposes a simplified numerical model of a long flexible beam with variable cross section, which is statically deflected by a pulling cable. The paper compares the proposed simplified model with experimental data regarding the static and dynamic characteristics of a beam with variable cross section. The simulations show the effectiveness of the simplified dynamic model employed in an active control loop to suppress tip vibrations of the beam.

  13. Application of a simplified theory of ELF propagation to a simplified worldwide model of the ionosphere

    NASA Astrophysics Data System (ADS)

    Behroozi-Toosi, A. B.; Booker, H. G.

    1980-12-01

    The simplified theory of ELF wave propagation in the earth-ionosphere transmission line developed by Booker (1980) is applied to a simplified worldwide model of the ionosphere. The theory, which compares the local vertical refractive index gradient with the local wavelength in order to classify altitudes into regions of low and high gradient, is used for a model of electron and negative ion profiles in the D and E regions below 150 km. Attention is given to the frequency dependence of ELF propagation at a middle latitude under daytime conditions, the daytime latitude dependence of ELF propagation at the equinox, the effects of sunspot, seasonal and diurnal variations on propagation, nighttime propagation neglecting and including propagation above 100 km, and the effect of a sudden ionospheric disturbance on daytime ELF propagation. The numerical values obtained by the method for the propagation velocity and attenuation rate are shown to be in general agreement with the analytic Naval Ocean Systems Center computer program. It is concluded that the method gives more physical insight into the propagation processes than other methods, while requiring less effort and maintaining accuracy.

  14. A simplified scheme for computing radiation transfer in the troposphere

    NASA Technical Reports Server (NTRS)

    Katayama, A.

    1973-01-01

    A scheme is presented for computing the heating of clear and cloudy air by solar and infrared radiative transfer, designed for use in tropospheric general circulation models with coarse vertical resolution. A bulk transmission function is defined for the infrared transfer. The interpolation factors required for computing the bulk transmission function are parameterized as functions of such physical parameters as the thickness of the layer, the pressure, and the mixing ratio at a reference level. The computation procedure for solar radiation is significantly simplified by the introduction of two basic concepts. The first is that the solar radiation spectrum can be divided into a scattered part, for which Rayleigh scattering is significant but absorption by water vapor is negligible, and an absorbed part, for which absorption by water vapor is significant but Rayleigh scattering is negligible. The second concept is that of an equivalent cloud water vapor amount, which absorbs the same amount of radiation as the cloud.

  15. A simplified method in comparison with comprehensive interaction incremental dynamic analysis to assess seismic performance of jacket-type offshore platforms

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.

    2015-12-01

    The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. In this paper, a simplified method is therefore proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOP) in regimes ranging from near-elastic response to global collapse. The simplified method exploits the good agreement between the static pushover (SPO) curve and the entire summarized comprehensive interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers a better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome these challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. An existing JTOP in the Persian Gulf is then presented to illustrate the procedure, and finally a comparison is made between the simplified method and the CI-IDA results. The simplified method is informative and practical for current engineering purposes: it predicts seismic performance from the near-elastic range to global dynamic instability with reasonable accuracy and little computational effort.

  16. Efficient parallel resolution of the simplified transport equations in mixed-dual formulation

    NASA Astrophysics Data System (ADS)

    Barrault, M.; Lathuilière, B.; Ramet, P.; Roman, J.

    2011-03-01

    A reactivity computation consists of computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult for our sequential solver, based on the simplified transport equations, to handle in terms of memory consumption and computational time. A first implementation of a Lagrangian-based domain decomposition method leads to poor parallel efficiency because of an increase in the number of power iterations [1]. In order to obtain high parallel efficiency, we improve the parallelization scheme by changing the location of the loop over the subdomains in the overall algorithm and by benefiting from the characteristics of the Raviart-Thomas finite element. The new parallel algorithm still allows us to locally adapt the numerical scheme (mesh, finite element order), yet it can be significantly optimized for the matching-grid case. The good behavior of the new parallelization scheme is demonstrated for the matching-grid case on several hundred nodes for computations based on a pin-by-pin discretization.
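    A small dense analogue of the eigenvalue iteration described above (the matrices are illustrative stand-ins, not a transport discretization): each step solves with the loss operator, feeds back the fission source, and converges to the dominant eigenvalue:

```python
import numpy as np

# Small dense stand-ins for the operators in M*phi = (1/k) F*phi;
# the values are illustrative, not a transport discretization.
M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])    # loss/leakage operator
F = np.array([[ 1.5,  0.2,  0.0],
              [ 0.2,  1.2,  0.1],
              [ 0.0,  0.1,  0.9]])    # fission-source operator

phi = np.ones(3)
k = 1.0
for _ in range(200):
    psi = np.linalg.solve(M, F @ phi)   # apply M^{-1} F (one power step)
    k = np.linalg.norm(psi)             # eigenvalue estimate
    phi = psi / k                       # renormalize the flux
print(k)
```

    In a domain-decomposed solver, the expensive part is the solve inside the loop, which is why moving the subdomain loop and exploiting the finite element structure pays off.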

  17. Simplified Phase Diversity algorithm based on a first-order Taylor expansion.

    PubMed

    Zhang, Dong; Zhang, Xiaobin; Xu, Shuyan; Liu, Nannan; Zhao, Luoxin

    2016-10-01

    We present a simplified solution to phase diversity when the observed object is a point source. It uses an iterative linearization of the point spread function (PSF) at two or more diversity planes, obtained by a first-order Taylor expansion, to reconstruct the initial wavefront. To enhance the influence of the PSF in the defocal plane, which is usually very dim compared to that in the focal plane, we build a new model with a Tikhonov regularization function. The new model not only increases the computational speed, but also reduces the influence of noise. Using PSFs obtained from Zemax, we reconstruct the wavefront of the Hubble Space Telescope (HST) at the edge of the field of view (FOV) when the telescope is in either the nominal state or a misaligned state. We also set up an experiment, consisting of an imaging system and a deformable mirror, to validate the correctness of the presented model. The results show that the new model can improve the computational speed while maintaining high wavefront detection accuracy.
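    The effect of Tikhonov regularization can be seen on a toy ill-conditioned system, where one nearly dark measurement direction stands in for the dim defocal-plane data (all numbers are illustrative):

```python
import numpy as np

# Toy ill-conditioned system: the third "measurement direction" is nearly dark.
A = np.diag([1.0, 1.0, 1e-4])
x_true = np.array([1.0, 1.0, 1.0])
b = A @ x_true + np.array([1e-3, -1e-3, 1e-3])   # small measurement noise

# Plain normal equations amplify the noise in the weak direction
x_plain = np.linalg.solve(A.T @ A, A.T @ b)

# Tikhonov regularization damps it: (A^T A + alpha I) x = A^T b
alpha = 1e-6
x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(3), A.T @ b)

print(x_plain, x_reg)
```

    The regularization parameter trades bias for noise suppression; in the paper's setting it balances the bright focal-plane data against the dim defocal-plane data.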

  18. Fuel Burn Estimation Using Real Track Data

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.

    2011-01-01

    A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
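    A hedged sketch of the fuel-flow-from-thrust idea, using a BADA-style thrust-specific fuel consumption term eta = cf1*(1 + V/cf2); the coefficients and the flight segment below are invented for illustration, not real BADA values or flight data:

```python
# Hypothetical BADA-style jet fuel flow: ff = eta(V) * thrust,
# with eta = cf1 * (1 + V/cf2). Coefficients are illustrative only.
cf1 = 0.7      # kg/(min*kN), assumed
cf2 = 1000.0   # knots, assumed

def fuel_flow(thrust_kn, tas_kt):
    return cf1 * (1.0 + tas_kt / cf2) * thrust_kn   # kg/min

# Ten-minute segment sampled each minute from synthetic track data
times = list(range(11))          # min
thrust = [50.0] * 11             # kN, constant for this sketch
tas = [250.0] * 11               # knots true airspeed

# trapezoidal integration of fuel flow over the segment
burn = sum(0.5 * (fuel_flow(thrust[i], tas[i]) + fuel_flow(thrust[i+1], tas[i+1]))
           * (times[i+1] - times[i])
           for i in range(len(times) - 1))
print(burn)   # kg burned over the segment
```

    In the paper, thrust itself is estimated from the reconstructed aircraft states rather than assumed, which is where most of the uncertainty enters.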

  19. Prediction of NOx emissions from a simplified biodiesel surrogate by applying stochastic simulation algorithms (SSA)

    NASA Astrophysics Data System (ADS)

    Omidvarborna, Hamid; Kumar, Ashok; Kim, Dong-Shik

    2017-03-01

    A stochastic simulation algorithm (SSA) approach is implemented with the components of a simplified biodiesel surrogate to predict NOx (NO and NO2) emission concentrations from the combustion of biodiesel. The main reaction pathways were obtained by simplifying previously derived skeletal mechanisms, including saturated methyl decanoate (MD), unsaturated methyl-5-decenoate (MD5D), and n-decane (ND). ND was added to match the energy content and the C/H/O ratio of actual biodiesel fuel. The MD/MD5D/ND surrogate model was also equipped with H2/CO/C1 formation mechanisms and a simplified NOx formation mechanism. The predicted model results are in good agreement with a limited number of experimental data at low-temperature combustion (LTC) conditions for three different biodiesel fuels consisting of various ratios of unsaturated and saturated methyl esters. The root mean square errors (RMSEs) of the predicted values are 0.0020, 0.0018, and 0.0025 for soybean methyl ester (SME), waste cooking oil (WCO), and tallow oil (TO), respectively. The SSA model showed the potential to predict NOx emission concentrations when the peak combustion temperature increased through the addition of ultra-low sulphur diesel (ULSD) to biodiesel. The SSA method used in this study demonstrates the possibility of reducing the computational complexity in biodiesel emissions modelling.
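    For readers unfamiliar with SSA, here is the Gillespie algorithm in its smallest form, for a single irreversible channel A → B (a toy stand-in, not the NOx mechanism; the rate constant and populations are illustrative):

```python
import math, random

random.seed(7)

# Minimal Gillespie SSA for one irreversible channel A -> B.
k = 0.1              # stochastic rate constant, 1/s (illustrative)
n_a, n_b = 1000, 0   # initial molecule counts
t = 0.0
while n_a > 0:
    a = k * n_a                               # propensity of the only reaction
    t += -math.log(1.0 - random.random()) / a # exponential waiting time
    n_a -= 1                                  # fire the reaction
    n_b += 1

print(t, n_a, n_b)
```

    With several channels, each step additionally draws which reaction fires, with probability proportional to its propensity; that is the full direct method.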

  20. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  1. Singular Perturbations and Time-Scale Methods in Control Theory: Survey 1976-1982.

    DTIC Science & Technology

    1982-12-01

    established in the 1960s, when they first became a means for simplified computation of optimal trajectories. It was soon recognized that singular...null-space of P(a0). The asymptotic values of the invariant zeros and associated invariant-zero directions as ε → 0 are the values computed from the...7. WEAK COUPLING AND TIME SCALES: The need for model simplification with a reduction (or distribution) of computational effort is

  2. EOSlib, Version 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Nathan; Menikoff, Ralph

    2017-02-03

    Equilibrium thermodynamics underpins many of the technologies used throughout theoretical physics, yet verification of the various theoretical models in the open literature remains challenging. EOSlib provides a single, consistent, verifiable implementation of these models in a single, easy-to-use software package. It consists of three parts: a software library implementing various published equation-of-state (EOS) models; a database of fitting parameters for various materials for these models; and a number of useful utility functions for simplifying thermodynamic calculations, such as computing Hugoniot curves or Riemann problem solutions. Ready availability of this library will enable reliable code-to-code testing of equation-of-state implementations, as well as providing a starting point for more rigorous verification work. EOSlib also provides a single, consistent API for its analytic and tabular EOS models, which simplifies the process of comparing models for a particular application.
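    EOSlib's actual API is not reproduced here; as a self-contained illustration of the kind of utility it wraps, the γ-law (ideal gas) Hugoniot centered on an initial state can be written directly (initial conditions are illustrative):

```python
def hugoniot_pressure(v2, v1, p1, gamma=1.4):
    """Pressure on the gamma-law (ideal gas) Hugoniot centered at (p1, v1):
    p2 = p1 * ((g+1)v1 - (g-1)v2) / ((g+1)v2 - (g-1)v1)."""
    num = (gamma + 1.0) * v1 - (gamma - 1.0) * v2
    den = (gamma + 1.0) * v2 - (gamma - 1.0) * v1
    return p1 * num / den

p1, v1 = 1.0e5, 1.0    # Pa, m^3/kg (illustrative initial state)
print(hugoniot_pressure(1.0, v1, p1))   # the initial state lies on its own Hugoniot
print(hugoniot_pressure(0.6, v1, p1))   # compression raises the pressure
```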

  3. Simplified paraboloid phase model-based phase tracker for demodulation of a single complex fringe.

    PubMed

    He, A; Deepan, B; Quan, C

    2017-09-01

    A regularized phase tracker (RPT) is an effective method for demodulation of single closed-fringe patterns. However, lengthy calculation time, specially designed scanning strategy, and sign-ambiguity problems caused by noise and saddle points reduce its effectiveness, especially for demodulating large and complex fringe patterns. In this paper, a simplified paraboloid phase model-based regularized phase tracker (SPRPT) is proposed. In SPRPT, first and second phase derivatives are pre-determined by the density-direction-combined method and discrete higher-order demodulation algorithm, respectively. Hence, cost function is effectively simplified to reduce the computation time significantly. Moreover, pre-determined phase derivatives improve the robustness of the demodulation of closed, complex fringe patterns. Thus, no specifically designed scanning strategy is needed; nevertheless, it is robust against the sign-ambiguity problem. The paraboloid phase model also assures better accuracy and robustness against noise. Both the simulated and experimental fringe patterns (obtained using electronic speckle pattern interferometry) are used to validate the proposed method, and a comparison of the proposed method with existing RPT methods is carried out. The simulation results show that the proposed method has achieved the highest accuracy with less computational time. The experimental result proves the robustness and the accuracy of the proposed method for demodulation of noisy fringe patterns and its feasibility for static and dynamic applications.

  4. COMPUTING SI AND CCPP USING SPREADSHEET PROGRAMS

    EPA Science Inventory

    Lotus 1-2-3 worksheets for calculating the calcite saturation index (SI) and calcium carbonate precipitation potential of a water sample are described. A simplified worksheet illustrates the principles of the method, and a more complex worksheet suitable for modeling most potabl...
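
    The calcite saturation index computed by such worksheets is commonly approximated with the Langelier formula SI = pH - pHs; the sketch below uses one widely quoted empirical approximation for pHs and is not necessarily identical to the EPA worksheet's method.

```python
import math

def langelier_si(ph, temp_c, tds_mg_l, ca_hardness_mg_l, alkalinity_mg_l):
    """Langelier (calcite) Saturation Index, SI = pH - pHs.

    Hardness and alkalinity are expressed as mg/L CaCO3.  SI > 0 suggests
    a tendency to precipitate CaCO3; SI < 0 a tendency to dissolve it.
    """
    a = (math.log10(tds_mg_l) - 1.0) / 10.0          # total dissolved solids term
    b = -13.12 * math.log10(temp_c + 273.0) + 34.55  # temperature term
    c = math.log10(ca_hardness_mg_l) - 0.4           # calcium term
    d = math.log10(alkalinity_mg_l)                  # alkalinity term
    phs = (9.3 + a + b) - (c + d)
    return ph - phs

si = langelier_si(ph=7.5, temp_c=25.0, tds_mg_l=200.0,
                  ca_hardness_mg_l=240.0, alkalinity_mg_l=180.0)
```

    For this (made-up) water sample the index is slightly positive, indicating mild scaling tendency.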

  5. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1984-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.
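
    The idea of recovering an inelastic stress-strain state from an elastic solution can be sketched with a Neuber-type local correction combined with a Ramberg-Osgood stress-strain curve. This is a generic illustration, not ANSYPM's actual algorithm, and the material constants below are made up.

```python
def neuber_correction(sigma_elastic, E=200e3, K=1200.0, n=0.1):
    """Estimate local elastoplastic stress from an elastic solution (MPa).

    Solves Neuber's rule  sigma * eps = sigma_e**2 / E  with the
    Ramberg-Osgood curve  eps = sigma/E + (sigma/K)**(1/n)  by bisection.
    """
    target = sigma_elastic ** 2 / E

    def strain(sigma):
        return sigma / E + (sigma / K) ** (1.0 / n)

    lo, hi = 0.0, sigma_elastic   # plasticity can only lower the local stress
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * strain(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = neuber_correction(600.0)   # elastic solution predicts 600 MPa
eps = sigma / 200e3 + (sigma / 1200.0) ** 10.0
```

    The corrected stress falls below the elastic prediction while the corresponding strain grows, which is the qualitative behavior the simplified program exploits.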

  6. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1985-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.

  7. Thermal Protection System Cavity Heating for Simplified and Actual Geometries Using Computational Fluid Dynamics Simulations with Unstructured Grids

    NASA Technical Reports Server (NTRS)

    McCloud, Peter L.

    2010-01-01

    Thermal Protection System (TPS) Cavity Heating is predicted using Computational Fluid Dynamics (CFD) on unstructured grids for both simplified cavities and actual cavity geometries. Validation was performed using comparisons to wind tunnel experimental results and CFD predictions using structured grids. Full-scale predictions were made for simplified and actual geometry configurations on the Space Shuttle Orbiter in a mission support timeframe.

  8. BCM: toolkit for Bayesian analysis of Computational Models using samplers.

    PubMed

    Thijssen, Bram; Dijkstra, Tjeerd M H; Heskes, Tom; Wessels, Lodewyk F A

    2016-10-21

    Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics; however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. BCM represents an efficient one-stop-shop for computational modelers wishing to use sampler-based Bayesian statistics.

  9. Indirect detection constraints on s- and t-channel simplified models of dark matter

    NASA Astrophysics Data System (ADS)

    Carpenter, Linda M.; Colburn, Russell; Goodman, Jessica; Linden, Tim

    2016-09-01

    Recent Fermi-LAT observations of dwarf spheroidal galaxies in the Milky Way have placed strong limits on the gamma-ray flux from dark matter annihilation. In order to produce the strongest limit on the dark matter annihilation cross section, the observations of each dwarf galaxy have typically been "stacked" in a joint-likelihood analysis, utilizing optical observations to constrain the dark matter density profile in each dwarf. These limits have typically been computed only for singular annihilation final states, such as b b ¯ or τ+τ- . In this paper, we generalize this approach by producing an independent joint-likelihood analysis to set constraints on models where the dark matter particle annihilates to multiple final-state fermions. We interpret these results in the context of the most popular simplified models, including those with s- and t-channel dark matter annihilation through scalar and vector mediators. We present our results as constraints on the minimum dark matter mass and the mediator sector parameters. Additionally, we compare our simplified model results to those of effective field theory contact interactions in the high-mass limit.

  10. The cost of simplifying air travel when modeling disease spread.

    PubMed

    Lessler, Justin; Kaufman, James H; Ford, Daniel A; Douglas, Judith V

    2009-01-01

    Air travel plays a key role in the spread of many pathogens. Modeling the long distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day), but for a few routes this rate is greatly underestimated by the pipe model. If our interest is in large-scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
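
    The "pipe" model described above can be sketched in a few lines: travelers departing an airport are routed to destinations in proportion to each destination's share of total arrivals, with no route-level detail. The airport names and counts below are made up.

```python
def pipe_model_flows(departures, arrivals):
    """Expected daily traveler flow between airports under a 'pipe' model.

    Individuals leave airport i according to its departure count and land
    at airport j with probability proportional to j's arrival count
    (excluding i itself); individual routes are not modeled.
    """
    flows = {}
    for i, dep in departures.items():
        total = sum(a for j, a in arrivals.items() if j != i)
        for j, arr in arrivals.items():
            if j != i:
                flows[(i, j)] = dep * arr / total
    return flows

departures = {"SEA": 100.0, "JFK": 300.0, "BWI": 50.0}
arrivals   = {"SEA": 120.0, "JFK": 280.0, "BWI": 50.0}
flows = pipe_model_flows(departures, arrivals)
```

    By construction the flows out of each airport sum to its departure count; what the model loses is exactly the route-level structure the paper's fully saturated model retains.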

  11. A simplified model for dynamics of cell rolling and cell-surface adhesion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cimrák, Ivan, E-mail: ivan.cimrak@fri.uniza.sk

    2015-03-10

    We propose a three-dimensional model for the adhesion and rolling of biological cells on surfaces. We study cells moving in shear flow above a wall to which they can adhere via specific receptor-ligand bonds based on receptors from the selectin as well as the integrin family. The computational fluid dynamics are governed by the lattice-Boltzmann method. The movement and the deformation of the cells are described by the immersed boundary method. Both methods are fully coupled by implementing a two-way fluid-structure interaction. The adhesion mechanism is modelled by adhesive bonds including stochastic rules for their creation and rupture. We explore a simplified model with a dissociation rate independent of the length of the bonds. We demonstrate that this model is able to capture mesoscopic properties, such as the velocity of rolling cells.

  12. Temperature and solute-transport simulation in streamflow using a Lagrangian reference frame

    USGS Publications Warehouse

    Jobson, Harvey E.

    1980-01-01

    A computer program for simulating one-dimensional, unsteady temperature and solute transport in a river has been developed and documented for general use. The solution approach to the convective-diffusion equation uses a moving reference frame (Lagrangian) which greatly simplifies the mathematics of the solution procedure and dramatically reduces errors caused by numerical dispersion. The model documentation is presented as a series of four programs of increasing complexity. The conservative transport model can be used to route a single conservative substance. The simplified temperature model is used to predict water temperature in rivers when only temperature and windspeed data are available. The complete temperature model is highly accurate but requires rather complete meteorological data. Finally, the 10-parameter model can be used to route as many as 10 interacting constituents through a river reach. (USGS)
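
    The Lagrangian idea above (track parcels of water rather than fixed grid points, so advection becomes simple bookkeeping and numerical dispersion disappears) can be sketched for a single constituent with first-order decay. The velocity and rate constant are illustrative, not values from the USGS program.

```python
import math

def route_parcel(x0, c0, u, decay_k, t_end, dt=1.0):
    """March one water parcel downstream in a Lagrangian reference frame.

    The parcel's position is simply advected (x += u*dt), so there is no
    numerical dispersion; first-order decay acts on its concentration.
    """
    x, c, t = x0, c0, 0.0
    while t < t_end:
        x += u * dt
        c *= math.exp(-decay_k * dt)
        t += dt
    return x, c

# One hour of travel at 0.5 m/s with a slow first-order decay.
x, c = route_parcel(x0=0.0, c0=10.0, u=0.5, decay_k=1.0e-4, t_end=3600.0)
```

    Because the frame moves with the water, the discrete solution matches the analytic solution c0*exp(-k*t) essentially to machine precision, which is the point of the approach.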

  13. A simplified gross thrust computing technique for an afterburning turbofan engine

    NASA Technical Reports Server (NTRS)

    Hamer, M. J.; Kurtenbach, F. J.

    1978-01-01

    A simplified gross thrust computing technique extended to the F100-PW-100 afterburning turbofan engine is described. The technique uses measured total and static pressures in the engine tailpipe and ambient static pressure to compute gross thrust. Empirically evaluated calibration factors account for three-dimensional effects, the effects of friction and mass transfer, and the effects of simplifying assumptions for solving the equations. Instrumentation requirements and the sensitivity of computed thrust to transducer errors are presented. NASA altitude facility tests on F100 engines (computed thrust versus measured thrust) are presented, and calibration factors obtained on one engine are shown to be applicable to the second engine by comparing the computed gross thrust. It is concluded that this thrust method is potentially suitable for flight test application and engine maintenance on production engines with a minimum amount of instrumentation.
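
    The pressure-based thrust idea can be sketched for the simplest case: a convergent nozzle with subcritical pressure ratio, ideal one-dimensional flow, and a single empirical calibration factor. The constants below are illustrative and unrelated to the F100's actual calibration.

```python
def gross_thrust(p_total, p_amb, area_m2, gamma=1.33, cal=1.0):
    """Ideal gross thrust from tailpipe total and ambient static pressure.

    Assumes a subcritical (unchoked) convergent nozzle expanding to
    ambient pressure, so thrust is pure momentum flux:
        F = cal * gamma * p_amb * Me**2 * A
    with the exit Mach number Me from the isentropic pressure ratio.
    """
    g = gamma
    me2 = (2.0 / (g - 1.0)) * ((p_total / p_amb) ** ((g - 1.0) / g) - 1.0)
    return cal * g * p_amb * me2 * area_m2

# Tailpipe total pressure 1.8x ambient through a 0.3 m^2 nozzle.
f = gross_thrust(p_total=1.8 * 101325.0, p_amb=101325.0, area_m2=0.3)
```

    In the actual method the factor `cal` absorbs three-dimensional, friction, and mass-transfer effects and is evaluated empirically; choked operation needs the critical-pressure branch, omitted here.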

  14. A computerized model for integrating the physical environmental factors into metropolitan landscape planning

    Treesearch

    Julius Gy Fabos; Kimball H. Ferris

    1977-01-01

    This paper justifies and illustrates (in simplified form) a landscape planning approach to the environmental management of the metropolitan landscape. The model utilizes a computerized assessment and mapping system, which exhibits a recent advancement in computer technology that allows for greater accuracy and the weighting of different values when mapping at the...

  15. Simplification of the Kalman filter for meteorological data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1991-01-01

    The paper proposes a new statistical method of data assimilation that is based on a simplification of the Kalman filter equations. The forecast error covariance evolution is approximated simply by advecting the mass-error covariance field, deriving the remaining covariances geostrophically, and accounting for external model-error forcing only at the end of each forecast cycle. This greatly reduces the cost of computation of the forecast error covariance. In simulations with a linear, one-dimensional shallow-water model and data generated artificially, the performance of the simplified filter is compared with that of the Kalman filter and the optimal interpolation (OI) method. The simplified filter produces analyses that are nearly optimal, and represents a significant improvement over OI.
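
    The cost trade-off can be seen even in a scalar toy problem: a full Kalman filter propagates the analysis variance through the model dynamics, while a simplified filter skips that amplification. This is only a one-variable caricature of the paper's covariance-advection scheme.

```python
def kf_cycle(p_analysis, model_gain, q, r, simplified=False):
    """One forecast/analysis cycle of a scalar Kalman filter.

    Full filter:       P_f = M**2 * P_a + Q
    Simplified filter: P_f = P_a + Q   (dynamics amplification ignored)
    Analysis update:   K = P_f/(P_f + R),  P_a = (1 - K) * P_f
    """
    if simplified:
        p_forecast = p_analysis + q
    else:
        p_forecast = model_gain ** 2 * p_analysis + q
    gain = p_forecast / (p_forecast + r)
    return (1.0 - gain) * p_forecast

pa_full = pa_simple = 1.0
for _ in range(20):
    pa_full = kf_cycle(pa_full, model_gain=1.1, q=0.1, r=0.5)
    pa_simple = kf_cycle(pa_simple, model_gain=1.1, q=0.1, r=0.5, simplified=True)
```

    With an amplifying model (M > 1) the simplified filter settles at a lower analysis variance than the full filter, i.e. it underestimates the error growth it neglects; the paper's scheme compensates by advecting the covariance field instead of dropping the dynamics entirely.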

  16. The role of the antecedent soil moisture condition on the distributed hydrologic modelling of the Toce alpine basin floods.

    NASA Astrophysics Data System (ADS)

    Ravazzani, G.; Montaldo, N.; Mancini, M.; Rosso, R.

    2003-04-01

    Event-based hydrologic models need the antecedent soil moisture condition as a critical initial boundary condition for flood simulation. Land-surface models (LSMs) have been developed to simulate mass and energy transfers, and to update the soil moisture condition through time from the solution of water and energy balance equations. They have recently been used in distributed hydrologic modeling for flood prediction systems. Recent developments have made LSMs more complex through the inclusion of more processes and controlling variables, increasing the number of parameters and the uncertainty of their estimates. This has also increased the computational burden and parameterization effort of distributed hydrologic models. In this study we investigate: 1) the role of soil moisture initial conditions in the modeling of Alpine basin floods; 2) the adequate complexity level of LSMs for the distributed hydrologic modeling of Alpine basin floods. The Toce basin is the case study; it is located in North Piedmont (Italian Alps) and has a total drainage area of 1534 km2 at the Candoglia section. Three distributed hydrologic models of different levels of complexity are developed and compared: two (TDLSM and SDLSM) are continuous models, while one (FEST02) is an event model based on the simplified SCS-CN method for rainfall abstractions. In the TDLSM model a two-layer LSM computes both saturation-excess and infiltration-excess runoff, and simulates the evolution of the water table spatial distribution using the topographic index; in the SDLSM model a simplified one-layer distributed LSM computes only Hortonian runoff and does not simulate the water table dynamics. All three hydrologic models simulate surface runoff propagation through the Muskingum-Cunge method. The TDLSM and SDLSM models have been applied over the two-year (1996 and 1997) simulation period, during which two major floods occurred, in November 1996 and June 1997.
The models have been calibrated and tested by comparing simulated and observed hydrographs at Candoglia. Sensitivity analyses of the models with respect to significant LSM parameters were also performed. The performances of the three models in the simulation of the two major floods are compared. Interestingly, the results indicate that the SDLSM model predicts the major floods of this Alpine basin sufficiently well; indeed, this model is a good compromise between the over-parameterized and overly complex TDLSM model and the over-simplified FEST02 model.
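
    The SCS-CN rainfall-abstraction step used by the FEST02 event model has a standard closed form, sketched here in metric units (the curve number is illustrative).

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth Q (mm) from storm rainfall P via the SCS-CN method.

    S  = 25400/CN - 254              (potential maximum retention, mm)
    Ia = ia_ratio * S                (initial abstraction)
    Q  = (P - Ia)**2 / (P - Ia + S)  for P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_cn_runoff(p_mm=100.0, cn=75.0)   # about 41 mm of direct runoff
```

    In an event model the antecedent soil moisture enters precisely through the choice of CN, which is why the initial condition matters so much for the flood peaks discussed above.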

  17. Design for inadvertent damage in composite laminates

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.; Chamis, Christos C.

    1992-01-01

    Simplified predictive methods and models to computationally simulate durability and damage in polymer matrix composite materials/structures are described. The models include (1) progressive fracture, (2) progressively damaged structural behavior, (3) progressive fracture in aggressive environments, (4) stress concentrations, and (5) impact resistance. Several examples are included to illustrate applications of the models and to identify significant parameters and sensitivities. Comparisons with limited experimental data are made.

  18. Simplified dynamic analysis to evaluate liquefaction-induced lateral deformation of earth slopes: a computational fluid dynamics approach

    NASA Astrophysics Data System (ADS)

    Jafarian, Yaser; Ghorbani, Ali; Ahmadi, Omid

    2014-09-01

    Lateral deformation of liquefiable soil is a cause of much damage during earthquakes, reportedly more than other forms of liquefaction-induced ground failure. Researchers have presented studies in which the liquefied soil is considered as a viscous fluid. In this manner, the liquefied soil behaves as a non-Newtonian fluid, whose viscosity decreases as the shear strain rate increases. The current study incorporates computational fluid dynamics to propose a simplified dynamic analysis for the liquefaction-induced lateral deformation of earth slopes. The numerical procedure involves a quasi-linear elastic model for small to moderate strains and a Bingham fluid model for large strain states during liquefaction. An iterative procedure is considered to estimate the strain-compatible shear stiffness of the soil. The post-liquefaction residual strength of the soil is considered as the initial Bingham viscosity. Performance of the numerical procedure is examined by using the results of centrifuge model and shaking table tests together with some field observations of lateral ground deformation. The results demonstrate that the proposed procedure predicts the time history of lateral ground deformation with a reasonable degree of precision.
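
    The Bingham behavior described above is commonly implemented in CFD codes through a regularized apparent viscosity (e.g. Papanastasiou's form) to avoid the singularity at zero shear rate; the yield stress and viscosity values below are illustrative, not calibrated soil parameters.

```python
import math

def bingham_apparent_viscosity(gamma_dot, tau_y=2000.0, mu_p=50.0, m=1000.0):
    """Regularized Bingham apparent viscosity (Papanastasiou form), Pa*s.

    tau    = tau_y * (1 - exp(-m * gamma_dot)) + mu_p * gamma_dot
    mu_app = tau / gamma_dot, finite as gamma_dot -> 0 (limit mu_p + tau_y*m).
    """
    if gamma_dot <= 0.0:
        return mu_p + tau_y * m
    return mu_p + tau_y * (-math.expm1(-m * gamma_dot)) / gamma_dot

low = bingham_apparent_viscosity(1.0e-12)   # near the zero-shear plateau
mid = bingham_apparent_viscosity(1.0)
high = bingham_apparent_viscosity(100.0)
```

    The apparent viscosity decreases with shear strain rate and tends to the plastic viscosity at high rates, which is exactly the non-Newtonian behavior the abstract describes for liquefied soil.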

  19. A 4DCT imaging-based breathing lung model with relative hysteresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce a smoothly deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.

  20. Statistical image reconstruction from correlated data with applications to PET

    PubMed Central

    Alessio, Adam; Sauer, Ken; Kinahan, Paul

    2008-01-01

    Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576
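
    The effect of a non-diagonal covariance weighting can be seen in the smallest possible case: the best linear combination of two correlated measurements. This two-point example only illustrates why ignoring correlations (plain inverse-variance weighting) is suboptimal; it is not the paper's reconstruction algorithm.

```python
def blue_weights(var1, var2, cov):
    """Best-linear-unbiased weights for averaging two correlated measurements.

    Minimizes the variance of w1*y1 + w2*y2 subject to w1 + w2 = 1:
        w1 ∝ var2 - cov,  w2 ∝ var1 - cov.
    Returns (w1, w2, variance_of_estimate).
    """
    denom = var1 + var2 - 2.0 * cov
    w1 = (var2 - cov) / denom
    w2 = (var1 - cov) / denom
    var_est = (var1 * var2 - cov ** 2) / denom
    return w1, w2, var_est

# Uncorrelated case reduces to inverse-variance weighting...
w1, w2, v = blue_weights(1.0, 4.0, 0.0)
# ...while positive correlation shifts even more weight to the better channel.
w1c, w2c, vc = blue_weights(1.0, 4.0, 1.0)
```

    A penalized weighted least-squares reconstruction generalizes this to a full covariance matrix as the weighting term, which is why the paper's banded/low-dimensional correlation models keep the computation tractable.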

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marinov, N.M.; Westbrook, C.K.; Cloutman, L.D.

    Work being carried out at LLNL has concentrated on studies of the role of chemical kinetics in a variety of problems related to hydrogen combustion in practical combustion systems, with an emphasis on vehicle propulsion. Use of hydrogen offers significant advantages over fossil fuels, and computer modeling provides advantages when used in concert with experimental studies. Many numerical "experiments" can be carried out quickly and efficiently, reducing the cost and time of system development, and many new and speculative concepts can be screened to identify those with sufficient promise to pursue experimentally. This project uses chemical kinetic and fluid dynamic computational modeling to examine the combustion characteristics of systems burning hydrogen, either as the only fuel or mixed with natural gas. Oxidation kinetics are combined with pollutant formation kinetics, including formation of oxides of nitrogen but also including air toxics in natural gas combustion. We have refined many of the elementary kinetic reaction steps in the detailed reaction mechanism for hydrogen oxidation. To extend the model to pressures characteristic of internal combustion engines, it was necessary to apply theoretical pressure falloff formalisms for several key steps in the reaction mechanism. We have continued development of simplified reaction mechanisms for hydrogen oxidation, we have implemented those mechanisms into multidimensional computational fluid dynamics models, and we have used models of chemistry and fluid dynamics to address selected application problems. At the present time, we are using computed high-pressure flame and auto-ignition data to further refine the simplified kinetics models that are then to be used in multidimensional fluid mechanics models. Detailed kinetics studies have investigated hydrogen flames and ignition of hydrogen behind shock waves, intended to refine the detailed reaction mechanisms.

  2. Aneesur Rahman Prize Talk

    NASA Astrophysics Data System (ADS)

    Frenkel, Daan

    2007-03-01

    During the past decade there has been a unique synergy between theory, experiment and simulation in Soft Matter Physics. In colloid science, computer simulations that started out as studies of highly simplified model systems, have acquired direct experimental relevance because experimental realizations of these simple models can now be synthesized. Whilst many numerical predictions concerning the phase behavior of colloidal systems have been vindicated by experiments, the jury is still out on others. In my talk I will discuss some of the recent technical developments, new findings and open questions in computational soft-matter science.

  3. User's guide for a large signal computer model of the helical traveling wave tube

    NASA Technical Reports Server (NTRS)

    Palmer, Raymond W.

    1992-01-01

    The use of a successful large-signal, two-dimensional (axisymmetric), deformable-disk computer model of the helical traveling wave tube amplifier is described; the model is an extensively revised and operationally simplified version. We also discuss program input and output and the auxiliary files necessary for operation. Included is a sample problem with its input data and output results. Interested parties may now obtain from the author the FORTRAN source code, auxiliary files, and sample input data on a standard floppy diskette, the contents of which are described herein.

  4. Simplified jet-A kinetic mechanism for combustor application

    NASA Technical Reports Server (NTRS)

    Lee, Chi-Ming; Kundu, Krishna; Ghorashi, Bahman

    1993-01-01

    Successful modeling of combustion and emissions in gas turbine engine combustors requires an adequate description of the reaction mechanism. For hydrocarbon oxidation, detailed mechanisms are only available for the simplest types of hydrocarbons such as methane, ethane, acetylene, and propane. These detailed mechanisms contain a large number of chemical species participating simultaneously in many elementary kinetic steps. Current computational fluid dynamic (CFD) models must include fuel vaporization, fuel-air mixing, chemical reactions, and complicated boundary geometries. To simulate these conditions, a very sophisticated computer model is required, demanding large memory capacity and long run times. Therefore, gas turbine combustion modeling has frequently been simplified by using global reaction mechanisms, which can predict only the quantities of interest: heat release rates, flame temperature, and emissions. Jet fuels are wide-boiling-range hydrocarbons with ranges extending through those of gasoline and kerosene. These fuels are chemically complex, often containing more than 300 components. Jet fuel typically can be characterized as containing 70 vol pct paraffin compounds and 25 vol pct aromatic compounds. A five-step Jet-A fuel mechanism which involves pyrolysis and subsequent oxidation of paraffin and aromatic compounds is presented here. This mechanism is verified by comparing it with Jet-A fuel ignition delay time experimental data and species concentrations obtained from flametube experiments. This five-step mechanism appears to be better than the current one- and two-step mechanisms.
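
    A global (single- or few-step) mechanism of the kind discussed above reduces each step to an Arrhenius rate law with empirical reaction orders. The sketch below integrates one such global fuel-oxidation step with made-up constants; it is not the paper's actual five-step Jet-A mechanism.

```python
import math

R_UNIV = 8.314  # J/(mol*K)

def global_rate(conc_fuel, conc_o2, temp_k,
                a=1.0e9, ea=1.3e5, order_f=1.0, order_o2=1.5):
    """Global one-step fuel-consumption rate d[fuel]/dt (Arrhenius form)."""
    k = a * math.exp(-ea / (R_UNIV * temp_k))
    return -k * conc_fuel ** order_f * conc_o2 ** order_o2

# Explicit-Euler burn at constant temperature with (for simplicity) fixed O2.
fuel, o2, temp_k, dt = 1.0, 5.0, 1800.0, 1.0e-7
history = [fuel]
for _ in range(1000):
    fuel = max(fuel + dt * global_rate(fuel, o2, temp_k), 0.0)
    history.append(fuel)
```

    This is exactly the kind of cheap source term a CFD combustor model evaluates per cell per time step, which is why global mechanisms remain attractive despite their limited fidelity.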

  5. Simplified methods for real-time prediction of storm surge uncertainty: The city of Venice case study

    NASA Astrophysics Data System (ADS)

    Mel, Riccardo; Viero, Daniele Pietro; Carniello, Luca; Defina, Andrea; D'Alpaos, Luigi

    2014-09-01

    Providing reliable and accurate storm surge forecasts is important for a wide range of problems related to coastal environments. In order to adequately support decision-making processes, it has also become increasingly important to be able to estimate the uncertainty associated with the storm surge forecast. The procedure commonly adopted to do this uses the results of a hydrodynamic model forced by a set of different meteorological forecasts; however, this approach requires a considerable, if not prohibitive, computational cost for real-time application. In the present paper we propose two simplified methods for estimating the uncertainty affecting storm surge prediction with moderate computational effort. In the first approach we use a computationally fast, statistical tidal model instead of a hydrodynamic numerical model to estimate storm surge uncertainty. The second approach is based on the observation that the uncertainty in the sea level forecast mainly stems from the uncertainty affecting the meteorological fields; this has led to the idea of estimating forecast uncertainty via a linear combination of suitable meteorological variances, directly extracted from the meteorological fields. The proposed methods were applied to estimate the uncertainty in the storm surge forecast in the Venice Lagoon. The results clearly show that the uncertainty estimated through a linear combination of suitable meteorological variances nicely matches the one obtained using the deterministic approach and overcomes some intrinsic limitations in the use of a statistical tidal model.
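
    The second approach above boils down to a variance budget: the forecast variance of the surge is approximated as a linear combination of meteorological field variances with pre-fitted coefficients. The coefficients and variances below are entirely hypothetical.

```python
import math

def surge_std(met_variances, coefficients):
    """Storm-surge forecast standard deviation from meteorological variances.

    Var(surge) ≈ sum_i c_i * Var(met_i); the coefficients c_i would be
    fitted offline against a full ensemble of hydrodynamic model runs.
    """
    var = sum(coefficients[name] * v for name, v in met_variances.items())
    return math.sqrt(var)

coeffs = {"wind_speed": 0.02, "pressure": 0.01}    # hypothetical fitted values
variances = {"wind_speed": 4.0, "pressure": 9.0}   # from the met ensemble
sigma = surge_std(variances, coeffs)               # metres
```

    Once the coefficients are fitted, each real-time uncertainty estimate costs a handful of multiplications instead of an ensemble of hydrodynamic runs, which is the computational saving the paper targets.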

  6. Propulsive efficiency of frog swimming with different feet and swimming patterns

    PubMed Central

    Jizhuang, Fan; Wei, Zhang; Bowen, Yuan; Gangfeng, Liu

    2017-01-01

    Aquatic and terrestrial animals have different swimming performances and mechanical efficiencies based on their different swimming methods. To explore propulsion in swimming frogs, this study calculated mechanical efficiencies based on data describing aquatic and terrestrial webbed-foot shapes and swimming patterns. First, a simplified frog model and dynamic equation were established, and hydrodynamic forces on the foot were computed according to computational fluid dynamic calculations. Then, a two-link mechanism was used to stand in for the diverse and complicated hind legs found in different frog species, in order to simplify the input work calculation. Joint torques were derived based on the virtual work principle to compute the efficiency of foot propulsion. Finally, the two feet and swimming patterns were combined to compute propulsive efficiency. The aquatic frog demonstrated a propulsive efficiency (43.11%) between those of drag-based and lift-based propulsions, while the terrestrial frog efficiency (29.58%) fell within the range of drag-based propulsion. The results illustrate that swimming pattern is the main factor determining swimming performance and efficiency. PMID:28302669
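
    The efficiency computed in the study is, in essence, useful propulsive work over joint input work. A minimal discrete version over one stroke, with made-up force, velocity, and power samples, looks like:

```python
def propulsive_efficiency(thrust, velocity, input_power, dt):
    """Propulsive efficiency over one stroke.

    eta = (integral of thrust*velocity dt) / (integral of input power dt),
    approximated with rectangular sums over sampled time series.
    """
    useful = sum(t * v for t, v in zip(thrust, velocity)) * dt
    supplied = sum(input_power) * dt
    return useful / supplied

# Constant 2 N of thrust at 0.5 m/s against 3 W of joint power.
eta = propulsive_efficiency([2.0] * 10, [0.5] * 10, [3.0] * 10, dt=0.01)
```

    In the paper the numerator comes from CFD foot forces and the denominator from the two-link joint torques; the ratio itself is this simple.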

  7. Analyzing Power Supply and Demand on the ISS

    NASA Technical Reports Server (NTRS)

    Thomas, Justin; Pham, Tho; Halyard, Raymond; Conwell, Steve

    2006-01-01

    Station Power and Energy Evaluation Determiner (SPEED) is a Java application program for analyzing the supply and demand aspects of the electrical power system of the International Space Station (ISS). SPEED can be executed on any computer that supports version 1.4 or a subsequent version of the Java Runtime Environment. SPEED includes an analysis module, denoted the Simplified Battery Solar Array Model, which is a simplified engineering model of the ISS primary power system. This simplified model makes it possible to perform analyses quickly. SPEED also includes a user-friendly graphical-interface module, an input file system, a parameter-configuration module, an analysis-configuration-management subsystem, and an output subsystem. SPEED responds to input information on trajectory, shadowing, attitude, and pointing in either a state-of-charge mode or a power-availability mode. In the state-of-charge mode, SPEED calculates battery state-of-charge profiles, given a time-varying power-load profile. In the power-availability mode, SPEED determines the time-varying total available solar array and/or battery power output, given a minimum allowable battery state of charge.
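
    The state-of-charge mode described above amounts to integrating the supply-demand imbalance over the battery capacity. A one-battery sketch follows; the numbers are illustrative, not ISS values, and the model ignores the solar-array and orbital details SPEED handles.

```python
def soc_profile(soc0, supply_w, load_w, capacity_wh, dt_h,
                charge_eff=0.95, soc_min=0.0, soc_max=1.0):
    """Battery state-of-charge trace given supply and load power profiles.

    Surplus power charges the battery (with efficiency charge_eff);
    deficits discharge it.  SOC is clipped to [soc_min, soc_max].
    """
    soc, trace = soc0, [soc0]
    for p_sup, p_load in zip(supply_w, load_w):
        net = p_sup - p_load            # W; positive charges, negative discharges
        if net > 0:
            net *= charge_eff
        soc = min(max(soc + net * dt_h / capacity_wh, soc_min), soc_max)
        trace.append(soc)
    return trace

# One hour in one-minute steps: a constant 200 W deficit on a 1 kWh battery.
trace = soc_profile(0.9, [300.0] * 60, [500.0] * 60,
                    capacity_wh=1000.0, dt_h=1.0 / 60.0)
```

    The power-availability mode is the inverse problem: given a minimum allowable SOC, solve for the largest load profile the same bookkeeping permits.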

  8. A multiple-time-scale turbulence model based on variable partitioning of turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1988-01-01

    The paper presents a multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method. Consideration is given to a class of turbulent boundary layer flows and of separated and/or swirling elliptic turbulent flows. For the separated and/or swirling turbulent flows, the present turbulence model yielded significantly improved computational results over those obtained with the standard k-epsilon turbulence model.

  9. Fiber Composite Sandwich Thermostructural Behavior: Computational Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Aiello, R. A.; Murthy, P. L. N.

    1986-01-01

    Several computational levels of progressive sophistication/simplification are described to computationally simulate composite sandwich hygral, thermal, and structural behavior. The computational levels of sophistication include: (1) three-dimensional detailed finite element modeling of the honeycomb, the adhesive and the composite faces; (2) three-dimensional finite element modeling of the honeycomb assumed to be an equivalent continuous, homogeneous medium, the adhesive and the composite faces; (3) laminate theory simulation where the honeycomb (metal or composite) is assumed to consist of plies with equivalent properties; and (4) derivations of approximate, simplified equations for thermal and mechanical properties by simulating the honeycomb as an equivalent homogeneous medium. The approximate equations are combined with composite hygrothermomechanical and laminate theories to provide a simple and effective computational procedure for simulating the thermomechanical/thermostructural behavior of fiber composite sandwich structures.

  10. Turbulent Dispersion Modelling in a Complex Urban Environment - Data Analysis and Model Development

    DTIC Science & Technology

    2010-02-01

Technology Laboratory (Dstl) is used as a benchmark for comparison. Comparisons are also made with some more practically oriented computational fluid dynamics...predictions. To achieve clarity in the range of approaches available for practical models of contaminant dispersion in urban areas, an overview of...complexity of those methods is simplified to a degree that allows straightforward practical implementation and application. Using these results as a

  11. Simplified methods for computing total sediment discharge with the modified Einstein procedure

    USGS Publications Warehouse

    Colby, Bruce R.; Hubbell, David Wellington

    1961-01-01

A procedure was presented in 1950 by H. A. Einstein for computing the total discharge of sediment particles of sizes that are in appreciable quantities in the stream bed. This procedure was modified by the U.S. Geological Survey and adapted to computing the total sediment discharge of a stream on the basis of samples of bed sediment, depth-integrated samples of suspended sediment, streamflow measurements, and water temperature. This paper gives simplified methods for computing total sediment discharge by the modified Einstein procedure. Each of four nomographs appreciably simplifies a major step in the computations. Within the stated limitations, use of the nomographs introduces much less error than is present in either the basic data or the theories on which the computations of total sediment discharge are based. The results are nearly as accurate mathematically as those that could be obtained from the longer and more complex arithmetic and algebraic computations of the Einstein procedure.
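For context, the measured suspended-load input that feeds the modified Einstein procedure comes from streamflow and a depth-integrated concentration sample via the standard USGS unit conversion; the full nomograph procedure for total load is beyond a short sketch.

```python
def suspended_sediment_discharge(q_cfs, conc_mg_l):
    """Suspended-sediment discharge in tons/day from water discharge in
    ft^3/s and concentration in mg/L; 0.0027 is the unit-conversion factor."""
    return 0.0027 * q_cfs * conc_mg_l

# 1000 ft^3/s carrying 200 mg/L -> about 540 tons/day
print(suspended_sediment_discharge(1000.0, 200.0))
```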

  12. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e.g., metal bearings, a rigid fiberglass torso, flexible cloth limbs, and rubber-coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry such as chemical or fire protective clothing. In summary, the approach provides a moderate-fidelity, usable tool which will run on current notebook computers.
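The enlarge-and-restrict approach lends itself to a simple sketch: each joint angle of the human model is clamped to the suit's allowed range of motion. The joint names and limits below are illustrative placeholders, not values from the converted model.

```python
# Illustrative suited range-of-motion limits, degrees (placeholders)
SUIT_ROM_DEG = {
    "shoulder_flexion": (-30.0, 120.0),
    "elbow_flexion": (0.0, 110.0),
    "knee_flexion": (0.0, 90.0),
}

def clamp_to_suit_rom(posture_deg):
    """Clamp each joint angle of a posture to the suited range of motion;
    joints without a listed limit fall back to a full +/-180 degree range."""
    clamped = {}
    for joint, angle in posture_deg.items():
        lo, hi = SUIT_ROM_DEG.get(joint, (-180.0, 180.0))
        clamped[joint] = min(hi, max(lo, angle))
    return clamped

print(clamp_to_suit_rom({"elbow_flexion": 150.0, "knee_flexion": 45.0}))
```

The same table-driven clamping would serve for other restrictive garments, e.g. fire protective clothing, by swapping in a different limits table.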

  13. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
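The Karhunen-Loève parameter transformation mentioned above can be sketched as an eigendecomposition of the prior parameter covariance: calibration then adjusts a few leading "super parameters" instead of thousands of native ones. The covariance matrix and parameter vector below are invented for illustration.

```python
import numpy as np

def kl_basis(prior_cov, n_modes):
    """Columns are the leading Karhunen-Loeve directions of the prior covariance."""
    w, v = np.linalg.eigh(prior_cov)     # symmetric matrix: eigenvalues ascending
    order = np.argsort(w)[::-1]          # re-sort descending by variance
    return v[:, order[:n_modes]]

# Toy prior: two correlated parameter pairs
cov = np.array([[4.0, 2.0, 0.0, 0.0],
                [2.0, 3.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.5],
                [0.0, 0.0, 0.5, 1.0]])
basis = kl_basis(cov, 2)                 # keep 2 "super parameters"
params = np.array([1.0, 0.5, -0.2, 0.3])
reduced = basis.T @ params               # project native parameters to the subspace
approx = basis @ reduced                 # back-project; the residual is the truncated part
print(approx)
```

The truncated directions are exactly where calibration cannot adjust the model, which is one way prediction-specific bias hides from the modeler.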

  14. A fast analytical undulator model for realistic high-energy FEL simulations

    NASA Astrophysics Data System (ADS)

    Tatchyn, R.; Cremer, T.

    1997-02-01

    A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.
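As a minimal example of the kind of analytical model discussed, an ideal planar undulator is fully described on axis by a sinusoidal field plus the standard deflection parameter K = 0.934 · B0[T] · λu[cm]. The field amplitude and period below are illustrative values, not those of any particular device.

```python
import math

def b_field(z_m, b0_t=1.0, lambda_u_m=0.03):
    """Ideal on-axis vertical field of a planar undulator, in tesla."""
    return b0_t * math.sin(2.0 * math.pi * z_m / lambda_u_m)

def k_parameter(b0_t, lambda_u_m):
    """Dimensionless undulator deflection parameter K = 0.934 * B0[T] * lambda_u[cm]."""
    return 0.934 * b0_t * (lambda_u_m * 100.0)

# B0 = 1 T, lambda_u = 3 cm
print(k_parameter(1.0, 0.03))
```

A realistic model of the kind the paper describes replaces the pure sinusoid with terms capturing pole-to-pole field errors while keeping evaluation cheap enough for point-to-point integration.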

  15. Causal learning with local computations.

    PubMed

    Fernbach, Philip M; Sloman, Steven A

    2009-05-01

The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require relatively small amounts of data, and need not respect normative prescriptions, as inferences that are principled locally may violate those principles when combined. Over a series of 3 experiments, the authors found (a) systematic inferences from small amounts of data; (b) systematic inference of extraneous causal links; (c) influence of data presentation order on inferences; and (d) error reduction through pretraining. Without pretraining, a model based on local computations fitted the data better than a Bayesian structural inference model. The data suggest that local computations serve as a heuristic for learning causal structure. Copyright 2009 APA, all rights reserved.

  16. CFD Assessment of Forward Booster Separation Motor Ignition Overpressure on ET XT 718 Ice/Frost Ramp

    NASA Technical Reports Server (NTRS)

    Tejnil, Edward; Rogers, Stuart E.

    2012-01-01

Computational fluid dynamics assessment of the forward booster separation motor ignition over-pressure was performed on the space shuttle external tank X(sub T) 718 ice/frost ramp using the flow solver OVERFLOW. The main objective of this study was the investigation of the over-pressure during solid rocket booster separation and its effect on the local pressure and air-load environments. Delta pressure and plume impingement were investigated as possible contributing factors to the debris loss on shuttle missions STS-125 and STS-127. A simplified computational model of the Space Shuttle Launch Vehicle was developed consisting of just the external tank and the solid rocket boosters with separation motor nozzles and plumes. The simplified model was validated by comparison to a full-fidelity computational model of the Space Shuttle without the separation motors. Quasi-steady-state plume solutions were used to calibrate the thrust of the separation motors. Time-accurate simulations of the firing of the booster-separation motors were performed. Parametric studies of the time-step size and the number of sub-iterations were used to find the best converged solution. The computed solutions were compared to previous OVERFLOW steady-state runs of the separation motors with reaction control system jets and to ground test data. The results indicated that the delta pressure from the overpressure was small and within design limits, and thus was unlikely to have contributed to the foam losses.

  17. Receiving water quality assessment: comparison between simplified and detailed integrated urban modelling approaches.

    PubMed

    Mannina, Giorgio; Viviani, Gaspare

    2010-01-01

Urban water quality management often requires use of numerical models allowing the evaluation of the cause-effect relationship between the input(s) (i.e. rainfall, pollutant concentrations on catchment surface and in sewer system) and the resulting water quality response. The conventional approach to the system (i.e. sewer system, wastewater treatment plant and receiving water body), considering each component separately, does not enable optimisation of the whole system. However, recent gains in understanding and modelling make it possible to represent the system as a whole and optimise its overall performance. Indeed, there is growing interest in integrated urban drainage modelling tools that can cope with Water Framework Directive requirements. Two different approaches can be employed for modelling the whole urban drainage system: detailed and simplified. Each has its advantages and disadvantages. Specifically, detailed approaches can offer a higher level of reliability in the model results, but can be very time-consuming from the computational point of view. Simplified approaches are faster but may lead to greater model uncertainty due to over-simplification. To gain insight into the above problem, two different modelling approaches have been compared with respect to their uncertainty. The first urban drainage integrated model approach uses the Saint-Venant equations and the 1D advection-dispersion equations, for the quantity and the quality aspects, respectively. The second model approach consists of the simplified reservoir model. The analysis used a parsimonious bespoke model developed in previous studies. For the uncertainty analysis, the Generalised Likelihood Uncertainty Estimation (GLUE) procedure was used. Model reliability was evaluated on the basis of the capacity to globally limit the uncertainty. Both models have a good capability to fit the experimental data, suggesting that the adopted approaches are equivalent for both quantity and quality. 
The detailed model approach is more robust and presents less uncertainty in terms of uncertainty bands. On the other hand, the simplified river water quality model approach shows higher uncertainty and may be unsuitable for receiving water body quality assessment.

  18. GPU COMPUTING FOR PARTICLE TRACKING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiroshi; Song, Kai; Muriki, Krishna

    2011-03-25

This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize the accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges from introducing GPU are also discussed. General-purpose computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel feature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multiprocessors (MP) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
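The thread-per-particle idea can be sketched outside CUDA by letting each array lane play the role of a thread: all particles advance in lockstep through a one-turn map, and survival after many turns delineates the dynamic aperture. The quadratic kick below is a toy stand-in for the real Tracy++ lattice, and the aperture limit is arbitrary.

```python
import numpy as np

def track(x0, p0, n_turns=1000, limit=2.0):
    """Advance all particles in lockstep; return a per-particle survival mask.

    Lost particles are frozen in place, mimicking a thread that has
    finished while the rest of the grid keeps running.
    """
    x, p = x0.copy(), p0.copy()
    alive = np.ones_like(x, dtype=bool)
    for _ in range(n_turns):
        x = np.where(alive, x + 0.1 * p, x)
        p = np.where(alive, p - 0.1 * (x - x * x), p)  # toy nonlinear kick
        alive &= np.abs(x) < limit
    return alive

# Launch amplitudes from the closed orbit outward; large ones are lost
amps = np.array([0.0, 0.2, 0.4, 1.5])
alive = track(amps, np.zeros_like(amps))
print(alive)
```

On a real GPU each lane would be a CUDA thread and the turn loop would live inside the kernel; the data layout (one particle per lane, no inter-particle coupling) is what makes the problem embarrassingly parallel.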

  19. Integrated Electronic Warfare System Advanced Development Model (ADM); Appendix 1 - Functional Requirement Specification.

    DTIC Science & Technology

    1977-10-01

OCR residue of the report's revision/approval sheet and front matter; the abstract itself is not recoverable. Legible fragments list deliverable documents (a design document (CDBDD), the Computer Program Package (CPP), the Computer Program Operator's Manual (CPOM), and the Computer Program Test Plan (CPTPL)) and two figures: the IEWS Simplified Block Diagram and the System Controller Architecture.

  20. Inverse kinematics of a dual linear actuator pitch/roll heliostat

    NASA Astrophysics Data System (ADS)

    Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh

    2017-06-01

    This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.
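A hedged sketch of the closed-form solution: the mirror normal bisects the sun and target directions, and pitch/roll follow from inverting the mount rotation. The axis conventions used here (mirror normal at +z, pitch about y, roll about x) are assumptions for the example, not necessarily the paper's, and the actuator-length step is omitted.

```python
import math

def mirror_normal(sun, target):
    """Unit bisector of the unit sun and target direction vectors."""
    bx, by, bz = (s + t for s, t in zip(sun, target))
    norm = math.sqrt(bx * bx + by * by + bz * bz)
    return (bx / norm, by / norm, bz / norm)

def pitch_roll(normal):
    """Invert n = Ry(pitch) @ Rx(roll) @ (0, 0, 1):
    n = (sin(pitch)cos(roll), -sin(roll), cos(pitch)cos(roll))."""
    nx, ny, nz = normal
    roll = -math.asin(ny)
    pitch = math.atan2(nx, nz)
    return pitch, roll

# Sun overhead (+z), target along +x: mirror tilts 45 degrees in pitch
n = mirror_normal((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
print(pitch_roll(n))
```

With pitch and roll in hand, each linear actuator length follows from the geometry of its two attachment points under the corresponding rotation.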

  1. Vertically-integrated Approaches for Carbon Sequestration Modeling

    NASA Astrophysics Data System (ADS)

    Bandilla, K.; Celia, M. A.; Guo, B.

    2015-12-01

    Carbon capture and sequestration (CCS) is being considered as an approach to mitigate anthropogenic CO2 emissions from large stationary sources such as coal fired power plants and natural gas processing plants. Computer modeling is an essential tool for site design and operational planning as it allows prediction of the pressure response as well as the migration of both CO2 and brine in the subsurface. Many processes, such as buoyancy, hysteresis, geomechanics and geochemistry, can have important impacts on the system. While all of the processes can be taken into account simultaneously, the resulting models are computationally very expensive and require large numbers of parameters which are often uncertain or unknown. In many cases of practical interest, the computational and data requirements can be reduced by choosing a smaller domain and/or by neglecting or simplifying certain processes. This leads to a series of models with different complexity, ranging from coupled multi-physics, multi-phase three-dimensional models to semi-analytical single-phase models. Under certain conditions the three-dimensional equations can be integrated in the vertical direction, leading to a suite of two-dimensional multi-phase models, termed vertically-integrated models. These models are either solved numerically or simplified further (e.g., assumption of vertical equilibrium) to allow analytical or semi-analytical solutions. This presentation focuses on how different vertically-integrated models have been applied to the simulation of CO2 and brine migration during CCS projects. Several example sites, such as the Illinois Basin and the Wabamun Lake region of the Alberta Basin, are discussed to show how vertically-integrated models can be used to gain understanding of CCS operations.

  2. A geostationary Earth orbit satellite model using Easy Java Simulation

    NASA Astrophysics Data System (ADS)

    Wee, Loo Kang; Hwee Goh, Giam

    2013-01-01

    We develop an Easy Java Simulation (EJS) model for students to visualize geostationary orbits near Earth, modelled using a Java 3D implementation of the EJS 3D library. The simplified physics model is described and simulated using a simple constant angular velocity equation. We discuss four computer model design ideas: (1) a simple and realistic 3D view and associated learning in the real world; (2) comparative visualization of permanent geostationary satellites; (3) examples of non-geostationary orbits of different rotation senses, periods and planes; and (4) an incorrect physics model for conceptual discourse. General feedback from the students has been relatively positive, and we hope teachers will find the computer model useful in their own classes.
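The "simple constant angular velocity equation" of the model reduces to θ(t) = θ0 + ωt, with ω fixed by one sidereal day and the satellite at the geostationary radius of about 42164 km. A sketch (in Python rather than the EJS/Java of the model itself):

```python
import math

SIDEREAL_DAY_S = 86164.0905                 # one rotation of the Earth
OMEGA = 2.0 * math.pi / SIDEREAL_DAY_S      # rad/s, about 7.29e-5
R_GEO_KM = 42164.0                          # geostationary orbital radius

def position(t_s, theta0=0.0):
    """Equatorial-plane x, y (km) of the satellite at time t."""
    theta = theta0 + OMEGA * t_s
    return R_GEO_KM * math.cos(theta), R_GEO_KM * math.sin(theta)

x, y = position(SIDEREAL_DAY_S / 4.0)       # a quarter revolution later
print(x, y)
```

Because ω matches the Earth's rotation, the satellite's longitude is constant, which is exactly the visual point the EJS model makes.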

  3. Effects of shock on hypersonic boundary layer stability

    NASA Astrophysics Data System (ADS)

    Pinna, F.; Rambaud, P.

    2013-06-01

    The design of hypersonic vehicles requires the estimate of the laminar to turbulent transition location for an accurate sizing of the thermal protection system. Linear stability theory is a fast scientific way to study the problem. Recent improvements in computational capabilities allow computing the flow around a full vehicle instead of using only simplified boundary layer equations. In this paper, the effect of the shock is studied on a mean flow provided by steady Computational Fluid Dynamics (CFD) computations and simplified boundary layer calculations.

  4. Modeling the evolution of channel shape: Balancing computational efficiency with hydraulic fidelity

    USGS Publications Warehouse

    Wobus, C.W.; Kean, J.W.; Tucker, G.E.; Anderson, R. Scott

    2008-01-01

The cross-sectional shape of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate evolution of channel cross-sectional geometry. However, fully two-dimensional (2-D) flow models are too computationally expensive to implement in large-scale landscape evolution models, while available simple empirical relationships between width and discharge do not adequately capture the dynamics of channel adjustment. We have developed a simplified 2-D numerical model of channel evolution in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Erosion is assumed to be proportional to boundary shear stress, which is calculated using an approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local channel bed. Model predictions of the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ~3%, and the predicted peak shear stress is consistent to within ~7%. Furthermore, the shear stress distributions predicted by our model compare favorably with available laboratory measurements for prescribed channel shapes. A modification to our simplified code in which the flow includes a high-velocity core allows the model to be extended to estimate shear stress distributions in channels with large width-to-depth ratios. 
Our model is efficient enough to incorporate into large-scale landscape evolution codes and can be used to examine how channels adjust both cross-sectional shape and slope in response to tectonic and climatic forcing. Copyright 2008 by the American Geophysical Union.
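The flow closure described above can be miniaturized: a law-of-the-wall profile is assumed along each bed-normal ray, the local boundary shear stress follows from the fitted shear velocity, and detachment-limited erosion is taken proportional to that stress. The roughness height and erodibility constant below are illustrative, not values from the paper.

```python
import math

RHO = 1000.0      # water density, kg/m^3
KAPPA = 0.41      # von Karman constant

def log_profile_velocity(z_m, u_star, z0_m):
    """Law-of-the-wall velocity (m/s) at height z above the bed:
    u(z) = (u*/kappa) * ln(z/z0)."""
    return (u_star / KAPPA) * math.log(z_m / z0_m)

def boundary_shear_stress(u_star):
    """tau = rho * u_star**2, in Pa."""
    return RHO * u_star ** 2

def erosion_rate(u_star, k_e=1e-6):
    """Detachment-limited erosion rate (m/s), proportional to shear stress;
    k_e is an illustrative erodibility constant."""
    return k_e * boundary_shear_stress(u_star)

# u* = 0.1 m/s: 10 Pa of boundary shear stress
print(boundary_shear_stress(0.1), log_profile_velocity(1.0, 0.1, 0.001))
```

In the full model this evaluation is repeated along every bed-normal ray of the cross section, which is what keeps the scheme cheap relative to a ray-isovel solution.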

  5. Simplified realistic human head model for simulating Tumor Treating Fields (TTFields).

    PubMed

    Wenger, Cornelia; Bomzon, Ze'ev; Salvador, Ricardo; Basser, Peter J; Miranda, Pedro C

    2016-08-01

Tumor Treating Fields (TTFields) are alternating electric fields in the intermediate frequency range (100-300 kHz) and of low intensity (1-3 V/cm). TTFields are an anti-mitotic treatment against solid tumors, approved for Glioblastoma Multiforme (GBM) patients. These electric fields are induced non-invasively by transducer arrays placed directly on the patient's scalp. Cell culture experiments showed that treatment efficacy is dependent on the induced field intensity. In clinical practice, software called NovoTal™ uses head measurements to estimate the optimal array placement to maximize the electric field delivery to the tumor. Computational studies predict an increase in the tumor's electric field strength when adapting transducer arrays to its location. Ideally, a personalized head model could be created for each patient, to calculate the electric field distribution for the specific situation. Thus, the optimal transducer layout could be inferred from field calculation rather than distance measurements. Nonetheless, creating realistic head models of patients is time-consuming and often needs user interaction, because automated image segmentation is prone to failure. This study presents a first approach to creating simplified head models consisting of convex hulls of the tissue layers. The model is able to account for anisotropic conductivity in the cortical tissues by using a tensor representation estimated from Diffusion Tensor Imaging. The induced electric field distribution is compared in the simplified and realistic head models. The average field intensities in the brain and tumor are generally slightly higher in the realistic head model, with a maximal ratio of 114% for a simplified model with reasonable layer thicknesses. Thus, the present pipeline is a fast and efficient means towards personalized head models, with less complexity involved in characterizing tissue interfaces, while enabling accurate predictions of electric field distribution.

  6. Numerical Approximation of Elasticity Tensor Associated With Green-Naghdi Rate.

    PubMed

    Liu, Haofei; Sun, Wei

    2017-08-01

Objective stress rates are often used in commercial finite element (FE) programs. However, deriving a consistent tangent modulus tensor (also known as elasticity tensor or material Jacobian) associated with the objective stress rates is challenging when complex material models are utilized. In this paper, an approximation method for the tangent modulus tensor associated with the Green-Naghdi rate of the Kirchhoff stress is employed to simplify the evaluation process. The effectiveness of the approach is demonstrated through the implementation of two user-defined fiber-reinforced hyperelastic material models. Comparisons between the approximation method and the closed-form analytical method demonstrate that the former can simplify the material Jacobian evaluation with satisfactory accuracy while retaining its computational efficiency. Moreover, since the approximation method is independent of material models, it can facilitate the implementation of complex material models in FE analysis using shell/membrane elements in ABAQUS.

  7. Simplified estimation of age-specific reference intervals for skewed data.

    PubMed

    Wright, E M; Royston, P

    1997-12-30

Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that it necessarily produces smooth centile curves, the entire density is estimated, and an explicit formula is available for the centiles. The method proposed here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.
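In miniature, the method reads a centile off a normal model whose mean and SD are simple regressions on age. The skewness modelling of the full method is omitted here, and the regression coefficients are invented for illustration.

```python
def centile(age, z):
    """Age-specific centile: mean(age) + z * sd(age), under a normal model.

    The linear forms below stand in for regressions fitted to data; the
    coefficients are illustrative, not from any real data set.
    """
    mean = 10.0 + 0.5 * age   # fitted mean as a function of age
    sd = 1.0 + 0.02 * age     # fitted SD as a function of age
    return mean + z * sd

# 2.5th and 97.5th centiles (z = -1.96, +1.96) give a 95% reference interval
lower, upper = centile(30.0, -1.96), centile(30.0, 1.96)
print(lower, upper)
```

Because mean and SD vary smoothly with age, the centile curves are automatically smooth, which is the parametric advantage the abstract highlights.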

  8. The role of nonlinear torsional contributions on the stability of flexural-torsional oscillations of open-cross section beams

    NASA Astrophysics Data System (ADS)

    Di Egidio, Angelo; Contento, Alessandro; Vestroni, Fabrizio

    2015-12-01

An open-cross section thin-walled beam model, already developed by the authors, has been conveniently simplified while retaining its capacity to account for the significant nonlinear warping effects. For a technical range of geometrical and mechanical characteristics of the beam, the response is characterized by the torsional curvature prevailing over the flexural ones. A Galerkin discretization is performed by using a suitable expansion of displacements based on shape functions. Attention is focused on the dynamic response of the beam to a harmonic force, applied at the free end of the cantilever beam. The excitation is directed along the symmetry axis of the beam section. The stability of the one-component oscillations has been investigated using the analytical model, showing the importance of the internal resonances due to the nonlinear warping coupling terms. Comparison with the results provided by a computational finite element model has been performed. The good agreement among the results of the analytical and the computational models confirms the effectiveness of the simplified model of a nonlinear open-cross section thin-walled beam and, above all, the important role of the warping and of the torsional elongation in the study of the one-component dynamic oscillations and their stability.

  9. PLYMAP : a computer simulation model of the rotary peeled softwood plywood manufacturing process

    Treesearch

    Henry Spelter

    1990-01-01

    This report documents a simulation model of the plywood manufacturing process. Its purpose is to enable a user to make quick estimates of the economic impact of a particular process change within a mill. The program was designed to simulate the processing of plywood within a relatively simplified mill design. Within that limitation, however, it allows a wide range of...

  10. Comparison of the Calculations Results of Heat Exchange Between a Single-Family Building and the Ground Obtained with the Quasi-Stationary and 3-D Transient Models. Part 2: Intermittent and Reduced Heating Mode

    NASA Astrophysics Data System (ADS)

    Staszczuk, Anna

    2017-03-01

The paper provides comparative results of calculations of heat exchange between the ground and typical residential buildings using simplified (quasi-stationary) and more accurate (transient, three-dimensional) methods. Characteristics such as the building's geometry, basement depth and the construction of ground-contacting assemblies were considered, including intermittent and reduced heating modes. The simplified calculations were conducted in accordance with the currently valid standard PN-EN ISO 13370:2008 (Thermal performance of buildings. Heat transfer via the ground. Calculation methods). Comparative estimates of transient, 3-D heat flow were performed with the computer software WUFI®plus. The analysis quantifies the differences in heat exchange obtained with the more exact and the simplified methods.

  11. Modified optimal control pilot model for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1992-01-01

    This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm designed for each implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably to the measured experimental results than the other previously proposed simplified models evaluated.

  12. Multi-phase CFD modeling of solid sorbent carbon capture system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, E. M.; DeCroix, D.; Breault, R.

    2013-07-01

Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  13. Multi-Phase CFD Modeling of Solid Sorbent Carbon Capture System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Emily M.; DeCroix, David; Breault, Ronald W.

    2013-07-30

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian-Eulerian and Eulerian-Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state-of-the-art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian-Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian-Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian-Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  14. Models for integrated and differential scattering optical properties of encapsulated light absorbing carbon aggregates.

    PubMed

    Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa

    2013-04-08

    Optical properties of light absorbing carbon (LAC) aggregates encapsulated in a shell of sulfate are computed for realistic model geometries based on field measurements. Computations are performed for wavelengths from the UV-C to the mid-IR. Both climate- and remote sensing-relevant optical properties are considered. The results are compared to commonly used simplified model geometries, none of which gives a realistic representation of the distribution of the LAC mass within the host material; as a consequence, they fail to predict the optical properties accurately. A new core-gray shell model is introduced, which accurately reproduces the size and wavelength dependence of the integrated and differential optical properties.

  15. On a computational model of building thermal dynamic response

    NASA Astrophysics Data System (ADS)

    Jarošová, Petra; Vala, Jiří

    2016-07-01

    The development and exploitation of advanced materials, structures and technologies in civil engineering, both for buildings with carefully controlled interior temperature and for common residential houses, together with new European and national directives and technical standards, stimulates the development of computational tools that are robust yet sufficiently simple and inexpensive to support design and the optimization of energy consumption. This paper demonstrates that these seemingly contradictory requirements can be reconciled, using a simplified non-stationary thermal model of a building motivated by the analogy with the analysis of electric circuits; certain semi-analytical forms of solutions come from the method of lines.

  16. A Simplified Baseband Prefilter Model with Adaptive Kalman Filter for Ultra-Tight COMPASS/INS Integration

    PubMed Central

    Luo, Yong; Wu, Wenqi; Babu, Ravindra; Tang, Kanghua; Luo, Bing

    2012-01-01

    COMPASS is an indigenously developed Chinese global navigation satellite system and will share many features in common with GPS (Global Positioning System). Since ultra-tight GPS/INS (Inertial Navigation System) integration shows its advantage over independent GPS receivers in many scenarios, the federated ultra-tight COMPASS/INS integration is investigated in this paper, in particular by proposing a simplified prefilter model. Compared with a traditional prefilter model, the state space of this simplified system contains only carrier phase, carrier frequency and carrier frequency rate tracking errors. A two-quadrant arctangent discriminator output is used as a measurement. Since the code tracking error related parameters were excluded from the state space of traditional prefilter models, the code/carrier divergence would destroy the carrier tracking process; therefore, an adaptive Kalman filter algorithm that tunes the process noise covariance matrix based on the state correction sequence was incorporated to compensate for the divergence. The federated ultra-tight COMPASS/INS integration was implemented with a hardware COMPASS intermediate frequency (IF) and INS accelerometer/gyroscope signal sampling system. Field and simulation test results showed almost similar tracking and navigation performances for both the traditional prefilter model and the proposed system; however, the latter largely decreased the computational load. PMID:23012564

  17. Optimal short-range trajectories for helicopters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slater, G.L.; Erzberger, H.

    1982-12-01

    An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum-fuel and minimum-cost trajectories for a helicopter flying a fixed-range trajectory. In addition, a method was developed for obtaining a performance model in simplified form which is based on standard flight manual data and which is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum-fuel trajectory. The optimal trajectories show considerable variability with helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel-optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.

  18. Computational Modeling of Liquid and Gaseous Control Valves

    NASA Technical Reports Server (NTRS)

    Daines, Russell; Ahuja, Vineet; Hosangadi, Ashvin; Shipman, Jeremy; Moore, Arden; Sulyma, Peter

    2005-01-01

    In this paper computational modeling efforts undertaken at NASA Stennis Space Center in support of rocket engine component testing are discussed. Such analyses include structurally complex cryogenic liquid valves and gas valves operating at high pressures and flow rates. Basic modeling and initial successes are documented, and other issues that make valve modeling at SSC somewhat unique are also addressed. These include transient behavior, valve stall, and the determination of flow patterns in LOX valves. Hexahedral structured grids are used for valves that can be simplified through the use of an axisymmetric approximation. Hybrid unstructured methodology is used for structurally complex valves that have disparate length scales and complex flow paths that include strong swirl, local recirculation zones, and secondary flow effects. Hexahedral (structured), unstructured, and hybrid meshes are compared for accuracy and computational efficiency. Accuracy is determined using verification and validation techniques.

  19. Computer program for a four-cylinder-Stirling-engine controls simulation

    NASA Technical Reports Server (NTRS)

    Daniels, C. J.; Lorenzo, C. F.

    1982-01-01

    A transient engine simulation computer program for a four-cylinder Stirling engine is presented. The program is intended for controls analysis. The associated engine model was simplified to shorten computer calculation time. The model includes engine mechanical drive dynamics and vehicle load effects. The computer program also includes subroutines that allow: (1) acceleration of the engine by addition of hydrogen to the system, and (2) braking of the engine by short-circuiting of the working spaces. Subroutines to calculate degraded engine performance (e.g., due to piston ring and piston rod leakage) are provided. Input data required to run the program are described and flow charts are provided. The program is modular to allow easy modification of individual routines. Examples of steady-state and transient results are presented.

  20. Numerical investigation of cryogen re-gasification in a plate heat exchanger

    NASA Astrophysics Data System (ADS)

    Malecha, Ziemowit; Płuszka, Paweł; Brenk, Arkadiusz

    2017-12-01

    The efficient re-gasification of cryogen is a crucial process in many cryogenic installations. It is especially important in the case of LNG evaporators used in stationary and mobile applications (e.g. marine and land transport). Other gases, like nitrogen or argon, can be obtained at the highest purity after re-gasification from their liquid states. Plate heat exchangers (PHE) are characterized by high efficiency, and their application to liquid gas vaporization processes can be beneficial. PHE design and optimization can be significantly supported by numerical modelling. Such calculations are very challenging due to very high computational demands and the complexity related to phase change modelling. In the present work, a simplified mathematical model of a two-phase flow with phase change was introduced. To ensure fast calculations, a simplified two-dimensional (2D) numerical model of a real PHE was developed. It was validated with experimental measurements and finally used for LNG re-gasification modelling. The proposed numerical model proved to be orders of magnitude faster than its full 3D original.

  1. Model Comparison for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Moses, Gregory; Chenhall, Jeffrey; Cao, Duc; Delettrez, Jacques

    2015-11-01

    Four electron thermal transport models are compared for their ability to accurately and efficiently model non-local behavior in ICF simulations. Goncharov's transport model has accurately predicted shock timing in implosion simulations but is computationally slow and limited to 1D. The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. uses multigroup diffusion to speed up the calculation. Chenhall has expanded the iSNB diffusion model to a higher-order simplified P3 approximation and a Monte Carlo transport model, to bridge the gap between the iSNB and Goncharov models while maintaining computational efficiency. Comparisons of the above models for several test problems will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  2. Analysis of mode-locked and intracavity frequency-doubled Nd:YAG laser

    NASA Technical Reports Server (NTRS)

    Siegman, A. E.; Heritier, J.-M.

    1980-01-01

    The paper presents analytical and computer studies of the CW mode-locked and intracavity frequency-doubled Nd:YAG laser which provide new insight into the operation, including the detuning behavior, of this type of laser. Computer solutions show that the steady-state pulse shape for this laser is much closer to a truncated cosine than to a Gaussian; there is little spectral broadening for on-resonance operation; and the chirp is negligible. This leads to a simplified analytical model carried out entirely in the time domain, with atomic linewidth effects ignored. Simple analytical results for on-resonance pulse shape, pulse width, signal intensity, and harmonic conversion efficiency in terms of basic laser parameters are derived from this model. A simplified physical description of the detuning behavior is also developed. Agreement is found with experimental studies showing that the pulsewidth decreases as the modulation frequency is detuned off resonance; the harmonic power output initially increases and then decreases; and the pulse shape develops a sharp-edged asymmetry of opposite sense for opposite signs of detuning.

  3. Simplified subsurface modelling: data assimilation and violated model assumptions

    NASA Astrophysics Data System (ADS)

    Erdal, Daniel; Lange, Natascha; Neuweiler, Insa

    2017-04-01

    Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come along with larger numbers of unknowns and requirements on computational resources compared to stand-alone models. If large model domains are to be represented, e.g. on catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the present processes. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models or whether they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D model and the unsaturated zone as a few sparse 1D columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model compartments is large (e.g. a shallow groundwater table) and the simplification assumptions are clearly violated.
Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strong heterogeneous structures creating unaccounted flows in both the saturated and unsaturated compartments. Under such circumstances, direct modelling using a simplified model will not provide good results. However, a more data driven (e.g. grey box) approach, driven by the filter, may still provide an improved understanding of the system. Comparisons between full 3D simulations and simplified filter driven models will be shown and the resulting benefits and drawbacks will be discussed.
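
    The Ensemble Kalman filter update referred to above can be sketched generically. The following is a minimal stochastic-EnKF analysis step for a linear observation operator; it illustrates the filter update itself, not the authors' coupled groundwater/unsaturated-zone implementation, and all names and sizes are hypothetical:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis step.
    X : (n_state, n_ens) forecast ensemble
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation error covariance
    """
    n_obs, n_ens = len(y), X.shape[1]
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                           # ensemble anomalies
    Pf = A @ A.T / (n_ens - 1)           # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    # Perturb the observations so the analysis ensemble keeps correct spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(2, 50))   # toy 2-state, 50-member ensemble
H = np.array([[1.0, 0.0]])               # observe the first state only
R = np.array([[0.01]])                   # small observation error
Xa = enkf_update(X, np.array([3.0]), H, R, rng)
```

    With an accurate observation, the analysis ensemble mean of the observed state is pulled strongly toward the measurement, while the unobserved state is updated only through sampled cross-covariances.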

  4. Updated Lagrangian finite element formulations of various biological soft tissue non-linear material models: a comprehensive procedure and review.

    PubMed

    Townsend, Molly T; Sarigul-Klijn, Nesrin

    2016-01-01

    Simplified material models are commonly used in computational simulation of biological soft tissue as an approximation of the complicated material response and to minimize computational resources. However, the simulation of complex loadings, such as long-duration tissue swelling, necessitates complex models that are not easy to formulate. This paper offers a comprehensive procedure for the updated Lagrangian formulation of various non-linear material models for finite element analysis of biological soft tissues, including a definition of the Cauchy stress and the spatial tangential stiffness. The relationships between water content, osmotic pressure, ionic concentration and the pore pressure stress of the tissue are discussed, along with the merits of these models and their applications.

  5. Delayed ripple counter simplifies square-root computation

    NASA Technical Reports Server (NTRS)

    Cliff, R.

    1965-01-01

    Ripple subtract technique simplifies the logic circuitry required in a binary computing device to derive the square root of a number. Successively higher numbers are subtracted from a register containing the number out of which the square root is to be extracted. The last number subtracted will be the closest integer to the square root of the number.
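
    The scheme can be sketched in a few lines. This sketch assumes the classic form of the technique, in which the successively higher numbers are the odd integers 1, 3, 5, ...; the count of subtractions that fit yields the floor of the square root:

```python
def isqrt_ripple(n):
    """Integer square root by repeated subtraction, using the identity
    1 + 3 + 5 + ... + (2k - 1) = k*k: subtract successive odd numbers
    until the register would go negative; the count is floor(sqrt(n))."""
    count, odd = 0, 1
    while n >= odd:
        n -= odd      # ripple-subtract the next odd number
        odd += 2
        count += 1
    return count
```

    For example, 16 - 1 - 3 - 5 - 7 = 0 after four subtractions, so isqrt_ripple(16) is 4.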

  6. Towards a standard design model for quad-rotors: A review of current models, their accuracy and a novel simplified model

    NASA Astrophysics Data System (ADS)

    Amezquita-Brooks, Luis; Liceaga-Castro, Eduardo; Gonzalez-Sanchez, Mario; Garcia-Salazar, Octavio; Martinez-Vazquez, Daniel

    2017-11-01

    Applications based on quad-rotor vehicles (QRVs) are becoming increasingly widespread. Many of these applications require accurate mathematical representations for control design, simulation and estimation. However, there is no consensus on a standardized model for these purposes. In this article a review of the most common elements included in QRV models reported in the literature is presented. This survey shows that some elements are recurrent for typical non-aerobatic QRV applications; in particular, for control design and high-performance simulation. By synthesising the common features of the reviewed models, a standard generic model (SGM) is proposed. The SGM is cast as a typical state-space model without memory-less transformations, a structure which is useful for simulation and controller design. The survey also shows that many QRV applications use simplified representations, which may be considered simplifications of the SGM proposed here. In order to assess the effectiveness of the simplified models, a comprehensive comparison based on digital simulations is presented. With this comparison, it is possible to determine the accuracy of each model under particular operating ranges. Such information is useful for the selection of a model according to a particular application. In addition to the models found in the literature, a novel simplified model is derived in this article. The main characteristics of this model are that its inner dynamics are linear, it has low complexity and it has a high level of accuracy in all the studied operating ranges, a characteristic found only in more complex representations. To complement the article, the main elements of the SGM are evaluated with the aid of experimental data and the computational complexity of all surveyed models is briefly analysed. Finally, the article presents a discussion on how the structural characteristics of the models are useful for suggesting particular QRV control structures.
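
    As an illustration of the state-space style of model described above, the following sketch simulates only the decoupled vertical channel m*z'' = u - m*g, one of the linear sub-models commonly found in simplified QRV representations. It is a hypothetical minimal example, not the article's SGM or its novel simplified model:

```python
import numpy as np

def simulate_altitude(u, z0=0.0, vz0=0.0, m=1.0, g=9.81, dt=0.01, steps=200):
    """Euler-integrate the vertical channel m*z'' = u - m*g, where u is
    total thrust; returns the altitude trajectory over steps*dt seconds."""
    z, vz, traj = z0, vz0, []
    for _ in range(steps):
        az = u / m - g       # vertical acceleration from net force
        vz += az * dt
        z += vz * dt
        traj.append(z)
    return np.array(traj)

hover = simulate_altitude(u=9.81)   # thrust balances gravity: altitude stays put
climb = simulate_altitude(u=12.0)   # excess thrust: the vehicle climbs
```

    Even this one-channel model reproduces the qualitative hover/climb behaviour that the full state-space representations capture with far more states.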

  7. Conference on Complex Turbulent Flows: Comparison of Computation and Experiment, Stanford University, Stanford, CA, September 14-18, 1981, Proceedings. Volume 2 - Taxonomies, reporters' summaries, evaluation, and conclusions

    NASA Technical Reports Server (NTRS)

    Kline, S. J. (Editor); Cantwell, B. J. (Editor); Lilley, G. M.

    1982-01-01

    Computational techniques for simulating turbulent flows were explored, together with the results of experimental investigations. Particular attention was devoted to the possibility of defining a universal closure model applicable to all turbulence situations; however, it was concluded that zonal models, describing localized structures, were the most promising techniques to date. The taxonomy of turbulent flows was summarized, as were algebraic, differential, integral, and partial differential methods for the numerical depiction of turbulent flows. Numerous comparisons of theoretically predicted and experimentally obtained data for wall pressure distributions, velocity profiles, turbulent kinetic energy profiles, Reynolds shear stress profiles, and flows around transonic airfoils were presented. Simplifying techniques for reducing the necessary computational time for modeling complex flowfields were surveyed, together with the industrial requirements and applications of computational fluid dynamics techniques.

  8. Parachute Drag Model

    NASA Technical Reports Server (NTRS)

    Cuthbert, Peter

    2010-01-01

    DTV-SIM is a computer program that implements a mathematical model of the flight dynamics of a missile-shaped drop test vehicle (DTV) equipped with a multistage parachute system that includes two simultaneously deployed drogue parachutes and three main parachutes deployed subsequently and simultaneously by use of pilot parachutes. DTV-SIM was written to support air-drop tests of the DTV/parachute system, which serves as a simplified prototype of a proposed crew capsule/parachute landing system.

  9. Controlled Studies of Whistler Wave Interactions with Energetic Particles in Radiation Belts

    DTIC Science & Technology

    2009-07-01

    the IGRF geomagnetic field and PIM ionosphere/plasmasphere models. Those simulations demonstrate that on this particular evening 28.5 kHz whistler... a simplified slab model of ionospheric plasmas, we can compute the transmission coefficient and, subsequently, estimate that ~15% of the incident... with inner radiation belts as well as the ionospheric effects caused by precipitated energetic electrons. The whistler waves used in our experiments

  10. Mathematical neuroscience: from neurons to circuits to systems.

    PubMed

    Gutkin, Boris; Pinto, David; Ermentrout, Bard

    2003-01-01

    Applications of mathematics and computational techniques to our understanding of neuronal systems are provided. Reduction of membrane models to simplified canonical models demonstrates how neuronal spike-time statistics follow from simple properties of neurons. Averaging over space allows one to derive a simple model for the whisker barrel circuit and use this to explain and suggest several experiments. Spatio-temporal pattern formation methods are applied to explain the patterns seen in the early stages of drug-induced visual hallucinations.
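
    A leaky integrate-and-fire neuron is one of the simplest canonical reductions of the kind referred to above; the following sketch (a generic textbook model, not one of the specific reductions in the paper) shows how spike timing follows directly from a one-variable membrane equation:

```python
def lif_spikes(I, T=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron, tau dv/dt = -v + I,
    with threshold v_th and reset v_reset; returns the spike times
    over [0, T] for a constant input current I."""
    v, t, spikes = 0.0, 0.0, []
    for _ in range(int(T / dt)):
        v += dt / tau * (-v + I)   # forward-Euler membrane update
        t += dt
        if v >= v_th:
            spikes.append(t)       # record the spike and reset
            v = v_reset
    return spikes

quiet = lif_spikes(0.8)   # sub-threshold input: the voltage saturates below v_th
slow = lif_spikes(1.5)
fast = lif_spikes(3.0)
```

    Sub-threshold input produces no spikes, and the firing rate grows monotonically with the input current, which is the kind of spike-time statistic that follows from simple neuron properties.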

  11. Computing induced velocity perturbations due to a helicopter fuselage in a free stream

    NASA Technical Reports Server (NTRS)

    Berry, John D.; Althoff, Susan L.

    1989-01-01

    The velocity field of a representative helicopter fuselage in a free stream is computed. Perturbation velocities due to the fuselage are computed in a plane above the location of the helicopter rotor (rotor removed). The velocity perturbations computed by a source-panel model of the fuselage are compared with experimental measurements taken with a laser velocimeter. Three paneled fuselage models are studied: fuselage shape, fuselage shape with hub shape, and a body of revolution. The velocity perturbations computed for both fuselage shape models agree well with the measured velocity field except in the close vicinity of the rotor hub. In the hub region, without knowing the extent of separation, modeling of the effective source shape is difficult. The effects of the fuselage perturbations are not well predicted with a simplified ellipsoid fuselage. The velocity perturbations due to the fuselage at the plane of the measurements have magnitudes of less than 8 percent of free-stream velocity. The velocity perturbations computed by the panel method are tabulated for the same locations at which previously reported rotor-inflow velocity measurements were made.

  12. Orbital-selective Mott phases of a one-dimensional three-orbital Hubbard model studied using computational techniques

    DOE PAGES

    Liu, Guangkun; Kaushal, Nitin; Liu, Shaozhi; ...

    2016-06-24

    A recently introduced one-dimensional three-orbital Hubbard model displays orbital-selective Mott phases with exotic spin arrangements such as spin block states [J. Rincón et al., Phys. Rev. Lett. 112, 106405 (2014)]. In this paper we show that the constrained-path quantum Monte Carlo (CPQMC) technique can accurately reproduce the phase diagram of this multiorbital one-dimensional model, paving the way to future CPQMC studies in systems with more challenging geometries, such as ladders and planes. The success of this approach relies on using the Hartree-Fock technique to prepare the trial states needed in CPQMC. In addition, we study a simplified version of the model where the pair-hopping term is neglected and the Hund coupling is restricted to its Ising component. The corresponding phase diagrams are shown to be only mildly affected by the absence of these technically difficult-to-implement terms. This is confirmed by additional density matrix renormalization group and determinant quantum Monte Carlo calculations carried out for the same simplified model, with the latter displaying only mild fermion sign problems. Lastly, we conclude that these methods are able to capture quantitatively the rich physics of the several orbital-selective Mott phases (OSMP) displayed by this model, thus enabling computational studies of the OSMP regime in higher dimensions, beyond static or dynamic mean-field approximations.

  13. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellano, T.; De Palma, L.; Laneve, D.

    2015-07-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) is written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The computer code's main aim is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted via the aforesaid approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)

  14. Distributed geospatial model sharing based on open interoperability standards

    USGS Publications Warehouse

    Feng, Min; Liu, Shuguang; Euliss, Ned H.; Fang, Yin

    2009-01-01

    Numerous geospatial computational models have been developed based on sound principles and published in journals or presented in conferences. However, modelers have made few advances in the development of computable modules that facilitate sharing during model development or utilization. Constraints hampering development of model sharing technology include limitations on computing, storage, and connectivity; traditional stand-alone and closed network systems cannot fully support sharing and integrating geospatial models. To address this need, we have identified methods for sharing geospatial computational models using Service Oriented Architecture (SOA) techniques and open geospatial standards. The service-oriented model sharing service is accessible using any tools or systems compliant with open geospatial standards, making it possible to utilize vast scientific resources available from around the world to solve highly sophisticated application problems. The methods also allow model services to be empowered by diverse computational devices and technologies, such as portable devices and GRID computing infrastructures. Based on the generic and abstract operations and data structures required by the Web Processing Service (WPS) standard, we developed an interactive interface for model sharing to help reduce interoperability problems during model use. Geospatial computational models are shared as model services, where the computational processes provided by models can be accessed through tools and systems compliant with WPS. We developed a platform to help modelers publish individual models in a simplified and efficient way. Finally, we illustrate our technique using wetland hydrological models we developed for the prairie pothole region of North America.

  15. RANS computations of tip vortex cavitation

    NASA Astrophysics Data System (ADS)

    Decaix, Jean; Balarac, Guillaume; Dreyer, Matthieu; Farhat, Mohamed; Münch, Cécile

    2015-12-01

    The present study is related to the development of tip vortex cavitation in Kaplan turbines. The investigation is carried out on a simplified test case consisting of a NACA0009 blade with a gap between the blade tip and the side wall. Computations with and without cavitation are performed using RANS modelling and a transport equation for the liquid volume fraction. Compared with experimental data, the RANS computations turn out to be able to capture accurately the development of the tip vortex. The simulations have also highlighted the influence of cavitation on the tip vortex trajectory.

  16. Multifidelity-CMA: a multifidelity approach for efficient personalisation of 3D cardiac electromechanical models.

    PubMed

    Molléro, Roch; Pennec, Xavier; Delingette, Hervé; Garny, Alan; Ayache, Nicholas; Sermesant, Maxime

    2018-02-01

    Personalised computational models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, the simulation of a single heartbeat with a 3D cardiac electromechanical model can be long and computationally expensive, which makes some practical applications, such as the estimation of model parameters from clinical data (the personalisation), very slow. Here we introduce an original multifidelity approach between a 3D cardiac model and a simplified "0D" version of this model, which enables reliable (and extremely fast) approximations of the global behaviour of the 3D model using 0D simulations. We then use this multifidelity approximation to speed up an efficient parameter estimation algorithm, leading to a fast and computationally efficient personalisation method of the 3D model. In particular, we show results on a cohort of 121 different heart geometries and measurements. Finally, an exploitable code of the 0D model with scripts to perform parameter estimation will be released to the community.

  17. 2-3D nonlocal transport model in magnetized laser plasmas.

    NASA Astrophysics Data System (ADS)

    Nicolaï, Philippe; Feugeas, Jean-Luc; Schurtz, Guy

    2004-11-01

    We present a model of nonlocal transport for multidimensional radiation magneto-hydrodynamics codes. This model, based on simplified Fokker-Planck equations, aims at extending the formulae of G. Schurtz, Ph. Nicolaï and M. Busquet [Phys. Plasmas 7, 4238 (2000)] to magnetized plasmas. The improvements concern various points, such as the electric field effects on nonlocal transport or, conversely, the kinetic effects on the E field. However, the main purpose of this work is to generalize the previous model by including magnetic field effects. A complete system of nonlocal equations is derived from kinetic equations with self-consistent E and B fields. These equations are analyzed and simplified in order to be implemented into large laser fusion codes and coupled to other relevant physics. Finally, our model allows one to obtain the deformation of the electron distribution function due to nonlocal effects. This deformation leads to a non-Maxwellian function which could be used to compute the influence on other physical processes.

  18. Energy-density field approach for low- and medium-frequency vibroacoustic analysis of complex structures using a statistical computational model

    NASA Astrophysics Data System (ADS)

    Kassem, M.; Soize, C.; Gagliardini, L.

    2009-06-01

    In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.

  19. The Collaborative Seismic Earth Model: Generation 1

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner

    2018-05-01

    We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, Andrew M.; Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Leung, Debbie W.

    We present unified, systematic derivations of schemes in the two known measurement-based models of quantum computation. The first model (introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)]) uses a fixed entangled state, adaptive measurements on single qubits, and feedforward of the measurement results. The second model (proposed by Nielsen [Phys. Lett. A 308, 96 (2003)] and further simplified by Leung [Int. J. Quant. Inf. 2, 33 (2004)]) uses adaptive two-qubit measurements that can be applied to arbitrary pairs of qubits, and feedforward of the measurement results. The underlying principle of our derivations is a variant of teleportation introduced by Zhou, Leung, and Chuang [Phys. Rev. A 62, 052316 (2000)]. Our derivations unify these two measurement-based models of quantum computation and provide significantly simpler schemes.

  1. Evidence Accumulation and Change Rate Inference in Dynamic Environments.

    PubMed

    Radillo, Adrian E; Veliz-Cuba, Alan; Josić, Krešimir; Kilpatrick, Zachary P

    2017-06-01

    In a constantly changing world, animals must account for environmental volatility when making decisions. To appropriately discount older, irrelevant information, they need to learn the rate at which the environment changes. We develop an ideal observer model capable of inferring the present state of the environment along with its rate of change. Key to this computation is an update of the posterior probability of all possible change point counts. This computation can be challenging, as the number of possibilities grows rapidly with time. However, we show how the computations can be simplified in the continuum limit by a moment closure approximation. The resulting low-dimensional system can be used to infer the environmental state and change rate with accuracy comparable to the ideal observer. The approximate computations can be performed by a neural network model via a rate-correlation-based plasticity rule. We thus show how optimal observers accumulate evidence in changing environments and map this computation to reduced models that perform inference using plausible neural mechanisms.
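The joint update over hidden state and change rate can be sketched as a discrete Bayes filter. The sketch below is a simplification in the spirit of the paper's reduction: instead of tracking all change-point counts, it keeps a posterior over (hazard rate, state) on a small grid of candidate rates. All numbers, the binary observation likelihoods, and the hazard grid are illustrative assumptions, not the authors' model.

```python
# Hedged sketch of an ideal-observer update in a two-state environment
# whose state flips with an unknown hazard rate h. We keep a joint
# posterior over (h, state) on a small grid of candidate rates; all
# numbers and names are illustrative.

HAZARDS = [0.01, 0.05, 0.1, 0.2, 0.4]     # candidate environmental change rates
P_OBS = {0: (0.7, 0.3), 1: (0.3, 0.7)}    # P(x | state) for binary observations

def update(post, x):
    """One Bayes step for the joint posterior P(h, state) given x in {0, 1}."""
    new = {}
    for h, s in post:
        stay = (1 - h) * post[(h, s)]      # state persisted
        flip = h * post[(h, 1 - s)]        # a change point occurred
        new[(h, s)] = P_OBS[s][x] * (stay + flip)
    z = sum(new.values())
    return {k: v / z for k, v in new.items()}

# Uniform prior over hazard rates and states, then assimilate observations.
post = {(h, s): 1.0 / (len(HAZARDS) * 2) for h in HAZARDS for s in (0, 1)}
for x in [1, 1, 0, 1, 1, 1, 0, 0, 0, 0]:
    post = update(post, x)

p_state1 = sum(v for (h, s), v in post.items() if s == 1)
print(round(p_state1, 3))
```

Marginalising the posterior over hazard rates yields the state estimate; the run of 0-observations at the end pulls the belief toward state 0.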

  2. Validation of simplified centre of mass models during gait in individuals with chronic stroke.

    PubMed

    Huntley, Andrew H; Schinkel-Ivy, Alison; Aqui, Anthony; Mansfield, Avril

    2017-10-01

    The feasibility of using a multiple segment (full-body) kinematic model in clinical gait assessment is difficult when considering obstacles such as time and cost constraints. While simplified gait models have been explored in healthy individuals, no such work to date has been conducted in a stroke population. The aim of this study was to quantify the errors of simplified kinematic models for chronic stroke gait assessment. Sixteen individuals with chronic stroke (>6 months), outfitted with full body kinematic markers, performed a series of gait trials. Three centre of mass models were computed: (i) 13-segment whole-body model, (ii) 3 segment head-trunk-pelvis model, and (iii) 1 segment pelvis model. Root mean squared error differences were compared between models, along with correlations to measures of stroke severity. Error differences revealed that, while both models were similar in the mediolateral direction, the head-trunk-pelvis model had less error in the anteroposterior direction and the pelvis model had less error in the vertical direction. There was some evidence that the head-trunk-pelvis model error is influenced in the mediolateral direction for individuals with more severe strokes, as a few significant correlations were observed between the head-trunk-pelvis model and measures of stroke severity. These findings demonstrate the utility and robustness of the pelvis model for clinical gait assessment in individuals with chronic stroke. Low error in the mediolateral and vertical directions is especially important when considering potential stability analyses during gait for this population, as lateral stability has been previously linked to fall risk. Copyright © 2017 Elsevier Ltd. All rights reserved.
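The comparison above rests on two simple computations: a mass-weighted centre of mass (COM) over body segments, and a root-mean-squared error between COM trajectories. A minimal 1-D sketch, with made-up segment masses and positions rather than the study's data:

```python
import math

# Toy comparison of a multi-segment centre-of-mass (COM) estimate with a
# single-segment (pelvis-only) surrogate. Segment masses and positions
# are invented illustrative numbers, not the study's measurements.

def weighted_com(segments):
    """COM as the mass-weighted mean of segment COM positions (1-D here)."""
    m_total = sum(m for m, _ in segments)
    return sum(m * x for m, x in segments) / m_total

def rmse(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)) / len(a))

# Toy "gait trial": three frames, each with (mass, position) per segment.
frames = [
    [(10.0, 0.95), (30.0, 1.10), (20.0, 1.00)],   # head, trunk, pelvis
    [(10.0, 0.96), (30.0, 1.12), (20.0, 1.02)],
    [(10.0, 0.94), (30.0, 1.09), (20.0, 1.01)],
]

full = [weighted_com(f) for f in frames]
pelvis_only = [f[2][1] for f in frames]            # pelvis segment position
print(round(rmse(full, pelvis_only), 4))
```

The RMSE between the two trajectories is the error metric the study reports per anatomical direction.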

  3. Optimisation and evaluation of pre-design models for offshore wind turbines with jacket support structures and their influence on integrated load simulations

    NASA Astrophysics Data System (ADS)

    Schafhirt, S.; Kaufer, D.; Cheng, P. W.

    2014-12-01

    In recent years many advanced load simulation tools, allowing aero-servo-hydro-elastic analysis of an entire offshore wind turbine, have been developed and verified. Nowadays, even an offshore wind turbine with a complex support structure such as a jacket can be analysed. However, the computational effort rises significantly with an increasing level of detail. This is especially true for offshore wind turbines with lattice support structures, since those models naturally have a higher number of nodes and elements than simpler monopile structures. During the design process multiple load simulations are needed to obtain an optimal solution. For pre-design tasks it is crucial to apply load simulations which keep the simulation quality and the computational effort in balance. The paper introduces a reference wind turbine model consisting of the REpower 5M wind turbine and a jacket support structure with a high level of detail. In total, twelve variations of this reference model are derived and presented. The main focus is to simplify the models of the support structure and the foundation. The reference model and the simplified models are simulated with the coupled simulation tool Flex5-Poseidon and analysed with respect to frequencies, fatigue loads, and ultimate loads. A model has been found which achieves an adequate increase in simulation speed while keeping the results in an acceptable range compared to the reference results.

  4. A Simplified Model for Detonation Based Pressure-Gain Combustors

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    2010-01-01

    A time-dependent model is presented which simulates the essential physics of a detonative or otherwise constant volume, pressure-gain combustor for gas turbine applications. The model utilizes simple, global thermodynamic relations to determine an assumed instantaneous and uniform post-combustion state in one of many envisioned tubes comprising the device. A simple, second order, non-upwinding computational fluid dynamic algorithm is then used to compute the (continuous) flowfield properties during the blowdown and refill stages of the periodic cycle which each tube undergoes. The exhausted flow is averaged to provide mixed total pressure and enthalpy which may be used as a cycle performance metric for benefits analysis. The simplicity of the model allows for nearly instantaneous results when implemented on a personal computer. The results compare favorably with higher resolution numerical codes which are more difficult to configure, and more time consuming to operate.
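The "simple, global thermodynamic relations" step can be illustrated with ideal-gas constant-volume heat addition, where the pressure gain equals the temperature ratio. The values below are generic, roughly air-like numbers, not the paper's:

```python
# Hedged sketch of constant-volume (detonative-like) heat addition for an
# ideal gas: temperature rises by q/cv, and at fixed volume the pressure
# ratio equals the temperature ratio. All values are illustrative.

CV = 718.0          # J/(kg K), specific heat at constant volume
T1 = 500.0          # K, pre-combustion temperature
P1 = 100e3          # Pa, pre-combustion pressure
Q  = 800e3          # J/kg, heat release of the mixture

T2 = T1 + Q / CV                 # constant-volume energy balance
P2 = P1 * (T2 / T1)              # ideal gas at fixed volume: p/T is constant
print(round(T2, 1), round(P2 / P1, 3))
```

This pressure-gain state is what the model assumes instantaneously and uniformly in each tube before the blowdown/refill stages are resolved by the fluid-dynamic algorithm.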

  5. Finite volume model for two-dimensional shallow environmental flow

    USGS Publications Warehouse

    Simoes, F.J.M.

    2011-01-01

    This paper presents the development of a two-dimensional, depth-integrated, unsteady, free-surface model based on the shallow water equations. The development was motivated by the desire to balance computational efficiency and accuracy by selective and conjunctive use of different numerical techniques. The base framework of the discrete model uses Godunov methods on unstructured triangular grids, but the solution technique emphasizes the use of a high-resolution Riemann solver where needed, switching to a simpler and computationally more efficient upwind finite volume technique in the smooth regions of the flow. Explicit time marching is accomplished with strong stability preserving Runge-Kutta methods, with additional acceleration techniques for steady-state computations. A simplified mass-preserving algorithm is used to deal with wet/dry fronts. Application of the model is made to several benchmark cases that show the interplay of the diverse solution techniques.
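The finite-volume idea can be sketched in 1-D with a Rusanov (local Lax-Friedrichs) flux, a simpler relative of the Godunov/Riemann-solver machinery described above. This is a generic dam-break demonstration with invented grid and step counts, not the paper's unstructured 2-D scheme:

```python
import math

# Minimal 1-D finite-volume sketch of the shallow water equations with a
# Rusanov (local Lax-Friedrichs) interface flux. Dam-break initial
# condition; g, grid size, and step counts are illustrative.

G = 9.81
N, DX, DT = 50, 1.0, 0.01

def flux(h, hu):
    u = hu / h
    return hu, hu * u + 0.5 * G * h * h          # mass and momentum fluxes

def rusanov(hl, hul, hr, hur):
    fl, fr = flux(hl, hul), flux(hr, hur)
    a = max(abs(hul / hl) + math.sqrt(G * hl),
            abs(hur / hr) + math.sqrt(G * hr))   # local max wave speed
    return (0.5 * (fl[0] + fr[0]) - 0.5 * a * (hr - hl),
            0.5 * (fl[1] + fr[1]) - 0.5 * a * (hur - hul))

h = [2.0 if i < N // 2 else 1.0 for i in range(N)]   # dam-break depths
hu = [0.0] * N                                       # momentum, initially at rest
for _ in range(100):
    f = [rusanov(h[i], hu[i], h[i + 1], hu[i + 1]) for i in range(N - 1)]
    for i in range(1, N - 1):                        # conservative update
        h[i]  -= DT / DX * (f[i][0] - f[i - 1][0])
        hu[i] -= DT / DX * (f[i][1] - f[i - 1][1])

print(round(min(h), 3), round(max(h), 3))
```

The Rusanov flux plays the role of the "computationally more efficient" smooth-region technique; a high-resolution Riemann solver would sharpen the shock front at higher cost.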

  6. A simplified fuel control approach for low cost aircraft gas turbines

    NASA Technical Reports Server (NTRS)

    Gold, H.

    1973-01-01

    Reduction in the complexity of gas turbine fuel controls without loss of control accuracy, reliability, or effectiveness as a method for reducing engine costs is discussed. A description and analysis of hydromechanical approach are presented. A computer simulation of the control mechanism is given and performance of a physical model in engine test is reported.

  7. Reduction method with system analysis for multiobjective optimization-based design

    NASA Technical Reports Server (NTRS)

    Azarm, S.; Sobieszczanski-Sobieski, J.

    1993-01-01

    An approach for reducing the number of variables and constraints, which is combined with System Analysis Equations (SAE), for multiobjective optimization-based design is presented. In order to develop a simplified analysis model, the SAE is computed outside an optimization loop and then approximated for use by an operator. Two examples are presented to demonstrate the approach.

  8. Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images

    ERIC Educational Resources Information Center

    Perry, Jamie; Kuehn, David; Langlois, Rick

    2007-01-01

    Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two-dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.…

  9. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, William Monford

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.

  10. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE PAGES

    Wood, William Monford

    2018-02-07

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.

  11. Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines

    NASA Astrophysics Data System (ADS)

    Wood, Wm M.

    2018-02-01

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays ("MCNPX") model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.
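The general shape of such a shot-by-shot estimate can be sketched with a Kramers-type thick-target spectrum: at each time sample the endpoint energy is set by the measured V(t), and the contribution is weighted by the instantaneous beam power V(t)·I(t). The waveforms, binning, and weighting below are invented for illustration; the paper's actual model differs in its details:

```python
# Hedged sketch of a spectrum estimate driven by measured V(t) and I(t):
# a power-weighted Kramers-type shape ~ (E_end - E) accumulated over the
# pulse. Waveform and binning are toy values, not the paper's model.

E_BINS = [50 * k for k in range(1, 41)]        # photon energies, keV

def spectrum(v_kv, i_ka):
    """Accumulate a relative photon spectrum over the pulse."""
    spec = [0.0] * len(E_BINS)
    for v, i in zip(v_kv, i_ka):
        for k, e in enumerate(E_BINS):
            if e < v:                          # photons only below endpoint V(t)
                spec[k] += v * i * (v - e)     # power-weighted Kramers shape
    return spec

# Toy triangular voltage pulse (kV) and flat current pulse (kA).
v_t = [400 * t / 10 for t in range(10)] + [400 * (20 - t) / 10 for t in range(10, 20)]
i_t = [60.0] * 20

spec = spectrum(v_t, i_t)
peak_bin = spec.index(max(spec))
print(E_BINS[peak_bin])
```

Because the endpoint tracks the voltage history, the hardest photons come only from the peak of the pulse, which is why shot-to-shot waveform differences change the delivered spectrum.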

  12. Simplified energy-balance model for pragmatic multi-dimensional device simulation

    NASA Astrophysics Data System (ADS)

    Chang, Duckhyun; Fossum, Jerry G.

    1997-11-01

    To pragmatically account for non-local carrier heating and hot-carrier effects such as velocity overshoot and impact ionization in multi-dimensional numerical device simulation, a new simplified energy-balance (SEB) model is developed and implemented in FLOODS [16] as a pragmatic option. In the SEB model, the energy-relaxation length is estimated from a pre-process drift-diffusion simulation using the carrier-velocity distribution predicted throughout the device domain, and is used without change in a subsequent simpler hydrodynamic (SHD) simulation. The new SEB model was verified by comparison of two-dimensional SHD and full HD DC simulations of a submicron MOSFET. The SHD simulations yield detailed distributions of carrier temperature, carrier velocity, and impact-ionization rate, which agree well with the full HD simulation results obtained with FLOODS. The most noteworthy feature of the new SEB/SHD model is its computational efficiency, which results from reduced Newton iteration counts caused by the enhanced linearity. Relative to full HD, SHD simulation times can be shorter by as much as an order of magnitude since larger voltage steps for DC sweeps and larger time steps for transient simulations can be used. The improved computational efficiency can enable pragmatic three-dimensional SHD device simulation as well, for which the SEB implementation would be straightforward as it is in FLOODS or any robust HD simulator.

  13. Simulation of wave propagation inside a human eye: acoustic eye model (AEM)

    NASA Astrophysics Data System (ADS)

    Požar, T.; Halilovič, M.; Horvat, D.; Petkovšek, R.

    2018-02-01

    The design and development of the acoustic eye model (AEM) is reported. The model consists of a computer-based simulation that describes the propagation of mechanical disturbance inside a simplified model of a human eye. The capabilities of the model are illustrated with examples, using different laser-induced initial loading conditions in different geometrical configurations typically occurring in ophthalmic medical procedures. The potential of the AEM is to predict the mechanical response of the treated eye tissue in advance, thus complementing other preliminary procedures preceding medical treatments.

  14. Modelling of thick composites using a layerwise laminate theory

    NASA Technical Reports Server (NTRS)

    Robbins, D. H., Jr.; Reddy, J. N.

    1993-01-01

    The layerwise laminate theory of Reddy (1987) is used to develop a layerwise, two-dimensional, displacement-based, finite element model of laminated composite plates that assumes a piecewise continuous distribution of the transverse strains through the laminate thickness. The resulting layerwise finite element model is capable of computing interlaminar stresses and other localized effects with the same level of accuracy as a conventional 3D finite element model. Although the total number of degrees of freedom is comparable in both models, the layerwise model maintains a 2D-type data structure that provides several advantages over a conventional 3D finite element model, e.g. simplified input data, ease of mesh alteration, and faster element stiffness matrix formulation. Two sample problems are provided to illustrate the accuracy of the present model in computing interlaminar stresses for laminates in bending and extension.

  15. Coniferous canopy BRF simulation based on 3-D realistic scene.

    PubMed

    Wang, Xin-Yun; Guo, Zhi-Feng; Qin, Wen-Han; Sun, Guo-Qing

    2011-09-01

    It is difficult for computer simulation methods to study the radiation regime at large scales. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful in remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems is applied to render 3-D coniferous forest scenarios, and the RGM model was used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. Results in this study show that in most cases both agree well. Meanwhile, at the tree and forest level, the results are also good.
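The L-systems part of this pipeline is just iterated string rewriting: starting from an axiom, production rules grow a branching description that a renderer then turns into 3-D geometry. A tiny generic example (the rules are a standard textbook-style toy, not the paper's forest grammar):

```python
# Minimal L-system sketch: repeated string rewriting grows a branching
# structure from an axiom. '[' / ']' push and pop turtle state; '+' / '-'
# turn. These rules are a generic toy example, not the paper's grammar.

RULES = {"F": "FF", "X": "F[+X][-X]"}

def lsystem(axiom, rules, n):
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(c, c) for c in s)   # rewrite every symbol
    return s

s = lsystem("X", RULES, 3)
print(len(s), s.count("["))
```

Each additional iteration doubles the trunk segments and doubles the branch tips, which is how a short grammar yields scene complexity sufficient for canopy-scale radiosity calculations.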

  16. Coniferous Canopy BRF Simulation Based on 3-D Realistic Scene

    NASA Technical Reports Server (NTRS)

    Wang, Xin-yun; Guo, Zhi-feng; Qin, Wen-han; Sun, Guo-qing

    2011-01-01

    It is difficult for computer simulation methods to study the radiation regime at large scales. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful in remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems is applied to render 3-D coniferous forest scenarios, and the RGM model was used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. Results in this study show that in most cases both agree well. Meanwhile, at the tree and forest level, the results are also good.

  17. Numerical Computation of Flame Spread over a Thin Solid in Forced Concurrent Flow with Gas-phase Radiation

    NASA Technical Reports Server (NTRS)

    Jiang, Ching-Biau; T'ien, James S.

    1994-01-01

    Excerpts from a paper describing the numerical examination of concurrent-flow flame spread over a thin solid in purely forced flow with gas-phase radiation are presented. The computational model solves the two-dimensional, elliptic, steady, laminar conservation equations for mass, momentum, energy, and chemical species. Gas-phase combustion is modeled via a one-step, second-order finite-rate Arrhenius reaction. Gas-phase radiation, assuming a gray, non-scattering medium, is solved by an S-N discrete ordinates method. A simplified solid-phase treatment assumes a zeroth-order pyrolysis relation and includes radiative interaction between the surface and the gas phase.
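A one-step, second-order finite-rate Arrhenius reaction means the volumetric reaction rate is proportional to the product of fuel and oxidizer concentrations times an exponential temperature factor. A minimal sketch with invented constants (not the paper's kinetics):

```python
import math

# Sketch of a one-step, second-order finite-rate Arrhenius reaction:
# rate ~ A * [fuel] * [oxidizer] * exp(-Ea / (R T)).
# The pre-exponential factor and activation energy are illustrative.

R = 8.314  # J/(mol K), universal gas constant

def reaction_rate(c_fuel, c_ox, temp, a=1.0e9, ea=1.2e5):
    """Second order in concentrations, Arrhenius in temperature."""
    return a * c_fuel * c_ox * math.exp(-ea / (R * temp))

r_cold = reaction_rate(1.0, 1.0, 300.0)    # ambient gas: essentially frozen
r_hot = reaction_rate(1.0, 1.0, 1500.0)    # flame-zone temperature
print(r_hot / r_cold)
```

The enormous rate ratio between ambient and flame temperatures is what confines the reaction to a thin flame zone in such models.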

  18. Modeling inelastic phonon scattering in atomic- and molecular-wire junctions

    NASA Astrophysics Data System (ADS)

    Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads

    2005-11-01

    Computationally inexpensive approximations describing electron-phonon scattering in molecular-scale conductors are derived from the nonequilibrium Green’s function method. The accuracy is demonstrated with a first-principles calculation on an atomic gold wire. Quantitative agreement between the full nonequilibrium Green’s function calculation and the newly derived expressions is obtained while simplifying the computational burden by several orders of magnitude. In addition, analytical models provide intuitive understanding of the conductance including nonequilibrium heating and provide a convenient way of parameterizing the physics. This is exemplified by fitting the expressions to the experimentally observed conductances through both an atomic gold wire and a hydrogen molecule.

  19. The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.

    PubMed

    Ene, Florentina; Delassus, Patrick; Morris, Liam

    2014-08-01

    The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysms models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for acquiring the fluid flow phenomenon. © IMechE 2014.

  20. The Landlab v1.0 OverlandFlow component: a Python tool for computing shallow-water flow across watersheds

    NASA Astrophysics Data System (ADS)

    Adams, Jordan M.; Gasparini, Nicole M.; Hobley, Daniel E. J.; Tucker, Gregory E.; Hutton, Eric W. H.; Nudurupati, Sai S.; Istanbulluoglu, Erkan

    2017-04-01

    Representation of flowing water in landscape evolution models (LEMs) is often simplified compared to hydrodynamic models, as LEMs make assumptions reducing physical complexity in favor of computational efficiency. The Landlab modeling framework can be used to bridge the divide between complex runoff models and more traditional LEMs, creating a new type of framework not commonly used in the geomorphology or hydrology communities. Landlab is a Python-language library that includes tools and process components that can be used to create models of Earth-surface dynamics over a range of temporal and spatial scales. The Landlab OverlandFlow component is based on a simplified inertial approximation of the shallow water equations, following the solution of de Almeida et al. (2012). This explicit two-dimensional hydrodynamic algorithm simulates a flood wave across a model domain, where water discharge and flow depth are calculated at all locations within a structured (raster) grid. Here, we illustrate how the OverlandFlow component contained within Landlab can be applied as a simplified event-based runoff model and how to couple the runoff model with an incision model operating on decadal timescales. Examples of flow routing on both real and synthetic landscapes are shown. Hydrographs from a single storm at multiple locations in the Spring Creek watershed, Colorado, USA, are illustrated, along with a map of shear stress applied on the land surface by flowing water. The OverlandFlow component can also be coupled with the Landlab DetachmentLtdErosion component to illustrate how the non-steady flow routing regime impacts incision across a watershed. The hydrograph and incision results are compared to simulations driven by steady-state runoff.
Results from the coupled runoff and incision model indicate that runoff dynamics can impact landscape relief and channel concavity, suggesting that, on landscape evolution timescales, the OverlandFlow model may lead to differences in simulated topography in comparison with traditional methods. The exploratory test cases described within demonstrate how the OverlandFlow component can be used in both hydrologic and geomorphic applications.
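The simplified inertial approximation can be sketched in 1-D: an explicit discharge update with the Manning friction term treated implicitly in the denominator (the stabilising trick of de Almeida-type schemes), followed by a per-cell mass balance. The slope, roughness, rainfall rate, and grid values below are invented for illustration; the real OverlandFlow component operates on a 2-D Landlab raster grid:

```python
# Hedged 1-D sketch of a simplified inertial shallow-water update in the
# style the OverlandFlow component uses: explicit momentum, implicit
# Manning friction, per-cell continuity. All parameter values are toy
# assumptions, not Landlab defaults.

G, N_MAN = 9.81, 0.03                     # gravity, Manning roughness
NX, DX, DT, RAIN = 20, 10.0, 1.0, 1e-4    # cells, spacing (m), step (s), m/s

z = [5.0 - 0.25 * i for i in range(NX)]   # bed elevation, 2.5% slope
h = [1e-3] * NX                           # thin initial water film (m)
q = [0.0] * (NX - 1)                      # unit discharge at cell faces

for _ in range(200):
    for i in range(NX - 1):
        hf = max(h[i], h[i + 1], 1e-6)    # effective flow depth at the face
        s = ((z[i + 1] + h[i + 1]) - (z[i] + h[i])) / DX   # water-surface slope
        num = q[i] - G * hf * DT * s
        den = 1.0 + G * hf * DT * N_MAN ** 2 * abs(q[i]) / hf ** (10.0 / 3.0)
        q[i] = num / den                  # implicit-friction discharge update
    h[0] += DT * (RAIN - q[0] / DX)       # upstream cell: rainfall in, q out
    for i in range(1, NX - 1):
        h[i] += DT / DX * (q[i - 1] - q[i])   # continuity for interior cells

print(round(max(h), 4))
```

Treating friction implicitly keeps the update stable on thin flows, which is what lets this class of scheme run efficiently enough to drive event-based hydrographs inside an LEM.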

  1. Identification of quasi-steady compressor characteristics from transient data

    NASA Technical Reports Server (NTRS)

    Nunes, K. B.; Rock, S. M.

    1984-01-01

    The principal goal was to demonstrate that nonlinear compressor map parameters, which govern an in-stall response, can be identified from test data using parameter identification techniques. The tasks included developing and then applying an identification procedure to data generated by NASA LeRC on a hybrid computer. Two levels of model detail were employed. First was a lumped compressor rig model; second was a simplified turbofan model. The main outputs are the tools and procedures generated to accomplish the identification.

  2. Modular chassis simplifies packaging and interconnecting of circuit boards

    NASA Technical Reports Server (NTRS)

    Arens, W. E.; Boline, K. G.

    1964-01-01

    A system of modular chassis structures has simplified the design for mounting a number of printed circuit boards. This design is structurally adaptable to computer and industrial control system applications.

  3. Simulation study of a new inverse-pinch high Coulomb transfer switch

    NASA Technical Reports Server (NTRS)

    Choi, S. H.

    1984-01-01

    A simulation study of a simplified model of a high coulomb transfer switch is performed. The switch operates in an inverse pinch geometry formed by an all-metal chamber, which greatly reduces hot-spot formation on the electrode surfaces. Advantages of the switch over conventional switches are longer useful life, higher current capability, and lower inductance, which improve the characteristics required for a high repetition rate switch. The simulation determines the design parameters by analytical computations and comparison with the experimentally measured risetime, current handling capability, electrode damage, and hold-off voltages. The parameters of the initial switch design can be determined for the anticipated switch performance. The results are in agreement with the experimental results. Although the model is simplified, switch characteristics such as risetime, current handling capability, electrode damage, and hold-off voltages are accurately determined.

  4. Fast Learning for Immersive Engagement in Energy Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M

    The fast computation that is critical for immersive engagement with, and learning from, energy simulations would be furthered by a general method for creating rapidly computed simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost, with response times - typically less than one minute of wall-clock time - suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.
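The reduced-form idea in its simplest form: run the expensive simulation a handful of times, fit a cheap surrogate to those runs, then evaluate the surrogate interactively. The "simulation" and quadratic feature basis below are toy stand-ins, not NREL's models or methods:

```python
# Hedged sketch of a "reduced-form" surrogate: least-squares fit of
# y = a + b*x + c*x^2 to a few runs of a pretend-expensive simulation,
# solved via the normal equations. Everything here is illustrative.

def expensive_simulation(x):
    return 3.0 + 2.0 * x + 0.5 * x * x      # pretend this takes minutes to run

def solve3(a, b):
    """Gauss-Jordan elimination for a 3x3 system with partial pivoting."""
    m = [a[i][:] + [b[i]] for i in range(3)]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[p] = m[p], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_quadratic(xs, ys):
    """Build A^T A and A^T y for the design matrix [1, x, x^2], then solve."""
    ata = [[0.0] * 3 for _ in range(3)]
    aty = [0.0] * 3
    for x, y in zip(xs, ys):
        row = [1.0, x, x * x]
        for i in range(3):
            aty[i] += row[i] * y
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    return solve3(ata, aty)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
coeffs = fit_quadratic(xs, [expensive_simulation(x) for x in xs])
print([round(c, 3) for c in coeffs])       # recovers the underlying [3.0, 2.0, 0.5]
```

Real reduced-form models replace the quadratic basis with richer machine-learned regressors, and uncertainty quantification bounds where the surrogate can be trusted.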

  5. A multibody knee model with discrete cartilage prediction of tibio-femoral contact mechanics.

    PubMed

    Guess, Trent M; Liu, Hongzeng; Bhashyam, Sampath; Thiagarajan, Ganesh

    2013-01-01

    Combining musculoskeletal simulations with anatomical joint models capable of predicting cartilage contact mechanics would provide a valuable tool for studying the relationships between muscle force and cartilage loading. As a step towards producing multibody musculoskeletal models that include representation of cartilage tissue mechanics, this research developed a subject-specific multibody knee model that represented the tibia plateau cartilage as discrete rigid bodies that interacted with the femur through deformable contacts. Parameters for the compliant contact law were derived using three methods: (1) simplified Hertzian contact theory, (2) simplified elastic foundation contact theory and (3) parameter optimisation from a finite element (FE) solution. The contact parameters and contact friction were evaluated during a simulated walk in a virtual dynamic knee simulator, and the resulting kinematics were compared with measured in vitro kinematics. The effects on predicted contact pressures and cartilage-bone interface shear forces during the simulated walk were also evaluated. The compliant contact stiffness parameters had a statistically significant effect on predicted contact pressures as well as on all tibio-femoral motions except flexion-extension. Contact friction had no statistically significant effect on contact pressures, but did significantly affect medial-lateral translation and all rotations except flexion-extension. The magnitude of kinematic differences between model formulations was relatively small, but contact pressure predictions were sensitive to model formulation. The developed multibody knee model was computationally efficient and had a computation time 283 times faster than a FE simulation using the same geometries and boundary conditions.
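A compliant contact law of the kind parameterised above typically takes a Hertz-type form F = k·d^n on the interpenetration depth d, optionally with a penetration-rate damping factor. The parameter values below are invented for illustration, not the ones derived in the study:

```python
# Hedged sketch of a Hertz-type compliant contact law: elastic term
# F = k * d**n plus a simple penetration-rate damping factor. Stiffness,
# exponent, and damping values are toy assumptions.

def contact_force(depth, k=5.0e7, n=1.5, c=0.5, depth_rate=0.0):
    """Normal contact force (N) from interpenetration depth (m)."""
    if depth <= 0.0:
        return 0.0                          # bodies separated: no force
    return k * depth ** n * (1.0 + c * depth_rate)

print(round(contact_force(1e-4), 3))
```

Because the force is evaluated algebraically per time step, a multibody model with such laws avoids the FE mesh solve entirely, which is where the reported two-orders-of-magnitude speedup comes from.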

  6. Recent advances in QM/MM free energy calculations using reference potentials.

    PubMed

    Duarte, Fernanda; Amrein, Beat A; Blaha-Nelson, David; Kamerlin, Shina C L

    2015-05-01

    Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. The use of physically-based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. As was already demonstrated 40 years ago, the use of simplified models still allows one to obtain cutting-edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014. Published by Elsevier B.V.
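    The reference-potential idea at the heart of these approaches is often realised through one-sided exponential averaging (the Zwanzig relation): sampling is done on the cheap low-level surface, and the free energy is corrected to the high-level surface afterwards. A minimal sketch with toy one-dimensional "potentials" (the constant shift c makes the exact answer known):

```python
import math, random

def free_energy_correction(samples_low, e_low, e_high, kT=1.0):
    """One-sided exponential-averaging (Zwanzig) correction from a low-level
    reference potential to a high-level surface:
        dA = -kT * ln < exp(-(E_high - E_low)/kT) >_low
    samples_low must be drawn from the low-level (reference) ensemble."""
    avg = sum(math.exp(-(e_high(x) - e_low(x)) / kT)
              for x in samples_low) / len(samples_low)
    return -kT * math.log(avg)

# Toy check: a harmonic reference vs. the same harmonic shifted by c.
# The exact correction is then simply c, whatever configurations are sampled.
random.seed(0)
e_low = lambda x: 0.5 * x * x
c = 0.7
e_high = lambda x: 0.5 * x * x + c
samples = [random.gauss(0.0, 1.0) for _ in range(4000)]  # ~ exp(-e_low/kT)
dA = free_energy_correction(samples, e_low, e_high)
```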

  7. Leveraging Mechanism Simplicity and Strategic Averaging to Identify Signals from Highly Heterogeneous Spatial and Temporal Ozone Data

    NASA Astrophysics Data System (ADS)

    Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.

    2017-12-01

    We summarize two methods to aid in the identification of ozone signals from underlying spatially and temporally heterogeneous data, in order to help research communities avoid the sometimes burdensome computational costs of high-resolution, high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast as the MOZART-4 mechanism, and the Superfast mechanism is three times as fast. We show that the simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex chemical mechanism, and where they are not, a simple standardized anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given region and a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and quantitatively determine spatial and temporal scales that could enable research communities to utilize simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is here applied to ozone data, it could also be applied to a broad range of geospatial data sets (observed or modeled) that have spatial and temporal coverage.
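    The standardized anomaly emulation mentioned above can be read as a z-score mapping: a value from the simplified mechanism is expressed as an anomaly of that mechanism's climatology, then rescaled onto the complex mechanism's mean and variability. A sketch with hypothetical ozone climatologies (ppbv), not values from the study:

```python
import statistics as st

def emulate(simple_series, complex_series, x):
    """Standardized-anomaly emulation: express x as an anomaly of the
    simplified mechanism's climatology, then rescale it onto the complex
    mechanism's mean and standard deviation."""
    z = (x - st.mean(simple_series)) / st.stdev(simple_series)
    return st.mean(complex_series) + z * st.stdev(complex_series)

# Hypothetical surface-ozone climatologies for one grid cell (ppbv).
simple = [28, 30, 32, 31, 29, 30]
complex_ = [38, 41, 44, 42, 39, 40]
corrected = emulate(simple, complex_, 33.0)  # a high-ozone day in the simple run
```

    By construction the mean of the simplified run maps onto the mean of the complex run, while anomalies are scaled by the ratio of the two standard deviations.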

  8. Long-term safety assessment of trench-type surface repository at Chernobyl, Ukraine - computer model and comparison with results from simplified models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haverkamp, B.; Krone, J.; Shybetskyi, I.

    2013-07-01

    The Radioactive Waste Disposal Facility (RWDF) Buryakovka was constructed in 1986 as part of the intervention measures after the accident at Chernobyl NPP (ChNPP). Today, the surface repository for solid low and intermediate level waste (LILW) is still being operated but its maximum capacity is nearly reached. Long-existing plans for increasing the capacity of the facility shall be implemented in the framework of the European Commission INSC Programme (Instrument for Nuclear Safety Co-operation). Within the first phase of this project, DBE Technology GmbH prepared a safety analysis report of the facility in its current state (SAR) and a preliminary safety analysis report (PSAR) for a future extended facility based on the planned enlargement. In addition to a detailed mathematical model, simplified models have also been developed to verify the results of the former and enhance confidence in the results. Comparison of the results shows that - depending on the boundary conditions - simplifications like modeling the multi-trench repository as one generic trench might have very limited influence on the overall results compared to the general uncertainties associated with respective long-term calculations. In addition to their value for verifying more complex models, which is important for increasing confidence in the overall results, such simplified models also offer the possibility of carrying out time-consuming calculations, such as probabilistic calculations or detailed sensitivity analyses, in an economic manner. (authors)

  9. Mathematical and computational model for the analysis of micro hybrid rocket motor

    NASA Astrophysics Data System (ADS)

    Stoia-Djeska, Marius; Mingireanu, Florin

    2012-11-01

    Hybrid rockets use a two-phase propellant system. In the present work we first develop a simplified model of the coupling of the hybrid combustion process with the complete unsteady flow, starting from the combustion port and ending with the nozzle. The physical and mathematical models are adapted to simulations of micro hybrid rocket motors. The flow model is based on the one-dimensional Euler equations with source terms. The flow equations and the fuel regression rate law are solved in a coupled manner. The numerical platform combines an implicit fourth-order Runge-Kutta time integration with a second-order cell-centred finite volume method. The numerical results obtained with this model show good agreement with published experimental and numerical results. The computational model developed in this work is simple and computationally efficient, and offers the advantage of taking into account a large number of functional and constructive parameters used by engineers.
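    The coupling between the flow and the fuel regression rate can be sketched in its most reduced form with an empirical law r = a*G^n, where G is the oxidizer mass flux through the port: the flux depends on the port area, and the regression rate in turn enlarges the port. The coefficients below are placeholders, not values from the paper:

```python
import math

# Hypothetical regression-rate coefficients for illustration (r in m/s when
# G is in kg/(m^2 s)); real values are propellant-specific.
a, n = 0.0001, 0.6

def step_port(radius, mdot_ox, dt):
    """One explicit step of the port/regression coupling for a circular port."""
    area = math.pi * radius**2
    G = mdot_ox / area            # oxidizer mass flux through the port
    r_dot = a * G**n              # empirical regression-rate law
    return radius + r_dot * dt, r_dot

radius = 0.01                     # 1 cm initial port radius
for _ in range(100):              # 1 s of burn at dt = 10 ms
    radius, r_dot = step_port(radius, mdot_ox=0.05, dt=0.01)
```

    As the port opens, the flux G drops and the regression rate decreases, which is the basic feedback the coupled flow/regression solver resolves in far greater detail.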

  10. Neuron Bifurcations in an Analog Electronic Burster

    NASA Astrophysics Data System (ADS)

    Savino, Guillermo V.; Formigli, Carlos M.

    2007-05-01

    Although bursting electrical activity is typical of some brain neurons and other biological excitable systems, its functions and mechanisms of generation are as yet unknown. In modeling such complex oscillations, analog electronic models are faster than mathematical ones, whether phenomenologically or theoretically based. We show experimentally that bursting oscillator circuits can be greatly simplified by using the nonlinear characteristics of two bipolar transistors. Since our circuit qualitatively mimics the bursting activity of Hodgkin-Huxley model neurons, and the bifurcations from which neuro-computational properties originate, it is not merely a caricature but a realistic model.

  11. Structural Changes in Lipid Vesicles Generated by the Shock Blast Waves: Coarse-Grained Molecular Dynamics Simulation

    DTIC Science & Technology

    2013-11-01

    duration, or shock-pulse shape. Used in this computational study is a coarse-grained model of the lipid vesicle as a simplified model of a cell ... realistic detail but to focus on a simple model of the major constituent of a cell membrane, the phospholipid bilayer. In this work, we studied the

  12. Collisional transport across the magnetic field in drift-fluid models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madsen, J., E-mail: jmad@fysik.dtu.dk; Naulin, V.; Nielsen, A. H.

    2016-03-15

    Drift ordered fluid models are widely applied in studies of low-frequency turbulence in the edge and scrape-off layer regions of magnetically confined plasmas. Here, we show how collisional transport across the magnetic field is self-consistently incorporated into drift-fluid models without altering the drift-fluid energy integral. We demonstrate that the inclusion of collisional transport in drift-fluid models gives rise to diffusion of particle density, momentum, and pressures in drift-fluid turbulence models and, thereby, obviates the customary use of artificial diffusion in turbulence simulations. We further derive a computationally efficient, two-dimensional model, which can be time integrated for several turbulence de-correlation times using only limited computational resources. The model describes interchange turbulence in a two-dimensional plane perpendicular to the magnetic field located at the outboard midplane of a tokamak. The model domain has two regions modeling open and closed field lines. The model employs a computationally expedient model for collisional transport. Numerical simulations show good agreement between the full and the simplified model for collisional transport.

  13. Pair mobility functions for rigid spheres in concentrated colloidal dispersions: Stresslet and straining motion couplings

    NASA Astrophysics Data System (ADS)

    Su, Yu; Swan, James W.; Zia, Roseanna N.

    2017-03-01

    Accurate modeling of particle interactions arising from hydrodynamic, entropic, and other microscopic forces is essential to understanding and predicting particle motion and suspension behavior in complex and biological fluids. The long-range nature of hydrodynamic interactions can be particularly challenging to capture. In dilute dispersions, pair-level interactions are sufficient and can be modeled in detail by analytical relations derived by Jeffrey and Onishi [J. Fluid Mech. 139, 261-290 (1984)] and Jeffrey [Phys. Fluids A 4, 16-29 (1992)]. In more concentrated dispersions, analytical modeling of many-body hydrodynamic interactions quickly becomes intractable, leading to the development of simplified models. These include mean-field approaches that smear out particle-scale structure and essentially assume that long-range hydrodynamic interactions are screened by crowding, as particle mobility decays at high concentrations. Toward the development of an accurate and simplified model for the hydrodynamic interactions in concentrated suspensions, we recently computed a set of effective pair hydrodynamic functions coupling particle motion to a hydrodynamic force and torque at volume fractions up to 50% utilizing accelerated Stokesian dynamics and a fast stochastic sampling technique [Zia et al., J. Chem. Phys. 143, 224901 (2015)]. We showed that the hydrodynamic mobility in suspensions of colloidal spheres is not screened, and the power law decay of the hydrodynamic functions persists at all concentrations studied. In the present work, we extend these mobility functions to include the couplings of particle motion and straining flow to the hydrodynamic stresslet. The couplings computed in these two articles constitute a set of orthogonal coupling functions that can be utilized to compute equilibrium properties in suspensions at arbitrary concentration and are readily applied to solve many-body hydrodynamic interactions analytically.

  14. Particle Transport through Scattering Regions with Clear Layers and Inclusions

    NASA Astrophysics Data System (ADS)

    Bal, Guillaume

    2002-08-01

    This paper introduces generalized diffusion models for the transport of particles in scattering media with nonscattering inclusions. Classical diffusion is known as a good approximation of transport only in scattering media. Based on asymptotic expansions and the coupling of transport and diffusion models, generalized diffusion equations with nonlocal interface conditions are proposed which offer a computationally cheap, yet accurate, alternative to solving the full phase-space transport equations. The paper shows which computational model should be used depending on the size and shape of the nonscattering inclusions in the simplified setting of two space dimensions. An important application is the treatment of clear layers in near-infrared (NIR) spectroscopy, an imaging technique based on the propagation of NIR photons in human tissues.

  15. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
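    The baseline the authors compare against, direct scanning of the parameter space, can be sketched as follows. The forward model here is a single-exponential uptake curve standing in for the multi-zone biphasic-solute finite-bath model; the grid search picks the diffusion coefficient whose concentration-time curve best matches the data:

```python
import math

def forward(D, times, L=1e-3, c_bath=1.0):
    """Stand-in forward model (single-exponential uptake over length scale L).
    The paper's actual forward model is a multi-zone biphasic-solute
    finite-bath finite element simulation."""
    return [c_bath * (1 - math.exp(-D * t / L**2)) for t in times]

def scan(times, observed, candidates):
    """Direct scanning of the parameter space: pick the diffusion coefficient
    whose predicted concentration-time curve best matches the data (SSE)."""
    def sse(D):
        return sum((p - o) ** 2 for p, o in zip(forward(D, times), observed))
    return min(candidates, key=sse)

times = [i * 600.0 for i in range(1, 20)]      # 10-minute sampling
true_D = 4e-10                                 # m^2/s, a plausible magnitude
observed = forward(true_D, times)              # synthetic "measured" curve
grid = [i * 1e-11 for i in range(1, 101)]      # 1e-11 .. 1e-9 m^2/s
best = scan(times, observed, grid)
```

    The inverse ANN replaces this exhaustive scan with a single trained mapping from concentration-time curves to diffusion coefficients, which is where the speed-up comes from.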

  16. A Simple Introduction to Gröbner Basis Methods in String Phenomenology

    NASA Astrophysics Data System (ADS)

    Gray, James

    In this talk I give an elementary introduction to the key algorithm used in recent applications of computational algebraic geometry to the subject of string phenomenology. I begin with a simple description of the algorithm itself and then give three examples of its use in physics. I describe how it can be used to obtain constraints on flux parameters, how it can simplify the equations describing vacua in 4d string models, and lastly how it can be used to compute the vacuum space of the electroweak sector of the MSSM.
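    Assuming SymPy is available, the algorithm can be demonstrated on a toy pair of polynomial constraints (not an actual string-phenomenology system): a lexicographic Gröbner basis eliminates variables, which is how constraints on parameters and branches of a vacuum space are extracted in practice:

```python
from sympy import groebner, reduced, symbols

x, y = symbols('x y')
# Two toy "vacuum" constraints; a lex Groebner basis eliminates x, leaving a
# univariate condition on y.
polys = [x**2 + y**2 - 1, x - y]
G = groebner(polys, x, y, order='lex')

# Ideal membership: every original constraint reduces to zero modulo the basis.
_, rem = reduced(polys[0], list(G.exprs), x, y, order='lex')
```

    With the lex ordering, the basis contains a polynomial in y alone, so the solution set can be described one variable at a time, much like back-substitution after Gaussian elimination.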

  17. Nuclear Engineering Computer Modules, Thermal-Hydraulics, TH-2: Liquid Metal Fast Breeder Reactors.

    ERIC Educational Resources Information Center

    Reihman, Thomas C.

    This learning module is concerned with the temperature field, the heat transfer rates, and the coolant pressure drop in typical liquid metal fast breeder reactor (LMFBR) fuel assemblies. As in all of the modules of this series, emphasis is placed on developing the theory and demonstrating its use with a simplified model. The heart of the module is…

  18. Nuclear Engineering Computer Modules, Thermal-Hydraulics, TH-1: Pressurized Water Reactors.

    ERIC Educational Resources Information Center

    Reihman, Thomas C.

    This learning module is concerned with the temperature field, the heat transfer rates, and the coolant pressure drop in typical pressurized water reactor (PWR) fuel assemblies. As in all of the modules of this series, emphasis is placed on developing the theory and demonstrating its use with a simplified model. The heart of the module is the PWR…

  19. A Landing Gear Noise Reduction Study Based on Computational Simulations

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Lockard, David P.

    2006-01-01

    Landing gear is one of the more prominent airframe noise sources. Techniques that diminish gear noise and suppress its radiation to the ground are highly desirable. Using a hybrid computational approach, this paper investigates the noise reduction potential of devices added to a simplified main landing gear model without small scale geometric details. The Ffowcs Williams and Hawkings equation is used to predict the noise at far-field observer locations from surface pressure data provided by unsteady CFD calculations. Because of the simplified nature of the model, most of the flow unsteadiness is restricted to low frequencies. The wheels, gear boxes, and oleo appear to be the primary sources of unsteadiness at these frequencies. The addition of fairings around the gear boxes and wheels, and the attachment of a splitter plate on the downstream side of the oleo significantly reduces the noise over a wide range of frequencies, but a dramatic increase in noise is observed at one frequency. The increased flow velocities, a consequence of the more streamlined bodies, appear to generate extra unsteadiness around other parts giving rise to the additional noise. Nonetheless, the calculations demonstrate the capability of the devices to improve overall landing gear noise.

  20. Turbulent Swirling Flow in Combustor/Exhaust Nozzle Systems

    DTIC Science & Technology

    1991-03-29

    simplify the specification and generation of the computational mesh as well as efficiently utilize all of the computational cells. DUMPSTER was applied to ... iteration at each cell in a zone when the k-ε model is not activated. LIMPKE: This subroutine performs the forward sweep of the LU-SGS ... iteration at each cell in a zone when the k-ε model is activated. LUDRV: This is the controller subroutine that calls the LIMP, UIMP

  1. Evaluation of Enhanced Risk Monitors for Use on Advanced Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Veeramany, Arun; Bonebrake, Christopher A.

    This study provides an overview of the methodology for integrating time-dependent failure probabilities into nuclear power reactor risk monitors. This prototypic enhanced risk monitor (ERM) methodology was evaluated using a hypothetical probabilistic risk assessment (PRA) model, generated using a simplified design of a liquid-metal-cooled advanced reactor (AR). Component failure data from an industry compilation of failures of components similar to those in the simplified AR model were used to initialize the PRA model. Core damage frequency (CDF) over time was computed and analyzed. In addition, a study on alternative risk metrics for ARs was conducted. Risk metrics that quantify the normalized cost of repairs, replacements, or other operations and management (O&M) actions were defined and used, along with an economic model, to compute the likely economic risk of future actions such as deferred maintenance, based on the anticipated change in CDF due to current component condition and future anticipated degradation. Such integration of conventional-risk metrics with alternate-risk metrics provides a convenient mechanism for assessing the impact of O&M decisions on the safety and economics of the plant. It is expected that, when integrated with supervisory control algorithms, such integrated-risk monitors will provide a mechanism for real-time control decision-making that ensures safety margins are maintained while operating the plant in an economically viable manner.
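    A toy version of the core computation, propagating time-dependent component failure probabilities through minimal cut sets to a core damage frequency, might look as follows. The system layout, Weibull parameters, and initiating-event frequency are all hypothetical:

```python
import math

def weibull_failure_prob(t, eta, beta):
    """Time-dependent failure probability F(t) = 1 - exp(-(t/eta)**beta)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def core_damage_frequency(t, initiating_freq, cut_sets, params):
    """Toy risk-monitor update: CDF(t) = initiating-event frequency times the
    probability that some minimal cut set has all its components failed,
    using the rare-event approximation (sum over cut sets)."""
    p = {name: weibull_failure_prob(t, *par) for name, par in params.items()}
    return initiating_freq * sum(
        math.prod(p[c] for c in cs) for cs in cut_sets)

# Hypothetical two-train mitigation system; (eta in hours, beta) illustrative.
params = {"pumpA": (8e4, 1.5), "pumpB": (8e4, 1.5), "valve": (2e5, 1.2)}
cut_sets = [["pumpA", "pumpB"], ["valve"]]
cdf_early = core_damage_frequency(8760.0, 1e-2, cut_sets, params)      # 1 year
cdf_late = core_damage_frequency(5 * 8760.0, 1e-2, cut_sets, params)   # 5 years
```

    Because the component failure probabilities grow with time in service, the monitored CDF drifts upward between maintenance actions, which is exactly the signal an ERM converts into an economic decision about deferred maintenance.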

  2. Computer-aided analysis for the Mechanics of Granular Materials (MGM) experiment, part 2

    NASA Technical Reports Server (NTRS)

    Parker, Joey K.

    1987-01-01

    Computer vision based analysis for the MGM experiment is continued and expanded into new areas. Volumetric strains of granular material triaxial test specimens have been measured from digitized images. A computer-assisted procedure is used to identify the edges of the specimen, and the edges are used in a 3-D model to estimate specimen volume. The results of this technique compare favorably to conventional measurements. A simplified model of the magnification caused by diffraction of light within the water of the test apparatus was also developed. This model yields good results when the distance between the camera and the test specimen is large compared to the specimen height. An algorithm for a more accurate 3-D magnification correction is also presented. The use of composite and RGB (red-green-blue) color cameras is discussed and potentially significant benefits from using an RGB camera are presented.
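    The edge-based volume estimate can be sketched as a disc summation over per-slice radii recovered from the detected specimen edges, assuming axisymmetry; the radii below describe a synthetic cylinder used as a sanity check, not MGM data:

```python
import math

def specimen_volume(radii, dz):
    """Estimate specimen volume from per-slice radii extracted from detected
    edges, treating the specimen as axisymmetric:
        V ~ sum(pi * r_i**2) * dz   (disc summation of pi * integral r^2 dz)"""
    return sum(math.pi * r * r for r in radii) * dz

# Sanity check against a cylinder: 100 slices of radius 25 mm at 1 mm spacing
# should reproduce pi * r^2 * h.
radii = [0.025] * 100
v = specimen_volume(radii, 0.001)
```

    A magnification correction (for refraction through the water of the test apparatus) would rescale each radius before the summation.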

  3. Photopolarimetry of scattering surfaces and their interpretation by computer model

    NASA Technical Reports Server (NTRS)

    Wolff, M.

    1979-01-01

    Wolff's computer model of a rough planetary surface was simplified and revised. Close adherence to the actual geometry of a pitted surface and the inclusion of a function for diffuse light resulted in a quantitative model comparable to observations of planetary satellites and asteroids. A function is also derived to describe diffuse light emitted from a particulate surface. The function is in terms of the indices of refraction of the surface material, particle size, and viewing angles. Computer-generated plots describe the observable and theoretical light components for the Moon, Mercury, Mars and a spectrum of asteroids. Other plots describe the effects of changing surface material properties. Mathematical results are generated to relate the parameters of the negative polarization branch to the properties of surface pitting. An explanation is offered for the polarization of the rings of Saturn, and the average diameter of ring objects is found to be 30 to 40 centimeters.

  4. Micro-structurally detailed model of a therapeutic hydrogel injectate in a rat biventricular cardiac geometry for computational simulations

    PubMed Central

    Sirry, Mazin S.; Davies, Neil H.; Kadner, Karen; Dubuis, Laura; Saleh, Muhammad G.; Meintjes, Ernesta M.; Spottiswoode, Bruce S.; Zilla, Peter; Franz, Thomas

    2013-01-01

    Biomaterial injection-based therapies have shown cautious success in restoration of cardiac function and prevention of adverse remodelling into heart failure after myocardial infarction (MI). However, the underlying mechanisms are not well understood, and computational studies to date have utilised simplified representations of the therapeutic myocardial injectates. Wistar rats underwent experimental infarction followed by immediate injection of polyethylene glycol hydrogel in the infarct region. Hearts were explanted, cryo-sectioned and the region with the injectate histologically analysed. Histological micrographs were used to reconstruct the dispersed hydrogel injectate. Cardiac magnetic resonance imaging (CMRI) data from a healthy rat were used to obtain an end-diastolic biventricular geometry which was subsequently adjusted and combined with the injectate model. The computational geometry of the injectate exhibited the microscopic structural details found in situ. The combination of injectate and cardiac geometry provides realistic geometries for multiscale computational studies of intra-myocardial injectate therapies for the rat model that has been widely used for MI research. PMID:23682845

  5. Hybrid feedforward and feedback controller design for nuclear steam generators over wide range operation using genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Y.; Edwards, R.M.; Lee, K.Y.

    1997-03-01

    In this paper, a simplified model with a lower order is first developed for a nuclear steam generator system and verified against some realistic environments. Based on this simplified model, a hybrid multi-input multi-output (MIMO) control system, consisting of feedforward control (FFC) and feedback control (FBC), is designed for wide range conditions by using the genetic algorithm (GA) technique. The FFC control, obtained by the GA optimization method, injects an a priori command input into the system to achieve an optimal performance for the designed system, while the GA-based FBC control provides the necessary compensation for any disturbances or uncertainties in a real steam generator. The FBC control is an optimal design of a PI-based control system which would be more acceptable for industrial practices and power plant control system upgrades. The designed hybrid MIMO FFC/FBC control system is first applied to the simplified model and then to a more complicated model with a higher order which is used as a substitute for the real system to test the efficacy of the designed control system. Results from computer simulations show that the designed GA-based hybrid MIMO FFC/FBC control can achieve good responses and robust performance. Hence, it can be considered as a viable alternative to the current control system upgrade.
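    The GA-based tuning loop can be sketched on a toy first-order plant standing in for the reduced-order steam generator model: each individual is a (kp, ki) gain pair, fitness is the integral of squared tracking error, and standard selection, crossover and mutation operators evolve the population. All numbers are illustrative:

```python
import random

def ise(kp, ki, dt=0.01, steps=500):
    """Integral-of-squared-error of a PI loop around a first-order plant
    dy/dt = (-y + u)/tau tracking a unit step (a stand-in for the
    reduced-order steam generator model)."""
    tau, y, integ, cost = 2.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u) / tau
        cost += e * e * dt
    return cost

def ga(pop_size=20, gens=30, seed=1):
    """Minimal real-coded GA: truncation selection, blend crossover,
    Gaussian mutation, and elitism on the (kp, ki) gains."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 10), rng.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: ise(*g))
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            children.append(tuple(
                max(0.0, w * x + (1 - w) * y + rng.gauss(0, 0.2))
                for x, y in zip(a, b)))
        pop = elite + children
    return min(pop, key=lambda g: ise(*g))

best = ga()
```

    The paper's FFC design works the same way, except that the GA optimizes a command-input profile rather than controller gains, and the evaluation uses the verified reduced-order model instead of this toy plant.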

  6. Simplified phenomenology for colored dark sectors

    NASA Astrophysics Data System (ADS)

    El Hedri, Sonia; Kaminska, Anna; de Vries, Maikel; Zurita, Jose

    2017-04-01

    We perform a general study of the relic density and LHC constraints on simplified models where the dark matter coannihilates with a strongly interacting particle X. In these models, the dark matter depletion is driven by the self-annihilation of X to pairs of quarks and gluons through the strong interaction. The phenomenology of these scenarios therefore only depends on the dark matter mass and the mass splitting between dark matter and X as well as the quantum numbers of X. In this paper, we consider simplified models where X can be either a scalar, a fermion or a vector, as well as a color triplet, sextet or octet. We compute the dark matter relic density constraints taking into account Sommerfeld corrections and bound state formation. Furthermore, we examine the restrictions from thermal equilibrium, the lifetime of X and the current and future LHC bounds on X pair production. All constraints are comprehensively presented in the mass splitting versus dark matter mass plane. While the relic density constraints can lead to upper bounds on the dark matter mass ranging from 2 TeV to more than 10 TeV across our models, the prospective LHC bounds range from 800 to 1500 GeV. A full coverage of the strongly coannihilating dark matter parameter space would therefore require hadron colliders with significantly higher center-of-mass energies.
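    The relic-density side of such coannihilation scenarios is commonly organised around the Griest-Seckel effective cross section, in which heavier partners are Boltzmann-suppressed according to their mass splitting. A sketch with illustrative (non-paper) inputs for a dark matter singlet plus a colored octet X:

```python
import math

def g_eff(species, x):
    """Effective number of degrees of freedom at x = m_DM / T.
    species: list of (g_i, delta_i) with delta_i = (m_i - m_DM)/m_DM."""
    return sum(g * (1 + d) ** 1.5 * math.exp(-x * d) for g, d in species)

def sigma_eff(species, sigma, x):
    """Griest-Seckel effective annihilation cross section for coannihilation;
    sigma[(i, j)] are the pairwise (co)annihilation cross sections."""
    ge = g_eff(species, x)
    total = 0.0
    for i, (gi, di) in enumerate(species):
        for j, (gj, dj) in enumerate(species):
            total += (sigma[(i, j)] * gi * gj / ge**2
                      * (1 + di) ** 1.5 * (1 + dj) ** 1.5
                      * math.exp(-x * (di + dj)))
    return total

# DM singlet with no direct annihilation, coannihilating with a colored
# partner X; degrees of freedom and splitting are illustrative only.
species = [(2, 0.0), (16, 0.1)]                # (g, relative mass splitting)
sigma = {(0, 0): 0.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}  # arbitrary units
s20 = sigma_eff(species, sigma, x=20.0)        # around freeze-out
s40 = sigma_eff(species, sigma, x=40.0)        # later: X is more suppressed
```

    The exponential suppression with x(Δ_i + Δ_j) is why the phenomenology collapses onto the (mass splitting, dark matter mass) plane used throughout the paper; Sommerfeld corrections and bound-state formation would rescale the pairwise sigma entries.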

  7. Ferrofluids: Modeling, numerical analysis, and scientific computation

    NASA Astrophysics Data System (ADS)

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) is a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a simplified version of this model and the corresponding numerical scheme we prove, in addition to stability, convergence and existence of solutions as a by-product. Throughout this dissertation, we provide numerical experiments, not only to validate mathematical results, but also to help the reader gain a qualitative understanding of the PDE models analyzed (the MNSE, the Rosensweig model, and the two-phase model). In addition, we provide computational experiments to illustrate the potential of these simple models and their ability to capture basic phenomenological features of ferrofluids, such as the Rosensweig instability in the case of the two-phase model. In this respect, we highlight the incisive numerical experiments with the two-phase model illustrating the critical role of the demagnetizing field in reproducing physically realistic behavior of ferrofluids.

  8. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    PubMed

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.

  9. Simplified Predictive Models for CO2 Sequestration Performance Assessment

    NASA Astrophysics Data System (ADS)

    Mishra, Srikanta; RaviGanesh, Priya; Schuetter, Jared; Mooney, Douglas; He, Jincong; Durlofsky, Louis

    2014-05-01

We present results from an ongoing research project that seeks to develop and validate a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formations. The overall research goal is to provide tools for predicting: (a) injection well and formation pressure buildup, and (b) lateral and vertical CO2 plume migration. Simplified modeling approaches that are being developed in this research fall under three categories: (1) Simplified physics-based modeling (SPM), where only the most relevant physical processes are modeled, (2) Statistical-learning based modeling (SLM), where the simulator is replaced with a "response surface", and (3) Reduced-order method based modeling (RMM), where mathematical approximations reduce the computational burden. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. In the first category (SPM), we use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of the fractional-flow curve, the variance of layer permeability values, and the nature of the vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. In the second category (SLM), we develop statistical "proxy models" using the simulation domain described previously with two different approaches: (a) classical Box-Behnken experimental design with a quadratic response surface fit, and (b) maximin Latin Hypercube sampling (LHS) based design with a Kriging metamodel fit using a quadratic trend and Gaussian correlation structure.
For roughly the same number of simulations, the LHS-based meta-model yields a more robust predictive model, as verified by a k-fold cross-validation approach. In the third category (RMM), we use a reduced-order modeling procedure that combines proper orthogonal decomposition (POD) for reducing problem dimensionality with trajectory-piecewise linearization (TPWL) for extrapolating system response at new control points from a limited number of trial runs ("snapshots"). We observe significant savings in computational time with very good accuracy from the POD-TPWL reduced order model - which could be important in the context of history matching, uncertainty quantification and optimization problems. The paper will present results from our ongoing investigations, and also discuss future research directions and likely outcomes. This work was supported by U.S. Department of Energy National Energy Technology Laboratory award DE-FE0009051 and Ohio Department of Development grant D-13-02.
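The SLM idea of replacing the simulator with a response surface can be sketched in a few lines (a hedged illustration with a hypothetical two-parameter stand-in for the full-physics simulator; a plain stratified Latin Hypercube is used rather than the maximin variant, and a quadratic polynomial fit stands in for Kriging):

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d):
    """One sample per stratum in each dimension: jitter within strata,
    then shuffle each column independently."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return u

def quadratic_design_matrix(x):
    """Columns: 1, x1, x2, x1^2, x2^2, x1*x2 (full quadratic in 2 vars)."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

def simulator(x):
    """Hypothetical stand-in for the full-physics code: a smooth
    'pressure buildup' response in two scaled parameters."""
    return 1.0 + 2.0 * x[:, 0] - x[:, 1] + 0.5 * x[:, 0] * x[:, 1]

X = latin_hypercube(30, 2)
y = simulator(X)
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)

# the fitted surface reproduces this (quadratic) truth essentially exactly
X_test = latin_hypercube(20, 2)
y_hat = quadratic_design_matrix(X_test) @ beta
err = float(np.max(np.abs(y_hat - simulator(X_test))))
```

In practice the surrogate is only as good as its cross-validated error on held-out simulator runs, which is exactly what the k-fold comparison in the abstract measures.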

  10. Aeroacoustic Analysis of a Simplified Landing Gear

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Khorrami, Mehdi R.; Li, Fei

    2004-01-01

    A hybrid approach is used to investigate the noise generated by a simplified landing gear without small scale parts such as hydraulic lines and fasteners. The Ffowcs Williams and Hawkings equation is used to predict the noise at far-field observer locations from flow data provided by an unsteady computational fluid dynamics calculation. A simulation with 13 million grid points has been completed, and comparisons are made between calculations with different turbulence models. Results indicate that the turbulence model has a profound effect on the levels and character of the unsteadiness. Flow data on solid surfaces and a set of permeable surfaces surrounding the gear have been collected. Noise predictions using the porous surfaces appear to be contaminated by errors caused by large wake fluctuations passing through the surfaces. However, comparisons between predictions using the solid surfaces with the near-field CFD solution are in good agreement giving confidence in the far-field results.

  11. A Graphic Overlay Method for Selection of Osteotomy Site in Chronic Radial Head Dislocation: An Evaluation of 3D-printed Bone Models.

    PubMed

    Kim, Hui Taek; Ahn, Tae Young; Jang, Jae Hoon; Kim, Kang Hee; Lee, Sung Jae; Jung, Duk Young

    2017-03-01

Three-dimensional (3D) computed tomography imaging is now being used to generate 3D models for planning orthopaedic surgery, but the process remains time consuming and expensive. For chronic radial head dislocation, we have designed a graphic overlay approach that employs selected 3D computer images and widely available software to simplify the process of osteotomy site selection. We studied 5 patients (2 traumatic and 3 congenital) with unilateral radial head dislocation. These patients were treated with surgery based on traditional radiographs, but they also had full sets of 3D CT imaging done both before and after their surgery: these 3D CT images form the basis for this study. From the 3D CT images, 3 sets of 3D-printed bone models were generated for each patient: 2 copies of the preoperative condition, and 1 copy of the postoperative condition. One set of the preoperative models was then actually osteotomized and fixed in the manner suggested by our graphic technique. Arcs of rotation of the 3 sets of 3D-printed bone models were then compared. Arcs of rotation of the 3 groups of bone models were significantly different, with the models osteotomized according to our graphic technique having the widest arcs. For chronic radial head dislocation, our graphic overlay approach simplifies the selection of the osteotomy site(s). Three-dimensional-printed bone models suggest that this approach could improve range of motion of the forearm in actual surgical practice. Level IV-therapeutic study.

  12. uPy: a ubiquitous computer graphics Python API with Biological Modeling Applications

    PubMed Central

    Autin, L.; Johnson, G.; Hake, J.; Olson, A.; Sanner, M.

    2015-01-01

    In this paper we describe uPy, an extension module for the Python programming language that provides a uniform abstraction of the APIs of several 3D computer graphics programs called hosts, including: Blender, Maya, Cinema4D, and DejaVu. A plugin written with uPy is a unique piece of code that will run in all uPy-supported hosts. We demonstrate the creation of complex plug-ins for molecular/cellular modeling and visualization and discuss how uPy can more generally simplify programming for many types of projects (not solely science applications) intended for multi-host distribution. uPy is available at http://upy.scripps.edu PMID:24806987

  13. Geometric modeling for computer aided design

    NASA Technical Reports Server (NTRS)

    Schwing, James L.

    1992-01-01

The goal was the design and implementation of software to be used in the conceptual design of aerospace vehicles. Several packages and design studies were completed, including two software tools currently used in the conceptual-level design of aerospace vehicles. These tools are the Solid Modeling Aerospace Research Tool (SMART) and the Environment for Software Integration and Execution (EASIE). SMART provides conceptual designers with a rapid prototyping capability and additionally provides initial mass property analysis. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand-alone analysis codes, streamlining the exchange of data between programs, reducing errors, and improving efficiency.

  14. On the torsional loading of elastoplastic spheres in contact

    NASA Astrophysics Data System (ADS)

    Nadimi, Sadegh; Fonseca, Joana

    2017-06-01

The mechanical interaction between two bodies involves normal loading in combination with tangential, torsional and rotational loading. This paper focuses on the torsional loading of two spherical bodies, which leads to a twisting moment. The theoretical approach for calculating the twisting moment between two spherical bodies was proposed by Lubkin [1]. Due to the complexity of the solution, it was simplified by Deresiewicz for discrete element modelling [2]. Here, the application of a simplified model to elastoplastic spheres is verified using computational modelling. The single grain interaction is simulated in a combined finite-discrete element domain. In this domain a grain can deform using a finite element formulation and can interact with other objects based on discrete element principles. For an elastoplastic model, the contact area is larger than in the elastic model under a given normal force. Therefore, the twisting moment response is stiffer in the elastoplastic case. The results presented here are important for describing any granular system involving torsional loading of elastoplastic grains. In particular, recent research on the behaviour of soil has clearly shown the importance of plasticity on grain interaction and rearrangement.
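The effect described above, a larger elastoplastic contact area giving a stiffer twisting response, can be illustrated with classical elastic contact formulas (a sketch only: the 30% contact-radius enlargement for the plastic case and the material values are hypothetical illustrative numbers, not results from the paper):

```python
def hertz_contact_radius(F, R_eff, E_star):
    """Hertzian contact radius for elastic spheres: a = (3 F R / 4 E*)^(1/3)."""
    return (3.0 * F * R_eff / (4.0 * E_star)) ** (1.0 / 3.0)

def torsional_stiffness(a, G):
    """Initial no-slip torsional stiffness of a circular contact patch,
    M = (16/3) G a^3 * twist angle (classical elastic result)."""
    return 16.0 * G * a**3 / 3.0

# illustrative quartz-like grain properties
E, nu = 70e9, 0.25
E_star = E / (2.0 * (1.0 - nu**2))   # effective modulus, identical spheres
G = E / (2.0 * (1.0 + nu))
R_eff = 0.25e-3                      # two R = 0.5 mm spheres: R_eff = R/2

a_elastic = hertz_contact_radius(1.0, R_eff, E_star)   # 1 N normal force
a_plastic = 1.3 * a_elastic    # hypothetical enlargement due to plasticity
k_elastic = torsional_stiffness(a_elastic, G)
k_plastic = torsional_stiffness(a_plastic, G)
```

Since the stiffness scales with the cube of the contact radius, even a modest plastic enlargement of the contact patch stiffens the twisting response substantially.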

  15. The performance of fine-grained and coarse-grained elastic network models and its dependence on various factors.

    PubMed

    Na, Hyuntae; Song, Guang

    2015-07-01

In a recent work we developed a method for deriving accurate simplified models that capture the essentials of conventional all-atom NMA and identified the two best simplified models: ssNMA and eANM, both of which have a significantly higher correlation with NMA in mean square fluctuation calculations than existing elastic network models such as ANM and ANMr2, a variant of ANM that uses the inverse of the squared separation distances as spring constants. Here, we examine closely how the performance of these elastic network models depends on various factors, namely, the presence of hydrogen atoms in the model, the quality of input structures, and the effect of crystal packing. The study reveals the strengths and limitations of these models. Our results indicate that ssNMA and eANM are the best fine-grained elastic network models, but their performance is sensitive to the quality of input structures. When the quality of input structures is poor, ANMr2 is a good alternative for computing mean-square fluctuations, while the ANM model is a good alternative for obtaining normal modes. © 2015 Wiley Periodicals, Inc.

  16. Combining wet and dry research: experience with model development for cardiac mechano-electric structure-function studies

    PubMed Central

    Quinn, T. Alexander; Kohl, Peter

    2013-01-01

    Since the development of the first mathematical cardiac cell model 50 years ago, computational modelling has become an increasingly powerful tool for the analysis of data and for the integration of information related to complex cardiac behaviour. Current models build on decades of iteration between experiment and theory, representing a collective understanding of cardiac function. All models, whether computational, experimental, or conceptual, are simplified representations of reality and, like tools in a toolbox, suitable for specific applications. Their range of applicability can be explored (and expanded) by iterative combination of ‘wet’ and ‘dry’ investigation, where experimental or clinical data are used to first build and then validate computational models (allowing integration of previous findings, quantitative assessment of conceptual models, and projection across relevant spatial and temporal scales), while computational simulations are utilized for plausibility assessment, hypotheses-generation, and prediction (thereby defining further experimental research targets). When implemented effectively, this combined wet/dry research approach can support the development of a more complete and cohesive understanding of integrated biological function. This review illustrates the utility of such an approach, based on recent examples of multi-scale studies of cardiac structure and mechano-electric function. PMID:23334215

  17. Frontier molecular orbitals of a single molecule adsorbed on thin insulating films supported by a metal substrate: electron and hole attachment energies.

    PubMed

    Scivetti, Iván; Persson, Mats

    2017-09-06

We present calculations of vertical electron and hole attachment energies to the frontier orbitals of a pentacene molecule adsorbed on multi-layer sodium chloride films supported by a copper substrate using a simplified density functional theory (DFT) method. The adsorbate and the film are treated fully within DFT, whereas the metal is treated implicitly by a perfect conductor model. We find that the computed energy gap between the highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO) from the vertical attachment energies increases with the thickness of the insulating film, in agreement with experiments. This increase of the gap can be rationalised in a simple dielectric model with parameters determined from DFT calculations and is found to be dominated by the image interaction with the metal. We find, however, that this simplified model overestimates the downward shift of the energy gap in the limit of an infinitely thick film.

  18. Frontier molecular orbitals of a single molecule adsorbed on thin insulating films supported by a metal substrate: electron and hole attachment energies

    NASA Astrophysics Data System (ADS)

    Scivetti, Iván; Persson, Mats

    2017-09-01

We present calculations of vertical electron and hole attachment energies to the frontier orbitals of a pentacene molecule adsorbed on multi-layer sodium chloride films supported by a copper substrate using a simplified density functional theory (DFT) method. The adsorbate and the film are treated fully within DFT, whereas the metal is treated implicitly by a perfect conductor model. We find that the computed energy gap between the highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO) from the vertical attachment energies increases with the thickness of the insulating film, in agreement with experiments. This increase of the gap can be rationalised in a simple dielectric model with parameters determined from DFT calculations and is found to be dominated by the image interaction with the metal. We find, however, that this simplified model overestimates the downward shift of the energy gap in the limit of an infinitely thick film.

  19. Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.

    PubMed

    Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam

    2018-06-01

The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of proteins' primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins represented in a well-studied simplified model called the hydrophobic-hydrophilic (HP) model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start by finding a point at which to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies only to short sequences, and is proved by showing that the core and the folds of the protein have two identical sides for all short sequences.
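A minimal sketch of the HP-model bookkeeping that any such approximation algorithm relies on: placing the chain on the 2D lattice, rejecting self-intersecting folds, and counting H-H contacts (an illustration of the model itself, not the authors' cellular automaton):

```python
def fold_positions(moves):
    """Convert a move string (U/D/L/R) into lattice coordinates from (0,0)."""
    step = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
    pos = [(0, 0)]
    for m in moves:
        dx, dy = step[m]
        x, y = pos[-1]
        pos.append((x + dx, y + dy))
    return pos

def hh_contacts(sequence, moves):
    """Count H-H contacts (adjacent on the lattice, not consecutive in the
    chain) for a self-avoiding fold; returns -1 if the fold collides."""
    pos = fold_positions(moves)
    if len(set(pos)) != len(pos):
        return -1                        # not self-avoiding
    index = {p: i for i, p in enumerate(pos)}
    contacts = 0
    for i, (x, y) in enumerate(pos):
        if sequence[i] != 'H':
            continue
        for q in ((x + 1, y), (x, y + 1)):   # count each pair once
            j = index.get(q)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                contacts += 1
    return contacts
```

For example, folding "HHHH" into a unit square with moves "RUL" yields one H-H contact, while a fold that revisits a lattice site is rejected.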

  20. Symbolic computation of equivalence transformations and parameter reduction for nonlinear physical models

    NASA Astrophysics Data System (ADS)

    Cheviakov, Alexei F.

    2017-11-01

    An efficient systematic procedure is provided for symbolic computation of Lie groups of equivalence transformations and generalized equivalence transformations of systems of differential equations that contain arbitrary elements (arbitrary functions and/or arbitrary constant parameters), using the software package GeM for Maple. Application of equivalence transformations to the reduction of the number of arbitrary elements in a given system of equations is discussed, and several examples are considered. The first computational example of generalized equivalence transformations where the transformation of the dependent variable involves an arbitrary constitutive function is presented. As a detailed physical example, a three-parameter family of nonlinear wave equations describing finite anti-plane shear displacements of an incompressible hyperelastic fiber-reinforced medium is considered. Equivalence transformations are computed and employed to radically simplify the model for an arbitrary fiber direction, invertibly reducing the model to a simple form that corresponds to a special fiber direction, and involves no arbitrary elements. The presented computation algorithm is applicable to wide classes of systems of differential equations containing arbitrary elements.

  1. Simplified mathematical model of losses in a centrifugal compressor stage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seleznev, K.P.; Galerkin, Yu.B.; Popova, E.Yu.

    1988-05-01

A mathematical model that does not require calculation of the flow around grids was developed for optimizing the stage parameters. The loss coefficients of the stage elements were considered as functions of the flow-through section, the angle of incidence, the compressibility criterion, and the Reynolds number. The relationships were used to calculate losses in all blade components, including blade diffusers, deflectors, and rotors. The model is implemented on a microcomputer and computes the efficiency of one variant of the flow-through section of a stage in 60 minutes.

  2. Transmission and visualization of large geographical maps

    NASA Astrophysics Data System (ADS)

    Zhang, Liqiang; Zhang, Liang; Ren, Yingchao; Guo, Zhifeng

Transmission and visualization of large geographical maps have become a challenging research issue in GIS applications. This paper presents an efficient and robust way to simplify large geographical maps using frame buffers and Voronoi diagrams. Topological relationships are preserved during the simplification by removing the Voronoi diagram's self-overlapped regions. With the simplified vector maps, we establish different level-of-detail (LOD) models of these maps. We then introduce a client/server architecture that integrates our out-of-core algorithm with a progressive transmission and rendering scheme based on computer graphics hardware. The architecture allows viewers to interactively view different regions at different LODs over the network. Experimental results show that the proposed scheme provides an effective way to transmit and manipulate large maps efficiently.

  3. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who discovered the Gröbner basis. In the method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces a system of differential equations in a given model to an equivalent system. Second, since the equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
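As a small taste of the symbolic machinery involved, a Gröbner basis for a toy polynomial system can be computed with SymPy (SymPy is our illustrative choice here, not the authors' tooling, which builds on differential elimination):

```python
from sympy import symbols, groebner, reduced

x, y = symbols('x y')
system = [x**2 + y, x*y - 1]            # toy polynomial system

# a lexicographic Groebner basis eliminates x first, exposing a
# univariate relation in y alone
gb = groebner(system, x, y, order='lex')
basis = list(gb.exprs)

# every input polynomial reduces to zero modulo the basis
remainders = [reduced(f, basis, x, y, order='lex')[1] for f in system]
```

This zero-remainder property is what elimination-based arguments about parameter identifiability build on: relations implied by the model become explicit members of the ideal.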

  4. Plasma sheath effects on ion collection by a pinhole

    NASA Technical Reports Server (NTRS)

    Herr, Joel L.; Snyder, David B.

    1993-01-01

    This work presents tables to assist in the evaluation of pinhole collection effects on spacecraft. These tables summarize results of a computer model which tracks particle trajectories through a simplified electric field in the plasma sheath. A technique is proposed to account for plasma sheath effects in the application of these results and scaling rules are proposed to apply the calculations to specific situations. This model is compared to ion current measurements obtained by another worker, and the agreement is very good.

  5. Advanced propulsion for LEO-Moon transport. 3: Transportation model. M.S. Thesis - California Univ.

    NASA Technical Reports Server (NTRS)

    Henley, Mark W.

    1992-01-01

A simplified computational model of a low Earth orbit-Moon transportation system has been developed to provide insight into the benefits of new transportation technologies. A reference transportation infrastructure, based upon near-term technology developments, is used as a departure point for assessing other, more advanced alternatives. Comparison of the benefits of technology application, measured in terms of a mass payback ratio, suggests that several of the advanced technology alternatives could substantially improve the efficiency of low Earth orbit-Moon transportation.

  6. Modeling Infrared Signal Reflections to Characterize Indoor Multipath Propagation

    PubMed Central

    De-La-Llana-Calvo, Álvaro; Lázaro-Galilea, José Luis; Gardel-Vicente, Alfredo; Rodríguez-Navarro, David; Bravo-Muñoz, Ignacio; Tsirigotis, Georgios; Iglesias-Miguel, Juan

    2017-01-01

    In this paper, we propose a model to characterize Infrared (IR) signal reflections on any kind of surface material, together with a simplified procedure to compute the model parameters. The model works within the framework of Local Positioning Systems (LPS) based on IR signals (IR-LPS) to evaluate the behavior of transmitted signal Multipaths (MP), which are the main cause of error in IR-LPS, and makes several contributions to mitigation methods. Current methods are based on physics, optics, geometry and empirical methods, but these do not meet our requirements because of the need to apply several different restrictions and employ complex tools. We propose a simplified model based on only two reflection components, together with a method for determining the model parameters based on 12 empirical measurements that are easily performed in the real environment where the IR-LPS is being applied. Our experimental results show that the model provides a comprehensive solution to the real behavior of IR MP, yielding small errors when comparing real and modeled data (the mean error ranges from 1% to 4% depending on the environment surface materials). Other state-of-the-art methods yielded mean errors ranging from 15% to 40% in test measurements. PMID:28406436
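A two-component reflection model of this general kind can be sketched as a Lambertian diffuse term plus a specular lobe (a hedged, generic parameterization, not the authors' exact model or their 12-measurement fitting procedure; kd, ks and n stand in for the per-surface parameters one would fit from such measurements):

```python
import math

def reflected_intensity(theta_i, theta_o, kd, ks, n):
    """Two-component reflection for in-plane angles (radians, measured
    from the surface normal): a Lambertian diffuse term plus a specular
    lobe centred on the mirror direction theta_o == theta_i."""
    diffuse = kd * math.cos(theta_i)
    specular = ks * max(0.0, math.cos(theta_o - theta_i)) ** n
    return diffuse + specular

mirror = reflected_intensity(0.5, 0.5, kd=0.3, ks=0.6, n=20)  # mirror direction
off = reflected_intensity(0.5, 1.0, kd=0.3, ks=0.6, n=20)     # away from it
```

The specular lobe dominates near the mirror direction and decays quickly away from it, which is the kind of structured multipath contribution an IR-LPS mitigation method needs to account for.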

  7. Simplifying and upscaling water resources systems models that combine natural and engineered components

    NASA Astrophysics Data System (ADS)

    McIntyre, N.; Keir, G.

    2014-12-01

    Water supply systems typically encompass components of both natural systems (e.g. catchment runoff, aquifer interception) and engineered systems (e.g. process equipment, water storages and transfers). Many physical processes of varying spatial and temporal scales are contained within these hybrid systems models. The need to aggregate and simplify system components has been recognised for reasons of parsimony and comprehensibility; and the use of probabilistic methods for modelling water-related risks also prompts the need to seek computationally efficient up-scaled conceptualisations. How to manage the up-scaling errors in such hybrid systems models has not been well-explored, compared to research in the hydrological process domain. Particular challenges include the non-linearity introduced by decision thresholds and non-linear relations between water use, water quality, and discharge strategies. Using a case study of a mining region, we explore the nature of up-scaling errors in water use, water quality and discharge, and we illustrate an approach to identification of a scale-adjusted model including an error model. Ways forward for efficient modelling of such complex, hybrid systems are discussed, including interactions with human, energy and carbon systems models.

  8. An Exact Algorithm to Compute the Double-Cut-and-Join Distance for Genomes with Duplicate Genes.

    PubMed

    Shao, Mingfu; Lin, Yu; Moret, Bernard M E

    2015-05-01

    Computing the edit distance between two genomes is a basic problem in the study of genome evolution. The double-cut-and-join (DCJ) model has formed the basis for most algorithmic research on rearrangements over the last few years. The edit distance under the DCJ model can be computed in linear time for genomes without duplicate genes, while the problem becomes NP-hard in the presence of duplicate genes. In this article, we propose an integer linear programming (ILP) formulation to compute the DCJ distance between two genomes with duplicate genes. We also provide an efficient preprocessing approach to simplify the ILP formulation while preserving optimality. Comparison on simulated genomes demonstrates that our method outperforms MSOAR in computing the edit distance, especially when the genomes contain long duplicated segments. We also apply our method to assign orthologous gene pairs among human, mouse, and rat genomes, where once again our method outperforms MSOAR.
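For the duplicate-free case, the linear-time computation mentioned above is short enough to sketch directly: build the adjacency graph of the two genomes and count its cycles (a minimal illustration for circular genomes with identical single-copy gene content; the ILP for duplicated genes is beyond a sketch):

```python
def adjacencies(genome):
    """Adjacency map of a circular genome given as a list of signed genes.
    Each gene g has extremities (g,'t') and (g,'h'); +g contributes
    tail-then-head along the chromosome, -g head-then-tail."""
    def ext(g):
        t, h = (abs(g), 't'), (abs(g), 'h')
        return (t, h) if g > 0 else (h, t)
    adj = {}
    n = len(genome)
    for i in range(n):
        right = ext(genome[i])[1]
        left = ext(genome[(i + 1) % n])[0]
        adj[right] = left
        adj[left] = right
    return adj

def dcj_distance(A, B):
    """DCJ distance for circular genomes over the same genes, no duplicates:
    d = n - c, where c is the number of cycles in the adjacency graph."""
    a, b = adjacencies(A), adjacencies(B)
    seen, cycles = set(), 0
    for start in a:
        if start in seen:
            continue
        cycles += 1
        v, use_a = start, True           # alternate A-edges and B-edges
        while v not in seen:
            seen.add(v)
            v = a[v] if use_a else b[v]
            use_a = not use_a
    return len(A) - cycles
```

Identical genomes give n cycles of length two (distance 0); a single inversion costs one DCJ operation and a transposition costs two.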

  9. Using color histogram normalization for recovering chromatic illumination-changed images.

    PubMed

    Pei, S C; Tseng, C L; Wu, C C

    2001-11-01

    We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.
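The normalization step can be sketched with the covariance machinery the abstract describes (a hedged illustration on synthetic Gaussian color clouds, not the authors' full skew-free affine estimation): whitening each color distribution removes the translation, scaling and rotation components of an illumination change, making two differently lit histograms directly comparable.

```python
import numpy as np

rng = np.random.default_rng(2)

def normalize_colors(pixels):
    """Whiten an (N, 3) cloud of R-G-B values: subtract the mean
    (translation) and rescale along the covariance eigenvectors
    (rotation + scaling)."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels - mu, rowvar=False)
    w, V = np.linalg.eigh(cov)
    return (pixels - mu) @ V / np.sqrt(w)

# original colors and an illuminant-changed copy (per-channel scale + shift)
colors = rng.normal([120, 100, 80], [20, 15, 10], size=(5000, 3))
changed = colors * np.array([0.8, 1.1, 0.9]) + np.array([10.0, -5.0, 20.0])

n1, n2 = normalize_colors(colors), normalize_colors(changed)
```

After normalization both clouds have zero mean and identity covariance, so a comparison between them no longer depends on the illumination-induced affine distortion.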

  10. Recent advances in QM/MM free energy calculations using reference potentials☆

    PubMed Central

    Duarte, Fernanda; Amrein, Beat A.; Blaha-Nelson, David; Kamerlin, Shina C.L.

    2015-01-01

    Background Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Scope of review Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. Major conclusions The use of physically-based simplifications has shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. General significance As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. PMID:25038480

  11. Maximized Gust Loads of a Closed-Loop, Nonlinear Aeroelastic System Using Nonlinear Systems Theory

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1999-01-01

The problem of computing the maximized gust load for a nonlinear, closed-loop aeroelastic aircraft is discussed. The Volterra theory of nonlinear systems is applied in order to define a linearized system that provides a bound on the response of the nonlinear system of interest. The method is applied to a simplified model of an Airbus A310.

  12. Nuclear Engineering Computer Modules, Thermal-Hydraulics, TH-3: High Temperature Gas Cooled Reactor Thermal-Hydraulics.

    ERIC Educational Resources Information Center

    Reihman, Thomas C.

    This learning module is concerned with the temperature field, the heat transfer rates, and the coolant pressure drop in typical high temperature gas-cooled reactor (HTGR) fuel assemblies. As in all of the modules of this series, emphasis is placed on developing the theory and demonstrating its use with a simplified model. The heart of the module…

  13. A hypothesis on the formation of the primary ossification centers in the membranous neurocranium: a mathematical and computational model.

    PubMed

    Garzón-Alvarado, Diego A

    2013-01-21

    This article develops a model of the appearance and location of the primary centers of ossification in the calvaria. The model uses a system of reaction-diffusion equations of two molecules (BMP and Noggin) whose behavior is of the activator-substrate type, and its solution produces Turing patterns, which represent the primary ossification centers. Additionally, the model includes the level of cell maturation as a function of the location of mesenchymal cells. Thus the mature cells can become osteoblasts due to the action of BMP2. Therefore, with this model, we can have two frontal primary centers, two parietal, and one, two or more occipital centers. The location of these centers in the simplified computational model is highly consistent with those centers found at an embryonic level. Copyright © 2012 Elsevier Ltd. All rights reserved.
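    The activator-substrate mechanism described above can be illustrated with a minimal sketch. This is not the paper's BMP/Noggin model; it is a generic Schnakenberg-type activator-substrate pair with hypothetical parameters, integrated by explicit finite differences on a periodic 1-D domain:

```python
import random

def simulate_activator_substrate(n=50, steps=2000, dt=0.01, dx=1.0,
                                 Du=0.05, Dv=1.0, a=0.1, b=0.9, seed=0):
    """Explicit Euler integration of a Schnakenberg-type activator-substrate
    system: u_t = Du*u_xx + a - u + u^2*v,  v_t = Dv*v_xx + b - u^2*v.
    Periodic boundary; small random perturbation of the steady state."""
    rng = random.Random(seed)
    u0, v0 = a + b, b / (a + b) ** 2            # homogeneous steady state
    u = [u0 + 0.01 * (rng.random() - 0.5) for _ in range(n)]
    v = [v0 + 0.01 * (rng.random() - 0.5) for _ in range(n)]
    for _ in range(steps):
        lap = lambda w, i: (w[(i - 1) % n] - 2 * w[i] + w[(i + 1) % n]) / dx ** 2
        un = [u[i] + dt * (Du * lap(u, i) + a - u[i] + u[i] ** 2 * v[i])
              for i in range(n)]
        vn = [v[i] + dt * (Dv * lap(v, i) + b - u[i] ** 2 * v[i])
              for i in range(n)]
        u, v = un, vn
    return u, v
```

    With a sufficiently large diffusion ratio Dv/Du, perturbations of the uniform state can grow into stationary spatial peaks, the Turing-pattern analogue of discrete ossification centers.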

  14. Computer Program for the Design and Off-Design Performance of Turbojet and Turbofan Engine Cycles

    NASA Technical Reports Server (NTRS)

    Morris, S. J.

    1978-01-01

    The rapid computer program is designed to be run in a stand-alone mode or operated within a larger program. The computation is based on a simplified one-dimensional gas turbine cycle. Each component in the engine is modeled thermodynamically. The component efficiencies used in the thermodynamic modeling are scaled for the off-design conditions from input design point values using empirical trends which are included in the computer code. The engine cycle program is capable of producing reasonable engine performance predictions with a minimum of computer execution time. The current execution time on the IBM 360/67 for one Mach number, one altitude, and one power setting is about 0.1 seconds. The principal assumption used in the calculation is that the compressor is operated along a line of maximum adiabatic efficiency on the compressor map. The fluid properties are computed for the combustion mixture, but dissociation is not included. The procedure included in the program is only for the combustion of JP-4, methane, or hydrogen.
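    The kind of one-dimensional cycle calculation such a program rests on can be sketched as follows. This is a generic Brayton-cycle estimate with constant gas properties and hypothetical design-point values, not the NASA code itself:

```python
def brayton_specific_work(pr=12.0, t_inlet=288.15, t_turbine_in=1400.0,
                          eta_c=0.85, eta_t=0.90, gamma=1.4, cp=1005.0):
    """One-dimensional gas turbine cycle with component (isentropic)
    efficiencies. Returns (net specific work in J/kg, thermal efficiency).
    pr = compressor pressure ratio; temperatures in kelvin."""
    tau = pr ** ((gamma - 1.0) / gamma)                  # isentropic temp ratio
    t2 = t_inlet * (1.0 + (tau - 1.0) / eta_c)           # compressor exit temp
    w_comp = cp * (t2 - t_inlet)                         # compressor work
    t4 = t_turbine_in * (1.0 - eta_t * (1.0 - 1.0 / tau))  # turbine exit temp
    w_turb = cp * (t_turbine_in - t4)                    # turbine work
    q_in = cp * (t_turbine_in - t2)                      # heat added in burner
    w_net = w_turb - w_comp
    return w_net, w_net / q_in
```

    Off-design behavior in the actual program comes from rescaling eta_c and eta_t with empirical trends; here they are simply fixed inputs.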

  15. TAIR: A transonic airfoil analysis computer code

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.; Holst, T. L.; Grundy, K. L.; Thomas, S. D.

    1981-01-01

    The operation of the TAIR (Transonic AIRfoil) computer code, which uses a fast, fully implicit algorithm to solve the conservative full-potential equation for transonic flow fields about arbitrary airfoils, is described on two levels of sophistication: simplified operation and detailed operation. The program organization and theory are elaborated to simplify modification of TAIR for new applications. Examples with input and output are given for a wide range of cases, including incompressible, subcritical compressible, and transonic calculations.

  16. Simulating physiological interactions in a hybrid system of mathematical models.

    PubMed

    Kretschmer, Jörn; Haunsberger, Thomas; Drost, Erick; Koch, Edmund; Möller, Knut

    2014-12-01

    Mathematical models can be deployed to simulate physiological processes of the human organism. Exploiting these simulations, reactions of a patient to changes in the therapy regime can be predicted. Based on these predictions, medical decision support systems (MDSS) can help in optimizing medical therapy. An MDSS designed to support mechanical ventilation in critically ill patients should not only consider respiratory mechanics but should also consider other systems of the human organism such as gas exchange or blood circulation. A specially designed framework allows combining three model families (respiratory mechanics, cardiovascular dynamics and gas exchange) to predict the outcome of a therapy setting. Elements of the three model families are dynamically combined to form a complex model system with interacting submodels. Tests revealed that complex model combinations are not computationally feasible. In most patients, cardiovascular physiology could be simulated by simplified models, decreasing computational costs. Thus, a simplified cardiovascular model that is able to reproduce basic physiological behavior is introduced. This model consists purely of difference equations and does not require special algorithms to be solved numerically. The model is based on a beat-to-beat model which has been extended to react to the intrathoracic pressure levels present during mechanical ventilation. This reaction to intrathoracic pressure has been tuned to mimic the behavior of a complex 19-compartment model. Tests revealed that the model closely reproduces the general system behavior of the 19-compartment model. Blood pressures were calculated with a maximum deviation of 1.8 % in systolic pressure and 3.5 % in diastolic pressure, leading to a simulation error of 0.3 % in cardiac output. 
The gas exchange submodel, which reacts to changes in cardiac output, showed a resulting deviation of less than 0.1 %. Therefore, the proposed model is usable in combinations where the cardiovascular simulation does not have to be detailed. Computing costs have been decreased dramatically, by a factor of 186, compared to a model combination employing the 19-compartment model.
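    A beat-to-beat cardiovascular model built purely from difference equations, of the general kind the abstract describes, can be sketched with a two-element Windkessel whose diastolic pressure decays toward the intrathoracic pressure. All parameter values here are hypothetical stand-ins, not those of the paper's model:

```python
import math

def beat_to_beat(n_beats=50, R=1.0, C=1.5, sv=70.0, t_beat=0.8,
                 p0=80.0, p_thorax=0.0):
    """Each beat: ejection of stroke volume sv raises arterial pressure by
    sv/C (mmHg); pressure then decays toward the intrathoracic pressure
    p_thorax with time constant R*C. Returns (systolic, diastolic) pairs."""
    p = p0
    beats = []
    for _ in range(n_beats):
        p += sv / C                                                   # systole
        sys_p = p
        p = p_thorax + (p - p_thorax) * math.exp(-t_beat / (R * C))   # diastole
        beats.append((sys_p, p))
    return beats
```

    In this toy model, raising p_thorax (as positive-pressure ventilation does) shifts the whole pressure trace upward while leaving the pulse pressure sv/C unchanged; it is this kind of ventilation-circulation interaction that the hybrid framework couples across submodels.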

  17. Parallel stochastic simulation of macroscopic calcium currents.

    PubMed

    González-Vélez, Virginia; González-Vélez, Horacio

    2007-06-01

    This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca(2+) currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca(2+) channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca(2+) channel to different voltage inputs to the cell. In order to provide an accurate systematic view for the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
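    The core of such a simulator, aggregating a macroscopic current from many independent channels that each follow a small Markov model, can be sketched as below. The 3-state scheme (closed, open, inactivated), the rate constants, and the unitary current are illustrative stand-ins, not MACACO's actual parameters:

```python
import random

def macroscopic_current(n_channels=200, steps=500, dt=0.1, i_unit=-0.3,
                        k_co=0.05, k_oc=0.02, k_oi=0.01, k_io=0.005, seed=1):
    """Sum of unitary currents from independent 3-state channels
    (C = closed, O = open, I = inactivated). Per-step transition
    probability is rate*dt; rates here are hypothetical."""
    rng = random.Random(seed)
    states = ["C"] * n_channels
    current = []
    for _ in range(steps):
        for j, s in enumerate(states):
            r = rng.random()
            if s == "C" and r < k_co * dt:
                states[j] = "O"
            elif s == "O":
                if r < k_oc * dt:
                    states[j] = "C"
                elif r < (k_oc + k_oi) * dt:
                    states[j] = "I"
            elif s == "I" and r < k_io * dt:
                states[j] = "O"
        current.append(i_unit * states.count("O"))   # inward current < 0
    return current
```

    A parameter sweep of the kind MACACO's task farm performs would simply run this function over a grid of voltage-dependent rate constants and collect the traces.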

  18. Rotary engine performance computer program (RCEMAP and RCEMAPPC): User's guide

    NASA Technical Reports Server (NTRS)

    Bartrand, Timothy A.; Willis, Edward A.

    1993-01-01

    This report is a user's guide for a computer code that simulates the performance of several rotary combustion engine configurations. It is intended to assist prospective users in getting started with RCEMAP and/or RCEMAPPC. RCEMAP (Rotary Combustion Engine performance MAP generating code) is the mainframe version, while RCEMAPPC is a simplified subset designed for the personal computer, or PC, environment. Both versions are based on an open, zero-dimensional combustion system model for the prediction of instantaneous pressures, temperature, chemical composition and other in-chamber thermodynamic properties. Both versions predict overall engine performance and thermal characteristics, including bmep, bsfc, exhaust gas temperature, average material temperatures, and turbocharger operating conditions. Required inputs include engine geometry, materials, constants for use in the combustion heat release model, and turbomachinery maps. Illustrative examples and sample input files for both versions are included.

  19. A method for modeling finite-core vortices in wake-flow calculations

    NASA Technical Reports Server (NTRS)

    Stremel, P. M.

    1984-01-01

    A numerical method for computing nonplanar vortex wakes represented by finite-core vortices is presented. The approach solves for the velocity on an Eulerian grid, using standard finite-difference techniques; the vortex wake is tracked by Lagrangian methods. In this method, the distribution of continuous vorticity in the wake is replaced by a group of discrete vortices. An axially symmetric distribution of vorticity about the center of each discrete vortex is used to represent the finite-core model. Two distributions of vorticity, or core models, are investigated: a finite distribution of vorticity represented by a third-order polynomial, and a continuous distribution of vorticity throughout the wake. The method provides for a vortex-core model that is insensitive to the mesh spacing. Results for a simplified case are presented. Computed results for the roll-up of a vortex wake generated by wings with different spanwise load distributions are presented; contour plots of the flow-field velocities are included; and comparisons are made of the computed flow-field velocities with experimentally measured velocities.

  20. Ad Hoc modeling, expert problem solving, and R&T program evaluation

    NASA Technical Reports Server (NTRS)

    Silverman, B. G.; Liebowitz, J.; Moustakis, V. S.

    1983-01-01

    A simplified cost and time (SCAT) analysis program utilizing personal-computer technology is presented and demonstrated in the case of the NASA-Goddard end-to-end data system. The difficulties encountered in implementing complex program-selection and evaluation models in the research and technology field are outlined. The prototype SCAT system described here is designed to allow user-friendly ad hoc modeling in real time and at low cost. A worksheet constructed on the computer screen displays the critical parameters and shows how each is affected when one is altered experimentally. In the NASA case, satellite data-output and control requirements, ground-facility data-handling capabilities, and project priorities are intricately interrelated. Scenario studies of the effects of spacecraft phaseout or new spacecraft on throughput and delay parameters are shown. The use of a network of personal computers for higher-level coordination of decision-making processes is suggested, as a complement or alternative to complex large-scale modeling.

  1. Lung Ultrasonography in Patients With Idiopathic Pulmonary Fibrosis: Evaluation of a Simplified Protocol With High-Resolution Computed Tomographic Correlation.

    PubMed

    Vassalou, Evangelia E; Raissaki, Maria; Magkanas, Eleftherios; Antoniou, Katerina M; Karantanas, Apostolos H

    2018-03-01

    To compare a simplified ultrasonographic (US) protocol in 2 patient positions with the same-positioned comprehensive US assessments and high-resolution computed tomographic (CT) findings in patients with idiopathic pulmonary fibrosis. Twenty-five consecutive patients with idiopathic pulmonary fibrosis were prospectively enrolled and examined in 2 sessions. During session 1, patients were examined with a US protocol including 56 lung intercostal spaces in supine/sitting (supine/sitting comprehensive protocol) and lateral decubitus (decubitus comprehensive protocol) positions. During session 2, patients were evaluated with a 16-intercostal space US protocol in sitting (sitting simplified protocol) and left/right decubitus (decubitus simplified protocol) positions. The 16 intercostal spaces were chosen according to the prevalence of idiopathic pulmonary fibrosis-related changes on high-resolution CT. The sum of B-lines counted in each intercostal space formed the US scores for all 4 US protocols: supine/sitting and decubitus comprehensive US scores and sitting and decubitus simplified US scores. High-resolution CT-related Warrick scores (J Rheumatol 1991; 18:1520-1528) were compared to US scores. The duration of each protocol was recorded. A significant correlation was found between all US scores and Warrick scores and between simplified and corresponding comprehensive scores (P < .0001). Decubitus simplified US scores showed a slightly higher correlation with Warrick scores compared to sitting simplified US scores. Mean durations of decubitus and sitting simplified protocols were 4.76 and 6.20 minutes, respectively (P < .005). Simplified 16-intercostal space protocols correlated with comprehensive protocols and high-resolution CT findings in patients with idiopathic pulmonary fibrosis. 
The 16-intercostal space simplified protocol in the lateral decubitus position correlated better with high-resolution CT findings and was less time-consuming compared to the sitting position. © 2017 by the American Institute of Ultrasound in Medicine.

  2. High-performance computational fluid dynamics: a custom-code approach

    NASA Astrophysics Data System (ADS)

    Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.

    2016-07-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFD) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFD, while also providing insight for those interested in more general aspects of high-performance computing.

  3. QRS detection based ECG quality assessment.

    PubMed

    Hayn, Dieter; Jammerbund, Bernhard; Schreier, Günter

    2012-09-01

    Although immediate feedback concerning ECG signal quality during recording would be useful, up to now little literature describing quality measures has been available. We have implemented and evaluated four ECG quality measures. The empty lead criterion (A), spike detection criterion (B) and lead crossing point criterion (C) were calculated from basic signal properties. Measure D quantified the robustness of QRS detection when applied to the signal. An advanced Matlab-based algorithm combining all four measures and a simplified algorithm for Android platforms, excluding measure D, were developed. Both algorithms were evaluated by taking part in the Computing in Cardiology Challenge 2011. Each measure's accuracy and computing time were evaluated separately. During the challenge, the advanced algorithm correctly classified 93.3% of the ECGs in the training set and 91.6% in the test set. Scores for the simplified algorithm were 0.834 in event 2 and 0.873 in event 3. Computing time for measure D was almost five times higher than for the other measures. Required accuracy levels depend on the application and are related to computing time. While our simplified algorithm may be accurate enough for real-time feedback during ECG self-recordings, QRS detection based measures can further increase performance if sufficient computing power is available.
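    Two of the basic signal-property measures mentioned (the empty-lead and spike-detection criteria) can be sketched as follows; the thresholds are illustrative guesses, not the values the authors used:

```python
def empty_lead(signal, flat_fraction=0.9, eps=1e-3):
    """Criterion A sketch: a lead is 'empty' if nearly all consecutive
    samples barely change (flat line / disconnected electrode)."""
    diffs = [abs(b - a) for a, b in zip(signal, signal[1:])]
    flat = sum(d < eps for d in diffs)
    return flat >= flat_fraction * len(diffs)

def spike_detect(signal, thresh=5.0):
    """Criterion B sketch: count isolated samples that deviate strongly
    from both of their neighbors (artifact spikes)."""
    return sum(1 for a, b, c in zip(signal, signal[1:], signal[2:])
               if abs(b - a) > thresh and abs(b - c) > thresh)
```

    Measure D, by contrast, would run a full QRS detector repeatedly on perturbed copies of the signal, which is why its computing time dominates.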

  4. New vibro-acoustic paradigms in biological tissues with application to diagnosis of pulmonary disorders

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangling

    The fundamental objective of the present study is to improve our understanding of audible sound propagation in the pulmonary system and torso. A related applied objective is to assess the feasibility of using audible acoustics for diagnosis of specific pulmonary conditions, such as pneumothorax (PTX). To accomplish these objectives, this study includes theoretical, computational and experimental developments aimed at: (1) better identifying the mechanical dynamic properties of soft biological tissues found in the torso region, (2) investigating the mechanisms of sound attenuation that occur when a PTX is present using greatly simplified theoretical and computational models, and (3) exploring the feasibility and utility of more comprehensive and precise computational finite element models of audible sound propagation in the pulmonary system and torso that would aid in related diagnostic developments. Mechanical material properties of soft biological tissue are studied for the low audible frequency range. The sensitivity to shear viscoelastic material constants of theoretical solutions for radiation impedance and surface wave motion are compared. Theoretical solutions are also compared to experimental measurements and numerical results from finite element analysis. It is found that, while prior theoretical solutions for radiation impedance are accurate, use of such measurements to estimate shear viscoelastic constants is not as precise as the use of surface wave measurements. The feasibility of using audible sound for diagnosis of pneumothorax is studied. Simplified one- and two-dimensional theoretical and numerical models of sound transmission through the pulmonary system and chest region to the chest wall surface are developed to more clearly understand the mechanism of energy loss when a pneumothorax is present, relative to a baseline case. 
A canine study on which these models are based predicts significant decreases in acoustic transmission strength when a pneumothorax is presented, in qualitative agreement with experimental measurements in dogs. Finally, the feasibility of building three-dimensional computational models is studied based on CT images of human subject or combination of the Horsfield airway model with geometry of other parts approximate from medical illustration. Preliminary results from these models show the same trend of acoustic energy loss when a PTX is present.

  5. Controller design via structural reduced modeling by FETM

    NASA Technical Reports Server (NTRS)

    Yousuff, A.

    1986-01-01

    The Finite Element - Transfer Matrix (FETM) method has been developed to reduce the computations involved in the analysis of structures. This widely accepted method, however, has certain limitations, and does not directly produce reduced models for control design. To overcome these shortcomings, a modification of the FETM method has been developed. The modified FETM method easily produces reduced models that are tailored toward subsequent control design. Other features of this method are its ability to: (1) extract open loop frequencies and mode shapes with fewer computations, (2) overcome limitations of the original FETM method, and (3) simplify the procedures for output feedback, constrained compensation, and decentralized control. This semiannual report presents the development of the modified FETM and, through an example, illustrates its applicability to an output feedback and a decentralized control design.

  6. The NURBS curves in modelling the shape of the boundary in the parametric integral equations systems for solving the Laplace equation

    NASA Astrophysics Data System (ADS)

    Zieniuk, Eugeniusz; Kapturczak, Marta; Sawicki, Dominik

    2016-06-01

    In solving boundary value problems, the shape of the boundary can be modelled by curves widely used in computer graphics. In the parametric integral equations system (PIES), such curves are included directly in the mathematical formalism, which simplifies defining and modifying the shape of the boundary. Until now, the B-spline, Bézier and Hermite curves were used in PIES. Prompted by recent developments in computer graphics, we have implemented in PIES the possibility of defining the boundary shape using NURBS curves, which allow different shapes to be modelled more precisely. In this paper we compare PIES solutions (with NURBS applied) with solutions existing in the literature.
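    To make the role of the weights concrete, here is a minimal sketch of evaluating a point on a NURBS curve via the Cox-de Boor recursion (not the PIES implementation). With degree 2, control points (1,0), (1,1), (0,1) and middle weight √2/2, the curve is an exact quarter of the unit circle, a shape no polynomial B-spline can represent exactly:

```python
import math

def nurbs_point(u, ctrl, weights, knots, p):
    """Evaluate a 2-D NURBS curve of degree p at parameter u
    (half-open convention: valid for u in [knots[p], knots[-p-1]))."""
    def N(i, k):
        if k == 0:
            return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
        left = right = 0.0
        if knots[i + k] != knots[i]:
            left = (u - knots[i]) / (knots[i + k] - knots[i]) * N(i, k - 1)
        if knots[i + k + 1] != knots[i + 1]:
            right = ((knots[i + k + 1] - u) /
                     (knots[i + k + 1] - knots[i + 1]) * N(i + 1, k - 1))
        return left + right
    x = y = w_sum = 0.0
    for i, ((px, py), w) in enumerate(zip(ctrl, weights)):
        b = N(i, p) * w          # weighted basis function
        w_sum += b
        x += b * px
        y += b * py
    return x / w_sum, y / w_sum  # rational (projective) division

# Quarter circle: every evaluated point lies on the unit circle.
quarter = nurbs_point(0.5, [(1, 0), (1, 1), (0, 1)],
                      [1.0, math.sqrt(2) / 2, 1.0],
                      [0, 0, 0, 1, 1, 1], p=2)
```

    Setting all weights to 1 reduces this to an ordinary B-spline, which is why NURBS strictly generalize the curves previously used in PIES.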

  7. Controller design via structural reduced modeling by FETM

    NASA Technical Reports Server (NTRS)

    Yousuff, Ajmal

    1987-01-01

    The Finite Element-Transfer Matrix (FETM) method has been developed to reduce the computations involved in analysis of structures. This widely accepted method, however, has certain limitations, and does not address the issues of control design. To overcome these, a modification of the FETM method has been developed. The new method easily produces reduced models tailored toward subsequent control design. Other features of this method are its ability to: (1) extract open loop frequencies and mode shapes with less computations, (2) overcome limitations of the original FETM method, and (3) simplify the design procedures for output feedback, constrained compensation, and decentralized control. This report presents the development of the new method, generation of reduced models by this method, their properties, and the role of these reduced models in control design. Examples are included to illustrate the methodology.

  8. Statistical processing of large image sequences.

    PubMed

    Khellah, F; Fieguth, P; Murray, M J; Allen, M

    2005-01-01

    The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
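    For context, the predict/update cycle being emulated is the standard Kalman recursion; a scalar sketch (with hypothetical dynamics a and noise variances q, r) makes clear why the exact filter is impractical at image scale, where the scalar variance p becomes a dense covariance matrix with (512²)² entries:

```python
def kalman_1d(obs, a=0.9, q=0.1, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter: state x_{t+1} = a*x_t + process noise (var q),
    observation z_t = x_t + measurement noise (var r)."""
    x, p = x0, p0
    est = []
    for z in obs:
        x, p = a * x, a * a * p + q            # predict
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (z - x), (1.0 - k) * p  # update
        est.append(x)
    return est
```

    The paper's contribution replaces the intractable matrix analogue of p with a mixture of stationary models, keeping the estimate consistent with the given correlation structure at a fraction of the cost.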

  9. Advanced mathematical on-line analysis in nuclear experiments. Usage of parallel computing CUDA routines in standard root analysis

    NASA Astrophysics Data System (ADS)

    Grzeszczuk, A.; Kowalski, S.

    2015-04-01

    Compute Unified Device Architecture (CUDA) is a parallel computing platform developed by Nvidia to increase graphics performance by carrying out calculations in parallel. The success of this solution has opened the technology of General-Purpose Graphics Processing Units (GPGPUs) to applications not coupled with graphics. A GPGPU system can be applied as an effective tool for reducing the huge volume of data in pulse shape analysis measurements, either by on-line recalculation or by a very fast compression scheme. The simplified structure of the CUDA system and the programming model, based on the example of an Nvidia GeForce GTX 580 card, are presented in our poster contribution, both in a stand-alone version and as a ROOT application.

  10. Simplified procedure for computing the absorption of sound by the atmosphere

    DOT National Transportation Integrated Search

    2007-10-31

    This paper describes a study that resulted in the development of a simplified method for calculating attenuation by atmospheric absorption for wide-band sounds analyzed by one-third octave-band filters. The new method [referred to herein as the...

  11. Large Angle Transient Dynamics (LATDYN) user's manual

    NASA Technical Reports Server (NTRS)

    Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.

    1991-01-01

    A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.

  12. Scripting human animations in a virtual environment

    NASA Technical Reports Server (NTRS)

    Goldsby, Michael E.; Pandya, Abhilash K.; Maida, James C.

    1994-01-01

    The current deficiencies of virtual environment (VE) technology are well known: annoying lag in drawing the current view, drastically simplified environments to reduce that lag, low resolution, and a narrow field of view. Animation scripting is an application of VE technology which can be carried out successfully despite these deficiencies. The final product is a smoothly moving, high-resolution animation displaying detailed models. In this system, the user is represented by a human computer model with the same body proportions. Using magnetic tracking, the motions of the model's upper torso, head and arms are controlled by the user's movements (18 degrees of freedom). The model's lower torso and global position and orientation are controlled by a spaceball and keypad (12 degrees of freedom). Using this system, human motion scripts can be extracted from the user's movements while immersed in a simplified virtual environment. Recorded data are used to define key frames; motion is interpolated between them, and post-processing adds a more detailed environment. The result is a considerable savings in time and a much more natural-looking movement of a human figure in a smooth and seamless animation.
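    The key-frame step described, recording poses and interpolating between them, can be sketched with simple piecewise-linear interpolation of joint angles. The times and poses below are hypothetical; a production system would typically use quaternion slerp for orientations:

```python
import bisect

def interpolate_pose(times, poses, t):
    """Piecewise-linear interpolation of recorded joint-angle vectors.
    times: ascending key-frame times; poses: equal-length angle lists."""
    if t <= times[0]:
        return list(poses[0])
    if t >= times[-1]:
        return list(poses[-1])
    i = bisect.bisect_right(times, t) - 1          # key frame at or before t
    frac = (t - times[i]) / (times[i + 1] - times[i])
    return [a + frac * (b - a) for a, b in zip(poses[i], poses[i + 1])]
```

    Because interpolation happens offline, the final animation can be rendered at full resolution against the detailed environment, independent of the lag present during capture.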

  13. ALC: automated reduction of rule-based models

    PubMed Central

    Koschorreck, Markus; Gilles, Ernst Dieter

    2008-01-01

    Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705

  14. Computer simulation comparison of tripolar, bipolar, and spline Laplacian electrocardiogram estimators.

    PubMed

    Chen, T; Besio, W; Dai, W

    2009-01-01

    A comparison of the performance of the tripolar and bipolar concentric as well as spline Laplacian electrocardiograms (LECGs) and body surface Laplacian mappings (BSLMs) for localizing and imaging cardiac electrical activation has been investigated based on computer simulation. In the simulation, a simplified eccentric heart-torso sphere-cylinder homogeneous volume conductor model was developed. Multiple dipoles with different orientations were used to simulate the underlying cardiac electrical activities. Results show that the tripolar concentric ring electrodes produce the most accurate LECG and BSLM estimation among the three estimators, with the best performance in spatial resolution.

  15. Extension, validation and application of the NASCAP code

    NASA Technical Reports Server (NTRS)

    Katz, I.; Cassidy, J. J., III; Mandell, M. J.; Schnuelle, G. W.; Steen, P. G.; Parks, D. E.; Rotenberg, M.; Alexander, J. H.

    1979-01-01

    Numerous extensions were made to the NASCAP code. They fall into three categories: a greater range of definable objects, a more sophisticated computational model, and simplified code structure and usage. An important validation of NASCAP was performed using a new two-dimensional computer code (TWOD). An interactive code (MATCHG) was written to compare material parameter inputs with charging results. The first major application of NASCAP was performed on the SCATHA satellite. Shadowing and charging calculations were completed. NASCAP was installed at the Air Force Geophysics Laboratory, where researchers plan to use it to interpret SCATHA data.

  16. Systematic comparison of the behaviors produced by computational models of epileptic neocortex.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warlaumont, A. S.; Lee, H. C.; Benayoun, M.

    2010-12-01

    Two existing models of brain dynamics in epilepsy, one detailed (i.e., realistic) and one abstract (i.e., simplified), are compared in terms of behavioral range and match to in vitro mouse recordings. A new method is introduced for comparing across computational models that may have very different forms. First, high-level metrics were extracted from model and in vitro output time series. A principal components analysis was then performed over these metrics to obtain a reduced set of derived features. These features define a low-dimensional behavior space in which quantitative measures of behavioral range and degree of match to real data can be obtained. The detailed and abstract models and the mouse recordings overlapped considerably in behavior space. Both the range of behaviors and similarity to mouse data were similar between the detailed and abstract models. When no high-level metrics were used and principal components analysis was computed over raw time series, the models overlapped minimally with the mouse recordings. The method introduced here is suitable for comparing across different kinds of model data and across real brain recordings. It appears that, despite differences in form and computational expense, detailed and abstract models do not necessarily differ in their behaviors.
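    The dimensionality-reduction step, PCA over the extracted metrics to obtain a low-dimensional behavior space, can be sketched with a power iteration for the leading component (a minimal stand-in for a full PCA, with toy data in place of the real metric matrix):

```python
def first_principal_component(rows, iters=200):
    """Leading PCA direction of a small data matrix via power iteration.
    rows = instances (models / recordings); columns = high-level metrics."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - means[j] for j in range(d)] for r in rows]    # center data
    # sample covariance matrix
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):                                     # power iteration
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v   # unit vector; project rows onto it for a 1-D behavior coordinate
```

    Projecting each model run and each recording onto the leading components gives the common behavior space in which overlap can be measured, regardless of how different the models' internal forms are.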

  17. Computational Analysis of Static and Dynamic Behaviour of Magnetic Suspensions and Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P. (Editor); Groom, Nelson J.

    1996-01-01

    Static modelling of magnetic bearings is often carried out using magnetic circuit theory. This theory cannot easily include nonlinear effects such as magnetic saturation or the fringing of flux in air-gaps. Modern computational tools are able to accurately model complex magnetic bearing geometries, provided some care is exercised. In magnetic suspension applications, the magnetic fields are highly three-dimensional and require computational tools for the solution of most problems of interest. The dynamics of a magnetic bearing or magnetic suspension system can be strongly affected by eddy currents. Eddy currents are present whenever a time-varying magnetic flux penetrates a conducting medium. The direction of flow of the eddy current is such as to reduce the rate-of-change of flux. Analytic solutions for eddy currents are available for some simplified geometries, but complex geometries must be solved by computation. It is only in recent years that such computations have been considered truly practical. At NASA Langley Research Center, state-of-the-art finite-element computer codes, 'OPERA', 'TOSCA' and 'ELEKTRA', have recently been installed and applied to magnetostatic and eddy current problems. This paper reviews results of theoretical analyses which suggest general forms of mathematical models for eddy currents, together with computational results. A proposed simplified circuit-based eddy current model appears to predict the observed trends in the case of large eddy current circuits in conducting non-magnetic material. A much more difficult case is seen to be that of eddy currents in magnetic material, or in non-magnetic material at higher frequencies, due to the lower skin depths. Even here, the dissipative behavior has been shown to yield at least somewhat to linear modelling. 
Magnetostatic and eddy current computations have been carried out relating to the Annular Suspension and Pointing System, a prototype for a space payload pointing and vibration isolation system, where the magnetic actuator geometry resembles a conventional magnetic bearing. Magnetostatic computations provide estimates of flux density within airgaps and the iron core material, fringing at the pole faces and the net force generated. Eddy current computations provide coil inductance, power dissipation and the phase lag in the magnetic field, all as functions of excitation frequency. Here, the dynamics of the magnetic bearings, notably the rise time of forces with changing currents, are found to be very strongly affected by eddy currents, even at quite low frequencies. Results are also compared to experimental measurements of the performance of a large-gap magnetic suspension system, the Large Angle Magnetic Suspension Test Fixture (LAMSTF). Eddy current effects are again shown to significantly affect the dynamics of the system. Some consideration is given to the ease and accuracy of computation, specifically relating to OPERA/TOSCA/ELEKTRA.
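    The difficult cases noted above, eddy currents in magnetic material or at higher frequencies, come down to the electromagnetic skin depth. A minimal sketch of the textbook relation delta = sqrt(2*rho / (omega*mu)); the material constants below are illustrative assumptions, not values from the paper:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(resistivity, rel_permeability, freq_hz):
    """Skin depth delta = sqrt(2*rho / (omega * mu)) for a plane conductor."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 * resistivity / (omega * MU0 * rel_permeability))

# Copper (non-magnetic) vs. a generic soft iron (mu_r ~ 1000, assumed):
d_cu = skin_depth(1.68e-8, 1.0, 60.0)    # roughly 8 mm at 60 Hz
d_fe = skin_depth(1.0e-7, 1000.0, 60.0)  # far smaller despite higher resistivity
```

    The much smaller skin depth in the magnetic material is consistent with the abstract's observation that eddy currents in iron are the harder modelling case.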

  18. Application of a new model for groundwater age distributions: Modeling and isotopic analysis of artificial recharge in the Rialto-Colton basin, California

    USGS Publications Warehouse

    Ginn, T.R.; Woolfenden, L.

    2002-01-01

    A project for modeling and isotopic analysis of artificial recharge in the Rialto-Colton basin aquifer in California is discussed. The Rialto-Colton aquifer has been divided into four primary and significant flowpaths following the general direction of groundwater flow from NW to SE. The introductory investigation includes sophisticated chemical reaction modeling with highly simplified flow path simulation. A comprehensive reactive transport model with the established set of geochemical reactions over the whole aquifer will also be developed for treating both reactions and transport realistically. This will be accomplished by making use of HBGC123D, implemented with an isotopic calculation step to compute Carbon-14 (C14) and stable Carbon-13 (C13) contents of the water. Computed carbon contents will also be calibrated against the measured carbon contents for assessment of the amount of imported recharge into the Linden pond.
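    The C14 calculation step described above ultimately rests on the standard radiocarbon decay relation; a minimal sketch (the half-life is a textbook constant, not a value from the study, and real groundwater dating requires reaction corrections of the kind the record describes):

```python
import math

HALF_LIFE_C14 = 5730.0  # years (conventional Cambridge half-life)

def radiocarbon_age(c14_ratio):
    """Apparent age in years from the ratio of measured to initial C-14 activity."""
    return -(HALF_LIFE_C14 / math.log(2.0)) * math.log(c14_ratio)

age = radiocarbon_age(0.5)  # half the initial activity corresponds to one half-life
```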

  19. Development of a computational technique to measure cartilage contact area.

    PubMed

    Willing, Ryan; Lapner, Michael; Lalone, Emily A; King, Graham J W; Johnson, James A

    2014-03-21

    Computational measurement of joint contact distributions offers the benefit of non-invasive measurements of joint contact without the use of interpositional sensors or casting materials. This paper describes a technique for indirectly measuring joint contact based on overlapping of articular cartilage computer models derived from CT images and positioned using in vitro motion capture data. The accuracy of this technique when using the physiological nonuniform cartilage thickness distribution, or simplified uniform cartilage thickness distributions, is quantified through comparison with direct measurements of contact area made using a casting technique. The efficacy of using indirect contact measurement techniques for measuring the changes in contact area resulting from hemiarthroplasty at the elbow is also quantified. Using the physiological nonuniform cartilage thickness distribution measured contact area reliably (ICC=0.727), but no better than the assumed bone-specific uniform cartilage thicknesses (ICC=0.673). When a contact pattern agreement score (s(agree)) was used to assess the accuracy of cartilage contact measurements made using physiological nonuniform or simplified uniform cartilage thickness distributions in terms of size, shape and location, their accuracies were not significantly different (p>0.05). The results of this study demonstrate that cartilage contact can be measured indirectly based on the overlapping of cartilage contact models. However, the results also suggest that in some situations, inter-bone distance measurement and an assumed cartilage thickness may suffice for predicting joint contact patterns. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Development of a global aerosol model using a two-dimensional sectional method: 1. Model design

    NASA Astrophysics Data System (ADS)

    Matsui, H.

    2017-08-01

    This study develops an aerosol module, the Aerosol Two-dimensional bin module for foRmation and Aging Simulation version 2 (ATRAS2), and implements the module into a global climate model, Community Atmosphere Model. The ATRAS2 module uses a two-dimensional (2-D) sectional representation with 12 size bins for particles from 1 nm to 10 μm in dry diameter and 8 black carbon (BC) mixing state bins. The module can explicitly calculate the enhancement of absorption and cloud condensation nuclei activity of BC-containing particles by aging processes. The ATRAS2 module is an extension of a 2-D sectional aerosol module ATRAS used in our previous studies within a framework of a regional three-dimensional model. Compared with ATRAS, the computational cost of the aerosol module is reduced by more than a factor of 10 by simplifying the treatment of aerosol processes and 2-D sectional representation, while maintaining good accuracy of aerosol parameters in the simulations. Aerosol processes are simplified for condensation of sulfate, ammonium, and nitrate, organic aerosol formation, coagulation, and new particle formation processes, and box model simulations show that these simplifications do not substantially change the predicted aerosol number and mass concentrations and their mixing states. The 2-D sectional representation is simplified (the number of advected species is reduced) primarily by the treatment of chemical compositions using two interactive bin representations. The simplifications do not change the accuracy of global aerosol simulations. In part 2, comparisons with measurements and the results focused on aerosol processes such as BC aging processes are shown.
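    The 2-D sectional grid described above (12 size bins from 1 nm to 10 µm in dry diameter, 8 BC mixing-state bins) can be sketched as follows; the log-spaced size discretization and linear BC mass-fraction bins are assumptions consistent with common sectional schemes, not details from the paper:

```python
import numpy as np

N_SIZE_BINS = 12   # dry-diameter bins, 1 nm to 10 um (from the abstract)
N_BC_BINS = 8      # black-carbon mixing-state bins (from the abstract)

# Log-spaced dry-diameter bin edges in metres (13 edges enclose 12 bins).
size_edges = np.logspace(np.log10(1e-9), np.log10(1e-5), N_SIZE_BINS + 1)

# BC mass-fraction bin edges from 0 (BC-free) to 1 (pure BC), assumed linear.
bc_edges = np.linspace(0.0, 1.0, N_BC_BINS + 1)

# Each aerosol state variable then lives on a 12 x 8 two-dimensional grid.
grid_shape = (N_SIZE_BINS, N_BC_BINS)
```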

  1. Deconstructing the core dynamics from a complex time-lagged regulatory biological circuit.

    PubMed

    Eriksson, O; Brinne, B; Zhou, Y; Björkegren, J; Tegnér, J

    2009-03-01

    Complex regulatory dynamics is ubiquitous in molecular networks composed of genes and proteins. Recent progress in computational biology and its application to molecular data generate a growing number of complex networks. Yet, it has been difficult to understand the governing principles of these networks beyond graphical analysis or extensive numerical simulations. Here the authors exploit several simplifying biological circumstances which enable them to directly detect the underlying dynamical regularities driving periodic oscillations in a dynamical nonlinear computational model of a protein-protein network. System analysis is performed using the cell cycle, a mathematically well-described complex regulatory circuit driven by external signals. By introducing an explicit time delay and using a 'tearing-and-zooming' approach the authors reduce the system to a piecewise linear system with two variables that capture the dynamics of this complex network. A key step in the analysis is the identification of functional subsystems by identifying the relations between state-variables within the model. These functional subsystems are referred to as dynamical modules operating as sensitive switches in the original complex model. By using reduced mathematical representations of the subsystems the authors derive explicit conditions on how the cell cycle dynamics depends on system parameters, and can, for the first time, analyse and prove global conditions for system stability. The approach, which includes utilising biological simplifying conditions, identification of dynamical modules and mathematical reduction of the model complexity, may be applicable to other well-characterised biological regulatory circuits. [Includes supplementary material].

  2. A theoretical and computational study of lithium-ion battery thermal management for electric vehicles using heat pipes

    NASA Astrophysics Data System (ADS)

    Greco, Angelo; Cao, Dongpu; Jiang, Xi; Yang, Hong

    2014-07-01

    A simplified one-dimensional transient computational model of a prismatic lithium-ion battery cell is developed using a thermal circuit approach in conjunction with the thermal model of the heat pipe. The proposed model is compared to an analytical solution based on separation of variables as well as to three-dimensional (3D) computational fluid dynamics (CFD) simulations. The three approaches, i.e. the 1D computational model, the analytical solution, and the 3D CFD simulations, yielded nearly identical results for the thermal behaviour. Therefore, the 1D model is considered sufficient to predict the temperature distribution in lithium-ion battery thermal management using heat pipes. Moreover, a maximum temperature of 27.6 °C was predicted for the design of the heat pipe setup in a distributed configuration, while a maximum temperature of 51.5 °C was predicted when forced convection was applied to the same configuration. The larger surface contact of the heat pipes allows better cooling management compared to forced convection cooling. Accordingly, heat pipes can be used to achieve effective thermal management of a battery pack with confined surface areas.
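    The thermal-circuit idea behind this record can be sketched as a single lumped node exchanging heat with ambient through one resistance; all parameter values below are illustrative assumptions, not the paper's:

```python
# Lumped thermal-circuit sketch: C * dT/dt = Q - (T - T_amb) / R,
# where R aggregates the heat-pipe (or convection) path to ambient.
C = 100.0     # cell heat capacity, J/K (assumed)
R = 0.5       # thermal resistance to ambient, K/W (assumed)
Q = 5.0       # cell heat generation, W (assumed)
T_AMB = 25.0  # ambient temperature, deg C

def simulate(t_end, dt=0.01, T0=T_AMB):
    """Explicit-Euler integration of the single-node thermal circuit."""
    T = T0
    for _ in range(int(t_end / dt)):
        T += dt * (Q - (T - T_AMB) / R) / C
    return T

# After many time constants (R*C = 50 s) the cell settles at T_amb + Q*R.
T_final = simulate(600.0)
```

    The steady-state value T_amb + Q*R is the circuit analogue of Ohm's law, which is what makes such one-node or few-node models cheap sanity checks against full 3D CFD.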

  3. Causal Learning with Local Computations

    ERIC Educational Resources Information Center

    Fernbach, Philip M.; Sloman, Steven A.

    2009-01-01

    The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require…

  4. Highly Physical Solar Radiation Pressure Modeling During Penumbra Transitions

    NASA Astrophysics Data System (ADS)

    Robertson, Robert V.

    Solar radiation pressure (SRP) is one of the major non-gravitational forces acting on spacecraft. Acceleration by radiation pressure depends on the radiation flux; on spacecraft shape, attitude, and mass; and on the optical properties of the spacecraft surfaces. Precise modeling of SRP is needed for dynamic satellite orbit determination, space mission design and control, and processing of data from space-based science instruments. During Earth penumbra transitions, sunlight is passing through Earth's lower atmosphere and, in the process, its path, intensity, spectral composition, and shape are significantly affected. This dissertation presents a new method for highly physical SRP modeling in Earth's penumbra called Solar radiation pressure with Oblateness and Lower Atmospheric Absorption, Refraction, and Scattering (SOLAARS). The fundamental geometry and approach mirrors past work, where the solar radiation field is modeled using a number of light rays, rather than treating the Sun as a single point source. This dissertation aims to clarify this approach, simplify its implementation, and model previously overlooked factors. The complex geometries involved in modeling penumbra solar radiation fields are described in a more intuitive and complete way to simplify implementation. Atmospheric effects due to solar radiation passing through the troposphere and stratosphere are modeled, and the results are tabulated to significantly reduce computational cost. SOLAARS includes new, more efficient and accurate approaches to modeling atmospheric effects which allow us to consider the spatial and temporal variability in lower atmospheric conditions. A new approach to modeling the influence of Earth's polar flattening draws on past work to provide a relatively simple but accurate method for this important effect. 
    Previous penumbra SRP models tend to lie at two extremes of complexity and computational cost, and so the significant improvement in accuracy provided by the complex models has often been lost in the interest of convenience and efficiency. This dissertation presents a simple model which provides an accurate alternative to the full, high precision SOLAARS model with reduced complexity and computational cost. This simpler method is based on curve fitting to results of the full SOLAARS model and is called SOLAARS Curve Fit (SOLAARS-CF). Both the high precision SOLAARS model and the simpler SOLAARS-CF model are applied to the Gravity Recovery and Climate Experiment (GRACE) satellites. Modeling results are compared to the sub-nm/s2 precision GRACE accelerometer data and the results of a traditional penumbra SRP model. These comparisons illustrate the improved accuracy of the SOLAARS and SOLAARS-CF models. A sensitivity analysis for the GRACE orbit illustrates the significance of various input parameters and features of the SOLAARS model on results. The SOLAARS-CF model is applied to a study of penumbra SRP and the Earth flyby anomaly. Beyond the value of its results to the scientific community, this study provides an application example where the computational efficiency of the simplified SOLAARS-CF model is necessary. The Earth flyby anomaly is an open question in orbit determination which has gone unsolved for over 20 years. This study quantifies the influence of penumbra SRP modeling errors on the observed anomalies from the Galileo, Cassini, and Rosetta Earth flybys. The results of this study prove that penumbra SRP is not an explanation for or significant contributor to the Earth flyby anomaly.

  5. Computational study on UV curing characteristics in nanoimprint lithography: Stochastic simulation

    NASA Astrophysics Data System (ADS)

    Koyama, Masanori; Shirai, Masamitsu; Kawata, Hiroaki; Hirai, Yoshihiko; Yasuda, Masaaki

    2017-06-01

    A computational simulation model of UV curing in nanoimprint lithography based on a simplified stochastic approach is proposed. The activated unit reacts with a randomly selected monomer within a critical reaction radius. Cluster units are chained to each other. Then, another monomer is activated and the next chain reaction occurs. This process is repeated until no virgin monomer remains within the reaction radius or until the activated monomers react with each other. The simulation model describes well the basic UV curing characteristics, such as the molecular weight distributions of the reacted monomers and the effect of the initiator concentration on the conversion ratio. The effects of film thickness on UV curing characteristics are also studied by the simulation.
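    The chain-reaction loop described above can be illustrated with a toy 2-D stochastic model; the domain, monomer count, initiator count, and reaction radius below are illustrative assumptions, not the paper's parameters:

```python
import random

random.seed(0)
N_MONOMERS = 2000
N_INITIATORS = 20
R_REACT = 0.03  # critical reaction radius (assumed, in a unit box)

monomers = [(random.random(), random.random()) for _ in range(N_MONOMERS)]
reacted = [False] * N_MONOMERS
chain_lengths = []

def neighbors(i):
    """Indices of unreacted monomers within R_REACT of monomer i."""
    xi, yi = monomers[i]
    return [j for j in range(N_MONOMERS)
            if not reacted[j]
            and (monomers[j][0] - xi) ** 2 + (monomers[j][1] - yi) ** 2 <= R_REACT ** 2]

for _ in range(N_INITIATORS):
    # Activate a random virgin monomer; grow the chain until no virgin
    # monomer remains within the reaction radius of the active end.
    virgins = [i for i in range(N_MONOMERS) if not reacted[i]]
    if not virgins:
        break
    active = random.choice(virgins)
    reacted[active] = True
    length = 1
    while True:
        cand = neighbors(active)
        if not cand:
            break
        active = random.choice(cand)  # chain the next unit to the active end
        reacted[active] = True
        length += 1
    chain_lengths.append(length)

conversion = sum(reacted) / N_MONOMERS  # conversion ratio
```

    The resulting chain_lengths list plays the role of the molecular weight distribution in the record, and conversion the role of the conversion ratio versus initiator concentration.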

  6. A Plan Recognition Model for Subdialogues in Conversations.

    DTIC Science & Technology

    1984-11-01

    82-K-0193. A simplified, shortened version appears in the Proceedings of the 10th International Conference on Computational Linguistics, Stanford... linguistic results from such work. Consider the following two dialogue fragments. Dialogue 1 was collected at an information booth in a train station in...network structures [Sidner and Bates, 1983]. Unlike Dialogue 1, the system's interaction with the user is primarily non-linguistic, with utterances only

  7. A multiple-time-scale turbulence model based on variable partitioning of turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1987-01-01

    A multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of the multiple-time-scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.

  8. A multiple-time-scale turbulence model based on variable partitioning of the turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1989-01-01

    A multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of the multiple-time-scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.

  9. Aspects of Unstructured Grids and Finite-Volume Solvers for the Euler and Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    1992-01-01

    One of the major achievements in engineering science has been the development of computer algorithms for solving nonlinear differential equations such as the Navier-Stokes equations. In the past, limited computer resources have motivated the development of efficient numerical schemes in computational fluid dynamics (CFD) utilizing structured meshes. The use of structured meshes greatly simplifies the implementation of CFD algorithms on conventional computers. Unstructured grids on the other hand offer an alternative to modeling complex geometries. Unstructured meshes have irregular connectivity and usually contain combinations of triangles, quadrilaterals, tetrahedra, and hexahedra. The generation and use of unstructured grids poses new challenges in CFD. The purpose of this note is to present recent developments in the unstructured grid generation and flow solution technology.

  10. Nonlinear transient analysis of multi-mass flexible rotors - theory and applications

    NASA Technical Reports Server (NTRS)

    Kirk, R. G.; Gunter, E. J.

    1973-01-01

    The equations of motion necessary to compute the transient response of multi-mass flexible rotors are formulated to include unbalance, rotor acceleration, and flexible damped nonlinear bearing stations. A method of calculating the unbalance response of flexible rotors from a modified Myklestad-Prohl technique is discussed in connection with the method of solution for the transient response. Several special cases of simplified rotor-bearing systems are presented and analyzed for steady-state response, stability, and transient behavior. These simplified rotor models produce extensive design information necessary to ensure stable performance of elastically mounted rotor-bearing systems under varying levels and forms of excitation. The nonlinear journal bearing force expressions derived from the short bearing approximation are utilized in the study of the stability and transient response of the floating bush squeeze damper support system. Both rigid and flexible rotor models are studied, and results indicate that the stability of flexible rotors supported by journal bearings can be greatly improved by the use of squeeze damper supports. Results from linearized stability studies of flexible rotors indicate that a tuned support system can greatly improve the performance of the units from the standpoint of unbalanced response and impact loading. Extensive stability and design charts may be readily produced for given rotor specifications by the computer codes presented in this analysis.
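    The unbalance forcing that drives the transient response above can be illustrated with the simplest single-mass (Jeffcott) idealization rather than the report's multi-mass formulation; all numbers below are illustrative assumptions:

```python
import math

# Jeffcott rotor sketch: m*x'' + c*x' + k*x = m*e*w^2*cos(w*t),
# the right-hand side being the rotating-unbalance force.
m, c, k = 1.0, 2.0, 100.0  # mass (kg), damping (N s/m), stiffness (N/m), assumed
e, w = 1e-4, 5.0           # unbalance eccentricity (m), spin speed (rad/s), assumed

def simulate(t_end, dt=1e-4):
    """Explicit-Euler transient integration; returns the peak |x| over the last cycle."""
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        a = (m * e * w ** 2 * math.cos(w * t) - c * v - k * x) / m
        x += dt * v
        v += dt * a
        t += dt
        if t > t_end - 2.0 * math.pi / w:  # transient has decayed by then
            peak = max(peak, abs(x))
    return peak

amp = simulate(20.0)
# Analytic steady-state amplitude for comparison:
amp_exact = m * e * w ** 2 / math.sqrt((k - m * w ** 2) ** 2 + (c * w) ** 2)
```

    Once the transient has decayed, the simulated peak matches the closed-form unbalance-response amplitude, which is the kind of steady-state check the report's multi-mass transient codes would also be verified against.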

  11. Design and Strength check of Large Blow Molding Machine Rack

    NASA Astrophysics Data System (ADS)

    Fei-fei, GU; Zhi-song, ZHU; Xiao-zhao, YAN; Yi-min, ZHU

    The design procedure of a large blow moulding machine rack is discussed in this article, and a strength checking method is presented. Finite element analysis is conducted during the design procedure with the ANSYS software, with the actual load bearing of the rack fully considered and the necessary model simplifications made. The three-dimensional linear element BEAM188 is used for the analysis and MESH200 is used for meshing, which simplifies the analysis process and improves computational efficiency. The maximum deformation of the rack is 8.037 mm, occurring at the position of the accumulator head. The results show that the rack meets the national standard for curvature, which must not exceed 0.3% of the total channel length, and also meets the strength requirement, the maximum stress being 54.112 MPa.

  12. Inelastic behavior of structural components

    NASA Technical Reports Server (NTRS)

    Hussain, N.; Khozeimeh, K.; Toridis, T. G.

    1980-01-01

    A more accurate procedure was developed for the determination of the inelastic behavior of structural components. The actual stress-strain curve for the mathematical model of the structure was utilized to generate the force-deformation relationships for the structural elements, rather than using simplified models such as elastic-plastic, bilinear and trilinear approximations. Relationships were generated for beam elements with various types of cross sections. In the generation of these curves, stress or load reversals, kinematic hardening and hysteretic behavior were taken into account. Intersections between loading and unloading branches were determined through an iterative process. Using the inelastic properties obtained, the static plastic response of some simple structural systems composed of beam elements was computed. Results were compared with known solutions, indicating a considerable improvement over response predictions obtained by means of the simplified approximations used in previous investigations.

  13. Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably, but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
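    The iteration described above, reusing the factorized unperturbed stiffness as a preconditioner, can be sketched in a few lines; the small random SPD system below is an assumption for illustration, not the turbine-blade model:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))
K0 = A @ A.T + n * np.eye(n)        # SPD "unperturbed" stiffness (assumed)
B = rng.standard_normal((n, n))
dK = 0.05 * (B + B.T)               # small symmetric perturbation (assumed)
f = rng.standard_normal(n)

# Factor K0 once; iterate x_{k+1} = K0^{-1} (f - dK x_k), so the converged x
# solves (K0 + dK) x = f without ever factoring the perturbed matrix.
cf = cho_factor(K0)
x = cho_solve(cf, f)                # unperturbed solution as starting point
for _ in range(50):
    x = cho_solve(cf, f - dK @ x)

x_direct = np.linalg.solve(K0 + dK, f)  # reference solution
```

    The iteration converges whenever the perturbation is small relative to K0, which is exactly the probabilistic-perturbation regime the record describes.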

  14. A simplified fourwall interference assessment procedure for airfoil data obtained in the Langley 0.3-meter transonic cryogenic tunnel

    NASA Technical Reports Server (NTRS)

    Murthy, A. V.

    1987-01-01

    A simplified fourwall interference assessment method has been described, and a computer program developed to facilitate correction of the airfoil data obtained in the Langley 0.3-m Transonic Cryogenic Tunnel (TCT). The procedure adopted is to first apply a blockage correction due to sidewall boundary-layer effects by various methods. The sidewall boundary-layer corrected data are then used to calculate the top and bottom wall interference effects by the method of Capallier, Chevallier and Bouinol, using the measured wall pressure distribution and the model force coefficients. The interference corrections obtained by the present method have been compared with other methods and found to give good agreement for the experimental data obtained in the TCT with slotted top and bottom walls.

  15. Modal ring method for the scattering of sound

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Kreider, Kevin L.

    1993-01-01

    The modal element method for acoustic scattering can be simplified when the scattering body is rigid. In this simplified method, called the modal ring method, the scattering body is represented by a ring of triangular finite elements forming the outer surface. The acoustic pressure is calculated at the element nodes. The pressure in the infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The two solution forms are coupled by the continuity of pressure and velocity on the body surface. The modal ring method effectively reduces the two-dimensional scattering problem to a one-dimensional problem capable of handling very high frequency scattering. In contrast to the boundary element method or the method of moments, which perform a similar reduction in problem dimension, the modal ring method has the added advantage of having a highly banded solution matrix requiring considerably less computer storage. The method shows excellent agreement with analytic results for scattering from rigid circular cylinders over a wide frequency range (1 ≤ ka ≤ 100) in the near and far fields.
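    The rigid-cylinder benchmark mentioned above has a classical eigenfunction solution that such methods are validated against; a minimal sketch of the textbook series (not the paper's code; the value of ka and the normalization convention of the far-field quantity are assumptions):

```python
import numpy as np
from scipy.special import jvp, h1vp

ka = 5.0   # nondimensional frequency, assumed, within the record's 1..100 range
N = 30     # series truncation order

# Plane wave incident on a rigid cylinder r = a: the total radial velocity
# vanishes on the surface, giving scattered-field coefficients
#   a_n = -Jn'(ka) / Hn1'(ka)   (primes denote derivatives w.r.t. argument).
n = np.arange(N + 1)
a_n = -jvp(n, ka) / h1vp(n, ka)

def far_field(theta):
    """Far-field directivity magnitude at angle theta (up to normalization)."""
    eps = np.where(n == 0, 1.0, 2.0)  # Neumann factor
    return abs(np.sum(eps * a_n * ((-1j) ** n) * np.cos(n * theta)))

backscatter = far_field(np.pi)
```

    By construction each term satisfies the rigid (zero normal velocity) boundary condition, which is the analytic reference the modal ring results are compared with.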

  16. Blood Flow in Idealized Vascular Access for Hemodialysis: A Review of Computational Studies.

    PubMed

    Ene-Iordache, Bogdan; Remuzzi, Andrea

    2017-09-01

    Although our understanding of the failure mechanism of vascular access for hemodialysis has increased substantially, this knowledge has not translated into successful therapies. Despite advances in technology, it is recognized that vascular access is difficult to maintain, due to complications such as intimal hyperplasia. Computational studies have been used to estimate hemodynamic changes induced by vascular access creation. Due to the heterogeneity of patient-specific geometries, and difficulties with obtaining reliable models of access vessels, idealized models were often employed. In this review we analyze the knowledge gained with the use of such simplified computational models. A review of the literature was conducted, considering studies employing a computational fluid dynamics approach to gain insights into the flow field phenotype that develops in idealized models of vascular access. Several important discoveries have originated from idealized model studies, including the detrimental role of disturbed flow and turbulent flow, and the beneficial role of spiral flow in intimal hyperplasia. The general flow phenotype was consistent among studies, but findings were not treated homogeneously since they paralleled achievements in cardiovascular biomechanics which spanned over the last two decades. Computational studies in idealized models are important for studying local blood flow features and evaluating new concepts that may improve the patency of vascular access for hemodialysis. For future studies we strongly recommend numerical modelling targeted at accurately characterizing turbulent flows and multidirectional wall shear disturbances.

  17. Finite temperature corrections to tachyon mass in intersecting D-branes

    NASA Astrophysics Data System (ADS)

    Sethi, Varun; Chowdhury, Sudipto Paul; Sarkar, Swarnendu

    2017-04-01

    We continue with the analysis of finite temperature corrections to the Tachyon mass in intersecting branes which was initiated in [1]. In this paper we extend the computation to the case of intersecting D3 branes by considering a setup of two intersecting branes in flat-space background. A holographic model dual to BCS superconductor consisting of intersecting D8 branes in D4 brane background was proposed in [2]. The background considered here is a simplified configuration of this dual model. We compute the one-loop Tachyon amplitude in the Yang-Mills approximation and show that the result is finite. Analyzing the amplitudes further we numerically compute the transition temperature at which the Tachyon becomes massless. The analytic expressions for the one-loop amplitudes obtained here reduce to those for intersecting D1 branes obtained in [1] as well as those for intersecting D2 branes.

  18. Computation of incompressible viscous flows through turbopump components

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chang, Leon

    1993-01-01

    Flow through pump components, such as an inducer and an impeller, is efficiently simulated by solving the incompressible Navier-Stokes equations. The solution method is based on the pseudocompressibility approach and uses an implicit-upwind differencing scheme together with the Gauss-Seidel line relaxation method. The equations are solved in steadily rotating reference frames, and the centrifugal and Coriolis forces are added to the equations of motion. Current computations use a one-equation Baldwin-Barth turbulence model which is derived from a simplified form of the standard k-epsilon model equations. The resulting computer code is applied to the flow analysis inside a generic rocket engine pump inducer, a fuel pump impeller, and the SSME high-pressure fuel turbopump impeller. Numerical results of inducer flow are compared with experimental measurements. In the fuel pump impeller, the effect of downstream boundary conditions is investigated. Flow analyses at 80 percent, 100 percent, and 120 percent of design conditions are presented.

  19. The Influence of Realistic Reynolds Numbers on Slat Noise Simulations

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Choudhari, Meelan M.

    2012-01-01

    The slat noise from the 30P/30N high-lift system has been computed using a computational fluid dynamics code in conjunction with a Ffowcs Williams-Hawkings solver. Varying the Reynolds number from 1.71 to 12.0 million based on the stowed chord resulted in slight changes in the radiated noise. Tonal features in the spectra were robust and evident for all Reynolds numbers and even when a spanwise flow was imposed. The general trends observed in near-field fluctuations were also similar for all the different Reynolds numbers. Experiments on simplified, subscale high-lift systems have exhibited noticeable dependencies on the Reynolds number and tripping, although primarily for tonal features rather than the broadband portion of the spectra. Either the 30P/30N model behaves differently, or the computational model is unable to capture these effects. Hence, the results underscore the need for more detailed measurements of the slat cove flow.

  20. Explanation of the computer listings of Faraday factors for INTASAT users

    NASA Technical Reports Server (NTRS)

    Nesterczuk, G.; Llewellyn, S. K.; Bent, R. B.; Schmid, P. E.

    1974-01-01

    Using a simplified form of the Appleton-Hartree formula for the phase refractive index, a relationship was obtained between the Faraday rotation angle along the slant path and the total electron content along the vertical path intersecting the slant path at the height of maximum electron density. Using the second mean value theorem of integration, the function B cos(theta) sec(chi) was removed from under the integral sign and replaced by a 'mean' value. These mean-value factors were printed in the computer listings for 39 stations receiving signals from the INTASAT satellite during the specified time period. The data are presented by station and date. Graphs are included to demonstrate the variation of the Faraday factor with local time and season, and with magnetic latitude, elevation, and azimuth angles. Other topics discussed include a description of the Bent ionospheric model, the earth's magnetic field model, and a sample computer listing.
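The mean-value relation described above can be sketched numerically: the rotation angle Omega is proportional to (K / f^2) times the Faraday factor M = B cos(theta) sec(chi) times the vertical total electron content N_T. The constant K is the standard quasi-longitudinal Faraday constant; all other numbers below are illustrative and are not taken from the INTASAT listings.

```python
import math

K = 2.36e4  # standard quasi-longitudinal Faraday rotation constant (SI units)

def faraday_factor(B, theta_deg, chi_deg):
    """M = B cos(theta) sec(chi): the 'mean value' printed in the listings."""
    return B * math.cos(math.radians(theta_deg)) / math.cos(math.radians(chi_deg))

def vertical_tec(omega_rad, f_hz, M):
    """Invert Omega = (K / f^2) * M * N_T for the vertical electron content."""
    return omega_rad * f_hz**2 / (K * M)

# Illustrative geometry: field 4e-5 T, theta = 30 deg, zenith angle chi = 45 deg
M = faraday_factor(B=4.0e-5, theta_deg=30.0, chi_deg=45.0)
# A 2 rad rotation measured on an illustrative 40 MHz beacon:
N_T = vertical_tec(omega_rad=2.0, f_hz=40.0e6, M=M)  # electrons / m^2
print(M, N_T)
```

Dividing a measured rotation angle by the tabulated factor M (and the frequency-dependent constant) is exactly how the printed listings let a user convert Faraday rotation into vertical electron content.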

  1. Channelized debris flow hazard mitigation through the use of flexible barriers: a simplified computational approach for a sensitivity analysis.

    NASA Astrophysics Data System (ADS)

    Segalini, Andrea; Ferrero, Anna Maria; Brighenti, Roberto

    2013-04-01

    A channelized debris flow is usually represented by a mixture of solid particles of various sizes and water, flowing along a laterally confined, inclined channel-shaped region up to an unconfined area where it slows down and spreads out into a flat-shaped mass. The study of these phenomena is very difficult due to their short duration and unpredictability, the lack of historical data for a given basin, and the complexity of the mechanical phenomena involved. Post-event surveys allow for the identification of some depositional features and provide an indication of the maximum flow height; however, they lack information about the development of the phenomena over time. For this purpose, the monitoring of recurrent events has been carried out by several authors. Most of the studies aimed at determining the characteristic features of a debris flow were carried out in artificial channels, where the main variables involved were measured and others were controlled during the tests; however, some uncertainties remained, and scaled models were developed to simulate the deposition mechanics as well as to analyze the transportation mechanics and the energy dissipation. The assessment of the mechanical behavior of protection structures upon impact with the flow, as well as of the energy associated with it, is necessary for the proper design of such structures, which in densely populated areas can prevent casualties and limit the destructive effects of such a phenomenon. In this work a simplified structural model, developed by the authors for the safety assessment of retention barriers against channelized debris flows, is presented, and some parametric cases are interpreted through the proposed approach; this model is intended as a simplified and efficient tool for the verification of the supporting cables and foundations of a flexible debris flow barrier. The present analytical and numerical approach has a different aim than a FEM model. 
Computational experience with FEM modeling of this kind of structure has shown that a large amount of time is needed both for the geometrical setup of the model and for its computation. The large effort required by FEM for this class of problems limits the practical possibility of investigating different geometrical configurations, load schemes, etc.; FEM is suitable for representing a specific configuration but does not allow the influence of parameter changes to be investigated. On the other hand, parametric analyses are common practice in geotechnical design for the reasons quoted above. Consequently, the authors felt the need to develop a simplified method (not, to our knowledge, otherwise available) that allows several parametric analyses to be performed in a limited time. It should be noted that, in this paper, no considerations regarding the mechanical and physical behavior of debris flows are made; the proposed model requires input parameters that must be acquired through a preliminary characterization of the design event. However, adopting the proposed tool, the designer will be able to perform sensitivity analyses that help quantify the influence of parameter variability, as commonly occurs in geotechnical design.

  2. Flux-split algorithms for flows with non-equilibrium chemistry and vibrational relaxation

    NASA Technical Reports Server (NTRS)

    Grossman, B.; Cinnella, P.

    1990-01-01

    The present consideration of numerical computation methods for gas flows with nonequilibrium chemistry and thermodynamics gives attention to an equilibrium model, a general nonequilibrium model, and a simplified model based on vibrational relaxation. Flux-splitting procedures are developed for the fully coupled inviscid equations encompassing fluid dynamics and both chemical and internal energy-relaxation processes. A fully coupled, implicit large-block structure is presented which embodies novel forms of flux-vector split and flux-difference split algorithms valid for nonequilibrium flow; illustrative high-temperature shock tube and nozzle flow examples are given.
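To make the flux-vector splitting idea concrete, the sketch below implements the classical Steger-Warming splitting for the 1-D perfect-gas Euler equations (a standard textbook form, not the nonequilibrium variant developed in the record above): the eigenvalues u, u+a, u-a are split by sign, and the flux is rebuilt from each half so that F+ and F- sum exactly to the physical flux.

```python
import numpy as np

GAMMA = 1.4  # perfect-gas ratio of specific heats (illustrative)

def steger_warming_split(rho, u, p):
    """Return (F_plus, F_minus) for the 1-D Euler flux, Steger-Warming split."""
    a = np.sqrt(GAMMA * p / rho)          # speed of sound
    lam = np.array([u, u + a, u - a])     # characteristic speeds

    def build(lam):
        l1, l2, l3 = lam
        f1 = 2.0 * (GAMMA - 1.0) * l1 + l2 + l3
        f2 = 2.0 * (GAMMA - 1.0) * l1 * u + l2 * (u + a) + l3 * (u - a)
        f3 = ((GAMMA - 1.0) * l1 * u**2 + 0.5 * l2 * (u + a)**2
              + 0.5 * l3 * (u - a)**2
              + (3.0 - GAMMA) * (l2 + l3) * a**2 / (2.0 * (GAMMA - 1.0)))
        return (rho / (2.0 * GAMMA)) * np.array([f1, f2, f3])

    lam_p = 0.5 * (lam + np.abs(lam))     # forward-running characteristics
    lam_m = 0.5 * (lam - np.abs(lam))     # backward-running characteristics
    return build(lam_p), build(lam_m)

# Consistency check: the split parts must recombine into the physical flux.
rho, u, p = 1.0, 150.0, 1.0e5             # illustrative subsonic state (SI)
Fp, Fm = steger_warming_split(rho, u, p)
E = p / (GAMMA - 1.0) + 0.5 * rho * u**2
F = np.array([rho * u, rho * u**2 + p, u * (E + p)])
print(np.allclose(Fp + Fm, F))  # → True
```

In an upwind scheme, F+ is differenced from the left and F- from the right; the nonequilibrium extensions discussed in the record generalize exactly this construction to mixtures with chemical and vibrational source terms.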

  3. Gasdynamic model of turbulent combustion in an explosion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhl, A.L.; Ferguson, R.E.; Chien, K.Y.

    1994-08-31

    Proposed here is a gasdynamic model of turbulent combustion in explosions. It is used to investigate turbulent mixing aspects of afterburning found in TNT charges detonated in air. Evolution of the turbulent velocity field was calculated by a high-order Godunov solution of the gasdynamic equations. Adaptive Mesh Refinement (AMR) was used to follow convective-mixing processes on the computational grid. Combustion was then taken into account by a simplified sub-grid model, demonstrating that it was controlled by turbulent mixing. The rate of fuel consumption decayed inversely with time, and was shown to be insensitive to grid resolution.

  4. Experimental and Numerical Analysis of Narrowband Coherent Rayleigh-Brillouin Scattering in Atomic and Molecular Species (Pre Print)

    DTIC Science & Technology

    2012-02-01

    use of polar gas species. While current simplified models have adequately predicted CRS and CRBS line shapes for a wide variety of cases, multiple ... published simplified models are presented for argon, molecular nitrogen, and methane at 300 & 500 K and 1 atm. The simplified models require uncertain gas properties

  5. A survey of upwind methods for flows with equilibrium and non-equilibrium chemistry and thermodynamics

    NASA Technical Reports Server (NTRS)

    Grossman, B.; Garrett, J.; Cinnella, P.

    1989-01-01

    Several versions of flux-vector split and flux-difference split algorithms were compared with regard to general applicability and complexity. Test computations were performed using curve-fit equilibrium air chemistry for an M = 5 high-temperature inviscid flow over a wedge and an M = 24.5 inviscid flow over a blunt cylinder; for these cases, little difference in accuracy was found among the versions of the same flux-split algorithm. For flows with nonequilibrium chemistry, the effects of the thermodynamic model on the development of flux-vector split and flux-difference split algorithms were investigated using an equilibrium model, a general nonequilibrium model, and a simplified model based on vibrational relaxation. Several numerical examples are presented, including nonequilibrium air chemistry in a high-temperature shock tube and nonequilibrium hydrogen-air chemistry in a supersonic diffuser.

  6. 3D Model Generation From the Engineering Drawing

    NASA Astrophysics Data System (ADS)

    Vaský, Jozef; Eliáš, Michal; Bezák, Pavol; Červeňanská, Zuzana; Izakovič, Ladislav

    2010-01-01

    The contribution deals with the transformation of engineering drawings in paper form into a 3D computer representation. A 3D computer model can be further processed in a CAD/CAM system, it can be modified and archived, and a technical drawing can then be generated from it as well. The transformation process from the paper form to the digital one is complex and difficult, particularly owing to the different types of drawings, the forms of the displayed objects, and the errors and deviations from technical standards that are encountered. The algorithm for generating a 3D model from an orthogonal vector input representing a simplified technical drawing of a rotational part is described in this contribution. The algorithm was experimentally implemented as an ObjectARX application in the AutoCAD system, and a test sample representing a rotational part was used for verification.

  7. GeoDataspaces: Simplifying Data Management Tasks with Globus

    NASA Astrophysics Data System (ADS)

    Malik, T.; Chard, K.; Tchoua, R. B.; Foster, I.

    2014-12-01

    Data and its management are central to the modern scientific enterprise. Typically, geoscientists rely on observations and model output data from several disparate sources (file systems, RDBMS, spreadsheets, remote data sources). Integrated data management solutions that provide intuitive semantics and uniform interfaces, irrespective of the kind of data source, are, however, lacking. Consequently, geoscientists are left to conduct low-level, time-consuming data management tasks individually and repeatedly for each data source, often resulting in handling errors. In this talk we will describe how the EarthCube GeoDataspace project is improving this situation for seismologists, hydrologists, and space scientists by simplifying some of the existing data management tasks that arise when developing computational models. We will demonstrate a GeoDataspace bootstrapped with "geounits", which are self-contained metadata packages that provide a complete description of all data elements associated with a model run, including input/output and parameter files, the model executable, and any associated libraries. Geounits link raw and derived data, and associate provenance information describing how the data were derived. We will discuss challenges in establishing geounits and describe machine learning and human annotation approaches that can be used for extracting ad hoc and unstructured scientific metadata hidden in binary formats and associating it with data resources and models. We will show how geounits can improve search and discoverability of data associated with model runs. To support this model, we will describe efforts toward creating a scalable metadata catalog that helps maintain, search and discover geounits within the Globus network of accessible endpoints. 
This talk will focus on the issue of creating comprehensive personal inventories of data assets for computational geoscientists, and describe a publishing mechanism, which can be used to feed into national, international, or thematic discovery portals.
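As a concrete illustration of the geounit concept described above, here is a minimal sketch of such a self-contained metadata package. The field names, file paths, and provenance layout are our own hypothetical illustration; the project's actual schema is not specified in this summary.

```python
from dataclasses import dataclass, field

# Hypothetical "geounit": one package tying together every data element of a
# single model run (inputs, outputs, parameters, executable, libraries) plus
# provenance links from derived data back to raw data.
@dataclass
class GeoUnit:
    model_executable: str
    input_files: list = field(default_factory=list)
    output_files: list = field(default_factory=list)
    parameter_files: list = field(default_factory=list)
    libraries: list = field(default_factory=list)
    provenance: dict = field(default_factory=dict)  # output -> derivation note

run = GeoUnit(
    model_executable="bin/wavefield_sim",          # hypothetical names throughout
    input_files=["data/velocity_model.nc"],
    output_files=["out/synthetics.sac"],
    parameter_files=["params/run42.cfg"],
    provenance={"out/synthetics.sac": "derived from data/velocity_model.nc"},
)
print(run.model_executable, len(run.input_files))
```

Because everything describing the run lives in one record, a catalog can index geounits as atomic units, which is what makes the search and discovery use cases in the talk possible.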

  8. Research on simplified parametric finite element model of automobile frontal crash

    NASA Astrophysics Data System (ADS)

    Wu, Linan; Zhang, Xin; Yang, Changhai

    2018-05-01

    The modeling method and key technologies of a simplified parametric finite element model for automobile frontal crash are studied in this paper. By establishing the auto-body topological structure, extracting and parameterizing the stiffness properties of the substructures, and choosing appropriate material models for the substructures, a simplified parametric FE model of the M6 car is built. The comparison of results indicates that the simplified parametric FE model can accurately calculate the automobile crash responses and the deformation of the key substructures, while the simulation time is reduced from 6 hours to 2 minutes.

  9. Simplified particulate model for coarse-grained hemodynamics simulations

    NASA Astrophysics Data System (ADS)

    Janoschek, F.; Toschi, F.; Harting, J.

    2010-11-01

    Human blood flow is a multiscale problem: to a first approximation, blood is a dense suspension of plasma and deformable red cells. Physiological vessel diameters range from about one to thousands of cell radii. Current computational models either involve a homogeneous fluid and cannot track particulate effects, or describe a relatively small number of cells with high resolution but are incapable of reaching relevant time and length scales. Our approach is to simplify much further than existing particulate models. We combine well-established methods from other areas of physics in order to find the essential ingredients for a minimalist description that still recovers hemorheology. These ingredients are a lattice Boltzmann method describing rigid particle suspensions, to account for hydrodynamic long-range interactions, and, in order to describe the more complex short-range behavior of cells, anisotropic model potentials known from molecular-dynamics simulations. At the price of some detail, we achieve an efficient and scalable implementation, which is crucial for our ultimate goal: establishing a link between the collective behavior of millions of cells and the macroscopic properties of blood in realistic flow situations. In this paper we present our model and demonstrate its applicability to conditions typical of the microvasculature.

  10. A simplified model of all-sky artificial sky glow derived from VIIRS Day/Night band data

    NASA Astrophysics Data System (ADS)

    Duriscoe, Dan M.; Anderson, Sharolyn J.; Luginbuhl, Christian B.; Baugh, Kimberly E.

    2018-07-01

    We present a simplified method using geographic analysis tools to predict the average artificial luminance over the hemisphere of the night sky, expressed as a ratio to the natural condition. The VIIRS Day/Night Band upward radiance data from the Suomi NPP orbiting satellite were used as input to the model. The method is based upon a relation between sky glow brightness and the distance from the observer to the source of upward radiance. This relationship was developed using a Garstang radiative transfer model with Day/Night Band data as input, then refined and calibrated with ground-based all-sky V-band photometric data taken under cloudless and low atmospheric aerosol conditions. An excellent correlation was found between observed sky quality and the values predicted from the remotely sensed data. Thematic maps of large regions of the earth showing predicted artificial V-band sky brightness may be quickly generated with modest computing resources. The method, which builds on previous work, is fast and accurate for modeling all-sky quality, and we describe its limitations. The proposed model meets the need of decision makers and land managers for an easy-to-interpret and easy-to-understand metric of sky quality.
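The distance-based relation at the heart of the method can be sketched as a weighted sum: each satellite pixel's upward radiance contributes sky glow that decays with distance from the observer. The power-law kernel below, with the Walker's-law exponent of -2.5, is a stand-in for the Garstang-model-derived relation actually used in the paper; the radiance values, units, and clamp distance are all illustrative.

```python
import numpy as np

# Toy all-sky artificial luminance estimate: sum of pixel radiances weighted by
# a distance kernel. Exponent -2.5 (Walker's law) is an illustrative assumption,
# not the paper's calibrated relation.
def sky_glow_ratio(radiance, dist_km, exponent=-2.5, d_min=1.0, natural=1.0):
    d = np.maximum(dist_km, d_min)      # clamp to avoid the singularity at d=0
    artificial = np.sum(radiance * d**exponent)
    return artificial / natural         # ratio to the natural sky luminance

# The same bright source seen from 30 km versus 100 km away:
near = sky_glow_ratio(radiance=np.array([5000.0]), dist_km=np.array([30.0]))
far = sky_glow_ratio(radiance=np.array([5000.0]), dist_km=np.array([100.0]))
print(near, far)  # glow falls off steeply with distance
```

Because the kernel depends only on distance, the whole map can be computed as a convolution over the Day/Night Band raster, which is why thematic maps of large regions are cheap to generate.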

  11. Benchmark for Numerical Models of Stented Coronary Bifurcation Flow.

    PubMed

    García Carrascal, P; García García, J; Sierra Pallares, J; Castro Ruiz, F; Manuel Martín, F J

    2018-09-01

    In-stent restenosis afflicts many patients who have undergone stenting. When the stented artery is a bifurcation, the intervention is particularly critical because of the complex stent geometry involved in these structures. Computational fluid dynamics (CFD) has been shown to be an effective approach for modeling blood flow behavior and understanding the mechanisms that underlie in-stent restenosis. However, these CFD models require validation against experimental data in order to be reliable. It is with this purpose in mind that we performed particle image velocimetry (PIV) measurements of velocity fields within flows through a simplified coronary bifurcation. Although the flow in this simplified bifurcation differs from actual blood flow, it emulates the main fluid dynamic mechanisms found in hemodynamic flow. Experimental measurements were performed for several stenting techniques in both steady and unsteady flow conditions. The test conditions were strictly controlled, and uncertainty was accurately estimated. The results obtained in this research represent readily accessible, easy-to-emulate, detailed velocity fields and geometry, and they have been successfully used to validate our numerical model. These data can be used as a benchmark for further development of numerical CFD modeling in terms of comparison of the main flow pattern characteristics.

  12. Initial Coupling of the RELAP-7 and PRONGHORN Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Ortensi; D. Andrs; A.A. Bingham

    2012-10-01

    Modern nuclear reactor safety codes require the ability to solve detailed coupled neutronic-thermal fluids problems. For larger cores, this implies fully coupled higher dimensionality spatial dynamics with appropriate feedback models that can provide enough resolution to accurately compute core heat generation and removal during steady and unsteady conditions. The reactor analysis code PRONGHORN is being coupled to RELAP-7 as a first step to extend RELAP's current capabilities. This report details the mathematical models, the type of coupling, and the testing results from the integrated system. RELAP-7 is a MOOSE-based application that solves the continuity, momentum, and energy equations in 1-D for a compressible fluid. The pipe and joint capabilities enable it to model parts of the power conversion unit. The PRONGHORN application, also developed on the MOOSE infrastructure, solves the coupled equations that define the neutron diffusion, fluid flow, and heat transfer in a full core model. The two systems are loosely coupled to simplify the transition towards a more complex infrastructure. The integration is tested on a simplified version of the OECD/NEA MHTGR-350 Coupled Neutronics-Thermal Fluids benchmark model.

  13. Interannual Variability of Martian Global Dust Storms: Simulations with a Low-Order Model of the General Circulation

    NASA Technical Reports Server (NTRS)

    Pankine, A. A.; Ingersoll, Andrew P.

    2002-01-01

    We present simulations of the interannual variability of martian global dust storms (GDSs) with a simplified low-order model (LOM) of the general circulation. The simplified model allows one to conduct computationally fast long-term simulations of the martian climate system. The LOM is constructed by Galerkin projection of a 2D (zonally averaged) general circulation model (GCM) onto a truncated set of basis functions. The resulting LOM consists of 12 coupled nonlinear ordinary differential equations describing atmospheric dynamics and dust transport within the Hadley cell. The forcing of the model is described by simplified physics based on Newtonian cooling and Rayleigh friction. The atmosphere and surface are coupled: atmospheric heating depends on the dustiness of the atmosphere, and the surface dust source depends on the strength of the atmospheric winds. Parameters of the model are tuned to fit the output of the NASA Ames GCM, and the fit is generally very good. Interannual variability of GDSs is possible in the LOM, but only when stochastic forcing is added to the model. The stochastic forcing could be provided by transient weather systems or by some surface process such as redistribution of the sand particles in storm-generating zones on the surface. The results are sensitive to the value of the saltation threshold, which hints at a possible feedback between saltation threshold and dust storm activity. According to this hypothesis, erodible material builds up as a result of a local process whose effect is to lower the saltation threshold until a GDS occurs. The saltation threshold adjusts its value so that dust storms are barely able to occur.

  14. Cochlear pharmacokinetics with local inner ear drug delivery using a three-dimensional finite-element computer model.

    PubMed

    Plontke, Stefan K; Siedow, Norbert; Wegener, Raimund; Zenner, Hans-Peter; Salt, Alec N

    2007-01-01

    Cochlear fluid pharmacokinetics can be better represented by three-dimensional (3D) finite-element simulations of drug dispersal. Local drug deliveries to the round window membrane are increasingly being used to treat inner ear disorders. Crucial to the development of safe therapies is knowledge of drug distribution in the inner ear with different delivery methods. Computer simulations allow application protocols and drug delivery systems to be evaluated, and may permit animal studies to be extrapolated to the larger cochlea of the human. A finite-element 3D model of the cochlea was constructed based on geometric dimensions of the guinea pig cochlea. Drug propagation along and between compartments was described by passive diffusion. To demonstrate the potential value of the model, methylprednisolone distribution in the cochlea was calculated for two clinically relevant application protocols using pharmacokinetic parameters derived from a prior one-dimensional (1D) model. In addition, a simplified geometry was used to compare results from 3D with 1D simulations. For the simplified geometry, calculated concentration profiles with distance were in excellent agreement between the 1D and the 3D models. Different drug delivery strategies produce very different concentration time courses, peak concentrations and basal-apical concentration gradients of drug. In addition, 3D computations demonstrate the existence of substantial gradients across the scalae in the basal turn. The 3D model clearly shows the presence of drug gradients across the basal scalae of guinea pigs, demonstrating the necessity of a 3D approach to predict drug movements across and between scalae with larger cross-sectional areas, such as the human, with accuracy. This is the first model to incorporate the volume of the spiral ligament and to calculate diffusion through this structure. 
Further development of the 3D model will have to incorporate a more accurate geometry of the entire inner ear and incorporate more of the specific processes that contribute to drug removal from the inner ear fluids. Appropriate computer models may assist in both drug and drug delivery system design and can thus accelerate the development of a rationale-based local drug delivery to the inner ear and its successful establishment in clinical practice. Copyright 2007 S. Karger AG, Basel.
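The passive-diffusion description used in both the 1D and 3D models can be illustrated with a minimal 1-D finite-difference sketch: concentration along an uncoiled scala evolves by Fick diffusion, with a clamped source at the basal end (the round window) and a no-flux apex. The length, diffusivity, and duration below are illustrative stand-ins, not the paper's fitted pharmacokinetic parameters, and clearance terms are omitted.

```python
import numpy as np

# Minimal 1-D diffusion along an uncoiled scala: explicit finite differences,
# Dirichlet source at the base, reflecting (no-flux) apex. Parameters assumed.
def diffuse_1d(n=100, length_mm=20.0, D=1.0e-3, t_end=600.0):  # D in mm^2/s
    dx = length_mm / (n - 1)
    dt = 0.4 * dx**2 / D              # explicit stability limit (factor < 0.5)
    c = np.zeros(n)
    for _ in range(int(t_end / dt)):
        c[0] = 1.0                    # clamped drug source at the round window
        lap = np.zeros(n)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        lap[-1] = 2 * (c[-2] - c[-1]) / dx**2   # no-flux apex
        c = c + dt * D * lap
    return c

c = diffuse_1d()
print(c[0], c[-1])  # steep basal-apical gradient, as in the 1D/3D comparison
```

Even this toy version reproduces the paper's qualitative point: with purely passive diffusion, concentration falls off steeply from base to apex, and 3-D effects (cross-scala gradients) are what the full model adds on top.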

  15. Cochlear Pharmacokinetics with Local Inner Ear Drug Delivery Using a Three-Dimensional Finite-Element Computer Model

    PubMed Central

    Plontke, Stefan K.; Siedow, Norbert; Wegener, Raimund; Zenner, Hans-Peter; Salt, Alec N.

    2006-01-01

    Hypothesis: Cochlear fluid pharmacokinetics can be better represented by three-dimensional (3D) finite-element simulations of drug dispersal. Background: Local drug deliveries to the round window membrane are increasingly being used to treat inner ear disorders. Crucial to the development of safe therapies is knowledge of drug distribution in the inner ear with different delivery methods. Computer simulations allow application protocols and drug delivery systems to be evaluated, and may permit animal studies to be extrapolated to the larger cochlea of the human. Methods: A finite-element 3D model of the cochlea was constructed based on geometric dimensions of the guinea pig cochlea. Drug propagation along and between compartments was described by passive diffusion. To demonstrate the potential value of the model, methylprednisolone distribution in the cochlea was calculated for two clinically relevant application protocols using pharmacokinetic parameters derived from a prior one-dimensional (1D) model. In addition, a simplified geometry was used to compare results from 3D with 1D simulations. Results: For the simplified geometry, calculated concentration profiles with distance were in excellent agreement between the 1D and the 3D models. Different drug delivery strategies produce very different concentration time courses, peak concentrations and basal-apical concentration gradients of drug. In addition, 3D computations demonstrate the existence of substantial gradients across the scalae in the basal turn. Conclusion: The 3D model clearly shows the presence of drug gradients across the basal scalae of guinea pigs, demonstrating the necessity of a 3D approach to predict drug movements across and between scalae with larger cross-sectional areas, such as the human, with accuracy. This is the first model to incorporate the volume of the spiral ligament and to calculate diffusion through this structure. 
Further development of the 3D model will have to incorporate a more accurate geometry of the entire inner ear and incorporate more of the specific processes that contribute to drug removal from the inner ear fluids. Appropriate computer models may assist in both drug and drug delivery system design and can thus accelerate the development of a rationale-based local drug delivery to the inner ear and its successful establishment in clinical practice. PMID:17119332

  16. Figures of merit for self-beating filtered microwave photonic systems.

    PubMed

    Pérez, Daniel; Gasulla, Ivana; Capmany, José; Fandiño, Javier S; Muñoz, Pascual; Alavi, Hossein

    2016-05-02

    We present a model to compute the figures of merit of self-beating Microwave Photonic systems, a novel class of systems that operate in a self-homodyne fashion by sharing the same laser source for the information-bearing and local-oscillator tasks. General and simplified expressions are given and, as an example, we have considered their application to the design of a tunable RF MWP BS/UE front end for band selection, based on a Chebyshev Type-II optical filter. The applicability and usefulness of the model are also discussed.

  17. Automated design optimization of supersonic airplane wing structures under dynamic constraints

    NASA Technical Reports Server (NTRS)

    Fox, R. L.; Miura, H.; Rao, S. S.

    1972-01-01

    The problems of the preliminary and first-level detail design of supersonic aircraft wings are stated as mathematical programs and solved using automated optimum design techniques. The problem is approached in two phases: the first uses a simplified equivalent-plate model in which the envelope, planform, and structural parameters are varied to produce a design; the second uses a finite element model with fixed configuration in which the material distribution is varied. Constraints include flutter, aeroelastically computed stresses and deflections, natural frequency, and a variety of geometric limitations.

  18. A simplified model for the gravitational potential of the atmosphere and its effect on the geoid

    NASA Technical Reports Server (NTRS)

    Madden, S. J., Jr.

    1972-01-01

    The earth's atmosphere is considered as made up of oblate spheroidal layers of variable density lying over an oblate spheroidal earth. The gravitational attraction of the atmosphere at exterior points is computed and its contribution to the usual spherical harmonic gravitational expansion is assessed. The potential is also found for points at the bottom of the model atmosphere. This latter result is of interest for determination of the potential at the surface of the geoid. The atmospheric correction to the geoid determination from satellite coefficients is given.

  19. New 3D model for dynamics modeling

    NASA Astrophysics Data System (ADS)

    Perez, Alain

    1994-05-01

    The wrist articulation is one of the most complex mechanical systems of the human body. It is composed of eight bones rolling and sliding along their surfaces and along the faces of the five metacarpals of the hand and the two bones of the forearm. Wrist dynamics are fundamental for hand movement, yet the joint is so complex that it still remains incompletely explored. This work is part of a new concept of computer-assisted surgery, which consists in developing computer models to improve surgical acts by predicting their consequences. The modeling of wrist dynamics is based first on a static three-dimensional model of the bones. This 3D model must optimize the collision detection procedure, which is the necessary step for estimating the physical contact constraints. As many other available computer vision models do not fit this problem with enough precision, a new 3D model has been developed based on the medial axis of the digital distance map of the reconstructed bone volumes. The collision detection procedure is then simplified, since contacts are detected between spheres. Experiments with this original 3D dynamic model produce realistic computer animation images of solids in contact. It is now necessary to detect ligaments in digital medical images and to model them in order to complete the wrist model.
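The sphere-based simplification described above can be sketched directly: if each bone is represented by spheres centered on its medial axis, contact detection reduces to center-distance checks. The bone names and coordinates below are toy illustrations, not data from the actual wrist model.

```python
import math

# Sphere-sphere contact test enabled by the medial-axis representation:
# each bone is a list of (center, radius) spheres along its medial axis.
def spheres_collide(c1, r1, c2, r2):
    """True when two spheres overlap, i.e. the contact constraint is active."""
    dx, dy, dz = (a - b for a, b in zip(c1, c2))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= r1 + r2

def bones_in_contact(bone_a, bone_b):
    """Pairwise sphere test between two bones' medial-axis sphere sets."""
    return any(spheres_collide(ca, ra, cb, rb)
               for ca, ra in bone_a for cb, rb in bone_b)

# Toy geometry (units arbitrary): two carpal bones, the second pair touching.
scaphoid = [((0.0, 0.0, 0.0), 1.0), ((1.5, 0.0, 0.0), 0.8)]
lunate = [((3.0, 0.0, 0.0), 0.9)]   # gap 1.5 < 0.8 + 0.9, so contact
print(bones_in_contact(scaphoid, lunate))  # → True
```

Replacing mesh-mesh intersection tests with these constant-time sphere checks is what makes the collision step cheap enough for interactive dynamic simulation.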

  20. Modeling and measurements of urban aerosol processes on the neighborhood scale in Rotterdam, Oslo and Helsinki

    NASA Astrophysics Data System (ADS)

    Karl, Matthias; Kukkonen, Jaakko; Keuken, Menno P.; Lützenkirchen, Susanne; Pirjola, Liisa; Hussein, Tareq

    2016-04-01

    This study evaluates the influence of aerosol processes on particle number (PN) concentrations in three major European cities on the temporal scale of 1 h, i.e., on the neighborhood and city scales. We have used selected measured data of particle size distributions from previous campaigns in the cities of Helsinki, Oslo and Rotterdam. The aerosol transformation processes were evaluated using the aerosol dynamics model MAFOR, combined with a simplified treatment of roadside and urban atmospheric dispersion. We have compared the model predictions of particle number size distributions with the measured data, and conducted sensitivity analyses regarding the influence of various model input variables. We also present a simplified parameterization for aerosol processes, which is based on the more complex aerosol process computations; this simple model can easily be implemented in both Gaussian and Eulerian urban dispersion models. Aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of two organic vapors, and (iii) dry deposition. The chemical transformation of gas-phase compounds was not taken into account. By choosing concentrations and particle size distributions at roadside as the starting point of the computations, nucleation of gas-phase vapors from the exhaust has been regarded as a post-tailpipe emission, avoiding the need to include nucleation in the process analysis. Dry deposition and coagulation of particles were identified as the most important aerosol dynamic processes that control the evolution and removal of particles. The error of the contribution from dry deposition to PN losses due to the uncertainty of measured deposition velocities ranges from -76 to +64 %. The removal of nanoparticles by coagulation was enhanced considerably when the fractal nature of soot aggregates and the combined effect of van der Waals and viscous interactions were taken into account. 
The effect of condensation and evaporation of organic vapors emitted by vehicles on particle numbers and on particle size distributions was examined. Under inefficient dispersion conditions, the model predicts that condensational growth contributes to the evolution of PN from roadside to the neighborhood scale. The simplified parameterization of aerosol processes predicts the change in particle number concentrations between roadside and urban background within 10 % of that predicted by the fully size-resolved MAFOR model.
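A simplified process parameterization of the kind described above can be sketched as a single ordinary differential equation for the total particle number: coagulation removes particles at a rate quadratic in N, and dry deposition at a rate linear in N. The coefficients below are illustrative assumptions, not the MAFOR-derived parameterization, and condensation/evaporation and dilution are omitted.

```python
# Toy PN budget between roadside and the neighborhood scale:
#   dN/dt = -k_coag * N^2 - (v_dep / h_mix) * N
# with N in cm^-3 and t in s. All coefficient values are illustrative.
def evolve_pn(n0=1.0e5, k_coag=1.0e-9, v_dep=0.01, h_mix=50.0,
              t_end=3600.0, dt=1.0):
    n = n0
    for _ in range(int(t_end / dt)):          # forward Euler integration
        n += dt * (-k_coag * n * n - (v_dep / h_mix) * n)
    return n

n_final = evolve_pn()
print(n_final)  # PN declines over one hour of transport
```

The quadratic term dominates near the road where N is high and the linear deposition term takes over downwind, which mirrors the relative importance of coagulation and dry deposition reported in the study.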

  1. Interpretation of laser/multi-sensor data for short range terrain modeling and hazard detection

    NASA Technical Reports Server (NTRS)

    Messing, B. S.

    1980-01-01

    A terrain modeling algorithm is described that reconstructs the sensed ground images formed by the triangulation scheme and classifies as unsafe any terrain feature that would pose a hazard to a roving vehicle. This modeler greatly reduces quantization errors inherent in a laser/sensing system through the use of a thinning algorithm. Dual filters are employed to separate terrain steps from the general landscape, simplifying the analysis of terrain features. A crosspath analysis is utilized to detect and avoid obstacles that would adversely affect the roll of the vehicle. Computer simulations of the rover on various terrains examine the performance of the modeler.

  2. One-dimensional nonlinear elastodynamic models and their local conservation laws with applications to biological membranes.

    PubMed

    Cheviakov, A F; Ganghoffer, J-F

    2016-05-01

    The framework of incompressible nonlinear hyperelasticity and viscoelasticity is applied to the derivation of one-dimensional models of nonlinear wave propagation in fiber-reinforced elastic solids. Equivalence transformations are used to simplify the resulting wave equations and to reduce the number of parameters. Local conservation laws and global conserved quantities of the models are systematically computed and discussed, along with other related mathematical properties. Sample numerical solutions are presented. The models considered in the paper are appropriate for the mathematical description of certain aspects of the behavior of biological membranes and similar structures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Applications of the hybrid coordinate method to the TOPS autopilot

    NASA Technical Reports Server (NTRS)

    Fleischer, G. E.

    1978-01-01

    Preliminary results are presented from the application of the hybrid coordinate method to modeling TOPS (thermoelectric outer planet spacecraft) structural dynamics. Computer-simulated responses of the vehicle are included which illustrate the interaction of relatively flexible appendages with an autopilot control system. Comparisons were made between simplified single-axis models of the control loop, with spacecraft flexibility represented by hinged rigid bodies, and a very detailed three-axis spacecraft model whose flexible portions are described by modal coordinates. While single-axis system root loci provided reasonable qualitative indications of stability margins in this case, they were quantitatively optimistic when matched against responses of the detailed model.

  4. Optical laboratory solution and error model simulation of a linear time-varying finite element equation

    NASA Technical Reports Server (NTRS)

    Taylor, B. K.; Casasent, D. P.

    1989-01-01

    The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.

  5. Thrombosis in Cerebral Aneurysms and the Computational Modeling Thereof: A Review

    PubMed Central

    Ngoepe, Malebogo N.; Frangi, Alejandro F.; Byrne, James V.; Ventikos, Yiannis

    2018-01-01

    Thrombosis is a condition closely related to cerebral aneurysms and controlled thrombosis is the main purpose of endovascular embolization treatment. The mechanisms governing thrombus initiation and evolution in cerebral aneurysms have not been fully elucidated and this presents challenges for interventional planning. Significant effort has been directed towards developing computational methods aimed at streamlining the interventional planning process for unruptured cerebral aneurysm treatment. Included in these methods are computational models of thrombus development following endovascular device placement. The main challenge with developing computational models for thrombosis in disease cases is that there exists a wide body of literature that addresses various aspects of the clotting process, but it may not be obvious what information is of direct consequence for what modeling purpose (e.g., for understanding the effect of endovascular therapies). The aim of this review is to present the information so it will be of benefit to the community attempting to model cerebral aneurysm thrombosis for interventional planning purposes, in a simplified yet appropriate manner. The paper begins by explaining current understanding of physiological coagulation and highlights the documented distinctions between the physiological process and cerebral aneurysm thrombosis. Clinical observations of thrombosis following endovascular device placement are then presented. This is followed by a section detailing the demands placed on computational models developed for interventional planning. Finally, existing computational models of thrombosis are presented. This last section begins with description and discussion of physiological computational clotting models, as they are of immense value in understanding how to construct a general computational model of clotting. This is then followed by a review of computational models of clotting in cerebral aneurysms, specifically. 
Even though some progress has been made towards computational predictions of thrombosis following device placement in cerebral aneurysms, many gaps still remain. Answering the key questions will require the combined efforts of the clinical, experimental and computational communities. PMID:29670533

  7. Computational methods for diffusion-influenced biochemical reactions.

    PubMed

    Dobrzynski, Maciej; Rodríguez, Jordi Vidal; Kaandorp, Jaap A; Blom, Joke G

    2007-08-01

    We compare stochastic computational methods accounting for space and the discrete nature of reactants in biochemical systems. Implementations based on Brownian dynamics (BD) and the reaction-diffusion master equation are applied to a simplified gene expression model and to a signal transduction pathway in Escherichia coli. In the regime where the number of molecules is small and reactions are diffusion-limited, the predicted fluctuations in the product number vary between the methods, while the average is the same. Computational approaches at the level of the reaction-diffusion master equation compute the same fluctuations as the reference result obtained from the particle-based method if the size of the sub-volumes is comparable to the diameter of the reactants. Using numerical simulations of the reversible binding of a pair of molecules, we argue that the disagreement in predicted fluctuations is due to different modeling of inter-arrival times between reaction events. Simulations for a more complex biological study show that the different approaches lead to different results due to modeling issues. Finally, we present the physical assumptions behind the mesoscopic models for reaction-diffusion systems. Input files for the simulations and the source code of GMP can be found at the following address: http://www.cwi.nl/projects/sic/bioinformatics2007/
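
    A minimal sketch of the well-mixed stochastic baseline that the spatial methods above extend: Gillespie's direct method with exponentially distributed inter-arrival times, here for reversible binding A + B <-> C. The rate constants and molecule counts are illustrative, not taken from the paper.

```python
import random

def gillespie_ab(a, b, c, k_on, k_off, t_end, seed=1):
    """Direct-method SSA for A + B <-> C (well-mixed, discrete molecules)."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        p_on = k_on * a * b           # propensity of A + B -> C
        p_off = k_off * c             # propensity of C -> A + B
        p_tot = p_on + p_off
        if p_tot == 0.0:
            break
        t += rng.expovariate(p_tot)   # exponential inter-arrival time
        if rng.random() * p_tot < p_on:
            a, b, c = a - 1, b - 1, c + 1
        else:
            a, b, c = a + 1, b + 1, c - 1
    return a, b, c

a, b, c = gillespie_ab(a=50, b=50, c=0, k_on=0.01, k_off=1.0, t_end=10.0)
```

    Copy numbers are conserved (a + c and b + c stay constant), which is a convenient sanity check on any implementation.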

  8. Toward Petascale Biologically Plausible Neural Networks

    NASA Astrophysics Data System (ADS)

    Long, Lyle

    This talk will describe an approach to achieving petascale neural networks. Artificial intelligence has been oversold for many decades. Computers in the beginning could only do about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore's law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference we have computers on the order of the performance of the human brain (10^16 operations per second). The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how the human neurons are wired together. With careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization, massive-scale simulations can be performed. The code that will be described was written in C++ and uses the Message Passing Interface (MPI). It uses the full Hodgkin-Huxley neuron model, not a simplified model. It also allows arbitrary network structures (deep, recurrent, convolutional, all-to-all, etc.). The code is scalable, and has, so far, been tested on up to 2,048 processor cores using 10^7 neurons and 10^9 synapses.
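
    As a rough illustration of the per-neuron cost being discussed, here is a forward-Euler sketch of the classic Hodgkin-Huxley squid-axon model (standard 1952 parameters). This is our own minimal sketch, not the talk's C++/MPI code.

```python
import math

def hh_step(v, m, h, n, i_ext, dt=0.01):
    """One forward-Euler step of the Hodgkin-Huxley model.
    Units: mV, ms, uA/cm^2; resting potential near -65 mV."""
    # voltage-dependent rate functions for the gating variables
    am = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    an = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(v + 65.0) / 80.0)
    # ionic currents (g_Na=120, g_K=36, g_L=0.3; E_Na=50, E_K=-77 mV)
    i_na = 120.0 * m**3 * h * (v - 50.0)
    i_k = 36.0 * n**4 * (v + 77.0)
    i_l = 0.3 * (v + 54.387)
    v += dt * (i_ext - i_na - i_k - i_l)   # C_m = 1 uF/cm^2
    m += dt * (am * (1.0 - m) - bm * m)
    h += dt * (ah * (1.0 - h) - bh * h)
    n += dt * (an * (1.0 - n) - bn * n)
    return v, m, h, n

# drive with a constant current and count upward crossings of 0 mV
v, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes, above = 0, False
for _ in range(int(50.0 / 0.01)):          # 50 ms of simulated time
    v, m, h, n = hh_step(v, m, h, n, i_ext=10.0)
    if v > 0.0 and not above:
        spikes += 1
    above = v > 0.0
```

    Four state variables and roughly a dozen transcendental evaluations per neuron per step is what makes table lookups and event-driven updates worthwhile at scale.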

  9. [Influence of trabecular microstructure modeling on finite element analysis of dental implant].

    PubMed

    Shen, M J; Wang, G G; Zhu, X H; Ding, X

    2016-09-01

    To analyze the influence of trabecular microstructure modeling on the biomechanical distribution at the implant-bone interface with a three-dimensional finite element mandible model of trabecular structure. Dental implants were embedded in the mandibles of a beagle dog. Three months after implant installation, the mandibles with dental implants were harvested and scanned by micro-CT and cone-beam CT. Two three-dimensional finite element mandible models, with trabecular microstructure (precise model) and macrostructure (simplified model), were built. The values of stress and strain at the implant-bone interface were calculated using the software Ansys 14.0. Compared with the simplified model, the precise models' average values of implant-bone interface stress increased obviously while the maximum values did not change greatly. The maximum values of equivalent stress of the precise models were 80% and 110% of those of the simplified model, and the average values were 170% and 290%. The maximum and average values of equivalent strain of the precise models were obviously decreased: the maximum values were 17% and 26% of those of the simplified model and the average values were 21% and 16%, respectively. Stress and strain concentrations at the implant-bone interface were obvious in the simplified model, whereas the distributions of stress and strain were uniform in the precise model. The precise model has a significant effect on the distribution of stress and strain at the implant-bone interface.

  10. Coupling of TRAC-PF1/MOD2, Version 5.4.25, with NESTLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knepper, P.L.; Hochreiter, L.E.; Ivanov, K.N.

    1999-09-01

    A three-dimensional (3-D) spatial kinetics capability within a thermal-hydraulics system code provides a more correct description of the core physics during reactor transients that involve significant variations in the neutron flux distribution. Coupled codes provide the ability to forecast safety margins in a best-estimate manner. The behavior of a reactor core and the feedback to the plant dynamics can be accurately simulated. For each time step, coupled codes are capable of resolving system interaction effects on neutronics feedback and are capable of describing local neutronics effects caused by the thermal hydraulics and neutronics coupling. With the improvements in computational technology, modeling complex reactor behaviors with coupled thermal hydraulics and spatial kinetics is feasible. Previously, reactor analysis codes were limited to either a detailed thermal-hydraulics model with simplified kinetics or multidimensional neutron kinetics with a simplified thermal-hydraulics model. The authors discuss the coupling of the Transient Reactor Analysis Code (TRAC)-PF1/MOD2, Version 5.4.25, with the NESTLE code.

  11. Critical evaluation of Jet-A spray combustion using propane chemical kinetics in gas turbine combustion simulated by KIVA-II

    NASA Technical Reports Server (NTRS)

    Nguyen, H. L.; Ying, S.-J.

    1990-01-01

    Numerical solutions of the Jet-A spray combustion were obtained by means of the KIVA-II computer code after Jet-A properties were added to the 12 chemical species the program had initially contained. Three different reaction mechanism models are considered. The first model consists of 131 reactions and 45 species; it is evaluated by comparing calculated ignition delay times with available shock tube data, and it is used in the evaluation of the other two simplified models. The simplified mechanisms consider 45 reactions and 27 species and 5 reactions and 12 species, respectively. In the prediction of pollutants NOx and CO, the full mechanism of 131 reactions is considered to be more reliable. The numerical results indicate that the variation of the maximum flame temperature is within 20 percent as compared with that of the full mechanism of 131 reactions. The chemical compositions of major components such as C3H8, H2O, O2, CO2, and N2 are of the same order of magnitude. However, the concentrations of pollutants are quite different.
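
    The mechanism comparisons above rest on Arrhenius kinetics, k = A exp(-Ea/(R T)), and a crude ignition-delay proxy is tau ~ 1/k. The sketch below uses illustrative one-step global parameters, not the 131-reaction Jet-A mechanism.

```python
import math

R = 8.314  # universal gas constant [J/(mol K)]

def arrhenius(a_factor, e_act, temp):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return a_factor * math.exp(-e_act / (R * temp))

# Illustrative one-step parameters (assumed, not a fitted Jet-A mechanism)
A_PRE, E_ACT = 1.0e9, 1.25e5        # [1/s], [J/mol]

# Crude ignition-delay proxy tau ~ 1/k at three post-shock temperatures
delays = {t: 1.0 / arrhenius(A_PRE, E_ACT, t) for t in (1200.0, 1500.0, 1800.0)}
```

    Plotting such delays against 1/T on a log scale is the standard way shock tube data are compared with a candidate mechanism.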

  12. Mutually opposing forces during locomotion can eliminate the tradeoff between maneuverability and stability

    NASA Astrophysics Data System (ADS)

    Cowan, Noah; Sefati, Shahin; Neveln, Izaak; Roth, Eatai; Mitchell, Terence; Snyder, James; Maciver, Malcolm; Fortune, Eric

    A surprising feature of animal locomotion is that organisms typically produce substantial forces in directions other than what is necessary to move the animal through its environment, such as perpendicular to, or counter to, the direction of travel. The effect of these forces has been difficult to observe because they are often mutually opposing and therefore cancel out. Using a combination of robotic physical modeling, computational modeling, and biological experiments, we discovered that these forces serve an important role: to simplify and enhance the control of locomotion. Specifically, we examined a well-suited model system, the glass knifefish Eigenmannia virescens, which produces mutually opposing forces during a hovering behavior. By systematically varying the locomotor parameters of our biomimetic robot, and measuring the resulting forces and kinematics, we demonstrated that the production and differential control of mutually opposing forces is a strategy that generates passive stabilization while simultaneously enhancing maneuverability. Mutually opposing forces during locomotion are widespread across animal taxa, and these results indicate that such forces can eliminate the tradeoff between stability and maneuverability, thereby simplifying robotic and neural control.

  13. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Hribar, Michelle; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1999-01-01

    Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but the task can be simplified by high level languages and would even better be automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data parallel model) and OpenMP (based on the shared memory parallel model) standards has offered great opportunities in this respect. Both provide simple and clear interfaces to languages like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study we implemented the parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation and the pros and cons of the different approaches will be discussed, along with experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, the potential of applying some of the techniques to realistic aerospace applications will be presented.
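
    Message-passing versions of such benchmarks rest on a block decomposition of the global arrays followed by a reduction. A hedged sketch of that pattern, written serially in Python for illustration: the loop over ranks stands in for what MPI processes (or OpenMP worksharing) would do concurrently.

```python
def chunk_bounds(n, nproc, rank):
    """Block decomposition: half-open element range [lo, hi) owned by
    process `rank` out of `nproc`, remainder spread over the first ranks."""
    base, rem = divmod(n, nproc)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

data = list(range(100))            # the 'global' array
nproc = 4
partials = []
for rank in range(nproc):          # each pass stands in for one MPI rank
    lo, hi = chunk_bounds(len(data), nproc, rank)
    partials.append(sum(data[lo:hi]))   # purely local work
total = sum(partials)              # reduction step (MPI_Allreduce analogue)
```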

  15. Extended Finite Element Method with Simplified Spherical Harmonics Approximation for the Forward Model of Optical Molecular Imaging

    PubMed Central

    Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin

    2012-01-01

    An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SPN). In the XFEM scheme for the SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis functions of the finite element scheme. Therefore, the finite element calculation can be carried out without time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, with its excess time cost, can be avoided. XFEM thus lends itself to tissues with complex internal structure and improves computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of XFEM for optical imaging. PMID:23227108
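
    The enrichment hinges on evaluating a signed distance function at the mesh nodes; elements whose nodal values change sign are crossed by the interface and receive enriched basis functions. A minimal sketch for a circular inclusion (the geometry is our own illustration, not the paper's phantom):

```python
import math

def signed_distance_circle(x, y, cx, cy, r):
    """Signed distance to a circular interface: negative inside,
    zero on the boundary, positive outside."""
    return math.hypot(x - cx, y - cy) - r

# Level-set values at three mesh nodes; a sign change between the nodes of
# an element flags it as interface-crossed, i.e., a candidate for enrichment.
nodes = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
phi = [signed_distance_circle(x, y, cx=0.0, cy=0.0, r=1.5) for x, y in nodes]
```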

  17. An Analysis of Once-per-revolution Oscillating Aerodynamic Thrust Loads on Single-Rotation Propellers on Tractor Airplanes at Zero Yaw

    NASA Technical Reports Server (NTRS)

    Rogallo, Vernon L; Yaggy, Paul F; Mccloud, John L , III

    1956-01-01

    A simplified procedure is shown for calculating the once-per-revolution oscillating aerodynamic thrust loads on propellers of tractor airplanes at zero yaw. The only flow field information required for the application of the procedure is a knowledge of the upflow angles at the horizontal center line of the propeller disk. Methods are presented whereby these angles may be computed without recourse to experimental survey of the flow field. The loads computed by the simplified procedure are compared with those computed by a more rigorous method and the procedure is applied to several airplane configurations which are believed typical of current designs. The results are generally satisfactory.
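
    The once-per-revolution (1P) character of the load arises because a steady upflow angle at the disk modulates each blade's local angle of attack sinusoidally with azimuth. A toy sketch assuming a linear thrust sensitivity; this is an illustration of the 1P mechanism only, not the report's calculation procedure.

```python
import math

def one_p_thrust(psi_deg, t_mean, upflow_deg, sensitivity):
    """Toy 1P model: an upflow angle at the disk modulates the blade load
    sinusoidally with azimuth psi. The linear `sensitivity` (load change
    per degree of upflow) is an assumed illustrative constant."""
    return t_mean + sensitivity * upflow_deg * math.sin(math.radians(psi_deg))

# blade load sampled at four azimuth stations over one revolution
loads = [one_p_thrust(psi, t_mean=1000.0, upflow_deg=3.0, sensitivity=20.0)
         for psi in (0, 90, 180, 270)]
```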

  18. Observations on SOFIA Observation Scheduling: Search and Inference in the Face of Discrete and Continuous Constraints

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Gross, Michael; Kuerklu, Elif

    2003-01-01

    We reduced the number of initial value problems (IVPs) and boundary value problems (BVPs) that must be solved to schedule SOFIA by restricting the problem. The restriction costs us little in terms of the value of the flight plans we can build, and it allowed us to reformulate part of the search problem as a zero-finding problem. The result is a simplified planning model and significant savings in computation time.
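
    Once a scheduling question is posed as a zero-finding problem, it can be dispatched with a simple bracketing method instead of repeated boundary-value integrations. A generic bisection sketch; the function below is a stand-in, not the actual SOFIA scheduling constraint.

```python
def bisect_root(f, lo, hi, tol=1e-10):
    """Bisection zero-finding: f(lo) and f(hi) must differ in sign.
    Halves the bracket until it is narrower than `tol`."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid                 # root lies in the left half
        else:
            lo, flo = mid, f(mid)    # root lies in the right half
    return 0.5 * (lo + hi)

# stand-in monotone constraint crossing zero at sqrt(2)
root = bisect_root(lambda t: t * t - 2.0, 0.0, 2.0)
```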

  19. GEANT4 distributed computing for compact clusters

    NASA Astrophysics Data System (ADS)

    Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.

    2014-11-01

    A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 for large discrete data sets such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions or simply speeding the through-put of a single model.
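
    The client-server work-ticket flow can be mimicked with a thread-safe queue. GEANT4 and g4DistributedRunManager are C++; the Python sketch below only illustrates the pattern, with tomography angles standing in for ticket parameters and a trivial calculation standing in for a simulation run.

```python
import queue
import threading

def worker(tickets, results):
    """Client loop: take a work ticket, 'run' it, report the result."""
    while True:
        try:
            ticket = tickets.get_nowait()
        except queue.Empty:
            return                    # no tickets left; client exits
        # stand-in for one simulation run configured by the ticket
        results.put((ticket["angle"], ticket["angle"] * 2))

tickets = queue.Queue()
for angle in range(0, 180, 45):       # one CT projection angle per ticket
    tickets.put({"angle": angle})
results = queue.Queue()

threads = [threading.Thread(target=worker, args=(tickets, results))
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

done = []
while not results.empty():
    done.append(results.get_nowait())
done.sort()
```

    The same pull-based pattern scales from one SMP machine to a cluster once the queue is replaced by a network protocol.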

  20. Verification of the GIS-based Newmark method through 2D dynamic modelling of slope stability

    NASA Astrophysics Data System (ADS)

    Torgoev, A.; Havenith, H.-B.

    2012-04-01

    The goal of this work is to verify the simplified GIS-based Newmark displacement approach through 2D dynamic modelling of slope stability. The research is applied to a landslide-prone area in Central Asia, the Mailuu-Suu Valley, situated in the south of Kyrgyzstan. The comparison is carried out on the basis of 30 different profiles located in the target area, presenting different geological, tectonic and morphological settings. Some of the profiles were selected within landslide zones; the others were selected in stable areas. Many of the landslides are complex slope failures involving falls, rotational sliding and/or planar sliding and flows. These input data were extracted from a 3D structural geological model built with the GOCAD software. Geophysical and geomechanical parameters were defined on the basis of results obtained by multiple surveys performed in the area over the past 15 years. These include geophysical investigations, seismological experiments and ambient noise measurements. Dynamic modelling of slope stability is performed with the UDEC version 4.01 software, which is able to compute the deformation of discrete elements. Inside these elements both elasto-plastic and purely elastic materials (similar to rigid blocks) were tested. Various parameter variations were tested to assess their influence on the final outputs. Even though no groundwater flow was included, the numerous simulations are very time-consuming (20 min per model for 10 s of simulated shaking); about 500 computation hours have been completed so far (more than 100 models). Preliminary results allow us to compare Newmark displacements computed using different GIS approaches (Jibson et al., 1998; Miles and Ho, 1999, among others) with the displacements computed using the original Newmark method (Newmark, 1965; here, simulated seismograms were used) and the displacements produced along joints by the corresponding 2D dynamic models.
The generation of seismic amplification and its impact on peak ground acceleration, Arias intensity and permanent slope movements (total and slip on joints) is assessed for numerous morphological-lithological settings (curvature, slope angle, surficial geology, various layer dips and orientations) throughout the target area. The final results of our studies should allow us to define the limitations of the simplified GIS-based Newmark displacement modelling; thus, the verified method would make landslide susceptibility and hazard mapping in seismically active regions more reliable.
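
    The original Newmark (1965) rigid-block method referenced above double-integrates the ground acceleration in excess of a critical (yield) acceleration to obtain the permanent displacement. A one-way-sliding sketch with a synthetic sine record; the record and the yield value are illustrative, not data from the Mailuu-Suu profiles.

```python
import math

def newmark_displacement(acc, dt, a_crit):
    """Rigid-block Newmark method: while the ground acceleration exceeds
    the critical (yield) acceleration the block slides; its relative
    velocity is integrated once more to get permanent displacement.
    One-way sliding only, which suffices for this illustration."""
    vel, disp = 0.0, 0.0
    for a in acc:
        if vel > 0.0 or a > a_crit:
            vel = max(0.0, vel + (a - a_crit) * dt)
            disp += vel * dt
    return disp

# synthetic 1 Hz, 3 m/s^2 sine record, 10 s at 100 Hz sampling
dt = 0.01
acc = [3.0 * math.sin(2.0 * math.pi * 1.0 * (k * dt)) for k in range(1000)]
d = newmark_displacement(acc, dt, a_crit=1.0)   # yield acceleration 1 m/s^2
```

    If the yield acceleration exceeds the record's peak, the predicted displacement is zero, which is the basic screening logic behind GIS-based Newmark maps.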

  1. A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4

    NASA Technical Reports Server (NTRS)

    Park, Young-Keun; Fahrenthold, Eric P.

    2004-01-01

    An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.

  2. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array is responsible for transferring, storing and processing raw image data in a SIMD fashion with its own programming language. The RPs form a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation in few instruction cycles and therefore satisfy the requirements of low- and middle-level high-speed image processing. The RISC core controls the operation of the whole system and executes high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect our major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.

  3. Interior Noise Predictions in the Preliminary Design of the Large Civil Tiltrotor (LCTR2)

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.; Cabell, Randolph H.; Boyd, David D.

    2013-01-01

    A prediction scheme was established to compute sound pressure levels in the interior of a simplified cabin model of the second generation Large Civil Tiltrotor (LCTR2) during cruise conditions, while being excited by turbulent boundary layer flow over the fuselage, or by tiltrotor blade loading and thickness noise. Finite element models of the cabin structure, interior acoustic space, and acoustically absorbent (poro-elastic) materials in the fuselage were generated and combined into a coupled structural-acoustic model. Fluctuating power spectral densities were computed according to the Efimtsov turbulent boundary layer excitation model. Noise associated with the tiltrotor blades was predicted in the time domain as fluctuating surface pressures and converted to power spectral densities at the fuselage skin finite element nodes. A hybrid finite element (FE) approach was used to compute the low frequency acoustic cabin response over the frequency range 6-141 Hz with a 1 Hz bandwidth, and the Statistical Energy Analysis (SEA) approach was used to predict the interior noise for the 125-8000 Hz one-third octave bands.

  4. A parallel computing engine for a class of time critical processes.

    PubMed

    Nabhan, T M; Zomaya, A Y

    1997-01-01

    This paper focuses on the efficient parallel implementation of numerically intensive systems over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and near-optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
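
    The simulated-annealing core of such a scheduler can be sketched with a much-simplified cost function, here just the maximum per-processor load (makespan); the paper's actual cost function also accounts for network topology, communication time, and congestion. Task costs and the cooling schedule below are illustrative.

```python
import math
import random

def anneal_mapping(costs, nproc, iters=2000, t0=5.0, seed=0):
    """Map tasks with given compute costs onto `nproc` processors by
    simulated annealing, minimizing the maximum per-processor load."""
    rng = random.Random(seed)
    mapping = [rng.randrange(nproc) for _ in costs]

    def makespan(m):
        loads = [0.0] * nproc
        for task, proc in enumerate(m):
            loads[proc] += costs[task]
        return max(loads)

    cur = makespan(mapping)
    for k in range(iters):
        temp = t0 * (1.0 - k / iters) + 1e-9   # linear cooling schedule
        task = rng.randrange(len(costs))
        old = mapping[task]
        mapping[task] = rng.randrange(nproc)   # propose moving one task
        new = makespan(mapping)
        # accept improvements always, worsenings with Boltzmann probability
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
        else:
            mapping[task] = old                # reject: roll the move back
    return mapping, cur

costs = [4.0, 3.0, 3.0, 2.0, 2.0, 1.0, 1.0]   # total work = 16
mapping, span = anneal_mapping(costs, nproc=2)
```

    Accepting occasional uphill moves is what lets the annealer escape the local minima that a purely greedy mapper would get stuck in.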

  5. Non-driving intersegmental knee moments in cycling computed using a model that includes three-dimensional kinematics of the shank/foot and the effect of simplifying assumptions.

    PubMed

    Gregersen, Colin S; Hull, M L

    2003-06-01

    Assessing the importance of non-driving intersegmental knee moments (i.e. varus/valgus and internal/external axial moments) on over-use knee injuries in cycling requires the use of a three-dimensional (3-D) model to compute these loads. The objectives of this study were: (1) to develop a complete, 3-D model of the lower limb to calculate the 3-D knee loads during pedaling for a sample of the competitive cycling population, and (2) to examine the effects of simplifying assumptions on the calculations of the non-driving knee moments. The non-driving knee moments were computed using a complete 3-D model that allowed three rotational degrees of freedom at the knee joint, included the 3-D inertial loads of the shank/foot, and computed knee loads in a shank-fixed coordinate system. All input data, which included the 3-D segment kinematics and the six pedal load components, were collected from the right limb of 15 competitive cyclists while pedaling at 225 W and 90 rpm. On average, the peak varus and internal axial moments of 7.8 and 1.5 N m respectively occurred during the power stroke whereas the peak valgus and external axial moments of 8.1 and 2.5 N m respectively occurred during the recovery stroke. However, the non-driving knee moments were highly variable between subjects; the coefficients of variability in the peak values ranged from 38.7% to 72.6%. When it was assumed that the inertial loads of the shank/foot for motion out of the sagittal plane were zero, the root-mean-squared difference (RMSD) in the non-driving knee moments relative to those for the complete model was 12% of the peak varus/valgus moment and 25% of the peak axial moment. When it was also assumed that the knee joint was revolute with the flexion/extension axis perpendicular to the sagittal plane, the RMSD increased to 24% of the peak varus/valgus moment and 204% of the peak axial moment. 
Thus, the 3-D orientation of the shank segment has a major effect on the computation of the non-driving knee moments, while the inertial contributions to these loads for motions out of the sagittal plane are less important.
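The core of such an intersegmental load calculation is a moment transfer from the pedal to the knee, resolved in a shank-fixed frame. A minimal quasi-static sketch (the inertial and gravitational terms that the complete model includes are omitted here, and the frame conventions are illustrative):

```python
import numpy as np

def knee_moment(r_knee_to_pedal, pedal_force, pedal_moment, R_shank):
    """Quasi-static intersegmental moment at the knee: transfer the pedal
    moment and the moment of the pedal force to the knee point, then resolve
    the lab-frame vector into shank-fixed axes via rotation matrix R_shank."""
    M_lab = pedal_moment + np.cross(r_knee_to_pedal, pedal_force)
    return R_shank.T @ M_lab  # components along the shank-fixed axes
```

    With the shank frame aligned to the lab frame, a 100 N forward pedal force applied 0.4 m below the knee produces a pure 40 N m moment about the third axis, as expected from the cross product.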

  6. Imaging System Model Crammed Into A 32K Microcomputer

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.

    1986-12-01

    An imaging system model, based upon linear systems theory, has been developed for a microcomputer with less than 32K of free random access memory (RAM). The model includes diffraction effects of the optics, aberrations in the optics, and atmospheric propagation transfer functions. Variables include pupil geometry, magnitude and character of the aberrations, and strength of atmospheric turbulence ("seeing"). Both coherent and incoherent image formation can be evaluated. The techniques employed for crowding the model into a very small computer will be discussed in detail. Simplifying assumptions for the diffraction and aberration phenomena will be shown along with practical considerations in modeling the optical system. Particular emphasis is placed on avoiding inaccuracies in modeling the pupil and the associated optical transfer function knowing limits on spatial frequency content and resolution. Memory and runtime constraints are analyzed stressing the efficient use of assembly language Fourier transform routines, disk input/output, and graphic displays. The compromises between computer time, limited RAM, and scientific accuracy will be given with techniques for balancing these parameters for individual needs.
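    The linear-systems core of such a model is compact: the point spread function is the squared magnitude of the Fourier transform of the pupil, and the incoherent optical transfer function is the normalized autocorrelation of the pupil. A sketch with an unaberrated circular pupil (grid size and aperture radius are arbitrary choices for illustration):

```python
import numpy as np

def transfer_functions(pupil):
    """From a sampled pupil function: the PSF is |FFT(pupil)|^2, and the
    incoherent OTF is obtained here as the FFT of the PSF (autocorrelation
    theorem), normalized to unity at zero spatial frequency."""
    psf = np.abs(np.fft.fft2(pupil)) ** 2
    otf = np.fft.fft2(psf)
    return psf, otf / otf.flat[0]

# Circular aperture on a small grid (the kind of coarse sampling a 32K
# machine would force).
n = 64
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
pupil = (x**2 + y**2 <= 10**2).astype(float)
psf, otf = transfer_functions(np.fft.ifftshift(pupil))
```

    By the Cauchy-Schwarz inequality the OTF magnitude cannot exceed its zero-frequency value, which is the usual sanity check on such a computation.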

  7. A baseline-free procedure for transformation models under interval censorship.

    PubMed

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

    An important property of the Cox regression model is that the estimation of the regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures available so far involve estimation of the infinite dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean zero martingale is provided.

  8. MadDM: Computation of dark matter relic abundance

    NASA Astrophysics Data System (ADS)

    Backović, Mihailo; Kong, Kyoungchul; McCaskey, Mathew

    2017-12-01

    MadDM computes dark matter relic abundance and dark matter-nucleus scattering rates in a generic model. The code is based on the existing MadGraph 5 architecture and as such is easily integrable into any MadGraph collider study. A simple Python interface offers a level of user-friendliness characteristic of MadGraph 5 without sacrificing functionality. MadDM is able to calculate the dark matter relic abundance in models which include a multi-component dark sector, resonance annihilation channels and co-annihilations. The direct detection module of MadDM calculates spin-independent/spin-dependent dark matter-nucleon cross sections and differential recoil rates as a function of recoil energy, angle and time. The code provides a simplified simulation of detector effects for a wide range of target materials and volumes.
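    For orientation, the well-known freeze-out scaling relates the relic density to the thermally averaged annihilation cross section. This is a textbook order-of-magnitude estimate, not MadDM's numerical solution of the Boltzmann equation:

```python
def relic_abundance(sigma_v_cm3_s):
    """Freeze-out estimate: Omega h^2 ~ 3e-27 cm^3/s divided by the thermally
    averaged annihilation cross section <sigma v>. A rough scaling only."""
    return 3e-27 / sigma_v_cm3_s

# The canonical thermal WIMP value <sigma v> ~ 3e-26 cm^3/s gives
# Omega h^2 ~ 0.1, near the observed dark matter density.
```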

  9. Effects of pressure drop and superficial velocity on the bubbling fluidized bed incinerator.

    PubMed

    Wang, Feng-Jehng; Chen, Suming; Lei, Perng-Kwei; Wu, Chung-Hsing

    2007-12-01

    Since performance and operational conditions, such as superficial velocity, pressure drop, particle voidage, and terminal velocity, are difficult to measure on an incinerator, this study used computational fluid dynamics (CFD) to determine numerical solutions. The effects of pressure drop and superficial velocity on a bubbling fluidized bed incinerator (BFBI) were evaluated. Analytical results indicated that the simulation models were able to effectively predict the relationship between superficial velocity and pressure drop over bed height in the BFBI. Second, the models of the BFBI were simplified to simulate scale-up beds without excessive computation time. Moreover, simulation and experimental results showed that the minimum fluidization velocity of the BFBI must be controlled at 0.188-3.684 m/s and that the pressure drop was mainly caused by bed particles.
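    The minimum fluidization velocity can also be estimated without CFD from the Ergun equation at incipient fluidization, where the bed pressure drop balances the bed weight. A sketch (the voidage and sphericity values are typical assumptions for this illustration, not the paper's):

```python
import math

def u_mf(dp, rho_p, rho_g, mu, eps=0.45, phi=1.0, g=9.81):
    """Minimum fluidization velocity from the Ergun equation: solve the
    quadratic in Reynolds number  1.75/(eps^3 phi) Re^2
    + 150 (1-eps)/(eps^3 phi^2) Re = Ar  and convert Re back to velocity.
    dp: particle diameter [m], rho_p/rho_g: particle/gas density [kg/m^3],
    mu: gas viscosity [Pa s], eps: voidage, phi: sphericity."""
    ar = rho_g * (rho_p - rho_g) * g * dp**3 / mu**2  # Archimedes number
    a = 1.75 / (eps**3 * phi)
    b = 150.0 * (1.0 - eps) / (eps**3 * phi**2)
    re_mf = (-b + math.sqrt(b**2 + 4.0 * a * ar)) / (2.0 * a)
    return re_mf * mu / (rho_g * dp)
```

    For 500-micron sand in ambient air this gives a few tenths of a m/s, consistent with the lower end of the operating window quoted above.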

  10. Trade-off Assessment of Simplified Routing Models for Short-Term Hydropower Reservoir Optimization

    NASA Astrophysics Data System (ADS)

    Issao Kuwajima, Julio; Schwanenberg, Dirk; Alvardo Montero, Rodolfo; Mainardi Fan, Fernando; Assis dos Reis, Alberto

    2014-05-01

    Short-term reservoir optimization, also referred to as model predictive control, integrates model-based forecasts and optimization algorithms to meet multiple management objectives such as water supply, navigation, hydroelectricity generation, environmental obligations and flood protection. It is a valuable decision support tool to handle water-stress conditions or flooding events, and supports decision makers in minimizing their impact. If the reservoir management includes downstream control, for example for mitigating flood damage in inundation areas downstream of the operated dam, the flow routing between the dam and the downstream inundation area is of major importance. The unsteady open channel flow in river reaches can be described by the one-dimensional Saint-Venant equations. However, owing to the mathematical complexity of those equations, some simplifications may be required to speed up the computation within the optimization procedure. Another strategy to limit the model runtime is a schematization on a coarse computational grid. The latter measure in particular can introduce significant numerical diffusion into the solution. This is a major drawback, in particular if the reservoir release has steep gradients, which we often find in hydropower reservoirs. In this work, four different routing models are assessed concerning their implementation in the predictive control of the Três Marias Reservoir located at the Upper River São Francisco in Brazil: i) a fully dynamic model using the software package SOBEK; ii) a semi-distributed rainfall-runoff model with Muskingum-Cunge routing for the flow reaches of interest, the MGB-IPH (Modelo Hidrológico de Grandes Bacias - Instituto de Pesquisas Hidráulicas); iii) a reservoir routing approach; and iv) a diffusive wave model. The last two models are implemented in the RTC-Tools toolbox. 
The overall model accuracy of the simplified models in RTC-Tools (iii, iv) is comparable to that of the more sophisticated SOBEK model (i), while a lower performance was assessed for the MGB model (ii). Whereas the SOBEK model is able to propagate sharp discharge gradients downstream, the diffusive wave model damps these gradients significantly due to the coarse spatial schematization. In the reservoir routing model, which is also schematized on a coarse grid, we counteract this drawback by modeling parts of the river reach by advection. This results in an excellent ratio between model accuracy/robustness and computational effort, making it the approach of choice from the predictive control perspective.
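The Muskingum-Cunge scheme used in the MGB-IPH model reduces, per reach and time step, to a linear recurrence in inflow and outflow. A generic Muskingum sketch (parameter values in the usage example are illustrative; the Cunge variant derives k and x from channel properties):

```python
def muskingum_route(inflow, k, x, dt, q0=None):
    """Muskingum channel routing: O2 = C0*I2 + C1*I1 + C2*O1 with the classic
    coefficients; k is the reach travel time [same units as dt], x the
    storage weighting factor (0 <= x <= 0.5)."""
    denom = 2.0 * k * (1.0 - x) + dt
    c0 = (dt - 2.0 * k * x) / denom
    c1 = (dt + 2.0 * k * x) / denom
    c2 = (2.0 * k * (1.0 - x) - dt) / denom   # c0 + c1 + c2 == 1
    out = [inflow[0] if q0 is None else q0]
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out
```

    Because the coefficients sum to one, a steady inflow is routed unchanged, which is a quick mass-conservation check on any implementation.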

  11. An immersed boundary-simplified sphere function-based gas kinetic scheme for simulation of 3D incompressible flows

    NASA Astrophysics Data System (ADS)

    Yang, L. M.; Shu, C.; Yang, W. M.; Wang, Y.; Wu, J.

    2017-08-01

    In this work, an immersed boundary-simplified sphere function-based gas kinetic scheme (SGKS) is presented for the simulation of 3D incompressible flows with curved and moving boundaries. At first, the SGKS [Yang et al., "A three-dimensional explicit sphere function-based gas-kinetic flux solver for simulation of inviscid compressible flows," J. Comput. Phys. 295, 322 (2015) and Yang et al., "Development of discrete gas kinetic scheme for simulation of 3D viscous incompressible and compressible flows," J. Comput. Phys. 319, 129 (2016)], which is often applied for the simulation of compressible flows, is simplified to improve the computational efficiency for the simulation of incompressible flows. In the original SGKS, the integral domain along the spherical surface for computing conservative variables and numerical fluxes is usually not symmetric at the cell interface. This leads the expression of numerical fluxes at the cell interface to be relatively complicated. For incompressible flows, the sphere at the cell interface can be approximately considered to be symmetric as shown in this work. Besides that, the energy equation is usually not needed for the simulation of incompressible isothermal flows. With all these simplifications, the simple and explicit formulations for the conservative variables and numerical fluxes at the cell interface can be obtained. Second, to effectively implement the no-slip boundary condition for fluid flow problems with complex geometry as well as moving boundary, the implicit boundary condition-enforced immersed boundary method [Wu and Shu, "Implicit velocity correction-based immersed boundary-lattice Boltzmann method and its applications," J. Comput. Phys. 228, 1963 (2009)] is introduced into the simplified SGKS. That is, the flow field is solved by the simplified SGKS without considering the presence of an immersed body and the no-slip boundary condition is implemented by the immersed boundary method. 
The accuracy and efficiency of the present scheme are validated by simulating the decaying vortex flow, flow past a stationary and rotating sphere, flow past a stationary torus, and flows over dragonfly flight.

  12. Simplified Application of Material Efficiency Green Metrics to Synthesis Plans: Pedagogical Case Studies Selected from "Organic Syntheses"

    ERIC Educational Resources Information Center

    Andraos, John

    2015-01-01

    This paper presents a simplified approach for the application of material efficiency metrics to linear and convergent synthesis plans encountered in organic synthesis courses. Computations are facilitated and automated using intuitively designed Microsoft Excel spreadsheets without invoking abstract mathematical formulas. The merits of this…
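    Two of the standard material efficiency metrics such spreadsheets automate are easy to state directly. These are the generic definitions, not the paper's specific spreadsheet layout:

```python
def atom_economy(product_mw, reactant_mws):
    """Atom economy (%) for a balanced reaction: molecular weight of the
    desired product over the summed molecular weights of all reactants."""
    return 100.0 * product_mw / sum(reactant_mws)

def e_factor(total_waste_mass, product_mass):
    """E-factor: mass of waste produced per unit mass of product;
    lower values indicate a greener process."""
    return total_waste_mass / product_mass
```

    For example, a reaction whose product carries half the combined reactant mass has an atom economy of 50%.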

  13. Yahoo! Compute Coop (YCC). A Next-Generation Passive Cooling Design for Data Centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robison, AD; Page, Christina; Lytle, Bob

    The purpose of the Yahoo! Compute Coop (YCC) project is to research, design, build and implement a greenfield "efficient data factory" and to specifically demonstrate that the YCC concept is feasible for large facilities housing tens of thousands of heat-producing computing servers. The project scope for the Yahoo! Compute Coop technology includes: - Analyzing and implementing ways in which to drastically decrease energy consumption and waste output. - Analyzing the laws of thermodynamics and implementing naturally occurring environmental effects in order to maximize the "free cooling" of large data center facilities. "Free cooling" is the direct use of outside air to cool the servers, as opposed to traditional "mechanical cooling" supplied by chillers or other DX units. - Redesigning and simplifying building materials and methods. - Shortening and simplifying build-to-operate schedules while at the same time reducing initial build and operating costs. Selected for its favorable climate, the greenfield project site is located in Lockport, NY. Construction on the 9.0 MW critical load data center facility began in May 2009, with the fully operational facility deployed in September 2010. The relatively low initial build cost, compatibility with current server and network models, and the efficient use of power and water are all key features that make it a highly compatible and globally implementable design innovation for the data center industry. Yahoo! Compute Coop technology is designed to achieve 99.98% uptime availability. This integrated building design allows for free cooling 99% of the year via the building's unique shape and orientation, as well as server physical configuration.
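    The "free cooling 99% of the year" figure is, in essence, the fraction of hours in which outdoor air is cool enough to serve directly as server inlet air. A generic sketch (the inlet temperature limit used here is an illustrative assumption, not Yahoo!'s design value):

```python
def free_cooling_fraction(outside_temps_c, inlet_limit_c=27.0):
    """Fraction of hours in which outside air alone can cool the servers,
    i.e. the outdoor dry-bulb temperature stays at or below the allowed
    server inlet temperature. Humidity effects are ignored in this sketch."""
    ok = sum(1 for t in outside_temps_c if t <= inlet_limit_c)
    return ok / len(outside_temps_c)
```

    Fed a year of hourly dry-bulb temperatures for a candidate site, this kind of screening estimate is how favorable climates such as Lockport's are identified.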

  14. Order Matters: Sequencing Scale-Realistic Versus Simplified Models to Improve Science Learning

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Schneps, Matthew H.; Sonnert, Gerhard

    2016-10-01

    Teachers choosing between different models to facilitate students' understanding of an abstract system must decide whether to adopt a model that is simplified and striking or one that is realistic and complex. Only recently have instructional technologies enabled teachers and learners to change presentations swiftly and to provide for learning based on multiple models, thus giving rise to questions about the order of presentation. Using disjoint individual growth modeling to examine the learning of astronomical concepts using a simulation of the solar system on tablets for 152 high school students (age 15), the authors detect both a model effect and an order effect in the use of the Orrery, a simplified model that exaggerates the scale relationships, and the True-to-scale, a proportional model that more accurately represents the realistic scale relationships. Specifically, earlier exposure to the simplified model resulted in diminution of the conceptual gain from the subsequent realistic model, but the realistic model did not impede learning from the following simplified model.

  15. Modular use of human body models of varying levels of complexity: Validation of head kinematics.

    PubMed

    Decker, William; Koya, Bharath; Davis, Matthew L; Gayzik, F Scott

    2017-05-29

    The significant computational resources required to execute detailed human body finite-element models has motivated the development of faster running, simplified models (e.g., GHBMC M50-OS). Previous studies have demonstrated the ability to modularly incorporate the validated GHBMC M50-O brain model into the simplified model (GHBMC M50-OS+B), which allows for localized analysis of the brain in a fraction of the computation time required for the detailed model. The objective of this study is to validate the head and neck kinematics of the GHBMC M50-O and M50-OS (detailed and simplified versions of the same model) against human volunteer test data in frontal and lateral loading. Furthermore, the effect of modular insertion of the detailed brain model into the M50-OS is quantified. Data from the Navy Biodynamics Laboratory (NBDL) human volunteer studies, including a 15g frontal, 8g frontal, and 7g lateral impact, were reconstructed and simulated using LS-DYNA. A five-point restraint system was used for all simulations, and initial positions of the models were matched with volunteer data using settling and positioning techniques. Both the frontal and lateral simulations were run with the M50-O, M50-OS, and M50-OS+B with active musculature for a total of nine runs. Normalized run times for the various models used in this study were 8.4 min/ms for the M50-O, 0.26 min/ms for the M50-OS, and 0.97 min/ms for the M50-OS+B, a 32- and 9-fold reduction in run time, respectively. Corridors were reanalyzed for head and T1 kinematics from the NBDL studies. Qualitative evaluation of head rotational accelerations and linear resultant acceleration, as well as linear resultant T1 acceleration, showed reasonable results between all models and the experimental data. Objective evaluation of the results for head center of gravity (CG) accelerations was completed via ISO TS 18571, and indicated scores of 0.673 (M50-O), 0.638 (M50-OS), and 0.656 (M50-OS+B) for the 15g frontal impact. 
Scores at lower g levels yielded similar results: 0.667 (M50-O), 0.675 (M50-OS), and 0.710 (M50-OS+B) for the 8g frontal impact. The 7g lateral simulations also compared fairly well, with an average ISO score of 0.565 for the M50-O, 0.634 for the M50-OS, and 0.606 for the M50-OS+B. The three HBMs experienced similar head and neck motion in the frontal simulations, but the M50-O predicted significantly greater head rotation in the lateral simulation. The greatest departure from the detailed occupant model was noted in lateral flexion, potentially indicating the need for further study. Precise modeling of the belt system, however, was limited by available data. A sensitivity study of these parameters in the frontal condition showed that belt slack and muscle activation have a modest effect on the ISO score. The reduction in computation time of the M50-OS+B reduces the burden of high computational requirements when handling detailed HBMs. Future work will focus on harmonizing the lateral head response of the models and studying localized injury criteria within the brain from the M50-O and M50-OS+B.

  16. Model-based estimation for dynamic cardiac studies using ECT.

    PubMed

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.
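    As a one-parameter illustration of estimating directly from projection data, the ML estimate of a global activity scale under a Poisson observation model has a closed form. The paper's joint estimator covers many physiological and boundary parameters at once; this is only the simplest special case:

```python
def poisson_ml_scale(projections, model_projection):
    """Closed-form ML estimate of a single scale factor a for Poisson data
    y_i ~ Poisson(a * p_i). Setting d/da [sum_i (y_i log(a p_i) - a p_i)] = 0
    gives a = sum(y_i) / sum(p_i): no intermediate image reconstruction."""
    return sum(projections) / sum(model_projection)
```

    Doubling every measured count relative to the model projection recovers a scale of exactly 2, as the derivative condition requires.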

  17. A SQL-Database Based Meta-CASE System and its Query Subsystem

    NASA Astrophysics Data System (ADS)

    Eessaar, Erki; Sgirka, Rünno

    Meta-CASE systems simplify the creation of CASE (Computer Aided System Engineering) systems. In this paper, we present a meta-CASE system that provides a web-based user interface and uses an object-relational database system (ORDBMS) as its basis. The use of an ORDBMS allows us to integrate the different parts of the system and simplify the creation of meta-CASE and CASE systems. ORDBMSs provide a powerful query mechanism. The proposed system allows developers to use queries to evaluate and gradually improve artifacts and to calculate values of software measures. We illustrate the use of the system by using the SimpleM modeling language and discuss the use of SQL in the context of queries about artifacts. We have created a prototype of the meta-CASE system by using the PostgreSQL™ ORDBMS and the PHP scripting language.
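    A query-based software measure of the kind described can be illustrated with a toy artifact table. The table and column names here are invented for this sketch, and SQLite stands in for the PostgreSQL ORDBMS the prototype actually uses:

```python
import sqlite3

# A minimal artifacts store: one row per model element, tagged with the
# diagram it belongs to. A simple software measure (elements per diagram)
# is then just a GROUP BY query.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE element (id INTEGER PRIMARY KEY, diagram TEXT)")
con.executemany("INSERT INTO element (diagram) VALUES (?)",
                [("orders",), ("orders",), ("billing",)])
rows = con.execute(
    "SELECT diagram, COUNT(*) FROM element GROUP BY diagram ORDER BY diagram"
).fetchall()
```

    Developers could run such measures repeatedly while an artifact evolves, which is the incremental-improvement workflow the paper describes.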

  18. A data driven approach using Takagi-Sugeno models for computationally efficient lumped floodplain modelling

    NASA Astrophysics Data System (ADS)

    Wolfs, Vincent; Willems, Patrick

    2013-10-01

    Many applications in support of water management decisions require hydrodynamic models with limited calculation time, including real time control of river flooding, uncertainty and sensitivity analyses by Monte-Carlo simulations, and long term simulations in support of the statistical analysis of the model simulation results (e.g. flood frequency analysis). Several computationally efficient hydrodynamic models exist, but little attention is given to the modelling of floodplains. This paper presents a methodology that can emulate output from a full hydrodynamic model by predicting one or several levels in a floodplain, together with the flow rate between river and floodplain. The overtopping of the embankment is modelled as an overflow at a weir. Adaptive neuro fuzzy inference systems (ANFIS) are exploited to cope with the varying factors affecting the flow. Different input sets and identification methods are considered in model construction. Because of the dual use of simplified physically based equations and data-driven techniques, the ANFIS consist of very few rules with a low number of input variables. A second calculation scheme can be followed for exceptionally large floods. The obtained nominal emulation model was tested for four floodplains along the river Dender in Belgium. Results show that the obtained models are accurate with low computational cost.
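    The embankment overtopping model mentioned above is a weir equation. A broad-crested-weir sketch (the lumped discharge coefficient is a typical SI value assumed here, not the paper's calibrated one):

```python
def weir_overflow(h_river, h_crest, width, cd=1.7):
    """Flow [m^3/s] from river to floodplain over an embankment modeled as a
    broad-crested weir: Q = cd * width * head^1.5, where cd lumps the
    discharge coefficient and the sqrt(2g) factors. Zero below the crest."""
    head = max(0.0, h_river - h_crest)
    return cd * width * head ** 1.5
```

    The one-sided, strongly nonlinear character of this relation is exactly the varying behavior the ANFIS rules have to capture.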

  19. Uncertainty Analysis of Air Radiation for Lunar Return Shock Layers

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Johnston, Christopher O.

    2008-01-01

    By leveraging a new uncertainty markup technique, two risk analysis methods are used to compute the uncertainty of lunar-return shock layer radiation predicted by the High temperature Aerothermodynamic Radiation Algorithm (HARA). The effects of epistemic uncertainty, or uncertainty due to a lack of knowledge, are considered for the following modeling parameters: atomic line oscillator strengths, atomic line Stark broadening widths, atomic photoionization cross sections, negative ion photodetachment cross sections, molecular band oscillator strengths, and electron impact excitation rates. First, a simplified shock layer problem consisting of two constant-property equilibrium layers is considered. The results of this simplified problem show that the atomic nitrogen oscillator strengths and Stark broadening widths in both the vacuum ultraviolet and infrared spectral regions, along with the negative ion continuum, are the dominant uncertainty contributors. Next, three variable-property stagnation-line shock layer cases are analyzed: a typical lunar return case and two Fire II cases. For the near-equilibrium lunar return and Fire II 1643-second cases, the resulting uncertainties are very similar to those of the simplified case. Conversely, the relatively nonequilibrium 1636-second case shows significantly larger influence from electron impact excitation rates of both atoms and molecules. For all cases, the total uncertainty in radiative heat flux to the wall due to epistemic uncertainty in modeling parameters is 30%, as opposed to the erroneously small uncertainty levels (plus or minus 6%) found when treating model parameter uncertainties as aleatory (due to chance) instead of epistemic (due to lack of knowledge).
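    The key difference from an aleatory treatment is that interval-bounded epistemic parameters yield an output band rather than an averaged distribution. A generic Monte Carlo sketch of such interval propagation (not HARA's actual machinery; sampling the intervals is just a convenient way to explore them):

```python
import random

def epistemic_band(model, param_bounds, n=2000, seed=1):
    """Propagate epistemic parameter uncertainty by sampling each parameter
    over its stated interval and recording the extreme outputs. The result
    is a bounding band [lo, hi], not a probability distribution: with
    epistemic intervals, no value inside the band can be ruled out."""
    rng = random.Random(seed)
    lo = hi = None
    for _ in range(n):
        p = [rng.uniform(a, b) for a, b in param_bounds]
        y = model(p)
        lo = y if lo is None else min(lo, y)
        hi = y if hi is None else max(hi, y)
    return lo, hi
```

    Reporting the full band is what drives the 30% figure above, versus the misleadingly narrow spread obtained by averaging the same intervals as if they were random scatter.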

  20. High fidelity simulations of infrared imagery with animated characters

    NASA Astrophysics Data System (ADS)

    Näsström, F.; Persson, A.; Bergström, D.; Berggren, J.; Hedström, J.; Allvar, J.; Karlsson, M.

    2012-06-01

    High fidelity simulations of IR signatures and imagery tend to be slow and do not have effective support for animation of characters. Simplified rendering methods based on computer graphics methods can be used to overcome these limitations. This paper presents a method to combine these tools and produce simulated high fidelity thermal IR data of animated people in terrain. Infrared signatures for human characters have been calculated using RadThermIR. To handle multiple character models, these calculations use a simplified material model for the anatomy and clothing. Weather and temperature conditions match the IR-texture used in the terrain model. The calculated signatures are applied to the animated 3D characters that, together with the terrain model, are used to produce high fidelity IR imagery of people or crowds. For high level animation control and crowd simulations, HLAS (High Level Animation System) has been developed. There are tools available to create and visualize skeleton based animations, but tools that allow control of the animated characters on a higher level, e.g. for crowd simulation, are usually expensive and closed source. We need the flexibility of HLAS to add animation into an HLA enabled sensor system simulation framework.

  1. A study of modelling simplifications in ground vibration predictions for railway traffic at grade

    NASA Astrophysics Data System (ADS)

    Germonpré, M.; Degrande, G.; Lombaert, G.

    2017-10-01

    Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.

  2. A simplified computer solution for the flexibility matrix of contacting teeth for spiral bevel gears

    NASA Technical Reports Server (NTRS)

    Hsu, C. Y.; Cheng, H. S.

    1987-01-01

    A computer code, FLEXM, was developed to calculate the flexibility matrices of contacting teeth for spiral bevel gears using a simplified analysis based on elementary beam theory for the deformation of the gear and shaft. The simplified theory requires at least an order of magnitude less computer time than the complete finite element analysis reported earlier by H. Chao, and it is much easier to apply to different gear and shaft geometries. Results were obtained for a set of spiral bevel gears. The tooth deflections due to torsion, bending moment, shear, and axial force were found to be on the order of 10^-5, 10^-6, 10^-7, and 10^-8, respectively. Thus, the torsional deformation was the predominant factor. In the analysis of dynamic load, response frequencies were found to be larger when the mass or moment of inertia was smaller or the stiffness was larger. The change in damping coefficient had little influence on the resonance frequency, but had a marked influence on the dynamic load at the resonant frequencies.
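    The elementary-beam decomposition behind these deflection estimates can be written down directly. These are standard cantilever formulas; the shear correction factor and the section properties in the usage example are illustrative assumptions, not FLEXM's actual inputs:

```python
def tooth_deflections(F, T, L, E, G, I, J, A, kappa=1.2):
    """Elementary-beam estimates of the deflection contributions for a
    cantilevered tooth/shaft of length L under end load F and torque T:
    bending tip deflection, angle of twist, shear deflection, and axial
    shortening. E, G: elastic/shear moduli; I, J: area/polar moments; A: area."""
    bend  = F * L**3 / (3.0 * E * I)   # Euler-Bernoulli tip deflection
    twist = T * L / (G * J)            # torsion: angle of twist
    shear = kappa * F * L / (G * A)    # Timoshenko-type shear contribution
    axial = F * L / (E * A)            # axial shortening under end load
    return bend, twist, shear, axial
```

    Because the four terms scale differently with L and the section properties, their relative sizes separate by orders of magnitude for slender gear geometry, as the results above reflect.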

  3. A simplified analysis of propulsion installation losses for computerized aircraft design

    NASA Technical Reports Server (NTRS)

    Morris, S. J., Jr.; Nelms, W. P., Jr.; Bailey, R. O.

    1976-01-01

    A simplified method is presented for computing the installation losses of aircraft gas turbine propulsion systems. The method has been programmed for use in computer aided conceptual aircraft design studies that cover a broad range of Mach numbers and altitudes. The items computed are: inlet size, pressure recovery, additive drag, subsonic spillage drag, bleed and bypass drags, auxiliary air systems drag, boundary-layer diverter drag, nozzle boattail drag, and the interference drag on the region adjacent to multiple nozzle installations. The methods for computing each of these installation effects are described and computer codes for the calculation of these effects are furnished. The results of these methods are compared with selected data for the F-5A and other aircraft. The computer program can be used with uninstalled engine performance information which is currently supplied by a cycle analysis program. The program, including comments, is about 600 FORTRAN statements long, and uses both theoretical and empirical techniques.
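    The bookkeeping underlying such a method is straightforward: installed performance is uninstalled engine thrust minus the individually computed drag items. A sketch (the item names and values are illustrative, not the program's variable names):

```python
def installed_thrust(uninstalled_thrust, drag_items):
    """Installed net propulsive force: uninstalled engine thrust minus the
    sum of the bookkept installation drags (e.g. spillage, bleed/bypass,
    boattail, diverter, interference), all in consistent force units."""
    return uninstalled_thrust - sum(drag_items.values())
```

    Each drag item in the paper's method comes from its own theoretical or empirical routine; this bookkeeping step merely assembles them against the cycle-analysis thrust.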

  4. Evaluation of SSME test data reduction methods

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1994-01-01

    Accurate prediction of hardware and flow characteristics within the Space Shuttle Main Engine (SSME) during transient and main-stage operation requires a significant integration of ground test data, flight experience, and computational models. The process of integrating SSME test measurements with physical model predictions is commonly referred to as data reduction. Uncertainties within both test measurements and simplified models of the SSME flow environment compound the data integration problem. The first objective of this effort was to establish an acceptability criterion for data reduction solutions. The second objective was to investigate the data reduction potential of the ROCETS (Rocket Engine Transient Simulation) simulation platform. A simplified ROCETS model of the SSME was obtained from the MSFC Performance Analysis Branch. This model was examined and tested for physical consistency. Two modules were constructed and added to the ROCETS library to independently check the mass and energy balances of selected engine subsystems, including the low pressure fuel turbopump, the high pressure fuel turbopump, the low pressure oxidizer turbopump, the high pressure oxidizer turbopump, the fuel preburner, the oxidizer preburner, the main combustion chamber coolant circuit, and the nozzle coolant circuit. A sensitivity study was then conducted to determine the individual influences of forty-two hardware characteristics on fourteen high pressure region prediction variables as returned by the SSME ROCETS model.
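    A mass/energy balance check of the kind those modules perform reduces to summing mass flows and enthalpy fluxes over a subsystem's streams. A minimal sketch (the stream values are invented for illustration; real checks would also carry measurement tolerances):

```python
def balance_residuals(streams):
    """Check a subsystem's steady-state mass and energy balances. streams is
    a list of (mass_flow, specific_enthalpy) pairs with inflows positive and
    outflows negative; both residuals should be near zero for a physically
    consistent data set."""
    mass = sum(m for m, _ in streams)
    energy = sum(m * h for m, h in streams)
    return mass, energy

# Consistent two-in/one-out mixer: 2 kg/s @ 100 kJ/kg plus 3 kg/s @ 200 kJ/kg
# leaving as 5 kg/s @ 160 kJ/kg closes both balances exactly.
m_res, e_res = balance_residuals([(2.0, 100.0), (3.0, 200.0), (-5.0, 160.0)])
```

    Nonzero residuals flag inconsistent measurements or model errors before any reduction solution is accepted, which is the acceptability-screening role described above.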

  5. Simplified models for dark matter searches at the LHC

    NASA Astrophysics Data System (ADS)

    Abdallah, Jalal; Araujo, Henrique; Arbey, Alexandre; Ashkenazi, Adi; Belyaev, Alexander; Berger, Joshua; Boehm, Celine; Boveia, Antonio; Brennan, Amelia; Brooke, Jim; Buchmueller, Oliver; Buckley, Matthew; Busoni, Giorgio; Calibbi, Lorenzo; Chauhan, Sushil; Daci, Nadir; Davies, Gavin; De Bruyn, Isabelle; De Jong, Paul; De Roeck, Albert; de Vries, Kees; Del Re, Daniele; De Simone, Andrea; Di Simone, Andrea; Doglioni, Caterina; Dolan, Matthew; Dreiner, Herbi K.; Ellis, John; Eno, Sarah; Etzion, Erez; Fairbairn, Malcolm; Feldstein, Brian; Flaecher, Henning; Feng, Eric; Fox, Patrick; Genest, Marie-Hélène; Gouskos, Loukas; Gramling, Johanna; Haisch, Ulrich; Harnik, Roni; Hibbs, Anthony; Hoh, Siewyan; Hopkins, Walter; Ippolito, Valerio; Jacques, Thomas; Kahlhoefer, Felix; Khoze, Valentin V.; Kirk, Russell; Korn, Andreas; Kotov, Khristian; Kunori, Shuichi; Landsberg, Greg; Liem, Sebastian; Lin, Tongyan; Lowette, Steven; Lucas, Robyn; Malgeri, Luca; Malik, Sarah; McCabe, Christopher; Mete, Alaettin Serhan; Morgante, Enrico; Mrenna, Stephen; Nakahama, Yu; Newbold, Dave; Nordstrom, Karl; Pani, Priscilla; Papucci, Michele; Pataraia, Sophio; Penning, Bjoern; Pinna, Deborah; Polesello, Giacomo; Racco, Davide; Re, Emanuele; Riotto, Antonio Walter; Rizzo, Thomas; Salek, David; Sarkar, Subir; Schramm, Steven; Skubic, Patrick; Slone, Oren; Smirnov, Juri; Soreq, Yotam; Sumner, Timothy; Tait, Tim M. P.; Thomas, Marc; Tomalin, Ian; Tunnell, Christopher; Vichi, Alessandro; Volansky, Tomer; Weiner, Neal; West, Stephen M.; Wielers, Monika; Worm, Steven; Yavin, Itay; Zaldivar, Bryan; Zhou, Ning; Zurek, Kathryn

    2015-09-01

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediations are discussed, and also realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.
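
    As an illustration of the s-channel scenarios mentioned above, a representative spin-1 (vector) mediator interaction takes the standard simplified-model form; the symbols (Dirac dark matter chi, mediator Z', couplings g_chi and g_q) are generic labels from the simplified-model literature, not expressions taken from this record:

    ```latex
    % Representative s-channel vector-mediator interaction Lagrangian:
    % Dirac dark matter \chi coupled to quarks q through a mediator Z'.
    \mathcal{L} \supset
        - g_\chi \, Z'_\mu \, \bar{\chi} \gamma^\mu \chi
        - g_q \sum_q Z'_\mu \, \bar{q} \gamma^\mu q
    ```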

  6. Computational studies of transthoracic and transvenous defibrillation in a detailed 3-D human thorax model.

    PubMed

    Jorgenson, D B; Haynor, D R; Bardy, G H; Kim, Y

    1995-02-01

    A method for constructing and solving detailed patient-specific 3-D finite element models of the human thorax is presented for use in defibrillation studies. The method utilizes the patient's own X-ray CT scan and a simplified meshing scheme to quickly and efficiently generate a model typically composed of approximately 400,000 elements. A parameter sensitivity study on one human thorax model to examine the effects of variation in assigned tissue resistivity values, level of anatomical detail included in the model, and number of CT slices used to produce the model is presented. Of the seven tissue types examined, the average left ventricular (LV) myocardial voltage gradient was most sensitive to the values of myocardial and blood resistivity. Incorrectly simplifying the model, for example modeling the heart as a homogeneous structure by ignoring the blood in the chambers, caused the average LV myocardial voltage gradient to increase by 12%. The sensitivity of the model to variations in electrode size and position was also examined. Small changes (< 2.0 cm) in electrode position caused average LV myocardial voltage gradient values to increase by up to 12%. We conclude that patient-specific 3-D finite element modeling of human thoracic electric fields is feasible and may reduce the empiric approach to insertion of implantable defibrillators and improve transthoracic defibrillation techniques.

  7. Permanent Seismically Induced Displacement of Rock-Founded Structures Computed by the Newmark Program

    DTIC Science & Technology

    2009-02-01

    solved, great care is exercised by the seismic engineer to size the mesh so that moderate to high wave frequencies are not artificially excluded in...buttressing effect of a reinforced concrete slab (Figure 1.7) is represented in this simplified dynamic model by the user-specified force Presist...retaining wall that is buttressed by an invert spill- way slab (which is a reinforced concrete slab), exemplify a category of Corps retaining walls that may

  8. Program Merges SAR Data on Terrain and Vegetation Heights

    NASA Technical Reports Server (NTRS)

    Siqueira, Paul; Hensley, Scott; Rodriguez, Ernesto; Simard, Marc

    2007-01-01

    X/P Merge is a computer program that estimates ground-surface elevations and vegetation heights from multiple sets of data acquired by the GeoSAR instrument [a terrain-mapping synthetic-aperture radar (SAR) system that operates in the X and P bands]. X/P Merge software combines data from X- and P-band digital elevation models, SAR backscatter magnitudes, and interferometric correlation magnitudes into a simplified set of output topographical maps of ground-surface elevation and tree height.

  9. Robot Control Based On Spatial-Operator Algebra

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo; Kreutz, Kenneth K.; Jain, Abhinandan

    1992-01-01

    Method for mathematical modeling and control of robotic manipulators based on spatial-operator algebra providing concise representation and simple, high-level theoretical framework for solution of kinematical and dynamical problems involving complicated temporal and spatial relationships. Recursive algorithms derived immediately from abstract spatial-operator expressions by inspection. Transition from abstract formulation through abstract solution to detailed implementation of specific algorithms to compute solution greatly simplified. Complicated dynamical problems like two cooperating robot arms solved more easily.

  10. Direct Synthesis of Microwave Waveforms for Quantum Computing

    NASA Astrophysics Data System (ADS)

    Raftery, James; Vrajitoarea, Andrei; Zhang, Gengyan; Leng, Zhaoqi; Srinivasan, Srikanth; Houck, Andrew

    Current state-of-the-art quantum computing experiments in the microwave regime use control pulses generated by modulating microwave tones with baseband signals generated by an arbitrary waveform generator (AWG). Recent advances in digital-to-analog conversion technology have made it possible to directly synthesize arbitrary microwave pulses with sampling rates of 65 gigasamples per second (GSa/s) or higher. These new ultra-wide-bandwidth AWGs could dramatically simplify the classical control chain for quantum computing experiments, presenting potential cost savings and reducing the number of components that need to be carefully calibrated. Here we use a Keysight M8195A AWG to study the viability of such a simplified scheme, demonstrating randomized benchmarking of a superconducting qubit with high fidelity.

  11. Sinking bubbles in stout beers

    NASA Astrophysics Data System (ADS)

    Lee, W. T.; Kaar, S.; O'Brien, S. B. G.

    2018-04-01

    A surprising phenomenon witnessed by many is the sinking bubbles seen in a settling pint of stout beer. Bubbles are less dense than the surrounding fluid so how does this happen? Previous work has shown that the explanation lies in a circulation of fluid promoted by the tilted sides of the glass. However, this work has relied heavily on computational fluid dynamics (CFD) simulations. Here, we show that the phenomenon of sinking bubbles can be predicted using a simple analytic model. To make the model analytically tractable, we work in the limit of small bubbles and consider a simplified geometry. The model confirms both the existence of sinking bubbles and the previously proposed mechanism.

  12. Three-dimensional turbulent-mixing-length modeling for discrete-hole coolant injection into a crossflow

    NASA Technical Reports Server (NTRS)

    Wang, C. R.; Papell, S. S.

    1983-01-01

    Three dimensional mixing length models of a flow field immediately downstream of coolant injection through a discrete circular hole at a 30 deg angle into a crossflow were derived from the measurements of turbulence intensity. To verify their effectiveness, the models were used to estimate the anisotropic turbulent effects in a simplified theoretical and numerical analysis to compute the velocity and temperature fields. With small coolant injection mass flow rate and constant surface temperature, numerical results of the local crossflow streamwise velocity component and surface heat transfer rate are consistent with the velocity measurement and the surface film cooling effectiveness distributions reported in previous studies.
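
    The mixing-length closure underlying such models can be recalled in its standard one-dimensional Prandtl form; the study's three-dimensional anisotropic models generalize this idea, and the expression below is textbook material rather than a formula taken from the paper:

    ```latex
    % Standard Prandtl mixing-length closure (1-D shear form):
    % eddy viscosity from a mixing length l_m and the mean shear,
    % used to model the turbulent shear stress -\overline{u'v'}.
    \nu_t = l_m^2 \left| \frac{\partial u}{\partial y} \right|,
    \qquad
    -\overline{u'v'} = \nu_t \, \frac{\partial u}{\partial y}
    ```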

  13. Three-dimensional turbulent-mixing-length modeling for discrete-hole coolant injection into a crossflow

    NASA Astrophysics Data System (ADS)

    Wang, C. R.; Papell, S. S.

    1983-09-01

    Three dimensional mixing length models of a flow field immediately downstream of coolant injection through a discrete circular hole at a 30 deg angle into a crossflow were derived from the measurements of turbulence intensity. To verify their effectiveness, the models were used to estimate the anisotropic turbulent effects in a simplified theoretical and numerical analysis to compute the velocity and temperature fields. With small coolant injection mass flow rate and constant surface temperature, numerical results of the local crossflow streamwise velocity component and surface heat transfer rate are consistent with the velocity measurement and the surface film cooling effectiveness distributions reported in previous studies.

  14. On the combinatorics of sparsification.

    PubMed

    Huang, Fenix Wd; Reidys, Christian M

    2012-10-22

    We study the sparsification of dynamic programming based on folding algorithms of RNA structures. Sparsification is a method that significantly improves the computation of minimum free energy (mfe) RNA structures. We provide a quantitative analysis of the sparsification of a particular decomposition rule, Λ∗. This rule splits an interval of RNA secondary and pseudoknot structures of fixed topological genus. Key for quantifying sparsifications is the size of the so-called candidate sets. Here we assume mfe-structures to be specifically distributed (see Assumption 1) within arbitrary and irreducible RNA secondary and pseudoknot structures of fixed topological genus. We then present a combinatorial framework which allows, by means of probabilities of irreducible sub-structures, to obtain the expectation of the Λ∗-candidate set w.r.t. a uniformly random input sequence. We compute these expectations for arc-based energy models via energy-filtered generating functions (GF) in the case of RNA secondary structures as well as RNA pseudoknot structures. Furthermore, for RNA secondary structures we also analyze a simplified loop-based energy model. Our combinatorial analysis is then compared to the expected number of Λ∗-candidates obtained from folding mfe-structures. In the case of the mfe-folding of RNA secondary structures with a simplified loop-based energy model our results imply that sparsification provides a significant, constant improvement of 91% (theory) to be compared to a 96% (experimental, simplified arc-based model) reduction. However, we do not observe a linear factor improvement. Finally, in the case of the "full" loop-energy model we can report a reduction of 98% (experiment). Sparsification was initially attributed a linear factor improvement. This conclusion was based on the so-called polymer-zeta property, which stems from interpreting polymer chains as self-avoiding walks. Subsequent findings however reveal that the O(n) improvement is not correct.
The combinatorial analysis presented here shows that, assuming a specific distribution (see Assumption 1) of mfe-structures within irreducible and arbitrary structures, the expected number of Λ∗-candidates is Θ(n²). However, the constant reduction is quite significant, being in the range of 96%. We furthermore show an analogous result for the sparsification of the Λ∗-decomposition rule for RNA pseudoknotted structures of genus one. Finally, we observe that the effect of sparsification is sensitive to the employed energy model.

  15. Outline of cost-benefit analysis and a case study

    NASA Technical Reports Server (NTRS)

    Kellizy, A.

    1978-01-01

    The methodology of cost-benefit analysis is reviewed and a case study involving solar cell technology is presented. Emphasis is placed on simplifying the technique in order to permit a technical person not trained in economics to undertake a cost-benefit study comparing alternative approaches to a given problem. The role of economic analysis in management decision making is discussed. In simplifying the methodology it was necessary to restrict the scope and applicability of this report. Additional considerations and constraints are outlined. Examples are worked out to demonstrate the principles. A computer program which performs the computational aspects appears in the appendix.

  16. Multi-level Hierarchical Poly Tree computer architectures

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug

    1990-01-01

    Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.

  17. 40-Gb/s PAM4 with low-complexity equalizers for next-generation PON systems

    NASA Astrophysics Data System (ADS)

    Tang, Xizi; Zhou, Ji; Guo, Mengqi; Qi, Jia; Hu, Fan; Qiao, Yaojun; Lu, Yueming

    2018-01-01

    In this paper, we demonstrate 40-Gb/s four-level pulse amplitude modulation (PAM4) transmission with 10 GHz devices and low-complexity equalizers for next-generation passive optical network (PON) systems. Simple feed-forward equalizer (FFE) and decision feedback equalizer (DFE) enable 20 km fiber transmission, while the high-complexity Volterra algorithm in combination with FFE and DFE can extend the transmission distance to 40 km. A simplified Volterra algorithm is proposed for reducing computational complexity. Simulation results show that the simplified Volterra algorithm reduces up to ∼75% of the computational complexity at a relatively low cost of only 0.4 dB power budget. At a forward error correction (FEC) threshold of 10^-3, we achieve 31.2 dB and 30.8 dB power budget over 40 km fiber transmission using the traditional FFE-DFE-Volterra and our simplified FFE-DFE-Volterra, respectively.
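
    The source of such a complexity saving can be illustrated by counting second-order Volterra kernel terms. The sketch below uses a generic near-diagonal truncation for illustration only; the paper's specific simplification may differ:

    ```python
    # Illustrative term-count comparison for a 2nd-order Volterra equalizer
    # with N taps. As an example of a simplification, keep only kernel terms
    # x[n-i]*x[n-j] with |i - j| <= d (near-diagonal products), which are
    # typically the dominant nonlinear contributions.
    def volterra2_terms(N):
        """Full 2nd-order kernel: N*(N+1)/2 symmetric product terms."""
        return N * (N + 1) // 2

    def near_diagonal_terms(N, d):
        """Only products whose tap indices differ by at most d."""
        return sum(1 for i in range(N) for j in range(i, N) if j - i <= d)

    N = 15
    full = volterra2_terms(N)            # 120 terms
    reduced = near_diagonal_terms(N, 1)  # 29 terms
    print(full, reduced, round(1 - reduced / full, 2))  # 120 29 0.76
    ```

    With 15 taps and d = 1 the nonlinear term count drops by roughly 76%, the same order of reduction reported in the abstract.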

  18. Simplified Calculation Of Solar Fluxes In Solar Receivers

    NASA Technical Reports Server (NTRS)

    Bhandari, Pradeep

    1990-01-01

    Simplified Calculation of Solar Flux Distribution on Side Wall of Cylindrical Cavity Solar Receivers computer program employs simple solar-flux-calculation algorithm for cylindrical-cavity-type solar receiver. Results compare favorably with those of more complicated programs. Applications include study of solar energy and transfer of heat, and space power/solar-dynamics engineering. Written in FORTRAN 77.

  19. Graphics and composite material computer program enhancements for SPAR

    NASA Technical Reports Server (NTRS)

    Farley, G. L.; Baker, D. J.

    1980-01-01

    User documentation is provided for additional computer programs developed for use in conjunction with SPAR. These programs plot digital data, simplify input for composite material section properties, and compute lamina stresses and strains. Sample problems are presented including execution procedures, program input, and graphical output.

  20. 29 CFR 548.100 - Introductory statement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... simplify bookkeeping and computation of overtime pay. 1 The regular rate is the average hourly earnings of... AUTHORIZATION OF ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Introduction § 548.100... requirements of computing overtime pay at the regular rate, 1 and to allow, under specific conditions, the use...

  1. A comparative study on different methods of automatic mesh generation of human femurs.

    PubMed

    Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A

    1998-01-01

    The aim of this study was to evaluate comparatively five methods for automating mesh generation (AMG) when used to mesh a human femur. The five AMG methods considered were: mapped mesh, which provides hexahedral elements through a direct mapping of the element onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh, which builds cubic 8-node elements directly from CT images; and hexa mesh, which automatically generates hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first model was useful to assess the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods in a more realistic condition. The femoral geometry was derived from a reference model (the "standardized femur") and the finite element analysis predictions were compared to experimental measurements. All methods were evaluated in terms of human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires a significant human effort but is very accurate and allows tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires a significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.

  2. Hypersonic Vehicle Propulsion System Simplified Model Development

    NASA Technical Reports Server (NTRS)

    Stueber, Thomas J.; Raitano, Paul; Le, Dzu K.; Ouzts, Peter

    2007-01-01

    This document addresses the modeling task plan for the hypersonic GN&C GRC team members. The overall propulsion system modeling task plan is a multi-step process, and the task plan identified in this document addresses the first steps (short-term modeling goals). The procedures and tools produced from this effort will be useful for creating simplified dynamic models applicable to a hypersonic vehicle propulsion system. The document continues with the GRC short-term modeling goal. Next, a general description of the desired simplified model is presented along with simulations that are available to varying degrees. The simulations may be available in electronic form (FORTRAN, CFD, MatLab,...) or in paper form in published documents. Finally, roadmaps outlining possible avenues towards realizing a simplified model are presented.

  3. Nonlinear Visco-Elastic Response of Composites via Micro-Mechanical Models

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Sridharan, Srinivasan

    2005-01-01

    Micro-mechanical models for a study of nonlinear visco-elastic response of composite laminae are developed and their performance compared. A single integral constitutive law proposed by Schapery and subsequently generalized to multi-axial states of stress is utilized in the study for the matrix material. This is used in conjunction with a computationally facile scheme in which hereditary strains are computed using a recursive relation suggested by Henriksen. Composite response is studied using two competing micro-models, viz. a simplified Square Cell Model (SSCM) and a Finite Element based self-consistent Cylindrical Model (FECM). The algorithm is developed assuming that the material response computations are carried out in a module attached to a general purpose finite element program used for composite structural analysis. It is shown that the SSCM as used in investigations of material nonlinearity can involve significant errors in the prediction of transverse Young's modulus and shear modulus. The errors in the elastic strains thus predicted are of the same order of magnitude as the creep strains accruing due to visco-elasticity. The FECM on the other hand does appear to perform better both in the prediction of elastic constants and the study of creep response.
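
    The recursive hereditary-strain idea credited to Henriksen can be sketched for a one-term Prony creep compliance. This is a minimal illustration with invented material constants, not the authors' multi-axial implementation:

    ```python
    import math

    # For a one-term Prony creep compliance D(t) = D0 + D1*(1 - exp(-t/tau)),
    # the hereditary convolution integral reduces to a single internal
    # variable q updated recursively each step, so no stress history needs
    # to be stored. The update below is exact when the stress is held
    # constant over each time step. All material values are illustrative.
    def creep_strain_history(stress, dt, D0, D1, tau):
        a = math.exp(-dt / tau)              # per-step decay factor
        q, eps = 0.0, []
        for s in stress:
            eps.append(D0 * s + q)           # instantaneous + hereditary part
            q = a * q + D1 * (1.0 - a) * s   # recursive hereditary update
        return eps

    # Constant stress: the recursion must match the closed-form creep curve.
    sigma, dt, D0, D1, tau = 2.0, 0.1, 1.0, 0.5, 1.0
    eps = creep_strain_history([sigma] * 50, dt, D0, D1, tau)
    exact = sigma * (D0 + D1 * (1.0 - math.exp(-49 * dt / tau)))
    print(abs(eps[-1] - exact) < 1e-12)  # True
    ```

    The recursion is what makes the scheme "computationally facile": cost per step is constant instead of growing with the length of the stress history.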

  4. Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition (L)

    NASA Astrophysics Data System (ADS)

    Scharenborg, Odette; ten Bosch, Louis; Boves, Lou; Norris, Dennis

    2003-12-01

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189-234 (1994)]. Experiments based on "real-life" speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.

  5. An improved task-role-based access control model for G-CSCW applications

    NASA Astrophysics Data System (ADS)

    He, Chaoying; Chen, Jun; Jiang, Jie; Han, Gang

    2005-10-01

    Access control is an important and popular security mechanism for multi-user applications. GIS-based Computer Supported Cooperative Work (G-CSCW) application is one of such applications. This paper presents an improved Task-Role-Based Access Control (X-TRBAC) model for G-CSCW applications. The new model inherits the basic concepts of the old ones, such as role and task. Moreover, it has introduced two concepts, i.e. object hierarchy and operation hierarchy, and the corresponding rules to improve the efficiency of permission definition in access control models. The experiments show that the method can simplify the definition of permissions, and it is more applicable for G-CSCW applications.
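
    A minimal sketch of task-role-based permission checking with an object hierarchy is given below. The names and rule structure are hypothetical and far simpler than the X-TRBAC model itself; the point is only how a hierarchy shrinks the number of permissions that must be defined:

    ```python
    # Hypothetical sketch: permissions are granted to (role, task) pairs on
    # objects, and an object hierarchy lets a grant on a parent object cover
    # all of its descendants, so fewer permissions need to be spelled out.
    PARENT = {"layer/roads": "map", "layer/rivers": "map"}   # child -> parent
    GRANTS = {("editor", "update_features"): {("map", "edit")}}

    def ancestors(obj):
        """Yield obj and each of its ancestors up the object hierarchy."""
        while obj is not None:
            yield obj
            obj = PARENT.get(obj)

    def allowed(role, task, obj, op):
        """Permit op if any ancestor of obj carries a matching grant."""
        granted = GRANTS.get((role, task), set())
        return any((o, op) in granted for o in ancestors(obj))

    print(allowed("editor", "update_features", "layer/roads", "edit"))    # True
    print(allowed("editor", "update_features", "layer/roads", "delete"))  # False
    ```

    A single grant on "map" covers every layer beneath it, which is the efficiency gain the abstract attributes to object and operation hierarchies.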

  6. Modifications to the steady-state 41-node thermoregulatory model including validation of the respiratory and diffusional water loss equations

    NASA Technical Reports Server (NTRS)

    1974-01-01

    After the simplified version of the 41-Node Stolwijk Metabolic Man Model was implemented on the Sigma 3 and UNIVAC 1110 computers in batch mode, it became desirable to make certain revisions. First, the availability of time-sharing terminals makes it possible to provide the capability and flexibility of conversational interaction between user and model. Secondly, recent physiological studies show the need to revise certain parameter values contained in the model. Thirdly, it was desired to make quantitative and accurate predictions of evaporative water loss for humans in an orbiting space station. The results of the first phase of this effort are reported.

  7. A Parametric Study of Unsteady Rotor-Stator Interaction in a Simplified Francis Turbine

    NASA Astrophysics Data System (ADS)

    Wouden, Alex; Cimbala, John; Lewis, Bryan

    2011-11-01

    CFD analysis is becoming a critical stage in the design of hydroturbines. However, its capability to represent unsteady flow interactions between the rotor and stator (which requires a 360-degree, mesh-refined model of the turbine passage) is hindered. For CFD to become a more effective tool in predicting the performance of a hydroturbine, the key interactions between the rotor and stator need to be understood using current numerical methods. As a first step towards evaluating this unsteady behavior without the burden of a computationally expensive domain, the stator and Francis-type rotor blades are reduced to flat plates. Local and global variables are compared using periodic, semi-periodic, and 360-degree geometric models and various turbulence models (k-omega, k-epsilon, and Spalart-Allmaras). The computations take place within the OpenFOAM® environment and utilize a general grid interface (GGI) between the rotor and stator computational domains. The rotor computational domain is capable of dynamic rotation. The results demonstrate some of the strengths and limitations of utilizing CFD for hydroturbine analysis. These case studies will also serve as tutorials to help others learn how to use CFD for turbomachinery. This research is funded by a grant from the DOE.

  8. Simplified gas sensor model based on AlGaN/GaN heterostructure Schottky diode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Subhashis, E-mail: subhashis.ds@gmail.com; Majumdar, S.; Kumar, R.

    2015-08-28

    Physics-based modeling of an AlGaN/GaN heterostructure Schottky diode gas sensor has been investigated for high sensitivity and linearity of the device. Here the surface and heterointerface properties are greatly exploited. The dependence of the two-dimensional electron gas (2DEG) upon the surface charges is mainly utilized. The Schottky diode has been simulated in a Technology Computer Aided Design (TCAD) tool and I-V curves generated; from these curves, a 76% response has been recorded in the presence of 500 ppm gas at a biasing voltage of 0.95 V.

  9. A three-ions model of electrodiffusion kinetics in a nanochannel

    NASA Astrophysics Data System (ADS)

    Sebechlebská, Táňa; Neogrády, Pavel; Valent, Ivan

    2016-10-01

    Nanoscale electrodiffusion transport is involved in many electrochemical, technological and biological processes. Developments in computer power and numerical algorithms allow for solving full time-dependent Nernst-Planck and Poisson equations without simplifying approximations. We simulate spatio-temporal profiles of concentration and electric potential changes after a potential jump in a 10 nm channel with two cations (with opposite concentration gradients and different mobilities) and one anion (of uniform concentration). The temporal dynamics shows three exponential phases and damped oscillations of the electric potential. Despite the absence of surface charges in the studied model, an asymmetric current-voltage characteristic was observed.
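
    The full time-dependent equations referred to above are, in standard form (for species concentration c_i, diffusivity D_i, valence z_i, and electric potential phi):

    ```latex
    % Nernst-Planck flux for each ionic species i, species continuity,
    % and the Poisson equation coupling charge density to the potential.
    J_i = -D_i \left( \nabla c_i + \frac{z_i F}{RT}\, c_i \nabla \phi \right),
    \qquad
    \frac{\partial c_i}{\partial t} = -\nabla \cdot J_i,
    \qquad
    \nabla^2 \phi = -\frac{F}{\varepsilon} \sum_i z_i c_i
    ```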

  10. Modeling of the JET-EP ICRH antenna

    NASA Astrophysics Data System (ADS)

    Koch, R.; Amarante, G. S.; Heuraux, S.; Pécoul, S.; Louche, F.

    2001-10-01

    The new ICRH antenna planned for the Enhanced Performance phase of JET (JET-EP) is analyzed using the antenna coupling code ICANT, which self-consistently determines the currents on all antenna parts. This study addresses, using a simplified antenna model, the question of the impact on the coupling of the poloidal segmentation of the conductors, of their width and of their poloidal phasing. We also address the question of the relation between the imaginary part of the power computed by the code and the input impedance of the antenna. An example of current distribution on the complete antenna in vacuum is also shown.

  11. Examination of Wave Speed in Rotating Detonation Engines Using Simplified Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    2018-01-01

    A simplified, two-dimensional, computational fluid dynamic (CFD) simulation with a reactive Euler solver is used to examine possible causes for the low detonation wave propagation speeds that are consistently observed in air-breathing rotating detonation engine (RDE) experiments. Intense, small-scale turbulence is proposed as the primary mechanism. While the solver cannot model this turbulence, it can be used to examine its most likely and profound effect: a substantial enlargement of the reaction zone, or equivalently, an effective reduction in the chemical reaction rate. It is demonstrated that in the unique flowfield of the RDE, a reduction in reaction rate leads to a reduction in the detonation speed. A subsequent test of reduced reaction rate in a purely one-dimensional pulsed detonation engine (PDE) flowfield yields no reduction in wave speed. The reasons for this are explained. The impact of reduced wave speed on RDE performance is then examined and found to be minimal. Two other potential mechanisms are briefly examined: heat transfer and reactive-mixture non-uniformity. In the context of the simulation used for this study, both mechanisms are shown to have a negligible effect on either wave speed or performance.

  12. Compression response of tri-axially braided textile composites

    NASA Astrophysics Data System (ADS)

    Song, Shunjun

    2007-12-01

    This thesis is concerned with characterizing the compression stiffness and compression strength of 2D tri-axially braided textile composites (2DTBC). Two types of 2DTBC are considered, differing only in resin type, while the textile fiber architecture is kept the same with bias tows at 45 degrees to the axial tows. Experimental, analytical and computational methods are described based on the results generated in this study. Since these composites are manufactured using resin transfer molding, the intended and as-manufactured composite samples differ in their microstructure due to consolidation and thermal history effects in the manufacturing cycle. These imperfections are measured and the effect of these imperfections on the compression stiffness and strength is characterized. Since the matrix is a polymer material, the nonuniform thermal history undergone by the polymer at manufacturing (within the composite and in the presence of fibers) renders its properties non-homogeneous. The effects of these non-homogeneities are captured through the definition of an equivalent in-situ matrix material. A method to characterize the mechanical properties of the in-situ matrix is also described. Fiber tow buckling, fiber tow kinking and matrix microcracking are all observed in the experiments. These failure mechanisms are captured through a computational model that uses the finite element (FE) technique to discretize the structure. The FE equations are solved using the commercial software ABAQUS version 6.5. The fiber tows are modeled as transversely isotropic elastic-plastic solids and the matrix is modeled as an isotropic elastic-plastic solid with and without microcracking damage. Because the 2DTBC is periodic, the question of how many repeat units are necessary to model the compression stiffness and strength is examined. Based on the computational results, the correct representative unit cell for this class of materials is identified. 
The computational models and results presented in the thesis provide a means to assess the compressive strength of 2DTBC and its dependence on various microstructural parameters. The essential features (for example, fiber kinking) of 2DTBC under compressive loading are captured accurately and the results are validated by the compression experiments. Due to the requirement of large computational resources for the unit cell studies, simplified models that use less computer resources but sacrifice some accuracy are presented for use in engineering design. A combination of the simplified models is shown to provide a good prediction of the salient features (peak strength and plateau strength) of these materials under compression loading. The incorporation of matrix strain rate effects, a study of the effect of the bias tow angle and the inclusion of viscoelastic/viscoplastic behavior for the study of fatigue are suggested as extensions to this work.

  13. Simplified models for dark matter searches at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdallah, Jalal; Araujo, Henrique; Arbey, Alexandre

This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediators are discussed, as well as realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  14. Simplified Models for Dark Matter Searches at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdallah, Jalal

This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediators are discussed, as well as realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  15. Simplified Models for Dark Matter Searches at the LHC

    DOE PAGES

    Abdallah, Jalal

    2015-08-11

This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediators are discussed, as well as realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.
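As a point of reference for the s-channel scenarios mentioned above, the canonical spin-0 simplified model couples a scalar mediator to quarks and to Dirac dark matter. The schematic interaction Lagrangian below is the standard textbook form used in this literature, not an expression quoted from the abstract:

```latex
% Schematic s-channel spin-0 simplified model: a scalar mediator \phi
% couples Dirac dark matter \chi to Standard Model quarks q with
% couplings g_\chi and g_q (the latter often taken Yukawa-weighted).
\mathcal{L}_{\mathrm{int}} \supset
    - g_\chi \,\phi\, \bar{\chi}\chi
    - \sum_{q} g_q \,\phi\, \bar{q} q
```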

  16. An alternative to fully coupled reactive transport simulations for long-term prediction of chemical reactions in complex geological systems

    NASA Astrophysics Data System (ADS)

    De Lucia, Marco; Kempka, Thomas; Kühn, Michael

    2014-05-01

Fully coupled reactive transport simulations involving multiphase hydrodynamics and chemical reactions in heterogeneous settings are extremely challenging from a computational point of view. This often leads to oversimplification of the investigated system: coarse spatial discretization, to keep the number of elements on the order of a few thousand, and simplified chemistry that disregards many potentially important reactions. A novel approach for coupling non-reactive hydrodynamic simulations with the outcome of single batch geochemical simulations was therefore introduced to assess the potential long-term mineral trapping at the Ketzin pilot site for underground CO2 storage in Germany [1],[2]. The advantage of the coupling is the ability to use multi-million grid non-reactive hydrodynamic simulations on one side and a few batch 0D geochemical simulations on the other, so that the complexity of neither system needs to be reduced. This contribution shows the approach which was taken to validate this simplified coupling scheme. The procedure involved batch simulations of the reference geochemical model, then performing both non-reactive and fully coupled 1D and 3D reactive transport simulations, and finally applying the simplified coupling scheme based on the non-reactive and geochemical batch models. The TOUGHREACT/ECO2N [3] simulator was adopted for the validation. The degree of refinement of the spatial grid, the complexity and velocity of the mineral reactions, and a cut-off value for the minimum concentration of dissolved CO2 allowed to form precipitates in the simplified approach were found to be the governing parameters for the convergence of the two schemes. Systematic discrepancies between the approaches are not reducible, simply because there is no feedback between chemistry and hydrodynamics, and can reach 20-30 % in unfavourable cases. 
However, even such a discrepancy is, in our opinion, completely acceptable given the amount of uncertainty underlying the geochemical models. References: [1] Klein, E., De Lucia, M., Kempka, T., Kühn, M. 2013. Evaluation of long-term mineral trapping at the Ketzin pilot site for CO2 storage: an integrative approach using geochemical modelling and reservoir simulation. International Journal of Greenhouse Gas Control 19: 720-730, doi:10.1016/j.ijggc.2013.05.014. [2] Kempka, T., Klein, E., De Lucia, M., Tillner, E., Kühn, M. 2013. Assessment of Long-term CO2 Trapping Mechanisms at the Ketzin Pilot Site (Germany) by Coupled Numerical Modelling. Energy Procedia 37: 5419-5426, doi:10.1016/j.egypro.2013.06.460. [3] Xu, T., Spycher, N., Sonnenthal, E., Zhang, G., Zheng, L., Pruess, K. 2010. TOUGHREACT Version 2.0: A simulator for subsurface reactive transport under non-isothermal multiphase flow conditions. Computers & Geosciences 37(6), doi:10.1016/j.cageo.2010.10.007.
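The simplified coupling scheme described in this record can be sketched in a few lines. Everything below (function names, the 10,000-year mineralization ramp, the cut-off value, the concentration field) is an invented stand-in for the real batch geochemistry and transport results, meant only to show the structure of the coupling:

```python
def batch_precipitate_fraction(t_years):
    """Stand-in for a single 0D batch geochemical run: fraction of
    dissolved CO2 mineralized after t_years (hypothetical linear ramp
    saturating at 10,000 years)."""
    return min(1.0, t_years / 10000.0)

def couple(co2_field, t_years, cutoff=1e-4):
    """Apply the batch result cell-by-cell to a non-reactive transport
    field of dissolved CO2; cells below the cut-off concentration are
    not allowed to form precipitates."""
    frac = batch_precipitate_fraction(t_years)
    return [c * frac if c >= cutoff else 0.0 for c in co2_field]

field = [0.0, 5e-5, 2e-3, 1e-2]   # mol/L dissolved CO2 in four cells
print(couple(field, 5000.0))      # half of the eligible CO2 mineralized
```

Because the chemistry never feeds back into the hydrodynamics, this one-way mapping is cheap but carries exactly the systematic discrepancy the abstract quantifies.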

  17. GPU-accelerated element-free reverse-time migration with Gauss points partition

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong

    2018-06-01

    An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, in EFM, due to improper computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, the method is difficult to apply to seismic modelling and RTM for a large velocity model. To solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and attempt to simplify the operations by solving the linear equations with CULA solver. To improve the computation efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
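The lumped mass matrix mentioned at the end of this abstract is commonly built by row-summing the consistent mass matrix, which replaces a sparse linear solve by a trivial diagonal scaling. A minimal sketch (not the authors' code; the element values are the textbook ones for a 1D two-node linear element):

```python
def lump(mass):
    """Row-sum lumping: replace a consistent mass matrix by the
    diagonal of its row sums (returned here as a plain list)."""
    return [sum(row) for row in mass]

# Consistent mass matrix of a 1D two-node linear element of length h
# and density rho: (rho*h/6) * [[2, 1], [1, 2]]  (standard result).
rho, h = 1.0, 1.0
c = rho * h / 6.0
consistent = [[2.0 * c, 1.0 * c],
              [1.0 * c, 2.0 * c]]

diag = lump(consistent)             # each entry equals rho*h/2
inv_diag = [1.0 / d for d in diag]  # "inversion" is now elementwise
print(diag)
```

This is why lumping improves efficiency in explicit time stepping: applying the inverse mass matrix costs one division per degree of freedom instead of a sparse solve.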

  18. Analysis hierarchical model for discrete event systems

    NASA Astrophysics Data System (ADS)

    Ciortea, E. M.

    2015-11-01

This paper presents a hierarchical model based on discrete event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network spanning from the highest conceptual level to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed in this paper using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementation of the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local control and Petri model of one subsystem of the global robotic system. Since Petri models are simple to implement on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets; discrete event modelling is a pragmatic tool for modelling industrial systems, and Petri nets are used here because the system under study is a discrete event system. To capture auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulation of the proposed robotic system using timed Petri nets offers the opportunity to view the timing of the robotic system; from the transport and transmission times obtained by spot measurements, graphics are obtained showing the average time for the transport activity for individual sets of finished products.

  19. Extracting surface diffusion coefficients from batch adsorption measurement data: application of the classic Langmuir kinetics model.

    PubMed

    Chu, Khim Hoong

    2017-11-09

Surface diffusion coefficients may be estimated by fitting solutions of a diffusion model to batch kinetic data. For non-linear systems, a numerical solution of the diffusion model's governing equations is generally required. We report here the application of the classic Langmuir kinetics model to extract surface diffusion coefficients from batch kinetic data. The use of the Langmuir kinetics model in lieu of the conventional surface diffusion model allows derivation of an analytical expression. The parameter estimation procedure requires determining the Langmuir rate coefficient from which the pertinent surface diffusion coefficient is calculated. Surface diffusion coefficients within the 10^-9 to 10^-6 cm^2/s range obtained by fitting the Langmuir kinetics model to experimental kinetic data taken from the literature are found to be consistent with the corresponding values obtained from the traditional surface diffusion model. The virtue of this simplified parameter estimation method is that it reduces the computational complexity, as the analytical expression involves only an algebraic equation in closed form which is easily evaluated by spreadsheet computation.
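A hedged sketch of the parameter-estimation idea: integrate the Langmuir kinetics rate law dq/dt = k_a·C·(q_m − q) − k_d·q for trial rate coefficients and keep the best fit to the batch uptake data. The paper itself uses an analytical closed-form expression; the numerical grid search below, with invented numbers, only illustrates the fitting concept:

```python
def simulate_uptake(k_a, k_d, q_m, C, times, dt=0.01):
    """Forward-Euler integration of the Langmuir rate law
    dq/dt = k_a*C*(q_m - q) - k_d*q at constant bulk concentration C;
    returns q(t) at the requested times."""
    q, t, out = 0.0, 0.0, []
    for target in times:
        while t < target:
            q += dt * (k_a * C * (q_m - q) - k_d * q)
            t += dt
        out.append(q)
    return out

def fit_ka(data_t, data_q, k_d, q_m, C, grid):
    """Grid search for the adsorption rate coefficient k_a by least
    squares against batch uptake data."""
    def sse(k_a):
        sim = simulate_uptake(k_a, k_d, q_m, C, data_t)
        return sum((s - d) ** 2 for s, d in zip(sim, data_q))
    return min(grid, key=sse)

# Synthetic "measurements" generated with k_a = 0.5, then recovered:
t_obs = [1.0, 2.0, 4.0]
q_obs = simulate_uptake(0.5, 0.1, 1.0, 1.0, t_obs)
print(fit_ka(t_obs, q_obs, 0.1, 1.0, 1.0, [0.1, 0.3, 0.5, 0.7]))  # -> 0.5
```

The recovered rate coefficient would then be converted to a surface diffusion coefficient via the analytical relation given in the paper.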

  20. Modal kinematics for multisection continuum arms.

    PubMed

    Godage, Isuru S; Medrano-Cerda, Gustavo A; Branson, David T; Guglielmino, Emanuele; Caldwell, Darwin G

    2015-05-13

This paper presents a novel spatial kinematic model for multisection continuum arms based on mode shape functions (MSFs). Modal methods have been used in many disciplines, from finite element methods to structural analysis, to approximate complex and nonlinear parametric variations with simple mathematical functions. Given certain constraints and required accuracy, this helps to simplify complex phenomena with numerically efficient implementations leading to fast computations. A successful application of modal approximation techniques to develop a new modal kinematic model for general variable-length multisection continuum arms is discussed. The proposed method solves the limitations associated with previous models and introduces a new approach for readily deriving exact, singularity-free and unique MSFs that simplifies the approach and avoids mode switching. The model is able to simulate spatial bending as well as straight arm motions (i.e., pure elongation/contraction), and introduces inverse position and orientation kinematics for multisection continuum arms. A kinematic decoupling feature, splitting position and orientation inverse kinematics, is introduced; this type of decoupling has not been presented for these types of robotic arms before. The model also carefully accounts for physical constraints in the joint space to provide enhanced insight into practical mechanics and to impose actuator mechanical limitations onto the kinematics, thus generating fully realizable results. The proposed method is easily applicable to a broad spectrum of continuum arm designs.

  1. Quantitative Investigation of the Technologies That Support Cloud Computing

    ERIC Educational Resources Information Center

    Hu, Wenjin

    2014-01-01

    Cloud computing is dramatically shaping modern IT infrastructure. It virtualizes computing resources, provides elastic scalability, serves as a pay-as-you-use utility, simplifies the IT administrators' daily tasks, enhances the mobility and collaboration of data, and increases user productivity. We focus on providing generalized black-box…

  2. The Simulation of an Oxidation-Reduction Titration Curve with Computer Algebra

    ERIC Educational Resources Information Center

    Whiteley, Richard V., Jr.

    2015-01-01

    Although the simulation of an oxidation/reduction titration curve is an important exercise in an undergraduate course in quantitative analysis, that exercise is frequently simplified to accommodate computational limitations. With the use of readily available computer algebra systems, however, such curves for complicated systems can be generated…

  3. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    NASA Astrophysics Data System (ADS)

    Jacques, Diederik

    2017-04-01

As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for these interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity of both the tool itself and of the specific conceptual model can increase rapidly. Therefore, numerical verification of such models is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues - excluding benchmarks defined for purely mathematical reasons. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different subproblems. The latter typically benchmark individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. 
Furthermore, it illustrates the use of these types of models for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.

  4. Gate-controlled-diodes in silicon-on-sapphire: A computer simulation

    NASA Technical Reports Server (NTRS)

    Gassaway, J. D.

    1974-01-01

The computer simulation of the electrical behavior of a Gate-Controlled Diode (GCD) fabricated in Silicon-On-Sapphire (SOS) is described. A procedure for determining lifetime profiles from capacitance and reverse current measurements on the GCD was established. Chapter 1 discusses the SOS structure and points out the need for lifetime profiles to assist in device design for GCDs and bipolar transistors. Chapter 2 presents the one-dimensional analytical formulas for electrostatic analysis of the SOS-GCD, which are useful for data interpretation and for setting boundary conditions on a simplified two-dimensional analysis. Chapter 3 gives the results of a two-dimensional analysis which treats the field as one-dimensional until the silicon film is depleted and the field penetrates the sapphire substrate. Chapter 4 describes a more complete two-dimensional model and gives results of programs implementing the model.

  5. Real-time robot deliberation by compilation and monitoring of anytime algorithms

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo

    1994-01-01

Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of the compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real-time robotic systems that automatically adjust resource allocation to yield optimum performance.
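A toy illustration (not from the paper) of the anytime idea: an iterative estimate whose answer is usable at any interruption point, plus a crude monitor that stops deliberation once the marginal improvement falls below a threshold:

```python
import math

def anytime_pi(budget):
    """Leibniz-series estimate of pi; usable after any number of terms,
    with error shrinking roughly like 1/budget."""
    s, sign = 0.0, 1.0
    for k in range(budget):
        s += sign / (2 * k + 1)
        sign = -sign
    return 4.0 * s

def monitor(threshold=1e-5):
    """Crude deliberation monitor: double the computation budget until
    the estimate changes by less than `threshold`, then commit."""
    budget, prev = 1, anytime_pi(1)
    while True:
        budget *= 2
        cur = anytime_pi(budget)
        if abs(cur - prev) < threshold:
            return cur, budget
        prev = cur

est, used = monitor()
print(used, abs(est - math.pi))  # quality bought with computation time
```

A real robot monitor would weigh the expected quality gain against elapsed real time rather than iteration counts, but the interrupt-anytime structure is the same.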

  6. Development and evaluation of a fault-tolerant multiprocessor (FTMP) computer. Volume 4: FTMP executive summary

    NASA Technical Reports Server (NTRS)

    Smith, T. B., III; Lala, J. H.

    1984-01-01

The FTMP architecture is a high-reliability computer concept modeled after a homogeneous multiprocessor architecture. Elements of the FTMP are operated in tight synchronism with one another, and hardware fault-detection and fault-masking are provided which are transparent to the software. Operating system design and user software design are thus greatly simplified. Performance of the FTMP is also comparable to that of a simplex equivalent due to the efficiency of the fault-handling hardware. The FTMP project constructed an engineering module of the FTMP, programmed the machine, and extensively tested the architecture through fault injection and other stress testing. This testing confirmed the soundness of the FTMP concepts.

  7. Pteros: fast and easy to use open-source C++ library for molecular analysis.

    PubMed

    Yesylevskyy, Semen O

    2012-07-15

An open-source library, Pteros, for molecular modeling and analysis of molecular dynamics trajectories in the C++ programming language is introduced. Pteros provides a number of routine analysis operations ranging from reading and writing trajectory files and geometry transformations to structural alignment and computation of nonbonded interaction energies. The library features asynchronous trajectory reading and parallel execution of several analysis routines, which greatly simplifies development of computationally intensive trajectory analysis algorithms. The Pteros programming interface is very simple and intuitive, while the source code is well documented and easily extendible. Pteros is available for free under the open-source Artistic License from http://sourceforge.net/projects/pteros/. Copyright © 2012 Wiley Periodicals, Inc.

  8. Simplifying silicon burning: Application of quasi-equilibrium to (alpha) network nucleosynthesis

    NASA Technical Reports Server (NTRS)

    Hix, W. R.; Thielemann, F.-K.; Khokhlov, A. M.; Wheeler, J. C.

    1997-01-01

While the need for accurate calculation of nucleosynthesis and the resulting rate of thermonuclear energy release within hydrodynamic models of stars and supernovae is clear, the computational expense of these nucleosynthesis calculations often forces a compromise in accuracy to reduce the computational cost. To redress this trade-off of accuracy for speed, the authors present an improved nuclear network which takes advantage of quasi-equilibrium in order to reduce the number of independent nuclei, and hence the computational cost of nucleosynthesis, without significant reduction in accuracy. In this paper they discuss the first application of this method, the further reduction in size of the minimal alpha network. The resultant QSE-reduced alpha network is twice as fast as the conventional alpha network it replaces and requires the tracking of half as many abundance variables, while accurately estimating the rate of energy generation. Such a reduction in cost is particularly necessary for future generations of multi-dimensional models for supernovae.

  9. Remote control system for high-performance computer simulation of crystal growth by the PFC method

    NASA Astrophysics Data System (ADS)

    Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei

    2017-04-01

Modeling of the crystallization process by the phase field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of computer simulation of the crystallization process by the PFC method is investigated. To solve problems using this method, it is necessary to use high-performance computing clusters, data storage systems and other, often expensive, complex computer systems. Access to such resources is often limited, unstable and accompanied by various administrative problems. In addition, the variety of software and settings of different computing clusters sometimes does not allow researchers to use unified program code, so the code must be adapted for each configuration of the computer complex. The practical experience of the authors has shown that the creation of a special control system for computing, with the possibility of remote use, can greatly simplify the implementation of simulations and increase the performance of scientific research. In the current paper we show the principal idea of such a system and justify its efficiency.

  10. Communication: Biological applications of coupled-cluster frozen-density embedding

    NASA Astrophysics Data System (ADS)

    Heuser, Johannes; Höfener, Sebastian

    2018-04-01

    We report the implementation of the Laplace-transform scaled opposite-spin (LT-SOS) resolution-of-the-identity second-order approximate coupled-cluster singles and doubles (RICC2) combined with frozen-density embedding for excitation energies and molecular properties. In the present work, we furthermore employ the Hartree-Fock density for the interaction energy leading to a simplified Lagrangian which is linear in the Lagrangian multipliers. This approximation has the key advantage of a decoupling of the coupled-cluster amplitude and multipliers, leading also to a significant reduction in computation time. Using the new simplified Lagrangian in combination with efficient wavefunction models such as RICC2 or LT-SOS-RICC2 and density-functional theory (DFT) for the environment molecules (CC2-in-DFT) enables the efficient study of biological applications such as the rhodopsin and visual cone pigments using ab initio methods as routine applications.

  11. A simplified real time method to forecast semi-enclosed basins storm surge

    NASA Astrophysics Data System (ADS)

    Pasquali, D.; Di Risio, M.; De Girolamo, P.

    2015-11-01

Semi-enclosed basins are often prone to storm surge events. Indeed, their meteorological exposure, the presence of a large continental shelf and their shape can lead to strong sea level set-up. A real-time system aimed at forecasting storm surge may be of great help to protect human activities (i.e. to forecast flooding due to storm surge events), to manage ports and to safeguard coastal safety. This paper aims at illustrating a simple method able to forecast storm surge events in semi-enclosed basins in real time. The method is based on a mixed approach in which the results obtained by means of a simplified physics-based model with low computational costs are corrected by means of statistical techniques. The proposed method is applied to a point of interest located in the northern part of the Adriatic Sea. The comparison of forecasted levels against observed values shows the satisfactory reliability of the forecasts.
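The mixed physics-plus-statistics approach can be sketched as a cheap model forecast corrected by a regression fitted to past forecast errors. All numbers below are synthetic, and the hand-rolled least-squares fit merely stands in for the paper's statistical correction:

```python
def fit_linear(x, y):
    """Ordinary least squares for y ≈ a*x + b (hand-rolled)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical past events: raw physics-model surge forecasts vs
# observed sea levels (metres) at the point of interest.
physics  = [0.30, 0.50, 0.80, 1.10]
observed = [0.42, 0.62, 0.95, 1.30]
a, b = fit_linear(physics, observed)

def corrected_forecast(raw):
    """Statistically corrected real-time forecast."""
    return a * raw + b

print(round(corrected_forecast(0.70), 3))
```

The correction is recomputed as new observations arrive, so the cheap physics model never needs to be made more complex.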

  12. Modeling the pharyngeal pressure during adult nasal high flow therapy.

    PubMed

    Kumar, Haribalan; Spence, Callum J T; Tawhai, Merryn H

    2015-12-01

    Subjects receiving nasal high flow (NHF) via wide-bore nasal cannula may experience different levels of positive pressure depending on the individual response to NHF. In this study, airflow in the nasal airway during NHF-assisted breathing is simulated and nasopharyngeal airway pressure numerically computed, to determine whether the relationship between NHF and pressure can be described by a simple equation. Two geometric models are used for analysis. In the first, 3D airway geometry is reconstructed from computed tomography images of an adult nasal airway. For the second, a simplified geometric model is derived that has the same cross-sectional area as the complex model, but is more readily amenable to analysis. Peak airway pressure is correlated as a function of nasal valve area, nostril area and cannula flow rate, for NHF rates of 20, 40 and 60 L/min. Results show that airway pressure is related by a power law to NHF rate, valve area, and nostril area. Copyright © 2015 Elsevier B.V. All rights reserved.
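The reported power-law relationship can be recovered from data by linear regression in log space. The sketch below fits P = k·Q^a to synthetic (flow, pressure) pairs; the coefficient 0.002 and exponent 2 are invented for illustration, not taken from the study:

```python
import math

def fit_power_law(Q, P):
    """Return (k, a) with P ≈ k * Q**a, via least squares in log space."""
    x = [math.log(q) for q in Q]
    y = [math.log(p) for p in P]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return math.exp(my - a * mx), a

# Synthetic (flow, peak pressure) pairs from the invented law P = 0.002*Q**2,
# sampled at the NHF rates used in the study (20, 40 and 60 L/min):
Q = [20.0, 40.0, 60.0]
P = [0.002 * q ** 2 for q in Q]
k, a = fit_power_law(Q, P)
print(round(k, 5), round(a, 3))  # recovers 0.002 and 2.0
```

The same log-linear fit extends to multiple predictors (valve area, nostril area) by multivariate regression on their logarithms.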

  13. Sharply curved turn around duct flow predictions using spectral partitioning of the turbulent kinetic energy and a pressure modified wall law

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1986-01-01

    Computational predictions of turbulent flow in sharply curved 180 degree turn around ducts are presented. The CNS2D computer code is used to solve the equations of motion for two-dimensional incompressible flows transformed to a nonorthogonal body-fitted coordinate system. This procedure incorporates the pressure velocity correction algorithm SIMPLE-C to iteratively solve a discretized form of the transformed equations. A multiple scale turbulence model based on simplified spectral partitioning is employed to obtain closure. Flow field predictions utilizing the multiple scale model are compared to features predicted by the traditional single scale k-epsilon model. Tuning parameter sensitivities of the multiple scale model applied to turn around duct flows are also determined. In addition, a wall function approach based on a wall law suitable for incompressible turbulent boundary layers under strong adverse pressure gradients is tested. Turn around duct flow characteristics utilizing this modified wall law are presented and compared to results based on a standard wall treatment.

  14. A simplified Forest Inventory and Analysis database: FIADB-Lite

    Treesearch

    Patrick D. Miles

    2008-01-01

    This publication is a simplified version of the Forest Inventory and Analysis Data Base (FIADB) for users who do not need to compute sampling errors and may find the FIADB unnecessarily complex. Possible users include GIS specialists who may be interested only in identifying and retrieving geographic information and per acre values for the set of plots used in...

  15. What's so Simple about Simplified Texts? A Computational and Psycholinguistic Investigation of Text Comprehension and Text Processing

    ERIC Educational Resources Information Center

    Crossley, Scott A.; Yang, Hae Sung; McNamara, Danielle S.

    2014-01-01

    This study uses a moving windows self-paced reading task to assess both text comprehension and processing time of authentic texts and these same texts simplified to beginning and intermediate levels. Forty-eight second language learners each read 9 texts (3 different authentic, beginning, and intermediate level texts). Repeated measures ANOVAs…

  16. A conservative implicit finite difference algorithm for the unsteady transonic full potential equation

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Caradonna, F. X.

    1980-01-01

    An implicit finite difference procedure is developed to solve the unsteady full potential equation in conservation law form. Computational efficiency is maintained by use of approximate factorization techniques. The numerical algorithm is first order in time and second order in space. A circulation model and difference equations are developed for lifting airfoils in unsteady flow; however, thin airfoil body boundary conditions have been used with stretching functions to simplify the development of the numerical algorithm.

  17. High Performance Computing Technologies for Modeling the Dynamics and Dispersion of Ice Chunks in the Arctic Ocean

    DTIC Science & Technology

    2016-08-23

Hybrid finite element / finite volume based CaMEL shallow water flow solvers have been successfully extended to study wave effects on ice floes in a simplified 10 sq-km ocean domain. Our solver combines the merits of both the finite element and finite volume methods and... Sponsoring agency: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211. Keywords: sea ice dynamics, shallow water, finite element, finite volume.

  18. Computer-program documentation of an interactive-accounting model to simulate streamflow, water quality, and water-supply operations in a river basin

    USGS Publications Warehouse

    Burns, A.W.

    1988-01-01

This report describes an interactive-accounting model used to simulate streamflow, chemical-constituent concentrations and loads, and water-supply operations in a river basin. The model uses regression equations to compute flow from incremental (internode) drainage areas. Conservative chemical constituents (typically dissolved solids) also are computed from regression equations. Both flow and water quality loads are accumulated downstream. Optionally, the model simulates the water use and the simplified groundwater systems of a basin. Water users include agricultural, municipal, industrial, and in-stream users, and reservoir operators. Water users list their potential water sources, including direct diversions, groundwater pumpage, interbasin imports, or reservoir releases, in the order in which they will be used. Direct diversions conform to basinwide water law priorities. The model is interactive, and although the input data exist in files, the user can modify them interactively. A major feature of the model is its color-graphic-output options. This report includes a description of the model, organizational charts of subroutines, and examples of the graphics. Detailed format instructions for the input data, example files of input data, definitions of program variables, and listing of the FORTRAN source code are Attachments to the report. (USGS)
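The downstream accumulation of incremental flows and conservative-constituent loads described above reduces to a running sum over internode reaches. A minimal sketch with hypothetical numbers:

```python
def accumulate(reaches):
    """reaches: (incremental_flow, incremental_load) per internode
    drainage area, ordered upstream to downstream; returns the
    cumulative (flow, load) at each node."""
    flow = load = 0.0
    out = []
    for q, load_inc in reaches:
        flow += q
        load += load_inc
        out.append((flow, load))
    return out

# Hypothetical incremental flows (cfs) and dissolved-solids loads (tons/day):
print(accumulate([(10.0, 2.0), (5.0, 1.0), (20.0, 4.0)]))
# -> [(10.0, 2.0), (15.0, 3.0), (35.0, 7.0)]
```

Diversions, pumpage and reservoir releases would enter this loop as signed increments at the nodes where each user takes or returns water.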

  19. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie

    The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective of helping to sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

  20. Design and performance evaluation of a simplified dynamic model for combined sewer overflows in pumped sewer systems

    NASA Astrophysics Data System (ADS)

    van Daal-Rombouts, Petra; Sun, Siao; Langeveld, Jeroen; Bertrand-Krajewski, Jean-Luc; Clemens, François

    2016-07-01

    Optimisation or real time control (RTC) studies in wastewater systems increasingly require rapid simulations of sewer systems in extensive catchments. To reduce simulation times, calibrated simplified models are applied, with performance generally judged by the goodness of fit achieved in calibration. In this research the performance of three simplified models and a full hydrodynamic (FH) model is compared for two catchments, based on the correct determination of CSO event occurrences and of the total volumes discharged to the surface water. Simplified model M1 consists of a rainfall runoff outflow (RRO) model only. M2 combines the RRO model with a static reservoir model for the sewer behaviour. M3 comprises the RRO model and a dynamic reservoir model. The dynamic reservoir characteristics were derived from FH model simulations. It was found that M2 and M3 are able to describe the sewer behaviour of the catchments, contrary to M1. The preferred model structure depends on the quality of the information (geometrical database and monitoring data) available for the design and calibration of the model. Finally, calibrated simplified models are shown to be preferable to uncalibrated FH models when performing optimisation or RTC studies.
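
    The static reservoir idea behind a model like M2 can be sketched very compactly: runoff enters a single storage of fixed capacity, a pump empties it at a constant rate, and any excess spills as CSO volume. The capacity, pump rate and inflow series below are invented for illustration and are not taken from the paper:

```python
# Minimal sketch of a static reservoir representation of a pumped sewer
# system: storage fills with runoff, drains at the pump rate, and spills
# as a CSO when capacity is exceeded. All numbers are illustrative.

def static_reservoir_cso(inflow, capacity, pump_rate, dt=1.0):
    """Return (cso_volume, n_events) for an inflow time series.

    inflow: runoff volume per step; capacity: storage volume;
    pump_rate: volume pumped out per step; dt: step length.
    """
    storage, cso_volume, events, spilling = 0.0, 0.0, 0, False
    for q_in in inflow:
        storage += q_in * dt - pump_rate * dt
        storage = max(storage, 0.0)
        if storage > capacity:            # excess spills as CSO
            cso_volume += storage - capacity
            storage = capacity
            if not spilling:
                events += 1               # count a contiguous spill once
            spilling = True
        else:
            spilling = False
    return cso_volume, events

vol, n = static_reservoir_cso([0, 5, 9, 9, 2, 0, 0], capacity=10, pump_rate=2)
```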

  1. Efficient finite element modelling for the investigation of the dynamic behaviour of a structure with bolted joints

    NASA Astrophysics Data System (ADS)

    Omar, R.; Rani, M. N. Abdul; Yunus, M. A.; Mirza, W. I. I. Wan Iskandar; Zin, M. S. Mohd

    2018-04-01

    A simple structure with bolted joints consists of the structural components, bolts and nuts. There are several methods to model structures with bolted joints; however, there is no reliable, efficient and economic modelling method that can accurately predict their dynamic behaviour. This paper describes an investigation conducted to obtain an appropriate modelling method for bolted joints. This was carried out by evaluating four different finite element (FE) models of the assembled plates and bolts, namely the solid plates-bolts model, the plates-without-bolts model, the hybrid plates-bolts model and the simplified plates-bolts model. FE modal analysis was conducted for all four initial FE models of the bolted joints. Results of the FE modal analysis were compared with experimental modal analysis (EMA) results. EMA was performed to extract the natural frequencies and mode shapes of the physical test structure with bolted joints. The evaluation compared the number of nodes, the number of elements, the elapsed central processing unit (CPU) time, and the total percentage error of each initial FE model relative to the EMA results. The evaluation showed that the simplified plates-bolts model could most accurately predict the dynamic behaviour of the structure with bolted joints. This study showed that reliable, efficient and economic modelling of bolted joints, mainly the representation of the bolting, plays a crucial role in ensuring the accuracy of the dynamic behaviour prediction.

  2. A continuum theory for multicomponent chromatography modeling.

    PubMed

    Pfister, David; Morbidelli, Massimo; Nicoud, Roger-Marc

    2016-05-13

    A continuum theory is proposed for modeling multicomponent chromatographic systems under linear conditions. The model is based on the description of complex mixtures, possibly involving tens or hundreds of solutes, by a continuum. The present approach is shown to be very efficient when dealing with a large number of similar components that present close elution behaviors and whose individual analytical characterization is impossible. Moreover, approximating complex mixtures by continuous distributions of solutes reduces the required number of model parameters to the few that are specific to the characterization of the selected continuous distributions. Therefore, within the framework of the continuum theory, the simulation of large multicomponent systems is greatly simplified and the computational effectiveness of the chromatographic model is dramatically improved. Copyright © 2016 Elsevier B.V. All rights reserved.
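
    The core idea, replacing hundreds of individual solutes with a parameterized continuous distribution, can be illustrated with a toy linear chromatogram. The Gaussian peak shape, the Gaussian distribution of retention times, and all parameter values below are invented for illustration; they are not the authors' equations:

```python
# Toy continuum chromatogram: instead of tracking each solute, describe
# the mixture by a Gaussian distribution of retention times and
# integrate over it. Only the distribution parameters (mean, spread)
# are needed, however many solutes the mixture contains.

import math

def peak(t, t_r, sigma=0.1):
    """Gaussian elution peak of unit area centred on retention time t_r."""
    return math.exp(-0.5 * ((t - t_r) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def chromatogram_continuum(t, mean_tr, spread, n_quad=200):
    """Detector signal at time t for a Gaussian continuum of retention
    times, approximated by midpoint quadrature over mean_tr +/- 4*spread."""
    lo, hi = mean_tr - 4 * spread, mean_tr + 4 * spread
    h = (hi - lo) / n_quad
    total = 0.0
    for i in range(n_quad):
        t_r = lo + (i + 0.5) * h
        w = math.exp(-0.5 * ((t_r - mean_tr) / spread) ** 2) / (spread * math.sqrt(2 * math.pi))
        total += w * peak(t, t_r) * h
    return total

signal = chromatogram_continuum(5.0, mean_tr=5.0, spread=0.5)
```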

  3. Model-based estimation for dynamic cardiac studies using ECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.

    1994-06-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramér-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.

  4. High-fidelity meshes from tissue samples for diffusion MRI simulations.

    PubMed

    Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C

    2010-01-01

    This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise the simulation parameters and the complexity of the meshes to achieve accuracy and reproducibility while minimising computation time. Finally, we assess the quality of the data synthesized from the mesh models by comparison with scanner data, as well as with synthetic data from simple geometric models and from simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to the simpler models, and show that the results are quite robust to the mesh resolution.
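
    A stripped-down version of the random-walk machinery used to synthesise diffusion measurements is shown below for free (unrestricted) diffusion, where the mean squared displacement should grow as 2Dt per dimension. The paper's walkers are additionally reflected off triangulated membrane surfaces, which is omitted here; spin counts, step sizes, and the diffusivity are illustrative:

```python
# Free-diffusion random walk: spins take independent Gaussian steps per
# axis with variance 2*D*dt, so after n steps the 3-D mean squared
# displacement approaches 6*D*(n*dt). No mesh restriction is modelled.

import random

def random_walk_msd(n_spins=2000, n_steps=100, D=2.0e-9, dt=1.0e-4, seed=1):
    """Return the mean squared displacement (m^2) of free 3-D walkers."""
    rng = random.Random(seed)
    step_sigma = (2 * D * dt) ** 0.5      # per-axis step standard deviation
    msd = 0.0
    for _ in range(n_spins):
        x = y = z = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, step_sigma)
            y += rng.gauss(0.0, step_sigma)
            z += rng.gauss(0.0, step_sigma)
        msd += x * x + y * y + z * z
    return msd / n_spins

msd = random_walk_msd()
expected = 6 * 2.0e-9 * 100 * 1.0e-4      # 6*D*t for three dimensions
```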

  5. Statistical genetics and evolution of quantitative traits

    NASA Astrophysics Data System (ADS)

    Neher, Richard A.; Shraiman, Boris I.

    2011-10-01

    The distribution and heritability of many traits depend on numerous loci in the genome. In general, the astronomical number of possible genotypes makes systems with large numbers of loci difficult to describe. Multilocus evolution, however, greatly simplifies in the limit of weak selection and frequent recombination. In this limit, populations rapidly reach quasilinkage equilibrium (QLE), in which the dynamics of the full genotype distribution, including correlations between alleles at different loci, can be parametrized by the allele frequencies. This review provides a simplified exposition of the concept and mathematics of QLE, which is central to the statistical description of genotypes in sexual populations. Key results of quantitative genetics, such as the generalized Fisher’s “fundamental theorem” and Wright’s adaptive landscape, are shown to emerge within QLE from the dynamics of the genotype distribution. This is followed by a discussion of the circumstances under which QLE is applicable and of what the breakdown of QLE implies for the population structure and the dynamics of selection. Understanding the fundamental aspects of multilocus evolution obtained through simplified models may be helpful in providing conceptual and computational tools to address the challenges arising in the studies of complex quantitative phenotypes of practical interest.
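
    The “fundamental theorem” mentioned above can be stated compactly; the form below is the standard textbook statement from general quantitative-genetics background, not an equation transcribed from this review:

```latex
\frac{d\bar{F}}{dt} = \operatorname{Var}_A(F)
```

    That is, the rate of increase of mean fitness $\bar{F}$ equals the additive genetic variance in fitness, a result which, within QLE, emerges directly from the dynamics of the genotype distribution.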

  6. A comprehensive pipeline for multi-resolution modeling of the mitral valve: Validation, computational efficiency, and predictive capability.

    PubMed

    Drach, Andrew; Khalighi, Amir H; Sacks, Michael S

    2018-02-01

    Multiple studies have demonstrated that the pathological geometries unique to each patient can affect the durability of mitral valve (MV) repairs. While computational modeling of the MV is a promising approach to improve surgical outcomes, the complex MV geometry precludes the use of simplified models. Moreover, the lack of complete in vivo geometric information presents significant challenges in the development of patient-specific computational models. There is thus a need to determine the level of detail necessary for predictive MV models. To address this issue, we have developed a novel pipeline for building attribute-rich computational models of the MV with varying fidelity directly from in vitro imaging data. The approach combines high-resolution geometric information from loaded and unloaded states to achieve a high level of anatomic detail, followed by mapping and parametric embedding of tissue attributes to build a high-resolution, attribute-rich computational model. Subsequent lower-resolution models were then developed and evaluated by comparing the displacements and surface strains to those extracted from the imaging data. We then identified the critical levels of fidelity for building predictive MV models in the dilated and repaired states. We demonstrated that a model with a feature size of about 5 mm and a mesh size of about 1 mm was sufficient to predict the overall MV shape, stress, and strain distributions with high accuracy. However, more detailed models were found to be needed to simulate microstructural events. We conclude that the developed pipeline enables sufficiently complex models for biomechanical simulations of the MV in the normal, dilated, and repaired states. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Dynamic response of a collidant impacting a low pressure airbag

    NASA Astrophysics Data System (ADS)

    Dreher, Peter A.

    There are many uses of low pressure airbags, both military and commercial. Many of these applications have been hampered by inadequate and inaccurate modeling tools. This dissertation contains the derivation of a four degree-of-freedom system of differential equations from the physical laws of mass and energy conservation, force equilibrium, and the ideal gas law. Kinematic equations were derived to model a cylindrical airbag as a single control volume impacted by a parallelepiped collidant. An efficient numerical procedure was devised to solve the simplified system of equations in a manner amenable to discovering design trends. The largest public airbag experiment, both in scale and scope, was designed and built to collect data on low-pressure airbag responses otherwise unavailable in the literature. The experimental results were compared to computational simulations to validate the simplified numerical model. Experimental response trends are presented that will aid airbag designers. The two objectives, demonstrating the feasibility of using a low pressure airbag to (1) accelerate a munition to a velocity of 15 feet per second from a bomb bay and (2) decelerate humans hitting trucks to below the human tolerance level of 50 G's, were both met.

  8. Triangular node for Transmission-Line Modeling (TLM) applied to bio-heat transfer.

    PubMed

    Milan, Hugo F M; Gebremedhin, Kifle G

    2016-12-01

    Transmission-Line Modeling (TLM) is a numerical method used to solve complex and time-domain bio-heat transfer problems. In TLM, rectangles are used to discretize two-dimensional problems. The drawback in using rectangular shapes is that instead of refining only the domain of interest, a large additional domain will also be refined in the x and y axes, which results in increased computational time and memory space. In this paper, we developed a triangular node for TLM applied to bio-heat transfer that does not have the drawback associated with the rectangular nodes. The model includes heat source, blood perfusion (advection), boundary conditions and initial conditions. The boundary conditions could be adiabatic, temperature, heat flux, or convection. A matrix equation for TLM, which simplifies the solution of time-domain problems or solves steady-state problems, was also developed. The predicted results were compared against results obtained from the solution of a simplified two-dimensional problem, and they agreed within 1% for a mesh length of triangular faces of 59 µm ± 9 µm (mean ± standard deviation) and a time step of 1 ms. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation

    PubMed Central

    Sherfey, Jason S.; Soplata, Austin E.; Ardid, Salva; Roberts, Erik A.; Stanley, David A.; Pittman-Polletta, Benjamin R.; Kopell, Nancy J.

    2018-01-01

    DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community. PMID:29599715

  11. Utilizing dimensional analysis with observed data to determine the significance of hydrodynamic solutions in coastal hydrology

    USGS Publications Warehouse

    Swain, Eric D.; Decker, Jeremy D.; Hughes, Joseph D.

    2014-01-01

    In this paper, the authors present an analysis of the magnitude of the temporal and spatial acceleration (inertial) terms in the surface-water flow equations and determine the conditions under which these inertial terms have sufficient magnitude to be required in the computations. Data from two South Florida field sites are examined and the relative magnitudes of the temporal acceleration, spatial acceleration, gravity, and friction terms are compared. Parameters are derived by using dimensionless numbers and applied to quantify the significance of the hydrodynamic effects. The time series of the ratio of the inertial and gravity terms from the field sites are presented and compared with both a simplified indicator parameter and a more complex parameter called the Hydrodynamic Significance Number (HSN). Two test-case models were developed by using the SWIFT2D hydrodynamic simulator to examine flow behavior with and without the inertial terms and compute the HSN. The first model represented one of the previously mentioned field sites during gate operations of a structure-managed coastal canal. The second model was a synthetic test case illustrating the drainage of water down a sloped surface from an initial stage while under constant flow. The analyses indicate that the times of substantial hydrodynamic effects are sporadic but significant. The simplified indicator parameter correlates much better with the magnitude of the hydrodynamic effect for a constant-width channel such as the Miami Canal than for the non-uniform North River. Higher HSN values indicate flow situations where the inertial terms are large and need to be taken into account.
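
    The kind of indicator discussed, a ratio of the inertial terms to the gravity term in the 1-D momentum equation, can be sketched as below. The term names follow the usual St. Venant form; the paper's exact HSN definition is not reproduced here, and the example values are invented:

```python
# Hedged sketch of an inertia/gravity significance ratio for 1-D
# surface-water flow: |local + convective acceleration| divided by the
# gravity (water-surface slope) term. When the ratio is small, the
# inertial terms can be dropped (diffusive-wave approximation).

def inertia_gravity_ratio(dv_dt, v, dv_dx, g, dh_dx):
    """Return |dv/dt + v*dv/dx| / |g*dh/dx| (dimensionless)."""
    inertial = abs(dv_dt + v * dv_dx)
    gravity = abs(g * dh_dx)
    return inertial / gravity if gravity > 0 else float("inf")

# Illustrative values: slow tidal canal flow with a mild surface slope.
r = inertia_gravity_ratio(dv_dt=1e-5, v=0.3, dv_dx=1e-4, g=9.81, dh_dx=1e-4)
```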

  12. Evaluation of the Ross fast solution of Richards’ equation in unfavourable conditions for standard finite element methods

    NASA Astrophysics Data System (ADS)

    Crevoisier, David; Chanzy, André; Voltz, Marc

    2009-06-01

    Ross [Ross PJ. Modeling soil water and solute transport - fast, simplified numerical solutions. Agron J 2003;95:1352-61] developed a fast, simplified method for solving Richards' equation. This non-iterative 1D approach, using Brooks and Corey [Brooks RH, Corey AT. Hydraulic properties of porous media. Hydrol. papers, Colorado St. Univ., Fort Collins; 1964] hydraulic functions, allows a significant reduction in computing time while maintaining the accuracy of the results. The first aim of this work is to confirm these results on a more extensive set of problems, including some that would lead to serious numerical difficulties for the standard numerical method. The second aim is to validate a generalisation of the Ross method to other mathematical representations of hydraulic functions. The Ross method is compared with the standard finite element model, Hydrus-1D [Simunek J, Sejna M, Van Genuchten MTh. The HYDRUS-1D and HYDRUS-2D codes for estimating unsaturated soil hydraulic and solutes transport parameters. Agron Abstr 357; 1999]. Computing time, accuracy of results and robustness of the numerical schemes are monitored in 1D simulations involving different types of homogeneous soils, grids and hydrological conditions. The Ross method associated with modified Van Genuchten hydraulic functions [Vogel T, Cislerova M. On the reliability of unsaturated hydraulic conductivity calculated from the moisture retention curve. Transport Porous Media 1988;3:1-15] proves in every tested scenario to be numerically more robust, and the computing-time/accuracy trade-off is particularly improved on coarse grids. The Ross method ran from 1.25 to 14 times faster than Hydrus-1D.

  13. Simplified models for dark matter face their consistent completions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, Dorival; Machado, Pedro A. N.; No, Jose Miguel

    Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent $SU(2)_{\mathrm{L}} \times U(1)_{\mathrm{Y}}$ gauge invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at the 13 TeV LHC.

  14. Validation of Simplified Load Equations Through Loads Measurement and Modeling of a Small Horizontal-Axis Wind Turbine Tower

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana, Scott; Van Dam, Jeroen J; Damiani, Rick R

    As part of an ongoing effort to improve the modeling and prediction of small wind turbine dynamics, the National Renewable Energy Laboratory (NREL) tested a small horizontal-axis wind turbine in the field at the National Wind Technology Center. The test turbine was a 2.1-kW downwind machine mounted on an 18-m multi-section fiberglass composite tower. The tower was instrumented and monitored for approximately 6 months. The collected data were analyzed to assess the turbine and tower loads and further validate the simplified loads equations from the International Electrotechnical Commission (IEC) 61400-2 design standards. Field-measured loads were also compared to the output of an aeroelastic model of the turbine. In particular, we compared fatigue loads as measured in the field, predicted by the aeroelastic model, and calculated using the simplified design equations. Ultimate loads at the tower base were assessed using both the simplified design equations and the aeroelastic model output. The simplified design equations in IEC 61400-2 were found not to model fatigue loads accurately, and their limitations are discussed.

  15. Emerald: an object-based language for distributed programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, N.C.

    1987-01-01

    Distributed systems have become more common; however, constructing distributed applications remains a very difficult task. Numerous operating systems and programming languages have been proposed that attempt to simplify the programming of distributed applications. Here a programming language called Emerald is presented that simplifies distributed programming by extending the concepts of object-based languages to the distributed environment. Emerald supports a single model of computation: the object. Emerald objects include private entities such as integers and Booleans, as well as shared, distributed entities such as compilers, directories, and entire file systems. Emerald objects may move between machines in the system, but object invocation is location independent. The uniform semantic model used for describing all Emerald objects makes the construction of distributed applications in Emerald much simpler than in systems where the differences in implementation between local and remote entities are visible in the language semantics. Emerald incorporates a type system that deals only with the specification of objects, ignoring differences in implementation. Thus, two different implementations of the same abstraction may be freely mixed.

  16. Quaternions in computer vision and robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pervin, E.; Webb, J.A.

    1982-01-01

    Computer vision and robotics suffer from not having good tools for manipulating three-dimensional objects. Vectors, coordinate geometry, and trigonometry all have deficiencies. Quaternions can be used to solve many of these problems. Many properties of quaternions that are relevant to computer vision and robotics are developed. Examples are given showing how quaternions can be used to simplify derivations in computer vision and robotics.
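
    A classic instance of the simplification quaternions bring is rotating a 3-D vector by the sandwich product q v q*, which avoids trigonometric case analysis and gimbal issues. The sketch below uses the standard Hamilton product; the example rotates the x axis 90 degrees about z onto the y axis:

```python
# Quaternion rotation of a 3-D vector via the sandwich product q v q*.
# Quaternions are tuples (w, x, y, z); the rotation quaternion for
# angle theta about a unit axis is (cos(theta/2), axis*sin(theta/2)).

import math

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, angle):
    """Rotate vector v about a unit axis by angle (radians)."""
    s, c = math.sin(angle / 2), math.cos(angle / 2)
    q = (c, axis[0] * s, axis[1] * s, axis[2] * s)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    w = quat_mul(quat_mul(q, (0.0, *v)), q_conj)
    return w[1:]

v_rot = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

    Composing two rotations is just one quaternion multiplication, which is the kind of derivation-simplifying property the abstract refers to.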

  17. Effect of motion inputs on the wear prediction of artificial hip joints

    PubMed Central

    Liu, Feng; Fisher, John; Jin, Zhongmin

    2013-01-01

    Hip joint simulators have been largely used to assess the wear performance of joint implants. Due to the complexity of joint movement, the motion mechanism adopted in simulators varies. The motion condition is particularly important for ultra-high molecular weight polyethylene (UHMWPE) since polyethylene wear can be substantially increased by the bearing cross-shear motion. Computational wear modelling has been improved recently for the conventional UHMWPE used in total hip joint replacements. A new polyethylene wear law is an explicit function of the contact area of the bearing and the sliding distance, and the effect of multidirectional motion on wear has been quantified by a factor, cross-shear ratio. In this study, the full simulated walking cycle condition based on a walking measurement and two simplified motions, including the ISO standard motion and a simplified ProSim hip simulator motion, were considered as the inputs for wear modelling based on the improved wear model. Both the full simulation and simplified motions generated the comparable multidirectional motion required to reproduce the physiological wear of the bearing in vivo. The predicted volumetric wear of the ProSim simulator motion and the ISO motion conditions for the walking cycle were 13% and 4% lower, respectively, than that of the measured walking condition. The maximum linear wear depths were almost the same, and the areas of the wear depth distribution were 13% and 7% lower for the ProSim simulator and the ISO condition, respectively, compared with that of the measured walking cycle motion condition. PMID:25540472
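
    The form of wear law described, wear volume as an explicit function of contact area and sliding distance with a coefficient that depends on the cross-shear ratio, can be sketched as below. The linear dependence on cross-shear and all coefficient values are invented placeholders, not the authors' fitted law:

```python
# Hedged sketch of a cross-shear-dependent wear law for UHMWPE: wear
# volume per cycle proportional to contact area and sliding distance,
# with a coefficient that grows with the cross-shear ratio (CS = 0 is
# purely unidirectional sliding). All numbers are illustrative.

def wear_volume(contact_area_mm2, sliding_distance_mm, cross_shear_ratio,
                c0=1.0e-9, c1=5.0e-8):
    """Return wear volume (mm^3) for one loading cycle."""
    k = c0 + c1 * cross_shear_ratio   # multidirectional motion raises wear
    return k * contact_area_mm2 * sliding_distance_mm

uni = wear_volume(300.0, 20.0, cross_shear_ratio=0.0)
multi = wear_volume(300.0, 20.0, cross_shear_ratio=0.2)
```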

  18. Analysis of temperature distribution in liquid-cooled turbine blades

    NASA Technical Reports Server (NTRS)

    Livingood, John N B; Brown, W Byron

    1952-01-01

    The temperature distribution in liquid-cooled turbine blades determines the amount of cooling required to reduce the blade temperature to permissible values at specified locations. This report presents analytical methods for computing temperature distributions in liquid-cooled turbine blades, or in simplified shapes used to approximate sections of the blade. The individual analyses are first presented in terms of their mathematical development. By means of numerical examples, comparisons are made between simplified and more complete solutions and the effects of several variables are examined. Nondimensional charts to simplify some temperature-distribution calculations are also given.

  19. On the coverage of the pMSSM by simplified model results

    NASA Astrophysics Data System (ADS)

    Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Waltenberger, Wolfgang

    2018-03-01

    We investigate to what extent the SUSY search results published by ATLAS and CMS in the context of simplified models actually cover the more realistic scenarios of a full model. Concretely, we work within the phenomenological MSSM (pMSSM) with 19 free parameters and compare the constraints obtained from SModelS v1.1.1 with those from the ATLAS pMSSM study in arXiv:1508.06608. We find that about 40-45% of the points excluded by ATLAS escape the currently available simplified model constraints. For these points we identify the most relevant topologies which are not tested by the current simplified model results. In particular, we find that topologies with asymmetric branches, including 3-jet signatures from gluino-squark associated production, could be important for improving the current constraining power of simplified model results. Furthermore, for a better coverage of light stops and sbottoms, constraints for decays via heavier neutralinos and charginos, which subsequently decay visibly to the lightest neutralino, are also needed.

  20. A simplified approach to the pooled analysis of calibration of clinical prediction rules for systematic reviews of validation studies

    PubMed Central

    Dimitrov, Borislav D; Motterlini, Nicola; Fahey, Tom

    2015-01-01

    Objective: Estimating the calibration performance of clinical prediction rules (CPRs) in systematic reviews of validation studies is not possible when predicted values are neither published nor accessible, or are insufficient, and no individual participant or patient data are available. Our aims were to describe a simplified approach for outcome prediction and calibration assessment and to evaluate its functionality and validity. Study design and methods: Methodological study of systematic reviews of validation studies of CPRs: a) the ABCD2 rule for prediction of 7-day stroke; and b) the CRB-65 rule for prediction of 30-day mortality. Predicted outcomes in a sample validation study were computed from CPR distribution patterns (“derivation model”). As confirmation, a logistic regression model (with derivation-study coefficients) was applied to CPR-based dummy variables in the validation study. Meta-analysis of validation studies provided pooled estimates of “predicted:observed” risk ratios (RRs), 95% confidence intervals (CIs), and indexes of heterogeneity (I²) on forest plots (fixed- and random-effects models), with and without adjustment of intercepts. The same approach was also applied to the CRB-65 rule. Results: Our simplified method, applied to the ABCD2 rule in three risk strata (low, 0–3; intermediate, 4–5; high, 6–7 points), indicated that the predictions are identical to those computed by a univariate, CPR-based logistic regression model. Discrimination was good (c-statistics =0.61–0.82); however, calibration in some studies was low. In such cases of miscalibration, the under-prediction (RRs =0.73–0.91, 95% CIs 0.41–1.48) could be corrected by intercept adjustment to account for differences in incidence. An improvement in both heterogeneity and P-values (Hosmer-Lemeshow goodness-of-fit test) was observed. Better calibration and improved pooled RRs (0.90–1.06), with narrower 95% CIs (0.57–1.41), were achieved. 
    Conclusion: Our results have an immediate clinical implication in situations where predicted outcomes in CPR validation studies are lacking or deficient, by describing how such predictions can be obtained by anyone using the derivation study alone, without any need for highly specialized knowledge or sophisticated statistics. PMID:25931829
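    The predicted:observed calibration summary described above reduces to simple arithmetic. The sketch below computes expected events per ABCD2 risk stratum from derivation-study risks and compares them with observed counts; every number (risks, patient counts, events) is invented for illustration, and only the three-strata grouping comes from the abstract.

```python
# Hypothetical derivation-study risks of 7-day stroke per ABCD2 stratum
derivation_risk = {"low": 0.01, "intermediate": 0.04, "high": 0.08}

# Hypothetical validation-study counts: patients and observed events per stratum
validation = {
    "low":          {"n": 400, "events": 5},
    "intermediate": {"n": 250, "events": 12},
    "high":         {"n": 100, "events": 9},
}

# Predicted events = derivation risk applied to the validation stratum sizes
predicted = sum(derivation_risk[s] * validation[s]["n"] for s in validation)
observed = sum(validation[s]["events"] for s in validation)

# Calibration summary: predicted:observed ratio (1.0 = perfect calibration)
ratio = predicted / observed
print(f"predicted={predicted:.1f}, observed={observed}, P:O ratio={ratio:.2f}")
```

    With these assumed inputs the ratio falls below 1, i.e. the rule under-predicts, which is the situation the intercept adjustment described above is meant to correct.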

  1. Simplified method for numerical modeling of fiber lasers.

    PubMed

    Shtyrina, O V; Yarutkina, I A; Fedoruk, M P

    2014-12-29

    A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that the results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.
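    Stripped of the laser physics, the iteration idea can be illustrated with a toy one-dimensional round-trip map: iterate one "cavity pass" until the state reproduces itself, which is the periodic solution. The map below is an arbitrary contraction chosen for illustration, not the authors' system of ODEs.

```python
def round_trip(x):
    # Toy "one cavity pass": linear loss plus a saturable term (illustrative only)
    return 0.5 * x + 1.0 / (1.0 + x)

x = 2.0                           # initial guess for the pulse parameter
for _ in range(100):
    x_new = round_trip(x)
    if abs(x_new - x) < 1e-12:    # converged: x reproduces itself each pass
        break
    x = x_new

print(f"periodic (fixed-point) solution: x = {x:.6f}")
```

    For this particular map the fixed point satisfies x(1 + x) = 2, i.e. x = 1, so the iteration's convergence can be checked by hand.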

  2. The transformation of aerodynamic stability derivatives by symbolic mathematical computation

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1975-01-01

    The formulation of mathematical models of aeronautical systems for simulation or other purposes involves the transformation of aerodynamic stability derivatives. It is shown that these derivatives transform like the components of a second-order tensor having one index of covariance and one index of contravariance. Moreover, owing to the equivalence of covariant and contravariant transformations in orthogonal Cartesian systems of coordinates, the transformations can be treated as doubly covariant or doubly contravariant, if this simplifies the formulation. It is shown that the tensor properties of these derivatives can be used to facilitate their transformation by symbolic mathematical computation and the use of digital computers equipped with formula manipulation compilers. When the tensor transformations are mechanised in the manner described, man-hours are saved and the errors to which human operators are prone can be avoided.

  3. The origins of computer weather prediction and climate modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynch, Peter

    2008-03-20

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

  4. The origins of computer weather prediction and climate modeling

    NASA Astrophysics Data System (ADS)

    Lynch, Peter

    2008-03-01

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

  5. Development of a computational model on the neural activity patterns of a visual working memory in a hierarchical feedforward Network

    NASA Astrophysics Data System (ADS)

    An, Soyoung; Choi, Woochul; Paik, Se-Bum

    2015-11-01

    Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously-observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment for a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.

  6. Human factors model concerning the man-machine interface of mining crewstations

    NASA Technical Reports Server (NTRS)

    Rider, James P.; Unger, Richard L.

    1989-01-01

    The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspect of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two-dimensional orthographic projections of the machine and its operator compartment are digitized, and the data are rebuilt into a three-dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reachability.

  7. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    NASA Astrophysics Data System (ADS)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences the fidelity of the system. However, current IR smoke models cannot provide high fidelity, because certain physical characteristics are frequently ignored in the fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on the enhanced DPM is built and a dynamic computational fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes dynamic modeling of IR smoke with higher fidelity.

  8. On firework blasts and qualitative parameter dependency.

    PubMed

    Zohdi, T I

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given.
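    The energy balance and drag-limited trajectory described in the abstract can be sketched for a single fragment. Everything below (mass, drag coefficient, launch energy and angle) is an assumed illustration rather than the paper's model, and buoyancy is omitted for brevity.

```python
import math

g = 9.81            # gravity, m/s^2
rho = 1.2           # air density, kg/m^3
Cd = 0.47           # drag coefficient of a small sphere (assumed)
r = 0.01            # fragment radius, m
m = 0.005           # fragment mass, kg
A = math.pi * r ** 2

# Assumed energy balance: the blast pulse gives this fragment E_fragment joules
E_fragment = 10.0
v0 = math.sqrt(2 * E_fragment / m)           # initial speed from (1/2) m v^2 = E

angle = 0.8                                  # launch angle, rad (assumed)
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
x, y, dt = 0.0, 0.0, 1e-3

while y >= 0.0:                              # explicit Euler until the fragment lands
    v = math.hypot(vx, vy)
    Fd = 0.5 * rho * Cd * A * v * v          # quadratic drag magnitude
    ax = -Fd * vx / (m * v)                  # drag opposes the velocity direction
    ay = -g - Fd * vy / (m * v)
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(f"launch speed {v0:.1f} m/s, drag-limited range ≈ {x:.1f} m")
```

    For a light fragment the drag force dominates its weight, so the landing range is far shorter than the vacuum ballistic range, which is the qualitative behavior the blast-envelope estimates rest on.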

  9. On firework blasts and qualitative parameter dependency

    PubMed Central

    Zohdi, T. I.

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given. PMID:26997903

  10. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a parallel, portable FORTRAN, on shared-memory parallel computers is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray Y-MP, Convex 220, Flex/32, Encore, Sequent, and Alliant computers on which it is installed.

  11. Computer program optimizes design of nuclear radiation shields

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1971-01-01

    The computer program OPEX 2 determines the minimum weight, volume, or cost for shields. The program incorporates improved coding, simplified data input, spherical geometry, and expanded output. The method is capable of altering the dose-thickness relationship when a shield layer has been removed.

  12. Computer programs simplify optical system analysis

    NASA Technical Reports Server (NTRS)

    1965-01-01

    The optical ray-trace computer program performs geometrical ray tracing. The energy-trace program calculates the relative monochromatic flux density on a specific target area. This program uses the ray-trace program as a subroutine to generate a representation of the optical system.

  13. CYBER-205 Devectorizer

    NASA Technical Reports Server (NTRS)

    Lakeotes, Christopher D.

    1990-01-01

    DEVECT (CYBER-205 Devectorizer) is a CYBER-205 FORTRAN source-language preprocessor that reduces vector statements to standard FORTRAN. In addition, DEVECT has many other standard and optional features that simplify conversion of vector-processor programs for the CYBER 200 to other computers. Written in FORTRAN IV.

  14. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience.

    PubMed

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.

  15. From Inverse Problems in Mathematical Physiology to Quantitative Differential Diagnoses

    PubMed Central

    Zenker, Sven; Rubin, Jonathan; Clermont, Gilles

    2007-01-01

    The improved capacity to acquire quantitative data in a clinical setting has generally failed to improve outcomes in acutely ill patients, suggesting a need for advances in computer-supported data interpretation and decision making. In particular, the application of mathematical models of experimentally elucidated physiological mechanisms could augment the interpretation of quantitative, patient-specific information and help to better target therapy. Yet, such models are typically complex and nonlinear, a reality that often precludes the identification of unique parameters and states of the model that best represent available data. Hypothesizing that this non-uniqueness can convey useful information, we implemented a simplified simulation of a common differential diagnostic process (hypotension in an acute care setting), using a combination of a mathematical model of the cardiovascular system, a stochastic measurement model, and Bayesian inference techniques to quantify parameter and state uncertainty. The output of this procedure is a probability density function on the space of model parameters and initial conditions for a particular patient, based on prior population information together with patient-specific clinical observations. We show that multimodal posterior probability density functions arise naturally, even when unimodal and uninformative priors are used. The peaks of these densities correspond to clinically relevant differential diagnoses and can, in the simplified simulation setting, be constrained to a single diagnosis by assimilating additional observations from dynamical interventions (e.g., fluid challenge). 
We conclude that the ill-posedness of the inverse problem in quantitative physiology is not merely a technical obstacle, but rather reflects clinical reality and, when addressed adequately in the solution process, provides a novel link between mathematically described physiological knowledge and the clinical concept of differential diagnoses. We outline possible steps toward translating this computational approach to the bedside, to supplement today's evidence-based medicine with a quantitatively founded model-based medicine that integrates mechanistic knowledge with patient-specific information. PMID:17997590
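    The record's key point, that a forward model which loses information yields a multimodal posterior even under a unimodal prior, can be reproduced with a toy one-parameter example. The quadratic "measurement model" below is a stand-in for the cardiovascular model, and all numbers are assumed.

```python
import math

y_obs, sigma = 4.0, 0.5                     # observed value and noise s.d. (assumed)
xs = [i * 0.01 for i in range(-500, 501)]   # parameter grid on [-5, 5]

def posterior_unnorm(x):
    prior = math.exp(-x * x / (2 * 2.0 ** 2))   # broad unimodal Gaussian prior
    # The model y = x**2 cannot distinguish x from -x: an "ill-posed" inverse
    lik = math.exp(-((x * x - y_obs) ** 2) / (2 * sigma ** 2))
    return prior * lik

p = [posterior_unnorm(x) for x in xs]
z = sum(p)
p = [v / z for v in p]                      # normalised grid posterior

# Local maxima of the posterior play the role of competing diagnoses
modes = [xs[i] for i in range(1, len(xs) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
print("posterior modes near:", modes)
```

    Two symmetric peaks emerge; in the clinical analogy, an extra observation that breaks the symmetry (a "dynamical intervention") would collapse the posterior to a single mode.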

  16. The YORP effect on 25 143 Itokawa

    NASA Astrophysics Data System (ADS)

    Breiter, S.; Bartczak, P.; Czekaj, M.; Oczujda, B.; Vokrouhlický, D.

    2009-11-01

    Context: The asteroid 25143 Itokawa is one of the candidates for the detection of the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect in the rotation period. Previous studies were carried out up to the 196 608 facet triangulation model and were not able to provide a good theoretical estimate of this effect, raising questions about the influence of the mesh resolution and the centre of mass location on the evolution of the rotation period. Aims: The YORP effect on Itokawa is computed for different topography models, up to the highest-resolution Gaskell mesh of 3 145 728 triangular faces, in an attempt to find the best possible YORP estimate. Other, lower resolution models are also studied and the question of the dependence of the rotation period drift on the density distribution inhomogeneities is reexamined. A comparison is made with 433 Eros models possessing a similar resolution. Methods: The Rubincam approximation (zero conductivity) is assumed in the numerical simulation of the YORP effect in rotation period. The mean thermal radiation torques are summed over triangular facets assuming Keplerian heliocentric motion and uniform rotation around a body-fixed axis. Results: There is no evidence of YORP convergence in the Gaskell model family. Differently simplified meshes may converge quickly to their parent models, but this does not prove the quality of YORP computed from the latter. We confirm the high sensitivity of the YORP effect to the fine details of the surface for 25 143 Itokawa and 433 Eros. The sensitivity of the Itokawa YORP to the centre of mass shift is weaker than in earlier works, but instead the results prove to be sensitive to the spin axis orientation in the body frame. Conclusions: Either the sensitivity of the YORP effect is a physical phenomenon and all present predictions are questionable, or the present thermal models are too simplified.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hobbs, Michael L.

    We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model, implementation, and validation.

  18. Computation of turbulence and dispersion of cork in the NETL riser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiradilok, Veeraya; Gidaspow, Dimitri; Breault, R.W.

    The knowledge of dispersion coefficients is essential for reliable design of gasifiers. However, a literature review had shown that dispersion coefficients in fluidized beds differ by more than five orders of magnitude. This study presents a comparison of the computed axial solids dispersion coefficients for cork particles to the NETL riser cork data. The turbulence properties, the Reynolds stresses, the granular temperature spectra and the radial and axial gas and solids dispersion coefficients are computed. The standard kinetic theory model described in Gidaspow’s 1994 book, Multiphase Flow and Fluidization, Academic Press, and the IIT and Fluent codes were used to compute the measured axial solids volume fraction profiles for flow of cork particles in the NETL riser. The Johnson–Jackson boundary conditions were used. Standard drag correlations were used. This study shows that the computed solids volume fractions for the low flux flow are within the experimental error of those measured, using a two-dimensional model. At higher solids fluxes the simulated solids volume fractions are close to the experimental measurements, but deviate significantly at the top of the riser. This disagreement is due to the use of simplified geometry in the two-dimensional simulation. There is good agreement between the experiment and the three-dimensional simulation for a high flux condition. This study concludes that the axial and radial gas and solids dispersion coefficients in risers operating in the turbulent flow regime can be computed using a multiphase computational fluid dynamics model.

  19. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve load over many years or decades. CEMs can be computationally complex and are often forced to estimate key parameters using simplified methods to achieve acceptable solve times or for other reasons. In this paper, we discuss one of these parameters -- capacity value (CV). We first provide a high-level motivation for and overview of CV. We next describe existing modeling simplifications and an alternate approach for estimating CV that utilizes hourly '8760' data of load and variable generation (VG) resources. We then apply this 8760 method to an established CEM, the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016). While this alternative approach for CV is not itself novel, it contributes to the broader CEM community by (1) demonstrating how a simplified 8760 hourly method, which can be easily implemented in other power sector models when data are available, more accurately captures CV trends than a statistical method within the ReEDS CEM, and (2) providing a flexible modeling framework from which other 8760-based system elements (e.g., demand response, storage, and transmission) can be added to further capture important dynamic interactions, such as curtailment.
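    The core computation of the 8760 method is small enough to sketch: rank the year's hours by load and average the variable generator's capacity factor over the top hours. The synthetic profiles below are illustrative shapes, not ReEDS data or the paper's exact credit definition.

```python
import math

hours = range(8760)
# Synthetic load (MW): a daily cycle peaking at 15:00
load = [50 + 20 * math.sin(2 * math.pi * ((h % 24) - 9) / 24) for h in hours]

def solar_cf(h):
    # Synthetic solar capacity factor: half-sine between 06:00 and 18:00
    t = h % 24
    return math.sin(math.pi * (t - 6) / 12) if 6 < t < 18 else 0.0

# Capacity value = mean VG capacity factor over the top-100 load hours
top = sorted(hours, key=lambda h: load[h], reverse=True)[:100]
cv = sum(solar_cf(h) for h in top) / len(top)
print(f"capacity value ≈ {cv:.3f} of nameplate")
```

    Because the load here peaks at 15:00 while solar peaks at noon, the generator earns only part of its nameplate as capacity credit, the kind of trend a purely statistical CV estimate can miss.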

  20. A STUDY OF PREDICTED BONE MARROW DISTRIBUTION ON CALCULATED MARROW DOSE FROM EXTERNAL RADIATION EXPOSURES USING TWO SETS OF IMAGE DATA FOR THE SAME INDIVIDUAL

    PubMed Central

    Caracappa, Peter F.; Chao, T. C. Ephraim; Xu, X. George

    2010-01-01

    Red bone marrow is among the tissues of the human body that are most sensitive to ionizing radiation, but red bone marrow cannot be distinguished from yellow bone marrow by normal radiographic means. When using a computational model of the body constructed from computed tomography (CT) images for radiation dose, assumptions must be applied to calculate the dose to the red bone marrow. This paper presents an analysis of two methods of calculating red bone marrow distribution: 1) a homogeneous mixture of red and yellow bone marrow throughout the skeleton, and 2) International Commission on Radiological Protection cellularity factors applied to each bone segment. A computational dose model was constructed from the CT image set of the Visible Human Project and compared to the VIP-Man model, which was derived from color photographs of the same individual. These two data sets for the same individual provide the unique opportunity to compare the methods applied to the CT-based model against the observed distribution of red bone marrow for that individual. The mass of red bone marrow in each bone segment was calculated using both methods. The effect of the different red bone marrow distributions was analyzed by calculating the red bone marrow dose using the EGS4 Monte Carlo code for parallel beams of monoenergetic photons over an energy range of 30 keV to 6 MeV, cylindrical (simplified CT) sources centered about the head and abdomen over an energy range of 30 keV to 1 MeV, and a whole-body electron irradiation treatment protocol for 3.9 MeV electrons. Applying the method with cellularity factors improves the average difference in the estimation of mass in each bone segment, as compared to the mass in VIP-Man, by 45% over the homogeneous mixture method. Red bone marrow doses calculated by the two methods are similar for parallel photon beams at high energy (above about 200 keV), but differ by as much as 40% at lower energies. 
    The calculated red bone marrow doses differ significantly for simplified CT and electron beam irradiation, since the computed red bone marrow dose is a strong function of the cellularity factor applied to bone segments within the primary radiation beam. These results demonstrate the importance of properly applying realistic cellularity factors to computational dose models of the human body. PMID:19430219
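    The two distribution methods compared in this record reduce to simple arithmetic. The sketch below contrasts them on invented segment masses and cellularity factors, not the ICRP reference values.

```python
# Illustrative marrow masses (g) per bone segment and assumed cellularity factors
marrow_mass = {"skull": 150.0, "spine": 800.0, "pelvis": 900.0, "femora": 700.0}
cellularity = {"skull": 0.38, "spine": 0.70, "pelvis": 0.48, "femora": 0.25}

# Method 2: red marrow per segment = total marrow mass x cellularity factor
red_mass = {seg: m * cellularity[seg] for seg, m in marrow_mass.items()}
total_red = sum(red_mass.values())

# Method 1 would instead spread the same red marrow uniformly over all marrow
uniform_fraction = total_red / sum(marrow_mass.values())
print(f"total red marrow: {total_red:.0f} g; uniform mixture fraction: {uniform_fraction:.2f}")
```

    A uniform mixture over-credits low-cellularity segments (e.g. the femora here) and under-credits high-cellularity ones, which is why the dose difference is largest when the primary beam covers only part of the skeleton.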

  1. A study of predicted bone marrow distribution on calculated marrow dose from external radiation exposures using two sets of image data for the same individual.

    PubMed

    Caracappa, Peter F; Chao, T C Ephraim; Xu, X George

    2009-06-01

    Red bone marrow is among the tissues of the human body that are most sensitive to ionizing radiation, but red bone marrow cannot be distinguished from yellow bone marrow by normal radiographic means. When using a computational model of the body constructed from computed tomography (CT) images for radiation dose, assumptions must be applied to calculate the dose to the red bone marrow. This paper presents an analysis of two methods of calculating red bone marrow distribution: 1) a homogeneous mixture of red and yellow bone marrow throughout the skeleton, and 2) International Commission on Radiological Protection cellularity factors applied to each bone segment. A computational dose model was constructed from the CT image set of the Visible Human Project and compared to the VIP-Man model, which was derived from color photographs of the same individual. These two data sets for the same individual provide the unique opportunity to compare the methods applied to the CT-based model against the observed distribution of red bone marrow for that individual. The mass of red bone marrow in each bone segment was calculated using both methods. The effect of the different red bone marrow distributions was analyzed by calculating the red bone marrow dose using the EGS4 Monte Carlo code for parallel beams of monoenergetic photons over an energy range of 30 keV to 6 MeV, cylindrical (simplified CT) sources centered about the head and abdomen over an energy range of 30 keV to 1 MeV, and a whole-body electron irradiation treatment protocol for 3.9 MeV electrons. Applying the method with cellularity factors improves the average difference in the estimation of mass in each bone segment, as compared to the mass in VIP-Man, by 45% over the homogeneous mixture method. Red bone marrow doses calculated by the two methods are similar for parallel photon beams at high energy (above about 200 keV), but differ by as much as 40% at lower energies. 
    The calculated red bone marrow doses differ significantly for simplified CT and electron beam irradiation, since the computed red bone marrow dose is a strong function of the cellularity factor applied to bone segments within the primary radiation beam. These results demonstrate the importance of properly applying realistic cellularity factors to computational dose models of the human body.

  2. A Hele-Shaw-Cahn-Hilliard Model for Incompressible Two-Phase Flows with Different Densities

    NASA Astrophysics Data System (ADS)

    Dedè, Luca; Garcke, Harald; Lam, Kei Fong

    2017-07-01

    Topology changes in multi-phase fluid flows are difficult to model within a traditional sharp interface theory. Diffuse interface models turn out to be an attractive alternative for modeling two-phase flows. Based on a Cahn-Hilliard-Navier-Stokes model introduced by Abels et al. (Math Models Methods Appl Sci 22(3):1150013, 2012), which uses a volume-averaged velocity, we derive a diffuse interface model in a Hele-Shaw geometry which, in the case of non-matched densities, simplifies an earlier model of Lee et al. (Phys Fluids 14(2):514-545, 2002). We recover the classical Hele-Shaw model as a sharp interface limit of the diffuse interface model. Furthermore, we show the existence of weak solutions and present several numerical computations, including situations with rising bubbles and fingering instabilities.

  3. A compact model for electroosmotic flows in microfluidic devices

    NASA Astrophysics Data System (ADS)

    Qiao, R.; Aluru, N. R.

    2002-09-01

    A compact model to compute flow rate and pressure in microfluidic devices is presented. The microfluidic flow can be driven by either an applied electric field or a combined electric field and pressure gradient. A step change in the ζ-potential on a channel wall is treated by a pressure source in the compact model. The pressure source is obtained from the pressure Poisson equation and conservation of mass principle. In the proposed compact model, the complex fluidic network is simplified by an electrical circuit. The compact model can predict the flow rate, pressure distribution and other basic characteristics in microfluidic channels quickly with good accuracy when compared to detailed numerical simulation. Using the compact model, fluidic mixing and dispersion control are studied in a complex microfluidic network.
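    The circuit analogy at the heart of the compact model can be sketched for the pressure-driven part of the flow: each channel acts as a hydraulic resistor, and series channels divide the applied pressure the way resistors divide voltage. The geometry and fluid values below are assumed; the electroosmotic driving terms and the ζ-potential-step pressure sources of the paper are omitted.

```python
import math

mu = 1e-3                             # viscosity of water, Pa*s

def R_channel(L, a):
    # Hydraulic resistance of a circular channel (Poiseuille): R = 8*mu*L/(pi*a^4)
    return 8 * mu * L / (math.pi * a ** 4)

# Two channels in series between inlet and grounded outlet (assumed geometry)
R1 = R_channel(L=0.01, a=50e-6)       # 1 cm long, 50 um radius
R2 = R_channel(L=0.02, a=25e-6)       # 2 cm long, 25 um radius
dP = 1e4                              # applied pressure difference, Pa

Q = dP / (R1 + R2)                    # same flow through both, like series current
P_junction = dP - Q * R1              # pressure divides like voltage across resistors
print(f"Q = {Q:.3e} m^3/s, junction pressure = {P_junction:.0f} Pa")
```

    The a⁴ dependence is what makes the circuit abstraction pay off: the narrow channel dominates the network, so a complex chip can be reduced to a few controlling resistances.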

  4. Assessment of different models for computing the probability of a clear line of sight

    NASA Astrophysics Data System (ADS)

    Bojin, Sorin; Paulescu, Marius; Badescu, Viorel

    2017-12-01

    This paper is focused on modeling the morphological properties of cloud fields in terms of the probability of a clear line of sight (PCLOS). PCLOS is defined as the probability that a line of sight between the observer and a given point of the celestial vault passes freely without intersecting a cloud. A variety of PCLOS models assuming hemispherical, semi-ellipsoidal, and ellipsoidal cloud shapes are tested. The effective parameters (cloud aspect ratio and absolute cloud fraction) are extracted from high-resolution series of sunshine number measurements. The performance of the PCLOS models is evaluated from the perspective of their ability to retrieve the point cloudiness. The advantages and disadvantages of the tested models are discussed, with the aim of a simplified parameterization of the PCLOS models.
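    The quantity itself can be made concrete with a Monte Carlo estimate for one assumed cloud shape. The cylindrical clouds and all parameter values below are illustrative and are not the paper's parameterizations; the slant ray is checked only at the base and top of the cloud layer as a crude approximation of the full path.

```python
import math, random

random.seed(1)
c, r, t, H, side = 0.3, 1.0, 0.5, 2.0, 50.0   # cloud fraction, radius, thickness,
                                              # base height, domain size (assumed)
n_clouds = int(c * side ** 2 / (math.pi * r ** 2))   # coverage ~ c before overlap
clouds = [(random.uniform(0, side), random.uniform(0, side)) for _ in range(n_clouds)]

def pclos(theta, trials=500):
    clear = 0
    for _ in range(trials):
        x0, y0 = random.uniform(0, side), random.uniform(0, side)
        # Ray position where it crosses the base and the top of the cloud layer
        pts = [(x0 + z * math.tan(theta), y0) for z in (H, H + t)]
        blocked = any(math.hypot(px - cx, py - cy) < r
                      for (px, py) in pts for (cx, cy) in clouds)
        clear += not blocked
    return clear / trials

print(f"PCLOS at zenith       ≈ {pclos(0.0):.2f}")
print(f"PCLOS at 60 deg slant ≈ {pclos(math.radians(60)):.2f}")
```

    Clouds of finite thickness present a larger footprint to slant rays, so PCLOS decreases away from the zenith, which is exactly the shape dependence the analytical models parameterize.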

  5. A review on locomotion robophysics: the study of movement at the intersection of robotics, soft matter and dynamical systems.

    PubMed

    Aguilar, Jeffrey; Zhang, Tingnan; Qian, Feifei; Kingsbury, Mark; McInroe, Benjamin; Mazouchova, Nicole; Li, Chen; Maladen, Ryan; Gong, Chaohui; Travers, Matt; Hatton, Ross L; Choset, Howie; Umbanhowar, Paul B; Goldman, Daniel I

    2016-11-01

    Discovery of fundamental principles which govern and limit effective locomotion (self-propulsion) is of intellectual interest and practical importance. Human technology has created robotic moving systems that excel in movement on and within environments of societal interest: paved roads, open air and water. However, such devices cannot yet robustly and efficiently navigate (as animals do) the enormous diversity of natural environments which might be of future interest for autonomous robots; examples include vertical surfaces like trees and cliffs, heterogeneous ground like desert rubble and brush, turbulent flows found near seashores, and deformable/flowable substrates like sand, mud and soil. In this review we argue for the creation of a physics of moving systems, a 'locomotion robophysics', which we define as the pursuit of principles of self-generated motion. Robophysics can provide an important intellectual complement to the discipline of robotics, largely the domain of researchers from engineering and computer science. The essential idea is that we must complement the study of complex robots in complex situations with systematic study of simplified robotic devices in controlled laboratory settings and in simplified theoretical models. We must thus use the methods of physics to examine both locomotor successes and failures using parameter space exploration, systematic control, and techniques from dynamical systems. Using examples from our and others' research, we will discuss how such robophysical studies have begun to aid engineers in the creation of devices that have begun to achieve life-like locomotor abilities on and within complex environments, have inspired interesting physics questions in low dimensional dynamical systems, geometric mechanics and soft matter physics, and have been useful to develop models for biological locomotion in complex terrain. The rapidly decreasing cost of constructing robot models with easy access to significant computational power bodes well for scientists and engineers to engage in a discipline which can readily integrate experiment, theory and computation.

  7. A stratospheric aerosol model with perturbations induced by the space shuttle particulate effluents

    NASA Technical Reports Server (NTRS)

    Rosen, J. M.; Hofmann, D. J.

    1977-01-01

    A one-dimensional steady-state stratospheric aerosol model is developed, and the perturbations caused by adding the expected space shuttle particulate effluents are examined. Two approaches to the basic modeling effort were taken: in one, enough simplifying assumptions were introduced that a more or less exact solution of the descriptive equations could be obtained; in the other, very few simplifications were made and a computer technique was used to solve the equations. The most complex form of the model includes the effects of sedimentation, diffusion, particle growth and coagulation. Results of the perturbation calculations show that there will probably be an immeasurably small increase in the stratospheric aerosol concentration for particles larger than about 0.15 micrometer radius.
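The "more or less exact solution" route can be illustrated on the simplest ingredient of such a model: a steady-state balance between eddy diffusion and sedimentation, which yields an exponential profile that a finite-difference solve reproduces. Parameter values below are illustrative, not the paper's.

```python
import numpy as np

# Steady-state balance of eddy diffusion (K) and sedimentation (settling speed w)
# with no sources: flux F = -K dn/dz - w n = const = 0, i.e. K n'' + w n' = 0,
# whose solution with n(0) = 1 is n(z) = exp(-w z / K). Values are illustrative.
K = 5.0      # eddy diffusivity, m^2/s
w = 1.0e-3   # sedimentation speed, m/s
H = 1.0e4    # layer depth, m
N = 200
z = np.linspace(0.0, H, N)
dz = z[1] - z[0]

# Central-difference discretization of K n'' + w n' = 0 with Dirichlet ends.
A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0] = 1.0; b[0] = 1.0                      # n(0) = 1 (normalized)
A[-1, -1] = 1.0; b[-1] = np.exp(-w * H / K)    # analytic value at the top
for i in range(1, N - 1):
    A[i, i - 1] = K / dz**2 - w / (2.0 * dz)
    A[i, i] = -2.0 * K / dz**2
    A[i, i + 1] = K / dz**2 + w / (2.0 * dz)
n = np.linalg.solve(A, b)
n_exact = np.exp(-w * z / K)
```

The "computer technique" side of the paper adds particle growth and coagulation terms to the same balance, at which point only a numerical solution remains practical.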

  8. Identification of Vehicle Axle Loads from Bridge Dynamic Responses

    NASA Astrophysics Data System (ADS)

    ZHU, X. Q.; LAW, S. S.

    2000-09-01

    A method is presented to identify moving loads on a bridge deck modelled as an orthotropic rectangular plate. The dynamic behavior of the bridge deck under moving loads is analyzed using orthotropic plate theory and the modal superposition principle, and the Tikhonov regularization procedure is applied to provide bounds on the identified forces in the time domain. The identified results using a beam model and a plate model of the bridge deck are compared, and the conditions under which the bridge deck can be simplified to an equivalent beam model are discussed. Computer simulations and laboratory tests show the effectiveness and validity of the proposed method in identifying forces travelling along the center line or along an eccentric path on the bridge deck.
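The regularization step can be sketched in isolation: Tikhonov regularization replaces an ill-posed least-squares inversion by a damped one. The system matrix below is a random ill-conditioned stand-in, not a plate or beam model of the deck.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tikhonov regularization: replace the ill-posed fit min ||A f - y||^2 by
#   f_lam = argmin ||A f - y||^2 + lam ||f||^2 = (A^T A + lam I)^{-1} A^T y.
# A is a random ill-conditioned stand-in for the discretized load-response map.
m, n = 120, 80
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)                  # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
f_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n))   # "force time history"
y = A @ f_true + 1e-4 * rng.standard_normal(m)      # noisy "measurements"

def tikhonov(A, y, lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

f_naive = np.linalg.lstsq(A, y, rcond=None)[0]  # unregularized: noise blows up
f_reg = tikhonov(A, y, lam=1e-6)
err_naive = np.linalg.norm(f_naive - f_true)
err_reg = np.linalg.norm(f_reg - f_true)
```

The damping is what provides the bounds on the identified forces, at the cost of a small bias; in practice the parameter lam is chosen by a criterion such as the L-curve or cross-validation.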

  9. Software for Brain Network Simulations: A Comparative Study

    PubMed Central

    Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.

    2017-01-01

    Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database (NEURON, GENESIS, and BRIAN), and perform an independent evaluation of them. In addition, we study NEST, one of the leading simulators of the Human Brain Project. First, we compare them on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigation of computational architecture and efficiency indicates that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability for high-performance computing reveals that NEST can almost transparently map an existing model onto a cluster or multicore computer, while NEURON requires code modification if a model developed for a single computer has to be mapped onto a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators are biased in computational performance toward specific types of brain network models. PMID:28775687
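The "large network with simplified neural and synaptic models" case can be illustrated by what these simulators abstract away: a leaky integrate-and-fire population with delta synapses, written directly in NumPy. All parameters below are generic assumptions, not the paper's benchmark configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Leaky integrate-and-fire network with delta synapses, the classic
# "simplified neural and synaptic model". All parameters are generic.
n, steps, dt = 400, 1000, 1e-4                 # neurons, time steps, step (s)
tau = 0.02                                     # membrane time constant, s
v_rest, v_th, v_reset = -65.0, -50.0, -65.0    # potentials, mV
w = rng.standard_normal((n, n)) * 0.3          # synaptic kicks, mV per spike
i_ext = 16.0                                   # constant external drive, mV

v = np.full(n, v_rest)
spike_count = 0
for _ in range(steps):
    spiked = v >= v_th
    spike_count += int(spiked.sum())
    v[spiked] = v_reset                        # reset after a spike
    v += w @ spiked.astype(float)              # delta synapses: instant kicks
    v += dt / tau * (v_rest - v + i_ext)       # leaky integration (Euler step)
```

A package like BRIAN expresses this model in a few equation strings, while NEURON and NEST wrap the same update in their own model-description layers, which is where the conciseness and performance differences observed in the case studies arise.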

  10. Computational modeling of radiofrequency ablation: evaluation on ex vivo data using ultrasound monitoring

    NASA Astrophysics Data System (ADS)

    Audigier, Chloé; Kim, Younsu; Dillow, Austin; Boctor, Emad M.

    2017-03-01

    Radiofrequency ablation (RFA) is the most widely used minimally invasive ablative therapy for liver cancer, but it is challenged by a lack of patient-specific monitoring. Inter-patient tissue variability and the presence of blood vessels make the outcome of RFA difficult to predict. A monitoring tool that can be personalized for a given patient during the intervention would help achieve complete tumor ablation. However, clinicians do not have access to such a tool, which results in incomplete treatment and a large number of recurrences. Computational models can simulate the phenomena and mechanisms governing this therapy: both the temperature evolution and the resulting ablation can be modeled. When combined with intraoperative measurements, computational modeling becomes an accurate and powerful tool to gain quantitative understanding and to enable improvements in ongoing clinical settings. This paper shows how computational models of RFA can be evaluated using intra-operative measurements. First, simulations are used to demonstrate the feasibility of the method, which is then evaluated on two ex vivo datasets. RFA is simulated on a simplified geometry to generate realistic longitudinal temperature maps and the resulting necrosis. Computed temperatures are compared with the temperature evolution recorded by thermometers, and with temperatures monitored by ultrasound (US) in a 2D plane containing the ablation tip. Two ablations were performed on two cadaveric bovine livers; we achieve an average error of 2.2 °C between the computed and thermistor temperatures, and average errors of 1.4 °C and 2.7 °C between the computed and US-monitored temperatures at two different time points during the ablation (t = 240 s and t = 900 s).
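Forward models of this kind are commonly based on the Pennes bioheat equation; a hedged one-dimensional explicit finite-difference sketch is given below. The Gaussian source stands in for the RF heating near the electrode tip, and all tissue parameters are illustrative, not the paper's.

```python
import numpy as np

# 1D Pennes bioheat equation, rho*c dT/dt = k d2T/dx2 - wb*cb*(T - T_body) + Q,
# solved by explicit finite differences; Q is a Gaussian stand-in for RF
# heating near the electrode. Tissue parameters are illustrative.
rho, c = 1060.0, 3600.0   # tissue density (kg/m^3) and heat capacity (J/kg/K)
k = 0.51                  # thermal conductivity, W/m/K
wb, cb = 0.5, 3600.0      # blood perfusion (kg/m^3/s) and blood heat capacity
T_body = 37.0             # deg C
Lx, N = 0.05, 101
dx = Lx / (N - 1)
dt = 0.4 * rho * c * dx**2 / (2.0 * k)     # safely below the explicit limit
x = np.linspace(0.0, Lx, N)
Q = 5.0e6 * np.exp(-(((x - Lx / 2) / 0.002) ** 2))  # heating, W/m^3

T = np.full(N, T_body)
for _ in range(int(240.0 / dt)):           # ~240 s of ablation
    lap = np.zeros(N)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt / (rho * c) * (k * lap - wb * cb * (T - T_body) + Q)
    T[0] = T[-1] = T_body                  # far boundaries held at body temp
T_peak = T.max()
```

Evaluation in the spirit of the paper then amounts to comparing such computed temperature histories against thermistor readings and US-monitored temperature maps at matching times.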

  11. Power flow in normal human voice production

    NASA Astrophysics Data System (ADS)

    Krane, Michael

    2016-11-01

    The principal mechanisms of energy utilization in voicing are quantified using a simplified model, in order to better define voice efficiency. A control volume analysis of energy utilization in phonation is presented to identify the energy transfer mechanisms in terms of their function. Conversion of subglottal airstream potential energy into useful work (vocal fold vibration, flow work, sound radiation) and into heat (sound radiation absorbed by the lungs, glottal jet dissipation) is described. An approximate numerical model is used to compute the contributions of each of these mechanisms, as a function of subglottal pressure, for normal phonation. Support from NIH Grant 2R01DC005642-10A1 is acknowledged.

  12. New computer system simplifies programming of mathematical equations

    NASA Technical Reports Server (NTRS)

    Reinfelds, J.; Seitz, R. N.; Wood, L. H.

    1966-01-01

    Automatic Mathematical Translator /AMSTRAN/ permits scientists or engineers to enter mathematical equations in their natural mathematical format and to obtain an immediate graphical display of the solution. This automatic-programming, on-line, multiterminal computer system allows experienced programmers to solve nonroutine problems.

  13. A simplified computer program for the prediction of the linear stability behavior of liquid propellant combustors

    NASA Technical Reports Server (NTRS)

    Mitchell, C. E.; Eckert, K.

    1979-01-01

    A program for predicting the linear stability of liquid propellant rocket engines is presented. The underlying model assumptions and the analytical steps necessary for understanding the program and its input and output are also given. The rocket engine is modeled as a right circular cylinder with a concentrated combustion zone at the injector, a nozzle, and finite mean flow; the combustion response is represented by either an acoustic admittance or the sensitive time-lag theory. The resulting partial differential equations are combined into two governing integral equations by use of the Green's function method. These equations are solved using a successive approximation technique for the small-amplitude (linear) case. The computational method used, as well as the various user options available, is discussed. Finally, a flow diagram, sample input and output for a typical application, and a complete program listing for program MODULE are presented.
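The successive-approximation strategy can be sketched on a generic Fredholm integral equation of the same shape as the Green's-function equations described here, u = g + lam*K[u]; the kernel, forcing and coupling constant below are toy stand-ins, not the combustor model.

```python
import numpy as np

# Successive approximation (Picard/Neumann iteration) for a generic Fredholm
# integral equation u(x) = g(x) + lam * \int_0^1 K(x, y) u(y) dy, discretized
# on a uniform grid. Kernel, forcing and lam are toy stand-ins.
N = 200
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
K = np.exp(-np.abs(x[:, None] - x[None, :]))   # toy symmetric kernel
g = np.sin(np.pi * x)
lam = 0.3                                      # small enough for contraction

u = g.copy()
for _ in range(200):
    u_next = g + lam * (K @ u) * dx            # apply the integral operator
    done = np.max(np.abs(u_next - u)) < 1e-12
    u = u_next
    if done:
        break
residual = np.max(np.abs(u - (g + lam * (K @ u) * dx)))
u_direct = np.linalg.solve(np.eye(N) - lam * dx * K, g)  # direct-solve check
```

The iteration converges when the norm of lam*K is below one; for stronger coupling the discretized linear system is solved directly, as the check above does.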

  14. Aggregative Learning Method and Its Application for Communication Quality Evaluation

    NASA Astrophysics Data System (ADS)

    Akhmetov, Dauren F.; Kotaki, Minoru

    2007-12-01

    In this paper, the so-called Aggregative Learning Method (ALM) is proposed to improve and simplify the learning and classification abilities of different data processing systems. It provides a universal basis for the design and analysis of a wide class of mathematical models. A procedure was elaborated for time series model reconstruction and analysis in the linear and nonlinear cases. Data approximation accuracy (during the learning phase) and data classification quality (during the recall phase) are estimated from the introduced statistical parameters. The validity and efficiency of the proposed approach have been demonstrated through its application to monitoring of wireless communication quality, namely, for a Fixed Wireless Access (FWA) system. The procedure was shown to require little memory and few computational resources, especially at the data classification (recall) stage. Characterized by high computational efficiency and a simple decision-making procedure, the derived approaches can be useful for simple and reliable real-time surveillance and control system design.

  15. Modeling cortical circuits.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohrer, Brandon Robinson; Rothganger, Fredrick H.; Verzi, Stephen J.

    2010-09-01

    The neocortex is perhaps the highest region of the human brain, where audio and visual perception takes place along with many important cognitive functions. An important research goal is to describe the mechanisms implemented by the neocortex. There is an apparent regularity in the structure of the neocortex [Brodmann 1909, Mountcastle 1957] which may help simplify this task. The work reported here addresses the problem of how to describe the putative repeated units ('cortical circuits') in a manner that is easily understood and manipulated, with the long-term goal of developing a mathematical and algorithmic description of their function. The approach is to reduce each algorithm to an enhanced perceptron-like structure and describe its computation using difference equations. We organize this algorithmic processing into larger structures based on physiological observations, and implement key modeling concepts in software which runs on parallel computing hardware.
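The "enhanced perceptron-like structure described by difference equations" idea can be sketched as a leaky-integrator unit: the activation obeys a first-order difference equation rather than an instantaneous weighted sum. The weights and inputs below are illustrative assumptions, not the report's actual circuit models.

```python
import numpy as np

rng = np.random.default_rng(0)

# A perceptron-like unit whose activation obeys a first-order difference
# equation (leaky integrator) instead of an instantaneous weighted sum.
class LeakyUnit:
    def __init__(self, n_in, alpha=0.2):
        self.w = rng.standard_normal(n_in) * 0.1   # input weights
        self.b = 0.0
        self.alpha = alpha    # integration rate of the difference equation
        self.a = 0.0          # internal activation state

    def step(self, x):
        # a[t+1] = (1 - alpha) * a[t] + alpha * (w . x + b)
        drive = self.w @ x + self.b
        self.a = (1.0 - self.alpha) * self.a + self.alpha * drive
        return 1.0 / (1.0 + np.exp(-self.a))       # sigmoid output

unit = LeakyUnit(n_in=4)
x = np.ones(4)
outputs = [unit.step(x) for _ in range(100)]
fixed_point = unit.w @ x + unit.b   # constant input: state -> perceptron drive
```

Stacking such units into layered sheets, with connectivity patterned on physiological observations, gives the kind of repeated "cortical circuit" building block the report describes.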

  16. Recombination of open-f-shell tungsten ions

    NASA Astrophysics Data System (ADS)

    Krantz, C.; Badnell, N. R.; Müller, A.; Schippers, S.; Wolf, A.

    2017-03-01

    We review experimental and theoretical efforts aimed at a detailed understanding of the recombination of electrons with highly charged tungsten ions characterised by an open 4f sub-shell. Highly charged tungsten occurs as a plasma contaminant in ITER-like tokamak experiments, where it acts as an unwanted cooling agent. Modelling of the charge state populations in a plasma requires reliable thermal rate coefficients for charge-changing electron collisions. The electron recombination of medium-charged tungsten species with open 4f sub-shells is especially challenging to compute reliably. Storage-ring experiments have been conducted that yielded recombination rate coefficients at high energy resolution and well-understood systematics. Significant deviations compared to simplified, but prevalent, computational models have been found. A new class of ab initio numerical calculations has been developed that provides reliable predictions of the total plasma recombination rate coefficients for these ions.

  17. Riemannian geometry of Hamiltonian chaos: hints for a general theory.

    PubMed

    Cerruti-Sola, Monica; Ciraolo, Guido; Franzosi, Roberto; Pettini, Marco

    2008-10-01

    We aim at assessing the validity limits of some simplifying hypotheses that, within a Riemannian geometric framework, have provided an explanation of the origin of Hamiltonian chaos and have made it possible to develop a method of analytically computing the largest Lyapunov exponent of Hamiltonian systems with many degrees of freedom. To this end, numerical hypothesis testing has been performed for the Fermi-Pasta-Ulam beta model and for a chain of coupled rotators. These models, for which analytic computations of the largest Lyapunov exponents have been carried out in the mentioned Riemannian geometric framework, appear as paradigmatic examples for unveiling why the main hypothesis of quasi-isotropy of the mechanical manifolds sometimes breaks down. The breakdown is expected whenever the topology of the mechanical manifolds is nontrivial. This is an important step forward in view of developing a geometric theory of Hamiltonian chaos of general validity.
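The conventional numerical route against which the geometric method is compared is the Benettin tangent-dynamics algorithm. For brevity, the sketch below applies it to the symplectic Chirikov standard map rather than the FPU-beta or rotator chains treated in the paper; the large-K estimate ln(K/2) serves as a sanity check.

```python
import numpy as np

# Benettin algorithm: evolve a tangent vector with the trajectory and average
# its logarithmic growth. Applied here to the symplectic Chirikov standard map
# (not the FPU-beta chain of the paper); for large K, lambda ~ ln(K/2).
def largest_lyapunov(K, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    p, q = rng.uniform(0.0, 2.0 * np.pi, 2)
    v = np.array([1.0, 0.0])                 # tangent vector
    total = 0.0
    for _ in range(steps):
        c = K * np.cos(q)                    # Jacobian entry at the pre-step q
        p = p + K * np.sin(q)                # standard map: p' = p + K sin q
        q = (q + p) % (2.0 * np.pi)          #               q' = q + p'
        J = np.array([[1.0, c],
                      [1.0, 1.0 + c]])       # d(p', q')/d(p, q), det J = 1
        v = J @ v
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v = v / norm                         # renormalize to avoid overflow
    return total / steps

lam_chaotic = largest_lyapunov(6.0)   # strongly chaotic regime
lam_regular = largest_lyapunov(0.1)   # mostly regular: exponent near zero
```

The geometric method of the paper replaces this trajectory average by curvature averages over the mechanical manifold, which is exactly where the quasi-isotropy hypothesis enters.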

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luszczek, Piotr R; Tomov, Stanimire Z; Dongarra, Jack J

    We present an efficient and scalable programming model for the development of linear algebra in heterogeneous multi-coprocessor environments. The model incorporates some of the current best design and implementation practices for the heterogeneous acceleration of dense linear algebra (DLA). Examples are given for the basic algorithms used in solving linear systems: the LU, QR, and Cholesky factorizations. To generate the extreme level of parallelism needed for the efficient use of coprocessors, the algorithms of interest are redesigned and then split into well-chosen computational tasks. Task execution is scheduled over the computational components of a hybrid system of multi-core CPUs and coprocessors using a light-weight runtime system. The use of lightweight runtime systems keeps scheduling overhead low, while enabling the expression of parallelism through otherwise sequential code. This simplifies the development effort and allows exploration of the unique strengths of the various hardware components.
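The "split into well-chosen computational tasks" idea can be sketched for one of the cited factorizations: a right-looking tiled Cholesky, in which each POTRF/TRSM/SYRK/GEMM tile operation is the kind of task such a runtime would schedule. Here the "runtime" is just a sequential loop, and the tile size is arbitrary.

```python
import numpy as np

# Right-looking tiled Cholesky: each POTRF (diagonal factor), TRSM (panel
# solve) and SYRK/GEMM (trailing update) on a tile is a schedulable task.
def tiled_cholesky(A, nb):
    """Return lower-triangular L with A = L @ L.T; nb is the tile size."""
    n = A.shape[0]
    L = np.tril(A).astype(float)
    for k in range(0, n, nb):
        ke = min(k + nb, n)
        # POTRF task: factor the diagonal tile (only its lower triangle is read)
        L[k:ke, k:ke] = np.linalg.cholesky(L[k:ke, k:ke])
        for i in range(ke, n, nb):
            ie = min(i + nb, n)
            # TRSM task: L_ik <- A_ik * L_kk^{-T}
            L[i:ie, k:ke] = np.linalg.solve(L[k:ke, k:ke], L[i:ie, k:ke].T).T
        for i in range(ke, n, nb):
            ie = min(i + nb, n)
            for j in range(ke, ie, nb):
                je = min(j + nb, n)
                # SYRK (i == j) / GEMM task: trailing-matrix update
                L[i:ie, j:je] -= L[i:ie, k:ke] @ L[j:je, k:ke].T
    return np.tril(L)

rng = np.random.default_rng(5)
M = rng.standard_normal((9, 9))
A = M @ M.T + 9.0 * np.eye(9)       # symmetric positive definite test matrix
L = tiled_cholesky(A, nb=4)
```

In a real runtime, each tile task carries its data dependencies (tile (i,k) must be TRSM'd before it feeds a GEMM), and the scheduler maps independent tasks onto CPU cores and coprocessors concurrently.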

  19. Estimating the timing and location of shallow rainfall-induced landslides using a model for transient, unsaturated infiltration

    USGS Publications Warehouse

    Baum, Rex L.; Godt, Jonathan W.; Savage, William Z.

    2010-01-01

    Shallow rainfall-induced landslides commonly occur under conditions of transient infiltration into initially unsaturated soils. In an effort to predict the timing and location of such landslides, we developed a model of the infiltration process using a two-layer system that consists of an unsaturated zone above a saturated zone and implemented this model in a geographic information system (GIS) framework. The model links analytical solutions for transient, unsaturated, vertical infiltration above the water table to pressure-diffusion solutions for pressure changes below the water table. The solutions are coupled through a transient water table that rises as water accumulates at the base of the unsaturated zone. This scheme, though limited to simplified soil-water characteristics and moist initial conditions, greatly improves computational efficiency over numerical models in spatially distributed modeling applications. Pore pressures computed by these coupled models are subsequently used in one-dimensional slope-stability computations to estimate the timing and locations of slope failures. Applied over a digital landscape near Seattle, Washington, for an hourly rainfall history known to trigger shallow landslides, the model computes a factor of safety for each grid cell at any time during a rainstorm. The unsaturated layer attenuates and delays the rainfall-induced pore-pressure response of the model at depth, consistent with observations at an instrumented hillside near Edmonds, Washington. This attenuation results in realistic estimates of timing for the onset of slope instability (7 h earlier than observed landslides, on average). By considering the spatial distribution of physical properties, the model predicts the primary source areas of landslides.
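The one-dimensional slope-stability step is the infinite-slope factor of safety evaluated from the computed pressure head; a sketch using the standard form of that formula follows. The soil parameters below are illustrative, not the Seattle-area calibration.

```python
import numpy as np

# Infinite-slope factor of safety from the pressure head psi at depth Z,
# the standard closing step for grid-based models of this kind:
#   FS = tan(phi)/tan(beta)
#        + [c - psi*gamma_w*tan(phi)] / (gamma_s*Z*sin(beta)*cos(beta))
# Soil parameters are illustrative.
def factor_of_safety(psi, Z, beta_deg, c=4.0e3, phi_deg=33.0,
                     gamma_s=20.0e3, gamma_w=9.81e3):
    """psi: pressure head (m) at depth Z (m); beta_deg: slope angle (deg)."""
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    return (np.tan(phi) / np.tan(beta)
            + (c - psi * gamma_w * np.tan(phi))
            / (gamma_s * Z * np.sin(beta) * np.cos(beta)))

fs_dry = factor_of_safety(psi=0.0, Z=2.0, beta_deg=35.0)   # stable (FS > 1)
fs_wet = factor_of_safety(psi=1.5, Z=2.0, beta_deg=35.0)   # rising water table
```

A cell is predicted to fail when FS drops below one, so the timing of instability follows directly from the computed pore-pressure history psi(Z, t) at each grid cell.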

  20. Statistical linearization for multi-input/multi-output nonlinearities

    NASA Technical Reports Server (NTRS)

    Lin, Ching-An; Cheng, Victor H. L.

    1991-01-01

    Formulas are derived for the computation of the random input-describing functions for MIMO nonlinearities; these straightforward and rigorous derivations are based on the optimal mean square linear approximation. The computations involve evaluations of multiple integrals. It is shown that, for certain classes of nonlinearities, multiple-integral evaluations are obviated and the computations are significantly simplified.
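The optimal mean-square linear approximation underlying these describing functions is N = E[f(x) x^T] (E[x x^T])^{-1} for zero-mean Gaussian input. A Monte Carlo sketch for an illustrative two-input/two-output cubic nonlinearity, whose closed form is 3*sigma^2 per channel, follows.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random-input describing function = optimal mean-square linear gain
#   N = E[f(x) x^T] (E[x x^T])^{-1}   for zero-mean Gaussian input x.
# Monte Carlo estimate for an illustrative decoupled cubic nonlinearity,
# f_i(x) = x_i^3, whose exact gain matrix is diag(3 * sigma_i^2).
def describing_gain(f, cov, samples=200000):
    x = rng.multivariate_normal(np.zeros(cov.shape[0]), cov, size=samples)
    fx = f(x)
    Rfx = fx.T @ x / samples      # E[f(x) x^T]
    Rxx = x.T @ x / samples       # E[x x^T]
    return Rfx @ np.linalg.inv(Rxx)

sigma2 = np.array([1.0, 0.25])
N_gain = describing_gain(lambda x: x**3, np.diag(sigma2))
```

For nonlinearities with this kind of separable structure, the multiple integrals reduce to closed forms (here 3*sigma_i^2 per channel, from E[x^4] = 3*sigma^4), which is the simplification the paper establishes for certain classes of nonlinearities.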
