Model-Invariant Hybrid Computations of Separated Flows for RCA Standard Test Cases
NASA Technical Reports Server (NTRS)
Woodruff, Stephen
2016-01-01
NASA's Revolutionary Computational Aerosciences (RCA) subproject has identified several smooth-body separated flows as standard test cases to emphasize the challenge these flows present for computational methods and their importance to the aerospace community. Results of computations of two of these test cases, the NASA hump and the FAITH experiment, are presented. The computations were performed with the model-invariant hybrid LES-RANS formulation, implemented in the NASA code VULCAN-CFD. The model-invariant formulation employs gradual LES-RANS transitions and compensation for model variation to provide more accurate and efficient hybrid computations. Comparisons revealed that the LES-RANS transitions employed in these computations were sufficiently gradual that the compensating terms were unnecessary. Agreement with experiment was achieved only after reducing the turbulent viscosity to mitigate the effect of numerical dissipation. The stream-wise evolution of peak Reynolds shear stress was employed as a measure of turbulence dynamics in separated flows, useful for evaluating computations.
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
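As an illustration of the identification step, the sketch below shows how the Eigensystem Realization Algorithm recovers a discrete-time state-space model from Markov parameters. It is a generic SISO implementation in NumPy, not the code used in the paper; the Hankel dimensions and model order are arbitrary choices.

```python
import numpy as np

def era(markov, n_states, rows=20, cols=20):
    """Eigensystem Realization Algorithm (generic SISO sketch).

    markov: Markov parameters y[1], y[2], ... (impulse response);
    requires len(markov) >= rows + cols. Returns discrete-time (A, B, C).
    """
    # Block-Hankel matrices built from the Markov parameters.
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])

    # Truncated SVD fixes the order of the realized model.
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    U, s, Vt = U[:, :n_states], s[:n_states], Vt[:n_states, :]
    S_sqrt, S_isqrt = np.diag(np.sqrt(s)), np.diag(1.0 / np.sqrt(s))

    A = S_isqrt @ U.T @ H1 @ Vt.T @ S_isqrt
    B = (S_sqrt @ Vt)[:, :1]   # first column: single input
    C = (U @ S_sqrt)[:1, :]    # first row: single output
    return A, B, C
```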
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1993-01-01
Computations are presented for one-dimensional, strong shock waves that are typical of those that form in front of a reentering spacecraft. The fluid mechanics and thermochemistry are modeled using two different approaches. The first employs traditional continuum techniques in solving the Navier-Stokes equations. The second approach employs a particle simulation technique (the direct simulation Monte Carlo method, DSMC). The thermochemical models employed in these two techniques are quite different. The present investigation provides an evaluation of thermochemical models for nitrogen under hypersonic flow conditions. Four separate cases are considered. The cases are governed, respectively, by the following: vibrational relaxation; weak dissociation; strong dissociation; and weak ionization. In near-continuum, hypersonic flow, the nonequilibrium thermochemical models employed in continuum and particle simulations produce nearly identical solutions. Further, the two approaches are evaluated successfully against available experimental data for weakly and strongly dissociating flows.
NASA Astrophysics Data System (ADS)
Hanna, Philip; Allen, Angela; Kane, Russell; Anderson, Neil; McGowan, Aidan; Collins, Matthew; Hutchison, Malcolm
2015-07-01
This paper outlines a means of improving the employability skills of first-year university students through a closely integrated model of employer engagement within computer science modules. The outlined approach illustrates how employability skills, including communication, teamwork and time management skills, can be contextualised in a manner that directly relates to student learning but can still be linked forward into employment. The paper tests the premise that developing employability skills early within the curriculum will result in improved student engagement and learning within later modules. The paper concludes that embedding employer participation within first-year modules can help relate a distant notion of employability into something of more immediate relevance in terms of how students can best approach learning. Further, by enhancing employability skills early within the curriculum, it becomes possible to improve academic attainment within later modules.
ERIC Educational Resources Information Center
Chan, Kit Yu Karen; Yang, Sylvia; Maliska, Max E.; Grunbaum, Daniel
2012-01-01
The National Science Education Standards have highlighted the importance of active learning and reflection for contemporary scientific methods in K-12 classrooms, including the use of models. Computer modeling and visualization are tools that researchers employ in their scientific inquiry process, and often computer models are used in…
Recruitment of Foreigners in the Market for Computer Scientists in the United States
Bound, John; Braga, Breno; Golden, Joseph M.
2016-01-01
We present and calibrate a dynamic model that characterizes the labor market for computer scientists. In our model, firms can recruit computer scientists from recently graduated college students, from STEM workers working in other occupations, or from a pool of foreign talent. Counterfactual simulations suggest that wages for computer scientists would have been 2.8–3.8% higher, and the number of Americans employed as computer scientists would have been 7.0–13.6% higher, in 2004 if firms could not hire more foreigners than they could in 1994. In contrast, total CS employment would have been 3.8–9.0% lower, and consequently output would have been smaller. PMID:27170827
A distributed computing model for telemetry data processing
NASA Astrophysics Data System (ADS)
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-05-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
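A conceptual sketch of such a hybrid information-sharing protocol is given below; the message format, function names, and socket usage are invented for illustration and do not reflect the actual JSC protocol.

```python
import json
import socket

# Hypothetical message format: a telemetry parameter name, its value,
# and the node that produced it.
def pack(param, value, src):
    return json.dumps({"param": param, "value": value, "src": src}).encode()

# Client-server path: a central server relays processed telemetry to
# subscribed flight-controller workstations.
def publish(sock, subscribers, param, value):
    for addr in subscribers:
        sock.sendto(pack(param, value, "server"), addr)

# Peer-to-peer path: any workstation answers queries for data it already
# holds, supporting playback, training, and development uses without
# loading the central server.
def answer_query(sock, cache):
    msg, peer = sock.recvfrom(4096)
    name = msg.decode()
    if name in cache:
        sock.sendto(pack(name, cache[name], "peer"), peer)
```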
A demonstrative model of a lunar base simulation on a personal computer
NASA Technical Reports Server (NTRS)
1985-01-01
The initial demonstration model of a lunar base simulation is described. This initial model was developed on the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to build the demonstration model on a personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model are planned in the near future.
Molecular Modeling of Environmentally Important Processes: Reduction Potentials
ERIC Educational Resources Information Center
Lewis, Anne; Bumpus, John A.; Truhlar, Donald G.; Cramer, Christopher J.
2004-01-01
The increasing use of computational quantum chemistry in the modeling of environmentally important processes is described. The employment of computational quantum mechanics for the prediction of oxidation-reduction potential for solutes in an aqueous medium is discussed.
A New Formulation for Hybrid LES-RANS Computations
NASA Technical Reports Server (NTRS)
Woodruff, Stephen L.
2013-01-01
Ideally, a hybrid LES-RANS computation would employ LES only where necessary to make up for the failure of the RANS model to provide sufficient accuracy or to provide time-dependent information. Current approaches are fairly restrictive in the placement of LES and RANS regions; an LES-RANS transition in a boundary layer, for example, yields an unphysical log-layer shift. A hybrid computation is formulated here to allow greater control over the placement of LES and RANS regions and the transitions between them. The concept of model invariance is introduced, which provides a basis for interpreting hybrid results within an LES-RANS transition zone. Consequences of imposing model invariance include the addition of terms to the governing equations that compensate for unphysical gradients created as the model changes between RANS and LES. Computational results illustrate the increased accuracy of the approach and its insensitivity to the location of the transition and to the blending function employed.
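A minimal sketch of the blending idea, assuming a smooth tanh transition function; the specific function is a hypothetical choice, and the paper's point is that its results are insensitive to it:

```python
import numpy as np

def blend(x, x0, delta):
    """Blending function rising from 0 (pure RANS) to 1 (pure LES)
    over a transition zone of width delta centered at x0."""
    return 0.5 * (1.0 + np.tanh(4.0 * (x - x0) / delta))

def hybrid_eddy_viscosity(nu_rans, nu_les, x, x0, delta):
    """Hybrid model viscosity: RANS upstream, LES downstream,
    with a gradual transition between the two regions."""
    f = blend(x, x0, delta)
    return (1.0 - f) * nu_rans + f * nu_les
```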
On the Solution of the Three-Dimensional Flowfield About a Flow-Through Nacelle. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Compton, William Bernard
1985-01-01
The solution of the three-dimensional flow field for a flow-through nacelle was studied. Both inviscid and viscous-inviscid interacting solutions were examined. Inviscid solutions were obtained with two different computational procedures for solving the three-dimensional Euler equations. The first procedure employs an alternating direction implicit numerical algorithm, and required the development of a complete computational model for the nacelle problem. The second computational technique employs a fourth-order Runge-Kutta numerical algorithm which was modified to fit the nacelle problem. Viscous effects on the flow field were evaluated with a viscous-inviscid interacting computational model. This model was constructed by coupling the explicit Euler solution procedure with a lag-entrainment boundary layer solution procedure in a global iteration scheme. The computational techniques were used to compute the flow field for a long-duct turbofan engine nacelle at free stream Mach numbers of 0.80 and 0.94 and angles of attack of 0 and 4 deg.
ERIC Educational Resources Information Center
Sins, Patrick H. M.; Savelsbergh, Elwin R.; van Joolingen, Wouter R.
2005-01-01
Although computer modelling is widely advocated as a way to offer students a deeper understanding of complex phenomena, the process of modelling is rather complex itself and needs scaffolding. In order to offer adequate support, a thorough understanding of the reasoning processes students employ and of difficulties they encounter during a…
Noise Estimation in Electroencephalogram Signal by Using Volterra Series Coefficients
Hassani, Malihe; Karami, Mohammad Reza
2015-01-01
The Volterra model is widely used for nonlinearity identification in practical applications. In this paper, we employed the Volterra model to find the nonlinear relation between the electroencephalogram (EEG) signal and its noise, which is a novel approach to estimating noise in the EEG signal. We show that by employing this method, we can considerably improve the signal-to-noise ratio by a ratio of at least 1.54. An important issue in implementing the Volterra model is its computational complexity, especially when the degree of nonlinearity is increased. Hence, in many applications it is essential to reduce the complexity of computation. In this paper, we use the properties of the EEG signal and propose a new and good approximation of the delayed input signal by its adjacent samples in order to reduce the computation of finding the Volterra series coefficients. The computational complexity is reduced by a ratio of at least 1/3 when the filter memory is 3. PMID:26284176
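For reference, a minimal second-order Volterra identification by least squares is sketched below in NumPy; this is the generic approach, without the adjacent-sample approximation that the paper introduces to cut the cost.

```python
import numpy as np

def volterra2_features(x, memory=3):
    """Linear plus quadratic Volterra kernel features for each sample."""
    rows = []
    for n in range(memory, len(x)):
        lin = [x[n - i] for i in range(memory)]
        quad = [x[n - i] * x[n - j] for i in range(memory) for j in range(i, memory)]
        rows.append(lin + quad)
    return np.array(rows)

def fit_volterra(x, y, memory=3):
    """Least-squares estimate of the Volterra series coefficients."""
    Phi = volterra2_features(x, memory)
    coeffs, *_ = np.linalg.lstsq(Phi, y[memory:], rcond=None)
    return coeffs
```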
A locally p-adaptive approach for Large Eddy Simulation of compressible flows in a DG framework
NASA Astrophysics Data System (ADS)
Tugnoli, Matteo; Abbà, Antonella; Bonaventura, Luca; Restelli, Marco
2017-11-01
We investigate the possibility of reducing the computational burden of LES models by employing local polynomial degree adaptivity in the framework of a high-order DG method. A novel degree adaptation technique, designed specifically to be effective for LES applications, is proposed, and its effectiveness is compared to that of other criteria already employed in the literature. The resulting locally adaptive approach achieves significant reductions in the computational cost of representative LES computations.
Numerical Analysis of Crack Tip Plasticity and History Effects under Mixed Mode Conditions
NASA Astrophysics Data System (ADS)
Lopez-Crespo, Pablo; Pommier, Sylvie
The plastic behaviour in the crack tip region has a strong influence on the fatigue life of engineering components. In general, residual stresses developed as a consequence of the plasticity being constrained around the crack tip have a significant role on both the direction of crack propagation and the propagation rate. Finite element methods (FEM) are commonly employed in order to model plasticity. However, if millions of cycles need to be modelled to predict the fatigue behaviour of a component, the method becomes computationally too expensive. By employing a multiscale approach, very precise analyses computed by FEM can be brought to a global scale. The data generated using the FEM enable us to identify a global cyclic elastic-plastic model for the crack tip region. Once this model is identified, it can be employed directly, with no need for additional FEM computations, resulting in fast computations. This is done by partitioning local displacement fields computed by FEM into intensity factors (global data) and spatial fields. A Karhunen-Loève algorithm developed for image processing was employed for this purpose. In addition, the partitioning is done so as to distinguish elastic and plastic components. Each of them is further divided into opening-mode and shear-mode parts. The plastic flow direction was determined with the above approach on a centre-cracked panel subjected to a wide range of mixed-mode loading conditions. It was found to agree well with the maximum tangential stress criterion developed by Erdogan and Sih, provided that the loading direction is corrected for residual stresses. In this approach, residual stresses are measured at the global scale through internal intensity factors.
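The displacement-field partitioning rests on a Karhunen-Loève (proper orthogonal) decomposition; a minimal sketch of that step via the SVD is given below, with hypothetical array names, leaving out the paper's further split into elastic/plastic and opening/shear parts.

```python
import numpy as np

def kl_decompose(snapshots, n_modes):
    """Karhunen-Loeve decomposition of FEM displacement snapshots.

    snapshots: (n_points, n_steps) array, one displacement field per column.
    Returns spatial modes and their time-varying intensity factors.
    """
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :n_modes]                           # spatial fields
    intensities = s[:n_modes, None] * Vt[:n_modes]   # global data
    return modes, intensities
```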
Procedures for the computation of unsteady transonic flows including viscous effects
NASA Technical Reports Server (NTRS)
Rizzetta, D. P.
1982-01-01
Modifications of the code LTRAN2, developed by Ballhaus and Goorjian, which account for viscous effects in the computation of planar unsteady transonic flows are presented. Two models are considered and their theoretical development and numerical implementation is discussed. Computational examples employing both models are compared with inviscid solutions and with experimental data. Use of the modified code is described.
Computing Support for Basic Research in Perception and Cognition
1988-12-07
hearing aids and cochlear implants, this suggests that certain types of proposed coding schemes, specifically those employing periodicity tuning in... developing a computer model of the interaction of declarative and procedural knowledge in skill acquisition. In the Visual Psychophysics Laboratory... Psycholinguistics Laboratory a computer model of text comprehension and recall has been constructed and several experiments have been completed that verify basic
NASA Astrophysics Data System (ADS)
Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid
2017-12-01
Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ∼600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ∼0.25 s/excitation source.
Carney, Timothy Jay; Morgan, Geoffrey P.; Jones, Josette; McDaniel, Anna M.; Weaver, Michael; Weiner, Bryan; Haggstrom, David A.
2014-01-01
Our conceptual model demonstrates our goal to investigate the impact of clinical decision support (CDS) utilization on cancer screening improvement strategies in the community health care (CHC) setting. We employed a dual modeling technique using both statistical and computational modeling to evaluate impact. Our statistical model used the Spearman's Rho test to evaluate the strength of the relationship between our proximal outcome measures (CDS utilization) and our distal outcome measure (provider self-reported cancer screening improvement). Our computational model relied on network evolution theory and made use of a tool called Construct-TM to model the use of CDS as measured by the rate of organizational learning. We employed previously collected survey data from community health centers participating in the Health Disparities Cancer Collaborative (HDCC). Our intent is to demonstrate the added value gained by using a computational modeling tool in conjunction with a statistical analysis when evaluating the impact of a health information technology, in the form of CDS, on health care quality process outcomes such as facility-level screening improvement. Significant simulated disparities in organizational learning over time were observed between community health centers beginning the simulation with high and low clinical decision support capability. PMID:24953241
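A minimal sketch of the statistical arm of this dual approach, using SciPy's spearmanr on hypothetical facility-level data; the variable names and values are invented for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-facility CDS utilization scores (proximal measure) and
# self-reported screening improvement ratings (distal measure).
cds_utilization = np.array([0.2, 0.5, 0.4, 0.9, 0.7, 0.3])
screening_improvement = np.array([1, 3, 2, 5, 4, 2])

rho, p_value = spearmanr(cds_utilization, screening_improvement)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```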
Investigation of supersonic jet plumes using an improved two-equation turbulence model
NASA Technical Reports Server (NTRS)
Lakshmanan, B.; Abdol-Hamid, Khaled S.
1994-01-01
Supersonic jet plumes were studied using a two-equation turbulence model employing corrections for compressible dissipation and pressure-dilatation. A space-marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that two-equation models employing corrections for compressible dissipation and pressure-dilatation yield improved agreement with the experimental data. In addition, the numerical study demonstrates that the computed results are sensitive to the effect of grid refinement and insensitive to the type of velocity profiles used at the inflow boundary for the cases considered in the present study.
Evaluation of the chondral modeling theory using FE-simulation and numeric shape optimization
Plochocki, Jeffrey H; Ward, Carol V; Smith, Douglas E
2009-01-01
The chondral modeling theory proposes that hydrostatic pressure within articular cartilage regulates joint size, shape, and congruence through regional variations in rates of tissue proliferation. The purpose of this study is to develop a computational model using a nonlinear two-dimensional finite element analysis in conjunction with numeric shape optimization to evaluate the chondral modeling theory. The model employed in this analysis is generated from an MR image of the medial portion of the tibiofemoral joint in a subadult male. Stress-regulated morphological changes are simulated until skeletal maturity and evaluated against the chondral modeling theory. The computed results are found to support the chondral modeling theory. The shape-optimized model exhibits increased joint congruence, broader stress distributions in articular cartilage, and a relative decrease in joint diameter. The results for the computational model correspond well with experimental data and provide valuable insights into the mechanical determinants of joint growth. The model also provides a crucial first step toward developing a comprehensive model that can be employed to test the influence of mechanical variables on joint conformation. PMID:19438771
Soft computing techniques toward modeling the water supplies of Cyprus.
Iliadis, L; Maris, F; Tachos, S
2011-10-01
This research effort aims at applying soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-Regression Support Vector Machine (ε-RSVM) and fuzzy weighted ε-RSVM models have been developed that accept five input parameters. At the same time, reliable artificial neural networks have been developed to perform the same job. The 5-fold cross validation approach has been employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, the fuzzy weighted Support Vector Regression (SVR) combined with the fuzzy partition has been employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
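A minimal sketch of ε-SVR with 5-fold cross validation in scikit-learn; the synthetic five-parameter inputs and targets below are placeholders for the watershed data described in the abstract:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

# Synthetic stand-ins for the five watershed input parameters and the
# water-supply target (placeholder data, not the Cyprus measurements).
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = X @ np.array([0.3, 0.1, 0.25, 0.2, 0.15]) + 0.05 * rng.standard_normal(200)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
errors = []
for train_idx, test_idx in kf.split(X):
    model = SVR(kernel="rbf", epsilon=0.01).fit(X[train_idx], y[train_idx])
    errors.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
print("5-fold mean MSE:", np.mean(errors))
```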
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
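The core data-reduction idea can be illustrated on a plain linear least-squares problem; this NumPy sketch applies a Gaussian sketching matrix to compress the observations before inversion. The paper's RGA embeds this step within the principal component geostatistical approach, which is not reproduced here.

```python
import numpy as np

def sketched_least_squares(G, d, k, seed=0):
    """Solve min ||G m - d|| after projecting with a k-row Gaussian sketch.

    G: forward operator (n_obs x n_params); d: observations.
    Reduces an n_obs-sized problem to a k-sized one, with k << n_obs.
    """
    rng = np.random.default_rng(seed)
    n_obs = G.shape[0]
    S = rng.standard_normal((k, n_obs)) / np.sqrt(k)  # sketching matrix
    m, *_ = np.linalg.lstsq(S @ G, S @ d, rcond=None)
    return m
```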
Performance Models for Split-execution Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Schrock, Jonathan
Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
Computational Electromagnetic Modeling of SansEC™ Sensors
NASA Technical Reports Server (NTRS)
Smith, Laura J.; Dudley, Kenneth L.; Szatkowski, George N.
2011-01-01
This paper describes the preliminary effort to apply computational design tools to aid in the development of an electromagnetic SansEC resonant sensor composite materials damage detection system. The computational methods and models employed on this research problem will evolve in complexity over time and will lead to the development of new computational methods and experimental sensor systems that demonstrate the capability to detect, diagnose, and monitor the damage of composite materials and structures on aerospace vehicles.
Management Sciences Division Annual Report (10th)
1993-01-01
of the Weapon System Management Information System (WSMIS). The Aircraft Sustainability Model (ASM) is the computational technique employed by... provisioning. We enhanced the capabilities of RBIRD by using the Aircraft Sustainability Model (ASM) for the spares calculation. ASM offers many... ASM for several years to compute spares for war. It is also fully compatible with the Air Force's peacetime spares computation system (D041). This
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowery, P.S.; Lessor, D.L.
Waste glass melter and in situ vitrification (ISV) processes represent the combination of electrical, thermal, and fluid flow phenomena to produce a stable waste-form product. Computational modeling of the thermal and fluid flow aspects of these processes provides a useful tool for assessing the potential performance of proposed system designs. These computations can be performed at a fraction of the cost of experiment. Consequently, computational modeling of vitrification systems can also provide an economical means for assessing the suitability of a proposed process application. The computational model described in this paper employs finite difference representations of the basic continuum conservation laws governing the thermal, fluid flow, and electrical aspects of the vitrification process -- i.e., conservation of mass, momentum, energy, and electrical charge. The resulting code is a member of the TEMPEST family of codes developed at the Pacific Northwest Laboratory (operated by Battelle for the US Department of Energy). This paper provides an overview of the numerical approach employed in TEMPEST. In addition, results from several TEMPEST simulations of sample waste glass melter and ISV processes are provided to illustrate the insights to be gained from computational modeling of these processes. 3 refs., 13 figs.
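As a schematic of the finite-difference approach to such conservation laws, the sketch below advances the simplest member of the family, 2-D heat conduction, by one explicit time step; it is an illustrative toy, not TEMPEST's discretization.

```python
import numpy as np

def heat_step(T, alpha, dx, dt):
    """One explicit finite-difference step of the 2-D heat equation
    (energy conservation) on a periodic grid; alpha is the thermal
    diffusivity. Stable when dt <= dx**2 / (4 * alpha)."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    return T + dt * alpha * lap
```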
ERIC Educational Resources Information Center
Singh, Gurmukh
2012-01-01
The present article is primarily targeted for the advanced college/university undergraduate students of chemistry/physics education, computational physics/chemistry, and computer science. A recent software system, MS Visual Studio .NET version 2010, is employed to perform computer simulations for modeling Bohr's quantum theory of…
Blood Flow in Idealized Vascular Access for Hemodialysis: A Review of Computational Studies.
Ene-Iordache, Bogdan; Remuzzi, Andrea
2017-09-01
Although our understanding of the failure mechanism of vascular access for hemodialysis has increased substantially, this knowledge has not translated into successful therapies. Despite advances in technology, it is recognized that vascular access is difficult to maintain, due to complications such as intimal hyperplasia. Computational studies have been used to estimate hemodynamic changes induced by vascular access creation. Due to the heterogeneity of patient-specific geometries, and difficulties with obtaining reliable models of access vessels, idealized models were often employed. In this review we analyze the knowledge gained with the use of such simplified computational models. A review of the literature was conducted, considering studies employing a computational fluid dynamics approach to gain insights into the flow field phenotype that develops in idealized models of vascular access. Several important discoveries have originated from idealized model studies, including the detrimental role of disturbed flow and turbulent flow, and the beneficial role of spiral flow in intimal hyperplasia. The general flow phenotype was consistent among studies, but findings were not treated homogeneously since they paralleled achievements in cardiovascular biomechanics which spanned the last two decades. Computational studies in idealized models are important for studying local blood flow features and evaluating new concepts that may improve the patency of vascular access for hemodialysis. For future studies we strongly recommend numerical modelling targeted at accurately characterizing turbulent flows and multidirectional wall shear disturbances.
Modeling Mendel's Laws on Inheritance in Computational Biology and Medical Sciences
ERIC Educational Resources Information Center
Singh, Gurmukh; Siddiqui, Khalid; Singh, Mankiran; Singh, Satpal
2011-01-01
The current research article is based on a simple and practical way of employing the computational power of the widely available, versatile software MS Excel 2007 to perform interactive computer simulations for undergraduate/graduate students in biology, biochemistry, biophysics, microbiology, and medicine in a college and university classroom setting. To…
Fractional modeling of viscoelasticity in 3D cerebral arteries and aneurysms
NASA Astrophysics Data System (ADS)
Yu, Yue; Perdikaris, Paris; Karniadakis, George Em
2016-10-01
We develop efficient numerical methods for fractional order PDEs, and employ them to investigate viscoelastic constitutive laws for arterial wall mechanics. Recent simulations using one-dimensional models [1] have indicated that fractional order models may offer a more powerful alternative for modeling the arterial wall response, exhibiting reduced sensitivity to parametric uncertainties compared with the integer-calculus-based models. Here, we study three-dimensional (3D) fractional PDEs that naturally model the continuous relaxation properties of soft tissue, and for the first time employ them to simulate flow-structure interactions for patient-specific brain aneurysms. To deal with the high memory requirements and in order to accelerate the numerical evaluation of hereditary integrals, we employ a fast convolution method [2] that reduces the memory cost to O(log(N)) and the computational complexity to O(N log(N)). Furthermore, we combine the fast convolution with high-order backward differentiation to achieve third-order time integration accuracy. We confirm that in 3D viscoelastic simulations, the integer order models strongly depend on the relaxation parameters, while the fractional order models are less sensitive. As an application to long-time simulations in complex geometries, we also apply the method to modeling fluid-structure interaction of a 3D patient-specific compliant cerebral artery with an aneurysm. Taken together, our findings demonstrate that fractional calculus can be employed effectively in modeling complex behavior of materials in realistic 3D time-dependent problems if properly designed efficient algorithms are employed to overcome the extra memory requirements and computational complexity associated with the non-local character of fractional derivatives.
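To make the memory issue concrete, the sketch below evaluates a fractional derivative with Grünwald-Letnikov weights by direct convolution over the full history, the O(N²)-work, O(N)-memory baseline that fast-convolution schemes such as [2] improve on. It is a generic illustration, not the paper's scheme.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights via the standard recurrence
    w[0] = 1, w[j] = w[j-1] * (1 - (alpha + 1) / j)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def fractional_derivative(f, h, alpha):
    """Naive full-history evaluation of a fractional derivative of order
    alpha on a uniform grid with step h: every step convolves over the
    entire past, which is exactly the cost fast convolution removes."""
    N = len(f)
    w = gl_weights(alpha, N)
    return np.array([h ** (-alpha) * np.dot(w[: n + 1], f[n::-1])
                     for n in range(N)])
```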
Task-based data-acquisition optimization for sparse image reconstruction systems
NASA Astrophysics Data System (ADS)
Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.
2017-03-01
Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
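For contrast with the SDIO, a minimal Hotelling observer, the baseline named in the abstract, can be sketched as follows; the training arrays are hypothetical, the pooled sample covariance is the simplest estimate (and requires more samples than pixels to be invertible):

```python
import numpy as np

def hotelling_template(signal_imgs, background_imgs):
    """Hotelling observer template w = S^{-1} (mean difference), where S is
    the pooled covariance of the two classes. Images are flattened vectors,
    one per row."""
    dg = signal_imgs.mean(axis=0) - background_imgs.mean(axis=0)
    S = 0.5 * (np.cov(signal_imgs.T) + np.cov(background_imgs.T))
    return np.linalg.solve(S, dg)

def test_statistic(w, g):
    """Scalar detection statistic for an image vector g."""
    return w @ g
```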
Insights into Parkinson's disease from computational models of the basal ganglia.
Humphries, Mark D; Obeso, Jose Angel; Dreyer, Jakob Kisbye
2018-04-17
Movement disorders arise from the complex interplay of multiple changes to neural circuits. Successful treatments for these disorders could interact with these complex changes in myriad ways, and as a consequence their mechanisms of action and their amelioration of symptoms are incompletely understood. Using Parkinson's disease as a case study, we review here how computational models are a crucial tool for taming this complexity, across causative mechanisms, consequent neural dynamics and treatments. For mechanisms, we review models that capture the effects of losing dopamine on basal ganglia function; for dynamics, we discuss models that have transformed our understanding of how beta-band (15-30 Hz) oscillations arise in the parkinsonian basal ganglia. For treatments, we touch on the breadth of computational modelling work trying to understand the therapeutic actions of deep brain stimulation. Collectively, models from across all levels of description are providing a compelling account of the causes, symptoms and treatments for Parkinson's disease. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Network aggregation in transportation planning models
DOT National Transportation Integrated Search
1979-06-01
This report contains six papers addressing mathematical and computational aspects of an extraction-aggregation model often employed in transportation planning studies. This model concerns the optimal flowing of an extracted subnetwork of a given netw...
Computation of Sound Generated by Viscous Flow Over a Circular Cylinder
NASA Technical Reports Server (NTRS)
Cox, Jared S.; Rumsey, Christopher L.; Brentner, Kenneth S.; Younis, Bassam A.
1997-01-01
The Lighthill acoustic analogy approach combined with Reynolds-averaged Navier-Stokes is used to predict the sound generated by unsteady viscous flow past a circular cylinder assuming a correlation length of 10 cylinder diameters. The two-dimensional unsteady flow field is computed using two Navier-Stokes codes at a low Mach number over a range of Reynolds numbers from 100 to 5 million. Both laminar flow as well as turbulent flow with a variety of eddy viscosity turbulence models are employed. Mean drag and Strouhal number are examined, and trends similar to experiments are observed. Computing the noise within the Reynolds number regime where transition to turbulence occurs near the separation point is problematic: laminar flow exhibits chaotic behavior and turbulent flow exhibits strong dependence on the turbulence model employed. Comparisons of far-field noise with experiment at a Reynolds number of 90,000, therefore, vary significantly, depending on the turbulence model. At a high Reynolds number outside this regime, three different turbulence models yield self-consistent results.
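One of the examined quantities, the Strouhal number, can be extracted from a computed lift time history as sketched below; this is a generic post-processing step, not the codes' own diagnostic:

```python
import numpy as np

def strouhal_number(lift_history, dt, diameter, velocity):
    """Estimate the shedding Strouhal number St = f * D / U from the
    dominant frequency in an unsteady lift-coefficient time history."""
    cl = lift_history - np.mean(lift_history)       # remove the mean offset
    freqs = np.fft.rfftfreq(len(cl), dt)
    spectrum = np.abs(np.fft.rfft(cl))
    f_shed = freqs[np.argmax(spectrum)]             # peak = shedding frequency
    return f_shed * diameter / velocity
```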
Computation of Vortex Shedding and Radiated Sound for a Circular Cylinder
NASA Technical Reports Server (NTRS)
Cox, Jared S.; Brentner, Kenneth S.; Rumsey, Christopher L.; Younis, Bassam A.
1997-01-01
The Lighthill acoustic analogy approach combined with Reynolds-averaged Navier-Stokes is used to predict the sound generated by unsteady viscous flow past a circular cylinder assuming a correlation length of ten cylinder diameters. The two-dimensional unsteady flow field is computed using two Navier-Stokes codes at a low Mach number over a range of Reynolds numbers from 100 to 5 million. Both laminar flow as well as turbulent flow with a variety of eddy viscosity turbulence models are employed. Mean drag and Strouhal number are examined, and trends similar to experiments are observed. Computing the noise within the Reynolds number regime where transition to turbulence occurs near the separation point is problematic: laminar flow exhibits chaotic behavior and turbulent flow exhibits strong dependence on the turbulence model employed. Comparisons of far-field noise with experiment at a Reynolds number of 90,000, therefore, vary significantly, depending on the turbulence model. At a high Reynolds number outside this regime, three different turbulence models yield self-consistent results.
Iteration and Prototyping in Creating Technical Specifications.
ERIC Educational Resources Information Center
Flynt, John P.
1994-01-01
Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)
Simulating the Thermal Response of High Explosives on Time Scales of Days to Microseconds
NASA Astrophysics Data System (ADS)
Yoh, Jack J.; McClelland, Matthew A.
2004-07-01
We present an overview of computational techniques for simulating the thermal cookoff of high explosives using a multi-physics hydrodynamics code, ALE3D. Recent improvements to the code have aided our computational capability in modeling the response of energetic materials systems exposed to extreme thermal environments, such as fires. We consider an idealized model process for a confined explosive involving the transition from slow heating to rapid deflagration in which the time scale changes from days to hundreds of microseconds. The heating stage involves thermal expansion and decomposition according to an Arrhenius kinetics model while a pressure-dependent burn model is employed during the explosive phase. We describe and demonstrate the numerical strategies employed to make the transition from slow to fast dynamics.
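A minimal sketch of the slow-to-fast transition: a single-step Arrhenius decomposition with self-heating, integrated with a stiff ODE solver. All parameter values are illustrative placeholders, not ALE3D inputs or any actual explosive's kinetics.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder kinetic and thermal parameters (illustrative only).
A, Ea, R = 1e15, 1.8e5, 8.314   # pre-exponential (1/s), J/mol, J/(mol K)
q, cp = 2.0e6, 1.0e3            # heat of reaction (J/kg), heat capacity
heating = 0.01                  # slow external heating rate (K/s)

def cookoff(t, y):
    T, lam = y                  # temperature (K), reacted fraction
    rate = A * np.exp(-Ea / (R * T)) * (1.0 - lam)
    return [q / cp * rate + heating, rate]

# Days-long slow heating ends in a runaway that a stiff solver must resolve
# on a vastly shorter time scale.
sol = solve_ivp(cookoff, [0.0, 2.0e5], [300.0, 0.0], method="LSODA",
                rtol=1e-8, atol=1e-10)
```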
NASA Astrophysics Data System (ADS)
Lin, Yuchun; Baumketner, Andrij; Deng, Shaozhong; Xu, Zhenli; Jacobs, Donald; Cai, Wei
2009-10-01
In this paper, a new solvation model is proposed for simulations of biomolecules in aqueous solutions that combines the strengths of explicit and implicit solvent representations. Solute molecules are placed in a spherical cavity filled with explicit water, thus providing microscopic detail where it is most needed. Solvent outside of the cavity is modeled as a dielectric continuum whose effect on the solute is treated through the reaction field corrections. With this explicit/implicit model, the electrostatic potential represents a solute molecule in an infinite bath of solvent, thus avoiding unphysical interactions between periodic images of the solute commonly used in the lattice-sum explicit solvent simulations. For improved computational efficiency, our model employs an accurate and efficient multiple-image charge method to compute reaction fields together with the fast multipole method for the direct Coulomb interactions. To minimize the surface effects, periodic boundary conditions are employed for nonelectrostatic interactions. The proposed model is applied to study liquid water. The effect of model parameters, which include the size of the cavity, the number of image charges used to compute reaction field, and the thickness of the buffer layer, is investigated in comparison with the particle-mesh Ewald simulations as a reference. An optimal set of parameters is obtained that allows for a faithful representation of many structural, dielectric, and dynamic properties of the simulated water, while maintaining manageable computational cost. With controlled and adjustable accuracy of the multiple-image charge representation of the reaction field, it is concluded that the employed model achieves convergence with only one image charge in the case of pure water. Future applications to pKa calculations, conformational sampling of solvated biomolecules and electrolyte solutions are briefly discussed.
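The reaction-field construction builds on the classical image-charge idea; the sketch below shows the conductor-limit (Kelvin) special case for a charge inside a spherical cavity. The paper's method generalizes this to a dielectric boundary using multiple image charges, which is not reproduced here.

```python
def kelvin_image(q, d, a):
    """Image of a charge q a distance d (< a) from the center of a grounded
    conducting sphere of radius a: charge -q*a/d at distance a**2/d."""
    return -q * a / d, a * a / d

def on_axis_potential(q, d, a, r):
    """Potential (Gaussian units) at an on-axis point r inside the cavity;
    it vanishes on the surface r = a, as required for a grounded sphere."""
    q_im, d_im = kelvin_image(q, d, a)
    return q / abs(r - d) + q_im / abs(r - d_im)

# Quick check: the potential is zero on the cavity surface.
print(on_axis_potential(1.0, 0.5, 1.0, 1.0))  # 0.0
```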
Hadwin, Paul J; Peterson, Sean D
2017-04-01
The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when greater than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
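A generic EKF predict/update cycle is sketched below; the state-transition and observation functions and their Jacobians would come from the reduced-order vocal fold model, which is not specified here.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, Rn):
    """One extended Kalman filter cycle.

    f, h: nonlinear state-transition and observation functions;
    F, H: their Jacobians evaluated at the current estimate;
    Q, Rn: process and measurement noise covariances; z: new measurement.
    """
    # Predict
    x_pred = f(x)
    Fx = F(x)
    P_pred = Fx @ P @ Fx.T + Q
    # Update
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + Rn
    K = P_pred @ Hx.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new
```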
Hierarchical parallel computer architecture defined by computational multidisciplinary mechanics
NASA Technical Reports Server (NTRS)
Padovan, Joe; Gute, Doug; Johnson, Keith
1989-01-01
The goal is to develop an architecture for parallel processors enabling optimal handling of multi-disciplinary computation of fluid-solid simulations employing finite element and difference schemes. The goals, philosophical and modeling directions, static and dynamic poly trees, example problems, interpolative reduction, and the impact on solvers are shown in viewgraph form.
ERIC Educational Resources Information Center
Ke, Fengfeng
2008-01-01
This article reports findings on a study of educational computer games used within various classroom situations. Employing an across-stage, mixed method model, the study examined whether educational computer games, in comparison to traditional paper-and-pencil drills, would be more effective in facilitating comprehensive math learning outcomes,…
Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Nikbay, Melike; Heeg, Jennifer
2017-01-01
This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel from NATO Science and Technology Organization, with the Task Group number AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
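The sampling-based propagation step can be sketched in a few lines; the response function and input distributions below are entirely hypothetical stand-ins for an aeroelastic analysis framework:

```python
import numpy as np

rng = np.random.default_rng(1)

def flutter_speed(k, a):
    """Hypothetical deterministic response: flutter speed as a function of
    an uncertain stiffness k and aerodynamic slope a (illustrative only)."""
    return 100.0 * np.sqrt(k) / (1.0 + 0.1 * a)

# Draw input samples from assumed distributions and propagate them
# through the analysis to collect output statistics.
k_samples = rng.normal(1.0, 0.05, 10_000)
a_samples = rng.normal(2.0, 0.2, 10_000)
v = flutter_speed(k_samples, a_samples)
print(f"mean = {v.mean():.2f}, std = {v.std():.2f}")
```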
NASA Technical Reports Server (NTRS)
Herraez, Miguel; Bergan, Andrew C.; Gonzalez, Carlos; Lopes, Claudio S.
2017-01-01
In this work, the fiber kinking phenomenon, which is known as the failure mechanism that takes place when a fiber reinforced polymer is loaded under longitudinal compression, is studied. A computational micromechanics model is employed to interrogate the assumptions of a recently developed mesoscale continuum damage mechanics (CDM) model for fiber kinking based on the deformation gradient decomposition (DGD) and the LaRC04 failure criteria.
NASA Astrophysics Data System (ADS)
Pineda, M.; Stamatakis, M.
2017-07-01
Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computation cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
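A minimal sketch of the first-order mean-field treatment the abstract critiques: a single coverage ODE for a schematic NO-oxidation cycle, assuming spatially uncorrelated adsorbates. Rate constants and pressures are illustrative only.

```python
from scipy.integrate import solve_ivp

# Illustrative rate constants and partial pressures (placeholders).
k_ads, k_des, k_rxn = 1.0, 0.1, 0.5
p_o2, p_no = 1.0, 1.0

def dtheta_dt(t, theta):
    """Mean-field rate equation for oxygen coverage theta: dissociative O2
    adsorption, associative desorption, and NO + O* -> NO2."""
    ads = 2.0 * k_ads * p_o2 * (1.0 - theta) ** 2   # needs two empty sites
    des = 2.0 * k_des * theta ** 2                  # needs two O* neighbors
    rxn = k_rxn * p_no * theta
    return ads - des - rxn

sol = solve_ivp(dtheta_dt, [0.0, 50.0], [0.0])
tof = k_rxn * p_no * sol.y[0, -1]   # steady-state NO2 turnover frequency
```

The θ² and (1-θ)² factors embody exactly the uncorrelated-pair assumption that the cluster mean-field and KMC treatments refine.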
Investigation of Particle Deposition in Internal Cooling Cavities of a Nozzle Guide Vane
NASA Astrophysics Data System (ADS)
Casaday, Brian Patrick
Experimental and computational studies were conducted regarding particle deposition in the internal film cooling cavities of nozzle guide vanes. An experimental facility was fabricated to simulate particle deposition on an impingement liner and upstream surface of a nozzle guide vane wall. The facility supplied particle-laden flow at temperatures up to 1000°F (540°C) to a simplified impingement cooling test section. The heated flow passed through a perforated impingement plate and impacted on a heated flat wall. The particle-laden impingement jets resulted in the buildup of deposit cones associated with individual impingement jets. The deposit growth rate increased with increasing temperature and decreasing impinging velocities. For some low flow rates or high flow temperatures, the deposit cone heights spanned the entire gap between the impingement plate and wall, and grew through the impingement holes. For high flow rates, deposit structures were removed by shear forces from the flow. At low temperatures, deposit formed not only as individual cones, but as ridges located at the mid-planes between impinging jets. A computational model was developed to predict the deposit buildup seen in the experiments. The test section geometry and fluid flow from the experiment were replicated computationally and an Eulerian-Lagrangian particle tracking technique was employed. Several particle sticking models were employed and tested for adequacy. Sticking models that accurately predicted locations and rates in external deposition experiments failed to predict certain structures or rates seen in internal applications. A geometry adaptation technique was employed and the effect on deposition prediction was discussed. A new computational sticking model was developed that predicts deposition rates based on the local wall shear. The growth patterns were compared to experiments under different operating conditions. Of all the sticking models employed, the model based on wall shear, in conjunction with geometry adaptation, proved to be the most accurate in predicting the forms of deposit growth. It was the only model that predicted the changing deposition trends based on flow temperature or Reynolds number, and is recommended for further investigation and application in the modeling of deposition in internal cooling cavities.
A ferrofluid based energy harvester: Computational modeling, analysis, and experimental validation
NASA Astrophysics Data System (ADS)
Liu, Qi; Alazemi, Saad F.; Daqaq, Mohammed F.; Li, Gang
2018-03-01
A computational model is described and implemented in this work to analyze the performance of a ferrofluid based electromagnetic energy harvester. The energy harvester converts ambient vibratory energy into an electromotive force through a sloshing motion of a ferrofluid. The computational model solves the coupled Maxwell's equations and Navier-Stokes equations for the dynamic behavior of the magnetic field and fluid motion. The model is validated against experimental results for eight different configurations of the system. The validated model is then employed to study the underlying mechanisms that determine the electromotive force of the energy harvester. Furthermore, computational analysis is performed to test the effect of several modeling aspects, such as three-dimensional effects, surface tension, and the type of ferrofluid-magnetic field coupling, on the accuracy of the model prediction.
A method for the computational modeling of the physics of heart murmurs
NASA Astrophysics Data System (ADS)
Seo, Jung Hee; Bakhshaee, Hani; Garreau, Guillaume; Zhu, Chi; Andreou, Andreas; Thompson, William R.; Mittal, Rajat
2017-05-01
A computational method for direct simulation of the generation and propagation of blood flow induced sounds is proposed. This computational hemoacoustic method is based on the immersed boundary approach and employs high-order finite difference methods to resolve wave propagation and scattering accurately. The current method employs a two-step, one-way coupled approach for the sound generation and its propagation through the tissue. The blood flow is simulated by solving the incompressible Navier-Stokes equations using the sharp-interface immersed boundary method, and the equations corresponding to the generation and propagation of the three-dimensional elastic wave corresponding to the murmur are resolved with a high-order, immersed-boundary-based, finite-difference method in the time domain. The proposed method is applied to a model problem of an aortic stenosis murmur and the simulation results are verified and validated by comparing with known solutions as well as experimental measurements. The murmur propagation in a realistic model of a human thorax is also simulated using the computational method. The roles of hemodynamics and elastic wave propagation on the murmur are discussed based on the simulation results.
A simplified solar cell array modelling program
NASA Technical Reports Server (NTRS)
Hughes, R. D.
1982-01-01
As part of the energy conversion/self-sufficiency efforts of DSN engineering, it was necessary to have a simplified computer model of a solar photovoltaic (PV) system. This article describes the analysis and simplifications employed in the development of a PV cell array computer model. The analysis of the incident solar radiation, steady state cell temperature and the current-voltage characteristics of a cell array are discussed. A sample cell array was modelled and the results are presented.
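The article does not reproduce its equations here, but the current-voltage characteristic of a PV cell is commonly reduced to a single-diode model of the form I = I_ph - I_0(exp(qV/nkT) - 1). The sketch below evaluates that relation and locates the maximum power point; all parameter values are illustrative stand-ins, not the DSN study's numbers.

```python
import numpy as np

# Minimal single-diode PV cell model of the kind such array programs
# typically simplify to.  Parameters are illustrative.
q, k = 1.602e-19, 1.381e-23            # electron charge (C), Boltzmann (J/K)

def cell_current(V, I_ph=3.0, I_0=1e-9, n=1.3, T=320.0):
    """Cell current (A) at terminal voltage V for photocurrent I_ph,
    saturation current I_0, ideality factor n, cell temperature T (K)."""
    return I_ph - I_0 * (np.exp(q * V / (n * k * T)) - 1.0)

V = np.linspace(0.0, 0.75, 300)
P = V * cell_current(V)
print(f"max power point: V = {V[P.argmax()]:.3f} V, P = {P.max():.3f} W")
```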
Quantum computation with coherent spin states and the close Hadamard problem
NASA Astrophysics Data System (ADS)
Adcock, Mark R. A.; Høyer, Peter; Sanders, Barry C.
2016-04-01
We study a model of quantum computation based on the continuously parameterized yet finite-dimensional Hilbert space of a spin system. We explore the computational powers of this model by analyzing a pilot problem we refer to as the close Hadamard problem. We prove that the close Hadamard problem can be solved in the spin system model with arbitrarily small error probability in a constant number of oracle queries. We conclude that this model of quantum computation is suitable for solving certain types of problems. The model is effective for problems where symmetries between the structure of the information associated with the problem and the structure of the unitary operators employed in the quantum algorithm can be exploited.
Application of CARS to scramjet combustion
NASA Technical Reports Server (NTRS)
Antcliff, R. R.
1987-01-01
A coherent anti-Stokes Raman spectroscopic (CARS) instrument has been developed for simultaneously measuring temperature and N2-O2 species concentrations in hostile flame environments. A folded BOXCARS arrangement was employed to obtain high spatial resolution. Polarization discrimination against the nonresonant background decreased the lower limits of O2 detectivity. The instrument has been primarily employed for validation of computational fluid dynamics codes. Comparisons have been made to both the CHARNAL and TEACH codes on a hydrogen diffusion flame with good results.
Modeling the state dependent impulse control for computer virus propagation under media coverage
NASA Astrophysics Data System (ADS)
Liang, Xiyin; Pei, Yongzhen; Lv, Yunfei
2018-02-01
A state dependent impulsive control model is proposed to describe the spread of computer viruses incorporating media coverage. Using the successor function, sufficient conditions for the existence and uniqueness of an order-1 periodic solution are presented first. Secondly, for two classes of periodic solutions, the geometric properties of the successor function and the analogue of the Poincaré criterion are employed to obtain the stability results. These results show that the number of infective computers remains below the threshold at all times. Finally, theoretical and numerical analyses show that media coverage can delay the spread of computer viruses.
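The flavor of a state-dependent impulse is easy to convey numerically: integrate a continuous epidemic-style model and, whenever the infected fraction reaches a trigger level, apply an instantaneous control action. The sketch below uses a generic SIS-type model with hypothetical rates; it illustrates the mechanism, not the paper's specific system.

```python
import numpy as np

# State-dependent impulsive control, sketched on an SIS-type virus model:
# whenever the infected fraction I reaches the threshold I_th (e.g. the
# level that triggers media coverage), an impulsive "cleanup" removes a
# fraction of the infected machines.  All parameters are illustrative.
beta, gamma = 0.5, 0.1          # infection / recovery rates
I_th, cure = 0.3, 0.6           # trigger level, fraction cured per impulse
dt, I = 0.01, 0.05

trajectory = []
for step in range(20_000):
    I += dt * (beta * I * (1.0 - I) - gamma * I)   # Euler step of dI/dt
    if I >= I_th:                                   # state-dependent impulse
        I *= (1.0 - cure)
    trajectory.append(I)

print(f"infected fraction kept below threshold: {max(trajectory) <= I_th}")
```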
Computer aided radiation analysis for manned spacecraft
NASA Technical Reports Server (NTRS)
Appleby, Matthew H.; Griffin, Brand N.; Tanner, Ernest R., II; Pogue, William R.; Golightly, Michael J.
1991-01-01
In order to assist in the design of radiation shielding, an analytical tool is presented that can be employed in combination with CAD facilities and NASA transport codes. The nature of radiation in space is described, and the operational requirements for protection are listed as background information for the use of the technique. The method is based on the Boeing radiation exposure model (BREM) for combining NASA radiation transport codes and CAD facilities, and the output is given as contour maps of the radiation-shield distribution so that dangerous areas can be identified. Computational models are used to solve the 1D Boltzmann transport equation and determine the shielding needs for the worst-case scenario. BREM can be employed directly with the radiation computations to assess radiation protection during all phases of design, which saves time and ultimately spacecraft weight.
Investigation of laminar to turbulent transition phenomena effects on impingement heat transfer
NASA Astrophysics Data System (ADS)
Isman, Mustafa Kemal; Morris, Philip J.; Can, Muhiddin
2016-10-01
Turbulent impinging air flow is investigated numerically using the ANSYS-CFX® code. All computations are performed by considering three-dimensional, steady, incompressible flow. Three different Reynolds-averaged Navier-Stokes (RANS) turbulence models and two Reynolds stress models (RSMs) are employed. Furthermore, three different laminar-to-turbulent transition (LTT) models are employed with the shear stress transport (SST) and the baseline (BSL) models. Results show that the predictions of the SST model and the two RSMs are very close to each other, and these models' results are in better agreement with the experimental data when all Reynolds numbers used in this study are considered. Secondary maxima in the Nusselt number can be seen only if the LTT formulation is employed with the SST and BSL models.
Employing a Modified Diffuser Momentum Model to Simulate Ventilation of the Orion CEV (DRAFT)
NASA Technical Reports Server (NTRS)
Straus, John; Ball, Tyler; OHara, William; Barido, Richard
2011-01-01
Computational Fluid Dynamics (CFD) is used to model the flow field in the Orion CEV cabin. The CFD model employs a momentum model used to account for the effect of supply grilles on the supply flow. The momentum model is modified to account for non-uniform velocity profiles at the approach of the supply grille. The modified momentum model is validated against a detailed vane-resolved model before inclusion in the Orion CEV cabin model. Results for this comparison, as well as those for a single ventilation configuration, are presented.
Rudd, Michael E.
2014-01-01
Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd and Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled in tandem by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border-ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4. PMID:25202253
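The core edge-integration computation described above reduces to summing weighted log-luminance steps along a path from the common background to the target. The sketch below implements that sum; the weight values standing in for the gain control set by border ownership and attention are purely illustrative.

```python
import numpy as np

# Edge integration (cf. Rudd and Zemach): lightness at a target is the
# weighted sum of directed log-luminance steps along a path from the
# common background region to the target.
def lightness(luminances, weights=None):
    """luminances: region luminances along the path, starting at the
    background and ending at the target.  weights: one gain per edge,
    standing in for the border-ownership/attention gain control."""
    steps = np.diff(np.log(np.asarray(luminances, dtype=float)))
    w = np.ones_like(steps) if weights is None else np.asarray(weights)
    return float(np.sum(w * steps))

# Target (60) on a dark surround (40) against a background of 100:
print(lightness([100.0, 40.0, 60.0]))              # equal edge weights
print(lightness([100.0, 40.0, 60.0], [0.6, 1.0]))  # outer edge down-weighted
```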
Computing the Power-Density Spectrum for an Engineering Model
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1982-01-01
A computer program calculates the power-density spectrum (PDS) from a data base generated by the Advanced Continuous Simulation Language (ACSL), using an algorithm that employs the fast Fourier transform (FFT). The PDS is obtained by first estimating the autocovariance function of the variable and then taking the FFT of the smoothed autocovariance function. The fast-Fourier-transform technique conserves computer resources.
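The algorithm described is the classic Blackman-Tukey estimate: autocovariance, lag-window smoothing, then an FFT. A minimal sketch follows; the window choice and lag length are illustrative, not taken from the NASA program.

```python
import numpy as np

# Blackman-Tukey style PDS estimate: autocovariance -> lag window -> FFT.
def power_density_spectrum(x, max_lag=128):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # biased autocovariance estimate up to max_lag
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag)])
    window = np.hanning(2 * max_lag)[max_lag:]   # smoothing (lag) window
    return np.abs(np.fft.rfft(acov * window))

rng = np.random.default_rng(0)
t = np.arange(4096)
signal = np.sin(0.2 * t) + rng.normal(scale=0.5, size=t.size)
pds = power_density_spectrum(signal)
print(f"spectral peak at bin {pds.argmax()} of {len(pds)}")
```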
NASA Astrophysics Data System (ADS)
Hvizdoš, Dávid; Váňa, Martin; Houfek, Karel; Greene, Chris H.; Rescigno, Thomas N.; McCurdy, C. William; Čurík, Roman
2018-02-01
We present a simple two-dimensional model of the indirect dissociative recombination process. The model has one electronic and one nuclear degree of freedom and it can be solved to high precision, without making any physically motivated approximations, by employing the exterior complex scaling method together with the finite-element method and a discrete variable representation. The approach is applied to solve a model for dissociative recombination of H2+ in the singlet ungerade channels, and the results serve as a benchmark to test the validity of several physical approximations commonly used in the computational modeling of dissociative recombination for real molecular targets. The second, approximate, set of calculations employs a combination of multichannel quantum defect theory and frame transformation into a basis of Siegert pseudostates. The cross sections computed with the two methods are compared in detail for collision energies from 0 to 2 eV.
Horizon sensor errors calculated by computer models compared with errors measured in orbit
NASA Technical Reports Server (NTRS)
Ward, K. A.; Hogan, R.; Andary, J.
1982-01-01
Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.
Towards a predictive thermal explosion model for energetic materials
NASA Astrophysics Data System (ADS)
Yoh, Jack J.; McClelland, Matthew A.; Maienschein, Jon L.; Wardell, Jeffrey F.
2005-01-01
We present an overview of models and computational strategies for simulating the thermal response of high explosives using a multi-physics hydrodynamics code, ALE3D. Recent improvements to the code have aided our computational capability in modeling the behavior of energetic materials systems exposed to strong thermal environments such as fires. We apply these models and computational techniques to a thermal explosion experiment involving the slow heating of a confined explosive. The model includes the transition from slow heating to rapid deflagration in which the time scale decreases from days to hundreds of microseconds. Thermal, mechanical, and chemical effects are modeled during all phases of this process. The heating stage involves thermal expansion and decomposition according to an Arrhenius kinetics model while a pressure-dependent burn model is employed during the explosive phase. We describe and demonstrate the numerical strategies employed to make the transition from slow to fast dynamics. In addition, we investigate the sensitivity of wall expansion rates to numerical strategies and parameters. Results from a one-dimensional model show that violence is influenced by the presence of a gap between the explosive and container. In addition, a comparison is made between 2D model results and measurements of the explosion temperature and tube wall expansion profiles.
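The slow-heating stage reduces, in its simplest zero-dimensional form, to an energy balance between Arrhenius self-heating and losses to the confinement: dT/dt = (QA exp(-Ea/RT) - h(T - T_wall)) / (rho c). The sketch below integrates that balance to locate the runaway; all material parameters are illustrative, not the ALE3D values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 0-D thermal-runaway sketch: Arrhenius self-heating vs. convective losses.
R = 8.314                                 # gas constant (J/mol-K)
Q, A, Ea = 2.0e9, 1.0e12, 1.6e5           # heat release, prefactor, activation energy
h, rho_c = 50.0, 2.0e6                    # loss coefficient, volumetric heat capacity

def rhs(t, y, T_wall):
    T = y[0]
    return [(Q * A * np.exp(-Ea / (R * T)) - h * (T - T_wall)) / rho_c]

def ignition(t, y, T_wall):               # "ignition" defined here as 800 K
    return y[0] - 800.0
ignition.terminal = True                  # stop once runaway is reached

sol = solve_ivp(rhs, (0.0, 3.0e5), [300.0], args=(450.0,),
                events=ignition, method="LSODA")
print("time(s) at runaway:", sol.t_events[0])
```

Note how the governing time scale collapses: the approach to the wall temperature takes on the order of days, while the final runaway happens on a vastly shorter scale, which is the slow-to-fast transition the paper's numerical strategies must bridge.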
The Helicopter Antenna Radiation Prediction Code (HARP)
NASA Technical Reports Server (NTRS)
Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.
1990-01-01
The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user friendly interface, employing modern computer graphics, to aid the user to describe the helicopter geometry, select the method of computation, construct the desired high or low frequency model, and display the results.
An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems
NASA Astrophysics Data System (ADS)
Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.
2016-04-01
Model Order Reduction (MOR) methods are employed in many fields of Engineering in order to reduce the processing time of complex computational simulations. A usual approach to achieve this is the application of Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities are present in a dynamical system and this technique is employed several times along the simulation, it can be very inefficient. This work proposes a new adaptive strategy, which ensures low computational cost and small error, to deal with this problem. This work also presents a new method to select snapshots, named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through a better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while preserving the quality of the model, without the use of the Proper Orthogonal Decomposition (POD).
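For context, the projection step that all such MOR methods share can be sketched in a few lines: collect solution snapshots, extract a reduced basis, and Galerkin-project the operators onto it. The example below builds the basis with a plain SVD of the snapshot matrix (the POD baseline that the paper's PSS method improves upon by selecting snapshots adaptively); the snapshot data are random stand-ins.

```python
import numpy as np

# POD-style reduced basis from a snapshot matrix (columns = states).
snapshots = np.random.rand(1000, 40)      # 40 snapshots of a 1000-DOF model
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
basis = U[:, :r]                          # reduced space, dim r << 1000
print(f"retained {r} modes capturing {energy[r - 1]:.6f} of snapshot energy")

# A full-order operator K (1000 x 1000) is then Galerkin-projected as:
#   K_r = basis.T @ K @ basis
```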
Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Michalik, Kazimierz
2016-10-01
Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demand. Among other solutions, the parallelization of multiscale computations is promising. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models, employing a MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in the quality of computations enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law, as sketched below. The problem of 'delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
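Amdahl's law bounds the achievable speed-up when only a fraction p of the runtime (here, the concurrent fine-scale sub-models) parallelizes over n workers: S(n) = 1 / ((1 - p) + p/n). The value p = 0.95 below is illustrative, not the paper's measurement.

```python
# Amdahl's law: speed-up with parallel fraction p over n workers.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# E.g. if 95% of the runtime is concurrent fine-scale computation:
for n in (2, 8, 32, 128):
    print(n, round(amdahl_speedup(0.95, n), 2))   # plateaus near 1/(1-p) = 20
```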
ERIC Educational Resources Information Center
Feinberg, William E.
1988-01-01
This article describes a Monte Carlo computer simulation of affirmative action employment policies. The counterintuitive results of the model are explained through a thought device involving urns and marbles. States that such model simulations have implications for social policy. (BSR)
Satellite broadcasting system study
NASA Technical Reports Server (NTRS)
1972-01-01
The study to develop a system model and computer program representative of broadcasting satellite systems employing community-type receiving terminals is reported. The program provides a user-oriented tool for evaluating performance/cost tradeoffs, synthesizing minimum cost systems for a given set of system requirements, and performing sensitivity analyses to identify critical parameters and technology. The performance/costing philosophy and what is meant by a minimum cost system are shown graphically. Topics discussed include: main line control program, ground segment model, space segment model, cost models and launch vehicle selection. Several examples of minimum cost systems resulting from the computer program are presented. A listing of the computer program is also included.
GPU-computing in econophysics and statistical physics
NASA Astrophysics Data System (ADS)
Preis, T.
2011-03-01
A recent trend in computer science and related fields is general purpose computing on graphics processing units (GPUs), which can yield impressive performance. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction into the field of GPU computing and includes examples. In particular, computationally expensive analyses employed in a financial market context are coded on a graphics card architecture, which leads to a significant reduction of computing time. In order to demonstrate the wide range of possible applications, a standard model in statistical physics - the Ising model - is ported to a graphics card architecture as well, resulting in large speedup values.
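As a point of reference, here is a CPU version of the Metropolis update for the two-dimensional Ising model that such GPU ports accelerate; on a graphics card, each checkerboard sublattice would be updated in parallel instead of this sequential loop. Lattice size and temperature are illustrative.

```python
import numpy as np

# Sequential Metropolis sweep for the 2-D Ising model (CPU reference).
rng = np.random.default_rng(1)
L, beta = 32, 0.6                        # lattice size, inverse temperature
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins):
    for i in range(L):
        for j in range(L):
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                  spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb          # energy cost of flipping
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1

for _ in range(50):
    sweep(spins)
print(f"|magnetization| per spin: {abs(spins.mean()):.3f}")
```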
EUV/soft x-ray spectra for low B neutron stars
NASA Technical Reports Server (NTRS)
Romani, Roger W.; Rajagopal, Mohan; Rogers, Forrest J.; Iglesias, Carlos A.
1995-01-01
Recent ROSAT and EUVE detections of spin-powered neutron stars suggest that many emit 'thermal' radiation, peaking in the EUV/soft X-ray band. These data constrain the neutron stars' thermal history, but interpretation requires comparison with model atmosphere computations, since emergent spectra depend strongly on the surface composition and magnetic field. As recent opacity computations show substantial change to absorption cross sections at neutron star photospheric conditions, we report here on new model atmosphere computations employing such data. The results are compared with magnetic atmosphere models and applied to PSR J0437-4715, a low field neutron star.
Reducing software mass through behavior control. [of planetary roving robots
NASA Technical Reports Server (NTRS)
Miller, David P.
1992-01-01
Attention is given to the tradeoff between communication and computation as regards a planetary rover (both these subsystems are very power-intensive, and both can be the major driver of the rover's power subsystem, and therefore the minimum mass and size of the rover). Software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced, are discussed. Novel approaches to autonomous control, called behavior control, employ an entirely different approach, and for many tasks will yield a similar or superior level of autonomy to traditional control techniques, while greatly reducing the computational demand. Traditional systems have several expensive processes that operate serially, while behavior techniques employ robot capabilities that run in parallel. Traditional systems make extensive world models, while behavior control systems use minimal world models or none at all.
Three-dimensional wideband electromagnetic modeling on massively parallel computers
NASA Astrophysics Data System (ADS)
Alumbaugh, David L.; Newman, Gregory A.; Prevost, Lydie; Shadid, John N.
1996-01-01
A method is presented for modeling the wideband, frequency domain electromagnetic (EM) response of a three-dimensional (3-D) earth to dipole sources operating at frequencies where EM diffusion dominates the response (less than 100 kHz) up into the range where propagation dominates (greater than 10 MHz). The scheme employs the modified form of the vector Helmholtz equation for the scattered electric fields to model variations in electrical conductivity, dielectric permittivity and magnetic permeability. The use of the modified form of the Helmholtz equation allows for perfectly matched layer (PML) absorbing boundary conditions to be employed through the use of complex grid stretching. Applying the finite difference operator to the modified Helmholtz equation produces a linear system of equations for which the matrix is sparse and complex symmetric. The solution is obtained using either the biconjugate gradient (BICG) or quasi-minimum residual (QMR) methods with preconditioning; in general we employ the QMR method with Jacobi scaling preconditioning owing to its stability. In order to simulate larger, more realistic models than was previously possible, the scheme has been modified to run on massively parallel (MP) computer architectures. Execution on the 1840-processor Intel Paragon has indicated a maximum model size of 280 × 260 × 200 cells with a maximum flop rate of 14.7 Gflops. Three different geologic models are simulated to demonstrate the use of the code for frequencies ranging from 100 Hz to 30 MHz and for different source types and polarizations. The simulations show that the scheme is correctly able to model the air-earth interface and the jump in the electric and magnetic fields normal to discontinuities. For frequencies greater than 10 MHz, complex grid stretching must be employed to incorporate absorbing boundaries, while below this, normal (real) grid stretching can be employed.
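A toy analogue of the solver stage is easy to set up: a finite-difference Helmholtz operator on a complex-stretched grid gives a sparse, complex-symmetric (non-Hermitian) system, which SciPy's QMR can solve; Jacobi scaling is applied by symmetrically dividing by the diagonal. This is a 1-D stand-in under invented parameters, not the paper's 3-D operator.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import qmr

# 1-D Helmholtz stand-in with complex grid stretching at one end (a crude
# PML analogue); the resulting matrix is sparse and complex symmetric.
n = 500
k2 = (2 * np.pi / 50.0) ** 2                          # squared wavenumber
stretch = np.ones(n, dtype=complex)
stretch[-50:] = 1.0 + 1.0j * np.linspace(0, 2, 50)    # complex stretching
main = -2.0 / stretch**2 + k2
off = 1.0 / stretch[:-1] ** 2
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")

b = np.zeros(n, dtype=complex)
b[n // 2] = 1.0                                       # point source

d = 1.0 / np.sqrt(A.diagonal())                       # Jacobi scaling
D = sp.diags(d)
x_scaled, info = qmr(D @ A @ D, D @ b, maxiter=2000)  # solve scaled system
x = d * x_scaled
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```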
A computational model for three-dimensional incompressible wall jets with large cross flow
NASA Technical Reports Server (NTRS)
Murphy, W. D.; Shankar, V.; Malmuth, N. D.
1979-01-01
A computational model for the flow field of three-dimensional incompressible wall jets prototypic of thrust augmenting ejectors with large cross flow is presented. The formulation employs boundary layer equations in an orthogonal curvilinear coordinate system. Simulation of laminar as well as turbulent wall jets is reported. Quantification of jet spreading, jet growth, nominal separation, and jet shrink effects due to cross flow are discussed.
Python as a federation tool for GENESIS 3.0.
Cornelis, Hugo; Rodriguez, Armando L; Coop, Allan D; Bower, James M
2012-01-01
The GENESIS simulation platform was one of the first broad-scale modeling systems in computational biology to encourage modelers to develop and share model features and components. Supported by a large developer community, it participated in innovative simulator technologies such as benchmarking, parallelization, and declarative model specification and was the first neural simulator to define bindings for the Python scripting language. An important feature of the latest version of GENESIS is that it decomposes into self-contained software components complying with the Computational Biology Initiative federated software architecture. This architecture allows separate scripting bindings to be defined for different necessary components of the simulator, e.g., the mathematical solvers and graphical user interface. Python is a scripting language that provides rich sets of freely available open source libraries. With clean dynamic object-oriented designs, they produce highly readable code and are widely employed in specialized areas of software component integration. We employ a simplified wrapper and interface generator to examine an application programming interface and make it available to a given scripting language. This allows independent software components to be 'glued' together and connected to external libraries and applications from user-defined Python or Perl scripts. We illustrate our approach with three examples of Python scripting. (1) Generate and run a simple single-compartment model neuron connected to a stand-alone mathematical solver. (2) Interface a mathematical solver with GENESIS 3.0 to explore a neuron morphology from either an interactive command-line or graphical user interface. (3) Apply scripting bindings to connect the GENESIS 3.0 simulator to external graphical libraries and an open source three dimensional content creation suite that supports visualization of models based on electron microscopy and their conversion to computational models. Employed in this way, the stand-alone software components of the GENESIS 3.0 simulator provide a framework for progressive federated software development in computational neuroscience.
NASA Astrophysics Data System (ADS)
Shi, X.; Utada, H.; Jiaying, W.
2009-12-01
The vector finite-element method combined with divergence corrections based on the magnetic field H, referred to as the VFEH++ method, is developed to simulate the magnetotelluric (MT) responses of 3-D conductivity models. The advantages of the new VFEH++ method are the use of edge elements to eliminate vector parasites and of divergence corrections to explicitly guarantee the divergence-free conditions in the whole modeling domain. 3-D MT topographic responses are modeled using the new VFEH++ method and compared with those calculated by other numerical methods. The results show that MT responses can be modeled with high accuracy using the VFEH++ method. The VFEH++ algorithm is also employed for 3-D MT data inversion incorporating topography. The 3-D MT inverse problem is formulated as a minimization problem for the regularized misfit function. In order to avoid the huge memory requirement and the very long time needed to compute the Jacobian sensitivity matrix for the Gauss-Newton method, we employ the conjugate gradient (CG) approach to solve the inversion equation. In each iteration of the CG algorithm, the costly computation is the product of the Jacobian sensitivity matrix with a model vector x, or of its transpose with a data vector y; these products can be computed by two pseudo-forward modeling runs. This avoids explicitly forming and storing the full Jacobian matrix, which leads to considerable savings in the memory required by the inversion program on a PC. The performance of the CG algorithm is illustrated by several typical 3-D models with flat and topographic earth surfaces. The results show that the VFEH++ and CG algorithms can be effectively employed for 3-D MT field data inversion.
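The matrix-free trick described above generalizes to any Gauss-Newton-style inversion: run CG on the normal equations (JᵀJ + λI)x = Jᵀy, supplying products with J and Jᵀ as functions so the Jacobian is never formed. In the sketch below J is a small dense stand-in; in the MT code each product would instead cost one pseudo-forward run.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(200, 80))       # stand-in sensitivity operator
y = rng.normal(size=200)             # data vector
lam = 1e-2                           # regularization weight

Jx = lambda x: J @ x                 # "pseudo-forward modeling" #1
JTv = lambda v: J.T @ v              # "pseudo-forward modeling" #2

def cg_normal_eq(Jx, JTv, y, n, lam, iters=200, tol=1e-10):
    """CG on (J^T J + lam I) x = J^T y using only operator products."""
    x = np.zeros(n)
    r = JTv(y)                       # initial residual (x = 0)
    p, rs = r.copy(), r @ r
    for _ in range(iters):
        Ap = JTv(Jx(p)) + lam * p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x = cg_normal_eq(Jx, JTv, y, 80, lam)
print(np.linalg.norm(J.T @ (J @ x) + lam * x - J.T @ y))  # ~0
```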
Report on twisted nematic and supertwisted nematic device characterization program
NASA Technical Reports Server (NTRS)
1995-01-01
In this study we measured the optical characteristics of normally white twisted nematic (NWTN) and supertwisted nematic (STN) cells. Though no dynamic computer model was available, the static observations were compared with computer-simulated behavior. The measurements were taken as a function of both viewing angle and applied voltage and included, in the static case, not only luminance but also contrast ratio and chromaticity. We employed the computer model Twist Cell Optics, developed at Kent State in conjunction with this study, whose optical modeling foundation, like that of the ViDEOS program, is the 4 x 4 matrix method of Berreman. In order to resolve discrepancies between the experimental and modeled data, the optical parameters of the individual cell components, where not known, were determined using refractometry, profilometry, and various forms of ellipsometry. The resulting agreement between experiment and model is quite good, due primarily to a better understanding of the structure and optics of dichroic sheet polarizers. A description of the model and test cells employed is given in section 2. Section 3 contains the experimental data gathered, and section 4 gives examples of the fit between model and experiment. Also included with this report are a pair of papers which resulted from the research and which detail the polarizer properties and some of the cell characterization methods.
Multiphysics Thrust Chamber Modeling for Nuclear Thermal Propulsion
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Cheng, Gary; Chen, Yen-Sen
2006-01-01
The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation. A two-pronged approach is employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of heat transfer on thrust performance. Preliminary results on both aspects are presented.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance-reduction techniques for computing system reliability, applicable to solving very large, highly reliable, fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.
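The core idea, before any variance reduction, fits in a few lines: sample Weibull component lifetimes and count trials in which the system logic fails within the mission time. The sketch below does this for a hypothetical 2-out-of-3 redundant system; it deliberately omits the variance-reduction techniques that make MC-HARP practical for ultra-reliable systems, where naive sampling of this kind would need astronomically many trials.

```python
import numpy as np

# Crude Monte Carlo estimate of mission unreliability with Weibull
# (non-constant rate) component failures, 2-out-of-3 redundancy.
rng = np.random.default_rng(2)
shape, scale, mission = 1.5, 5.0e4, 1.0e3     # Weibull params, mission time (h)
trials = 200_000

t_fail = scale * rng.weibull(shape, size=(trials, 3))   # component lifetimes
working_at_end = (t_fail > mission).sum(axis=1)
system_fails = working_at_end < 2                       # needs 2 of 3 alive
print(f"unreliability ~ {system_fails.mean():.2e}")
```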
Intelligence and Accidents: A Multilevel Model
2006-05-06
The HLM 6 computer program (Raudenbush, Bryk, Cheong, & Congdon, 2004) was employed to conduct the multilevel analyses.
Employing static excitation control and tie line reactance to stabilize wind turbine generators
NASA Technical Reports Server (NTRS)
Hwang, H. H.; Mozeico, H. V.; Guo, T.
1978-01-01
An analytical representation of a wind turbine generator employing blade pitch angle feedback control is presented. A mathematical model was formulated. With the functioning MOD-0 wind turbine serving as a practical case study, results of computer simulations of the model as applied to the problem of dynamic stability at rated load are also presented. The effect of tower shadow was included in the input to the system. Different configurations of the drive train and optimal values of the tie line reactance were used in the simulations. Computer results revealed that a static excitation control system, coupled with optimal values of the tie line reactance, would effectively reduce oscillations of the power output without the use of a slip clutch.
Krishnamoorthy, Gautham
2010-10-15
Decoupled radiative heat transfer calculations of 30 cm-diameter toluene and heptane pool fires are performed employing the discrete ordinates method. The composition and temperature fields within the fires are created from detailed experimental measurements of soot volume fractions based on absorption and emission, temperature statistics and correlations found in the literature. The measured temperature variance data is utilized to compute the temperature self-correlation term for modeling turbulence-radiation interactions. In the toluene pool fire, the presence of cold soot near the fuel surface is found to suppress the average radiation feedback to the pool surface by 27%. The performances of four gray and three non-gray radiative property models for the gases are also compared. The average variations in radiative transfer predictions due to differences in the spectroscopic and experimental databases employed in the property model formulations are found to be between 10% and 20%. Clear differences between the gray and non-gray modeling strategies are seen when the mean beam length is computed based on traditionally employed geometric relations. Therefore, a correction to the mean beam length is proposed to improve the agreement between gray and non-gray modeling in simulations of open pool fires.
Human-computer interaction in multitask situations
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1977-01-01
Human-computer interaction in multitask decision-making situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables including number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed and the practical issues involved in designing human-computer systems for multitask situations are considered.
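A hedged discrete-event sketch conveys the queueing view: action-evoking events arrive at random and are served either by the fast computer or, with some handoff probability, by the slower human. The rates and probabilities below are illustrative, not the paper's experimental values.

```python
import numpy as np

# Toy queueing simulation of shared human-computer responsibility.
rng = np.random.default_rng(3)
mean_interarrival = 5.0            # mean time between action-evoking events
t_comp, t_human = 1.0, 4.0         # service times (speed mismatch)
p_defer = 0.2                      # chance the computer hands the task off

t, comp_free, human_free, waits = 0.0, 0.0, 0.0, []
for _ in range(10_000):
    t += rng.exponential(mean_interarrival)   # next event arrives
    if rng.random() > p_defer:                # computer takes the task
        start = max(t, comp_free)
        comp_free = start + t_comp
    else:                                     # human takes the task
        start = max(t, human_free)
        human_free = start + t_human
    waits.append(start - t)

print(f"mean wait before service: {np.mean(waits):.2f}")
```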
Application of linear regression analysis in accuracy assessment of rolling force calculations
NASA Astrophysics Data System (ADS)
Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.
1998-10-01
Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows separating systematic and random prediction errors from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application; however, the outlined approach can be used to assess the performance of any computational model.
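The proposed check amounts to regressing measured values on model predictions: an intercept different from zero indicates systematic bias, a slope different from one a proportional model error, and the residual scatter bundles random error with measurement noise. The sketch below uses synthetic stand-in data.

```python
import numpy as np

# Regress measured rolling force on predicted force to split error sources.
rng = np.random.default_rng(4)
predicted = rng.uniform(8.0, 20.0, size=300)               # model output (MN)
measured = 0.5 + 1.05 * predicted + rng.normal(0, 0.4, 300)  # synthetic data

b, a = np.polyfit(predicted, measured, 1)    # slope b, intercept a
resid = measured - (a + b * predicted)
print(f"intercept a = {a:.3f} (systematic), slope b = {b:.3f} "
      f"(proportional), s = {resid.std(ddof=2):.3f} (random + measurement)")
```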
Analysis of explicit model predictive control for path-following control.
Lee, Junho; Chang, Hyuk-Jun
2018-01-01
In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as a key approach for handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target applications to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming (mp-QP) technique. The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in the optimization problem and the range of horizons for path-following control are described through simulations. For verification of the proposed controller, simulation results obtained using other control methods, such as MPC, a Linear-Quadratic Regulator (LQR), and a driver model, are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration.
Applications of hybrid genetic algorithms in seismic tomography
NASA Astrophysics Data System (ADS)
Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet T.; Papazachos, Constantinos
2011-11-01
Almost all earth sciences inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure which typically employs partial derivatives in order to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially for highly non-linear cases, the final model strongly depends on the initial model, hence it is prone to solution-entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data fit. A typical example is the class of methods known as genetic algorithms (GA), which achieves the aforementioned approximation through model representation and manipulations, and has attracted the attention of the earth sciences community during the last decade, with several applications already presented for several geophysical problems. In this paper, we examine the efficiency of the combination of the typical regularized least-squares and genetic methods for a typical seismic tomography problem. The proposed approach combines a local (LOM) and a global (GOM) optimization method, in an attempt to overcome the limitations of each individual approach, such as local minima and slow convergence, respectively. The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that employ the same experimental geometry, wavelength and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used for testing the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.
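A bare-bones genetic algorithm of the kind hybridized here can be sketched as selection, crossover, and mutation over a population of candidate models, scored by a travel-time misfit. The forward operator below is a random stand-in for a ray-path matrix, and the local (regularized least-squares) partner of the hybrid is only indicated by a comment.

```python
import numpy as np

rng = np.random.default_rng(5)
G = rng.uniform(0.5, 1.5, size=(60, 20))        # stand-in ray-path matrix
m_true = rng.uniform(2.0, 6.0, size=20)          # "true" slowness model
d_obs = G @ m_true                               # synthetic travel times

def misfit(m):
    return np.linalg.norm(G @ m - d_obs)

pop = rng.uniform(2.0, 6.0, size=(100, 20))      # random starting models
for gen in range(200):
    fit = np.array([misfit(m) for m in pop])
    parents = pop[np.argsort(fit)[:50]]          # selection: keep best half
    i, j = rng.integers(0, 50, (2, 50))
    mask = rng.random((50, 20)) < 0.5
    children = np.where(mask, parents[i], parents[j])   # uniform crossover
    children += rng.normal(0, 0.05, children.shape)     # mutation
    # (in the hybrid scheme, a regularized least-squares update would be
    #  applied to the elite models here before re-forming the population)
    pop = np.vstack([parents, children])

print(f"best misfit after GA: {min(misfit(m) for m in pop):.4f}")
```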
Moncho, Salvador; Autschbach, Jochen
2010-01-12
A benchmark study for relativistic density functional calculations of NMR spin-spin coupling constants has been performed. The test set contained 47 complexes with heavy metal atoms (W, Pt, Hg, Tl, Pb) with a total of 88 coupling constants involving one or two heavy metal atoms. One-, two-, three-, and four-bond spin-spin couplings have been computed at different levels of theory (nonhybrid vs hybrid DFT, scalar vs two-component relativistic). The computational model was based on geometries fully optimized at the BP/TZP scalar relativistic zeroth-order regular approximation (ZORA) and the conductor-like screening model (COSMO) to include solvent effects. The NMR computations also employed the continuum solvent model. Computations in the gas phase were performed in order to assess the importance of the solvation model. The relative median deviations between various computational models and experiment were found to range between 13% and 21%, with the highest-level computational model (hybrid density functional computations including scalar plus spin-orbit relativistic effects, the COSMO solvent model, and a Gaussian finite-nucleus model) performing best.
Heat Transfer on a Flat Plate with Uniform and Step Temperature Distributions
NASA Technical Reports Server (NTRS)
Bahrami, Parviz A.
2005-01-01
Heat transfer associated with turbulent flow on a step-heated or cooled section of a flat plate at zero angle of attack with an insulated starting section was computationally modeled using the GASP Navier-Stokes code. The algebraic eddy viscosity model of Baldwin-Lomax and two turbulent two-equation models, the K-omega model and the Shear Stress Transport (SST) model, were employed. The variations from uniformity of the imposed experimental temperature profile were incorporated in the computations. The computations yielded satisfactory agreement with the experimental results for all three models. The Baldwin-Lomax model showed the closest agreement in heat transfer, whereas the SST model predictions were higher and the K-omega model predictions higher still than the experiments. In addition to the step temperature distribution case, computations were also carried out for a uniformly heated or cooled plate. The SST model showed the closest agreement with the Von Karman analogy, whereas the K-omega model was higher and the Baldwin-Lomax model was lower.
Computer simulation of earthquakes
NASA Technical Reports Server (NTRS)
Cohen, S. C.
1976-01-01
Two computer simulation models of earthquakes were studied for the dependence of the pattern of events on the model assumptions and input parameters. Both models represent the seismically active region by mechanical blocks which are connected to one another and to a driving plate. The blocks slide on a friction surface. In the first model, elastic forces and time-independent friction were employed to simulate main shock events. The size, length, and time and place of event occurrence were influenced strongly by the magnitude and degree of homogeneity in the elastic and friction parameters of the fault region. Periodically recurring similar events were frequently observed in simulations with near-homogeneous parameters along the fault, whereas seismic gaps were a common feature of simulations employing large variations in the fault parameters. The second model incorporated viscoelastic forces and time-dependent friction to account for aftershock sequences. The periods between aftershock events increased with time, and the aftershock region was confined to that which moved in the main event.
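A single-block version of such a slider-block model illustrates the stick-slip mechanism: a block driven through a spring across a friction surface sticks until the spring force exceeds static friction, then slips until the force relaxes to the dynamic level. With homogeneous parameters, as below, the events recur periodically with identical size, mirroring the behavior the simulations found. All values are illustrative.

```python
import numpy as np

# One-block stick-slip (slider-block) model with homogeneous parameters.
k, v_drive, dt = 1.0, 0.1, 0.01           # spring stiffness, plate speed, step
F_static, F_dynamic = 2.0, 1.0            # static / dynamic friction levels

x_plate, x_block, events = 0.0, 0.0, []
for step in range(200_000):
    x_plate += v_drive * dt               # driving plate creeps forward
    force = k * (x_plate - x_block)
    if force > F_static:                  # stick-slip event ("earthquake")
        slip = (force - F_dynamic) / k    # slide until force drops
        x_block += slip
        events.append((step * dt, slip))

slips = [s for _, s in events]
print(f"{len(events)} events, mean slip {np.mean(slips):.3f}")
```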
Education in a Research University
ERIC Educational Resources Information Center
Arrow, Kenneth J. Ed.; And Others
This collection of 30 essays on the character, administration, and management of research universities emphasizes the perspective of statistics and operations research. The essays include: "A Robust Faculty Planning Model" (Frederick Biedenweg); "Looking Back at Computer Models Employed in the Stanford University…
QSAR Modeling: Where Have You Been? Where Are You Going To?
Quantitative structure–activity relationship modeling is one of the major computational tools employed in medicinal chemistry. However, throughout its entire history it has drawn both praise and criticism concerning its reliability, limitations, successes, and failures. In this...
2014-01-01
Both computational and empirical dosimetric tools were employed [31]: finite-difference time-domain (FDTD) modeling techniques for the computational dosimetry, complemented by thermocouple measurements of temperature-time data for wells exposed to THz radiation.
Report from the Integrated Modeling Panel at the Workshop on the Science of Ignition on NIF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marinak, M; Lamb, D
2012-07-03
This section deals with multiphysics radiation hydrodynamics codes used to design and simulate targets in the ignition campaign. These topics encompass all the physical processes they model, and include consideration of any approximations necessary due to finite computer resources. The section focuses on what developments would have the highest impact on reducing uncertainties in modeling most relevant to experimental observations. It considers how the ICF codes should be employed in the ignition campaign. This includes a consideration of how the experiments can be best structured to test the physical models the codes employ.
Ziegler, Sigurd; Pedersen, Mads L; Mowinckel, Athanasia M; Biele, Guido
2016-12-01
Attention deficit hyperactivity disorder (ADHD) is characterized by altered decision-making (DM) and reinforcement learning (RL), for which competing theories propose alternative explanations. Computational modelling contributes to the understanding of DM and RL by integrating behavioural and neurobiological findings, and could elucidate pathogenic mechanisms behind ADHD. This review of neurobiological theories of ADHD describes their predictions for the effect of ADHD on DM and RL, as expressed in the parameters of the drift-diffusion model of DM (DDM) and a basic RL model. Empirical studies employing these models are also reviewed. While theories often agree on how ADHD should be reflected in model parameters, each theory implies a unique combination of predictions. Empirical studies agree with the theories' assumption of a lowered DDM drift rate in ADHD, while findings are less conclusive for boundary separation. The few studies employing RL models support a lower choice sensitivity in ADHD, but not an altered learning rate. The discussion outlines research areas for further theoretical refinement in the ADHD field.
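The drift-diffusion model referred to here is simple to simulate: evidence accumulates at a drift rate v with Gaussian noise until it crosses one of two boundaries. The sketch below shows how a lowered drift rate, the finding the empirical studies converge on, produces slower and less accurate responses; all parameter values are illustrative.

```python
import numpy as np

# Basic drift-diffusion model: noisy evidence accumulation to two bounds.
rng = np.random.default_rng(6)

def simulate_ddm(v, a=1.0, dt=0.001, sigma=1.0, n=1000):
    """Return mean response time and accuracy for drift rate v and
    boundary separation a (bounds at +/- a/2)."""
    rts, correct = [], []
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < a / 2:
            x += v * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t)
        correct.append(x > 0)           # upper bound = correct response
    return np.mean(rts), np.mean(correct)

for label, v in [("typical drift (v=2.0)", 2.0), ("lowered drift (v=1.0)", 1.0)]:
    rt, acc = simulate_ddm(v)
    print(f"{label}: mean RT = {rt:.3f} s, accuracy = {acc:.3f}")
```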
Airfoil Shape Optimization based on Surrogate Model
NASA Astrophysics Data System (ADS)
Mukesh, R.; Lingadurai, K.; Selvakumar, U.
2018-02-01
Engineering design problems always require an enormous amount of real-time experiments and computational simulations in order to assess and ensure the design objectives of the problem subject to various constraints. In most cases, the computational resources and time required per simulation are large. In certain cases, such as sensitivity analysis and design optimisation, where thousands or millions of simulations have to be carried out, this becomes prohibitively difficult for designers. Nowadays approximation models, otherwise called surrogate models (SMs), are widely employed in order to reduce the requirement of computational resources and time in analysing various engineering systems. Various approaches such as Kriging, neural networks, polynomials, and Gaussian processes are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross-validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms, which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
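The validation step described can be sketched generically: fit a Kriging-style surrogate (a Gaussian process) to sampled solver outputs and use k-fold cross-validation to compare candidate correlation (variogram/kernel) models. The "solver" below is an analytic stand-in for a panel or viscous code, and the kernel choices are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern
from sklearn.model_selection import KFold, cross_val_score

# DOE sample of a 2-D design space with a stand-in "aerodynamic" response.
rng = np.random.default_rng(7)
X = rng.uniform(-2.0, 2.0, size=(60, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2

# Compare correlation (kernel) models by k-fold cross-validated R^2.
for name, kernel in [("RBF", RBF()), ("Matern nu=1.5", Matern(nu=1.5))]:
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    scores = cross_val_score(gp, X, y, cv=KFold(5, shuffle=True, random_state=0))
    print(f"{name}: mean k-fold R^2 = {scores.mean():.3f}")
```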
ERIC Educational Resources Information Center
de la Torre, Jose Garcia; Cifre, Jose G. Hernandez; Martinez, M. Carmen Lopez
2008-01-01
This paper describes a computational exercise at undergraduate level that demonstrates the employment of Monte Carlo simulation to study the conformational statistics of flexible polymer chains, and to predict solution properties. Three simple chain models, including excluded volume interactions, have been implemented in a public-domain computer…
ERIC Educational Resources Information Center
Rapeepisarn, Kowit; Wong, Kok Wai; Fung, Chun Che; Khine, Myint Swe
2008-01-01
When designing Educational Computer Games, designers usually consider target age, interactivity, interface and other related issues. They rarely explore the genres that should be employed in one type of educational game. Recently, some digital game-based learning researchers have attempted to combine game genres with learning theory. Different researchers use…
An Examination of Sampling Characteristics of Some Analytic Factor Transformation Techniques.
ERIC Educational Resources Information Center
Skakun, Ernest N.; Hakstian, A. Ralph
Two population raw data matrices were constructed by computer simulation techniques. Each consisted of 10,000 subjects and 12 variables, and each was constructed according to an underlying factorial model consisting of four major common factors, eight minor common factors, and 12 unique factors. The computer simulation techniques were employed to…
Tying Theory To Practice: Cognitive Aspects of Computer Interaction in the Design Process.
ERIC Educational Resources Information Center
Mikovec, Amy E.; Dake, Dennis M.
The new medium of computer-aided design requires changes to the creative problem-solving methodologies typically employed in the development of new visual designs. Most theoretical models of creative problem-solving suggest a linear progression from preparation and incubation to some type of evaluative study of the "inspiration." These…
Combining-Ability Determinations for Incomplete Mating Designs
E.B. Snyder
1975-01-01
It is shown how general combining ability values (GCA's) from cross-, open-, and self-pollinated progeny can be derived in a single analysis. Breeding values are employed to facilitate explaining genetic models of the expected family means and the derivation of the GCA's. A FORTRAN computer program also includes computation of specific combining ability...
Student Modeling and Ab Initio Language Learning.
ERIC Educational Resources Information Center
Heift, Trude; Schulze, Mathias
2003-01-01
Provides examples of student modeling techniques that have been employed in computer-assisted language learning over the past decade. Describes two systems for learning German: "German Tutor" and "Geroline." Shows how a student model can support computerized adaptive language testing for diagnostic purposes in a Web-based language learning…
Aerodynamic-structural model of offwind yacht sails
NASA Astrophysics Data System (ADS)
Mairs, Christopher M.
An aerodynamic-structural model of offwind yacht sails was created that is useful in predicting sail forces. Two sails were examined experimentally and computationally at several wind angles to explore a variety of flow regimes. The accuracy of the numerical solutions was measured by comparing to experimental results. The two sails examined were a Code 0 and a reaching asymmetric spinnaker. During experiment, balance, wake, and sail shape data were recorded for both sails in various configurations. Two computational steps were used to evaluate the computational model. First, an aerodynamic flow model that includes viscosity effects was used to examine the experimental flying shapes that were recorded. Second, the aerodynamic model was combined with a nonlinear, structural, finite element analysis (FEA) model. The aerodynamic and structural models were used iteratively to predict final flying shapes of offwind sails, starting with the design shapes. The Code 0 has relatively low camber and is used at small angles of attack. It was examined experimentally and computationally at a single angle of attack in two trim configurations, a baseline and overtrimmed setting. Experimentally, the Code 0 was stable and maintained large flow attachment regions. The digitized flying shapes from experiment were examined in the aerodynamic model. Force area predictions matched experimental results well. When the aerodynamic-structural tool was employed, the predictive capability was slightly worse. The reaching asymmetric spinnaker has higher camber and operates at higher angles of attack than the Code 0. Experimentally and computationally, it was examined at two angles of attack. Like the Code 0, at each wind angle, baseline and overtrimmed settings were examined. Experimentally, sail oscillations and large flow detachment regions were encountered. The computational analysis began by examining the experimental flying shapes in the aerodynamic model. In the baseline setting, the computational force predictions were fair at both wind angles examined. Force predictions were much improved in the overtrimmed setting when the sail was highly stalled and more stable. The same trends in force prediction were seen when employing the aerodynamic-structural model. Predictions were good to fair in the baseline setting but improved in the overtrimmed configuration.
An advanced approach for computer modeling and prototyping of the human tooth.
Chang, Kuang-Hua; Magdum, Sheetalkumar; Khera, Satish C; Goel, Vijay K
2003-05-01
This paper presents a systematic and practical method for constructing accurate computer and physical models that can be employed for the study of human tooth mechanics. The proposed method starts with a histological section preparation of a human tooth. Through tracing outlines of the tooth on the sections, discrete points are obtained and are employed to construct B-spline curves that represent the exterior contours and dentino-enamel junction (DEJ) of the tooth using a least-squares curve fitting technique. The surface skinning technique is then employed to quilt the B-spline curves to create a smooth boundary and DEJ of the tooth using B-spline surfaces. These surfaces are respectively imported into SolidWorks via its application programming interface (API) to create solid models. The solid models are then imported into Pro/MECHANICA Structure for finite element analysis (FEA). The major advantage of the proposed method is that it first generates smooth solid models, instead of finite element models in discretized form. As a result, a more advanced p-FEA can be employed for structural analysis, which usually provides superior results to traditional h-FEA. In addition, the solid model constructed is smooth and can be fabricated at various scales using solid freeform fabrication technology. This method is especially useful in supporting bioengineering applications, where the shape of the object is usually complicated. A human maxillary second molar is presented to illustrate and demonstrate the proposed method. Note that both the solid and p-FEA models of the molar are presented; however, comparison between p- and h-FEA models is beyond the scope of the paper.
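A minimal sketch of the least-squares B-spline fitting step, assuming SciPy's make_lsq_spline in place of whatever fitting code the authors used; the traced points, chord-length parametrization, and knot placement below are illustrative only.

```python
# Fit a least-squares cubic B-spline through traced outline points,
# parametrized by chord length. A real tooth contour would be a closed
# curve fitted per histological section; the points here are synthetic.
import numpy as np
from scipy.interpolate import make_lsq_spline

pts = np.array([[0.0, 0.0], [1.0, 0.8], [2.1, 1.1], [3.0, 0.9],
                [4.2, 0.2], [5.0, -0.5], [6.1, -0.6], [7.0, 0.0]])
d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
u = d / d[-1]                                  # chord-length parameter in [0, 1]

k = 3                                          # cubic B-spline
interior = np.linspace(0, 1, 6)[1:-1]          # a few interior knots
t = np.r_[(0.0,) * (k + 1), interior, (1.0,) * (k + 1)]

spl = make_lsq_spline(u, pts, t, k=k)          # fits x(u) and y(u) jointly
print(spl(0.5))                                # point on the smoothed contour at u = 0.5
```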
Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brown, Douglas L.
1994-01-01
In order to decrease the overall computational time requirements of a spatially-marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
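The abstract does not spell out Barnwell's defect wall-function formulation, so the sketch below illustrates only the generic wall-function idea: recover the wall shear stress analytically from a log law at the first grid point instead of resolving the laminar-viscous sublayer. Constants and flow values are illustrative.

```python
# Generic wall-function illustration (not Barnwell's specific defect
# formulation): given velocity U at the first grid point a distance y off
# the wall, solve the log law
#   U/u_tau = (1/kappa) * ln(y * u_tau / nu) + B
# for the friction velocity u_tau by Newton iteration, then report the
# wall shear stress tau_w = rho * u_tau^2.
import math

def friction_velocity(U, y, nu, kappa=0.41, B=5.0, tol=1e-10):
    u_tau = 0.05 * U                          # initial guess
    for _ in range(50):
        yplus = y * u_tau / nu
        f = U / u_tau - (math.log(yplus) / kappa + B)
        dfdu = -U / u_tau**2 - 1.0 / (kappa * u_tau)   # df/du_tau
        step = f / dfdu
        u_tau -= step
        if abs(step) < tol * u_tau:
            break
    return u_tau

rho, nu = 1.2, 1.5e-5                         # air at roughly sea level (illustrative)
u_tau = friction_velocity(U=30.0, y=2e-3, nu=nu)
print(f"u_tau = {u_tau:.4f} m/s, tau_w = {rho * u_tau**2:.4f} Pa")
```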
Improving the XAJ Model on the Basis of Mass-Energy Balance
NASA Astrophysics Data System (ADS)
Fang, Yuanhao; Corbari, Chiara; Zhang, Xingnan; Mancini, Marco
2014-11-01
The Xin'anjiang (XAJ) model is a conceptual model developed by the group led by Prof. Ren-Jun Zhao. It takes pan evaporation as one of its inputs and computes the effective evapotranspiration (ET) of the catchment by mass balance. Such a scheme ensures good discharge simulation but has obvious defects, one being that the effective ET is spatially constant over the computation unit, neglecting the spatial variation of the variables that influence it; consequently, the XAJ model's simulation of ET and soil moisture (SM) is less reliable than its simulated discharge. In this study, the XAJ model was improved to employ both energy and mass balance in computing ET, following the energy-mass balance scheme of the FEST-EWB model.
Quasi-Static Viscoelastic Finite Element Model of an Aircraft Tire
NASA Technical Reports Server (NTRS)
Johnson, Arthur R.; Tanner, John A.; Mason, Angela J.
1999-01-01
An elastic large displacement thick-shell mixed finite element is modified to allow for the calculation of viscoelastic stresses. Internal strain variables are introduced at the element's stress nodes and are employed to construct a viscous material model. First order ordinary differential equations relate the internal strain variables to the corresponding elastic strains at the stress nodes. The viscous stresses are computed from the internal strain variables using viscous moduli which are a fraction of the elastic moduli. The energy dissipated by the action of the viscous stresses is included in the mixed variational functional. The nonlinear quasi-static viscous equilibrium equations are then obtained. Previously developed Taylor expansions of the nonlinear elastic equilibrium equations are modified to include the viscous terms. A predictor-corrector time marching solution algorithm is employed to solve the algebraic-differential equations. The viscous shell element is employed to computationally simulate a stair-step loading and unloading of an aircraft tire in contact with a frictionless surface.
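A one-dimensional sketch of the internal-strain-variable mechanism described above, assuming a standard-linear-solid form: the internal variable relaxes toward the elastic strain through a first-order ODE, the viscous stress uses a viscous modulus that is a fraction of the elastic one, and a Heun predictor-corrector step stands in for the paper's time-marching algorithm. All moduli, times, and the load history are invented.

```python
# 1-D internal-strain-variable sketch: q relaxes toward the strain eps via
# dq/dt = (eps - q)/tau, and the viscous stress uses E_v = frac * E.
import numpy as np

E, frac, tau = 10.0e6, 0.3, 0.5        # elastic modulus [Pa], viscous fraction, relaxation time [s]
E_v = frac * E

def qdot(q, eps):                       # first-order evolution of the internal variable
    return (eps - q) / tau

def stress(q, eps):                     # elastic plus viscous contribution
    return E * eps + E_v * (eps - q)

dt, T = 0.01, 4.0
t = np.arange(0.0, T, dt)
eps_hist = np.where(t < 2.0, 1e-3 * t / 2.0, 1e-3 * (2.0 - t / 2.0))  # load, then unload

q = 0.0
for eps in eps_hist:
    q_pred = q + dt * qdot(q, eps)                            # predictor (forward Euler)
    q = q + 0.5 * dt * (qdot(q, eps) + qdot(q_pred, eps))     # corrector (trapezoidal)
print(f"final internal strain q = {q:.3e}, final stress = {stress(q, eps_hist[-1]):.1f} Pa")
```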
Protein-membrane electrostatic interactions: Application of the Lekner summation technique
NASA Astrophysics Data System (ADS)
Juffer, André H.; Shepherd, Craig M.; Vogel, Hans J.
2001-01-01
A model has been developed to calculate the electrostatic interaction between biomolecules and lipid bilayers. The effect of ionic strength is included by means of explicit ions, while water is described as a background continuum. The bilayer is considered at the atomic level. The Lekner summation technique is employed to calculate the long-range electrostatic interactions. The new method is employed to estimate, with thermodynamic integration, the electrostatic contribution to the free energy of binding of sandostatin, a cyclic eight-residue analogue of the peptide hormone somatostatin, to lipid bilayers. Monte Carlo simulation techniques were employed to determine ion distributions and peptide orientations. Both neutral and negatively charged lipid bilayers were used. An error analysis to judge the quality of the computation is also presented. The applicability of the Lekner summation technique, in combination with computer simulation models of the adsorption of peptides (and proteins) into the interfacial region of lipid bilayers, is discussed.
Patient-Specific Modeling of Intraventricular Hemodynamics
NASA Astrophysics Data System (ADS)
Vedula, Vijay; Marsden, Alison
2017-11-01
Heart disease is one of the leading causes of death in the world. Apart from malfunctions in electrophysiology and myocardial mechanics, abnormal hemodynamics is a major factor attributed to heart disease across all ages. Computer simulations offer an efficient means to accurately reproduce in vivo flow conditions and also make predictions of post-operative outcomes and disease progression. We present an experimentally validated computational framework for performing patient-specific modeling of intraventricular hemodynamics. Our modeling framework employs the SimVascular open source software to build an anatomic model and employs robust image registration methods to extract ventricular motion from the image data. We then employ a stabilized finite element solver to simulate blood flow in the ventricles, solving the Navier-Stokes equations in arbitrary Lagrangian-Eulerian (ALE) coordinates by prescribing the wall motion extracted during registration. We model the fluid-structure interaction effects of the cardiac valves using an immersed boundary method and discuss the potential application of this methodology in single ventricle physiology and trans-catheter aortic valve replacement (TAVR). This research is supported in part by the Stanford Child Health Research Institute and the Stanford NIH-NCATS-CTSA through Grant UL1 TR001085 and partly through NIH NHLBI R01 Grant 5R01HL129727-02.
Reinforcement learning in depression: A review of computational research.
Chen, Chong; Takahashi, Taiki; Nakagawa, Shin; Inoue, Takeshi; Kusumi, Ichiro
2015-08-01
Despite being considered primarily a mood disorder, major depressive disorder (MDD) is characterized by cognitive and decision-making deficits. Recent research has employed computational models of reinforcement learning (RL) to address these deficits. The computational approach has the advantage of making explicit predictions about learning and behavior, specifying the process parameters of RL, differentiating between model-free and model-based RL, and enabling computational model-based functional magnetic resonance imaging and electroencephalography. These merits have given rise to the emerging field of computational psychiatry, and here we review specific studies that focused on MDD. Considerable evidence suggests that MDD is associated with impaired brain signals of reward prediction error and expected value ('wanting'), decreased reward sensitivity ('liking') and/or learning (be it model-free or model-based), etc., although the causality remains unclear. These parameters may serve as valuable intermediate phenotypes of MDD, linking general clinical symptoms to underlying molecular dysfunctions. We believe future computational research at clinical, systems, and cellular/molecular/genetic levels will propel us toward a better understanding of the disease. Copyright © 2015 Elsevier Ltd. All rights reserved.
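To make the review's key quantities concrete, here is a minimal model-free RL sketch with a reward prediction error, a learning rate, and a reward-sensitivity parameter, the sort of process parameters such studies estimate; the values and task are illustrative, not drawn from any cited study.

```python
# Minimal model-free RL: on each trial the expected value V ("wanting")
# is updated by a reward prediction error scaled by a learning rate;
# rho scales the subjective reward ("liking"). Reduced rho mimics the
# blunted reward sensitivity discussed in the review.
import random

alpha = 0.1      # learning rate
rho = 0.7        # reward sensitivity; illustrative, not an estimate
V = 0.0          # expected value

random.seed(1)
for trial in range(100):
    reward = 1.0 if random.random() < 0.8 else 0.0   # 80% reward schedule
    rpe = rho * reward - V                           # reward prediction error
    V += alpha * rpe
print(f"learned value after 100 trials: V = {V:.3f} (asymptote ~ {rho * 0.8:.2f})")
```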
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1983-01-01
Adequate computer methods, based on interactions between discrete particles, provide information leading to an atomic level understanding of various physical processes. The success of these simulation methods, however, is related to the accuracy of the potential energy function representing the interactions among the particles. The development of a potential energy function for crystalline SiO2 forms that can be employed in lengthy computer modelling procedures was investigated. In many of the simulation methods which deal with discrete particles, semiempirical two body potentials were employed to analyze energy and structure related properties of the system. Many body interactions are required for a proper representation of the total energy for many systems. Many body interactions for simulations based on discrete particles are discussed.
NASA Astrophysics Data System (ADS)
Nguyen, Hung T.; Pabit, Suzette A.; Meisburger, Steve P.; Pollack, Lois; Case, David A.
2014-12-01
A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb+ and Sr2+) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein-Zernike equations, with results from the Kovalenko-Hirata closure being closest to experiment for the cases studied here.
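The 3D-RISM solvent calculation and Lebedev-grid angular averaging used in this paper are beyond a short snippet, so as a far simpler classical route from an atomic model to a solution scattering profile, the sketch below evaluates the Debye formula, whose spherical average is analytic; the coordinates and constant form factors are toy values.

```python
# Debye formula for an isotropic solution scattering profile:
#   I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij)
# (solvent contributions, which are the point of the RISM approach,
# are ignored here). Coordinates and form factors are hypothetical.
import numpy as np

coords = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0],
                   [0.0, 2.0, 0.0], [1.0, 1.0, 1.0]])    # Angstroms, toy "molecule"
f = np.array([6.0, 7.0, 8.0, 6.0])                        # q-independent form factors (crude)

r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
q = np.linspace(1e-3, 1.0, 200)                           # 1/Angstrom

qr = q[:, None, None] * r[None, :, :]
sinc = np.where(qr > 0, np.sin(qr) / np.where(qr > 0, qr, 1.0), 1.0)  # sin(x)/x, limit 1 at 0
I = np.einsum("i,j,qij->q", f, f, sinc)
print(I[:3])   # forward scattering I(q -> 0) approaches (sum f)^2 = 729
```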
Radrich, Karin; Ale, Angelique; Ermolayev, Vladimir; Ntziachristos, Vasilis
2012-12-01
We examine the improvement in imaging performance, such as axial resolution and signal localization, when employing limited-projection-angle fluorescence molecular tomography (FMT) together with x-ray computed tomography (XCT) measurements versus stand-alone FMT. For this purpose, we employed living mice, bearing a spontaneous lung tumor model, and imaged them with FMT and XCT under identical geometrical conditions using fluorescent probes for cancer targeting. The XCT data was employed, herein, as structural prior information to guide the FMT reconstruction. Gold standard images were provided by fluorescence images of mouse cryoslices, providing the ground truth in fluorescence bio-distribution. Upon comparison of FMT images versus images reconstructed using hybrid FMT and XCT data, we demonstrate marked improvements in image accuracy. This work relates to currently disseminated FMT systems, using limited projection scans, and can be employed to enhance their performance.
GIS For Floodplain Mapping in Design of Highway Drainage Facilities
DOT National Transportation Integrated Search
1998-08-01
Since the 1960s, civil engineers have employed a variety of computer models for stream floodplain analysis. HEC-2 and its Windows counterpart, HEC-RAS, have been the principal models used for such analyses. A significant deficiency of these programs i...
NASA Technical Reports Server (NTRS)
Marvin, J. G.; Horstman, C. C.; Rubesin, M. W.; Coakley, T. J.; Kussoy, M. I.
1975-01-01
An experiment designed to test and guide computations of the interaction of an impinging shock wave with a turbulent boundary layer is described. Detailed mean flow-field and surface data are presented for two shock strengths which resulted in attached and separated flows, respectively. Numerical computations, employing the complete time-averaged Navier-Stokes equations along with algebraic eddy-viscosity and turbulent Prandtl number models to describe shear stress and heat flux, are used to illustrate the dependence of the computations on the particulars of the turbulence models. Models appropriate for zero-pressure-gradient flows predicted the overall features of the flow fields, but were deficient in predicting many of the details of the interaction regions. Improvements to the turbulence model parameters were sought through a combination of detailed data analysis and computer simulations which tested the sensitivity of the solutions to model parameter changes. Computer simulations using these improvements are presented and discussed.
Musculoskeletal modelling in dogs: challenges and future perspectives.
Dries, Billy; Jonkers, Ilse; Dingemanse, Walter; Vanwanseele, Benedicte; Vander Sloten, Jos; van Bree, Henri; Gielen, Ingrid
2016-05-18
Musculoskeletal models have proven to be a valuable tool in human orthopaedics research. Recently, veterinary research has started taking an interest in the computer modelling approach to understand the forces acting upon the canine musculoskeletal system. While many of the methods employed in human musculoskeletal models can be applied to canine musculoskeletal models, not all techniques are applicable. This review summarizes the important parameters necessary for modelling, as well as the techniques employed in human musculoskeletal models and the limitations in transferring those techniques to canine modelling research. The major challenges in future canine modelling research are likely to centre around devising alternative techniques for obtaining maximal voluntary contractions, as well as finding scaling factors to adapt a generalized canine musculoskeletal model to represent specific breeds and subjects.
Reanalysis, compatibility and correlation in analysis of modified antenna structures
NASA Technical Reports Server (NTRS)
Levy, R.
1989-01-01
A simple computational procedure is synthesized to process changes in the microwave-antenna pathlength-error measure when there are changes in the antenna structure model. The procedure employs structural modification reanalysis methods combined with new extensions of correlation analysis to provide the revised rms pathlength error. Mainframe finite-element-method processing of the structure model is required only for the initial unmodified structure, and elementary postprocessor computations develop and deal with the effects of the changes. Several illustrative computational examples are included. The procedure adapts readily to processing spectra of changes for parameter studies or sensitivity analyses.
Grid computing in large pharmaceutical molecular modeling.
Claus, Brian L; Johnson, Stephen R
2008-07-01
Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.
NASA Astrophysics Data System (ADS)
Magno, Andrea; Pellarin, Riccardo; Caflisch, Amedeo
Amyloid fibrils are ordered polypeptide aggregates that have been implicated in several neurodegenerative pathologies, such as Alzheimer's, Parkinson's, Huntington's, and prion diseases, [1, 2] and, more recently, also in biological functionalities. [3, 4, 5] These findings have paved the way for a wide range of experimental and computational studies aimed at understanding the details of the fibril-formation mechanism. Computer simulations using low-resolution models, which employ a simplified representation of protein geometry and energetics, have provided insights into the basic physical principles underlying protein aggregation in general [6, 7, 8] and ordered amyloid aggregation. [9, 10, 11, 12, 13, 14, 15] For example, Dokholyan and coworkers have used the Discrete Molecular Dynamics method [16, 17] to shed light on the mechanisms of protein oligomerization [18] and the conformational changes that take place in proteins before the aggregation onset. [19, 20] One challenging observation, difficult to reproduce in computer simulations, is the wide range of aggregation scenarios that emerges from a variety of biophysical measurements. [21, 22] Atomistic models have been employed to study the conformational space of amyloidogenic polypeptides in the monomeric state, [23, 24, 25] the very initial steps of amyloid formation, [26, 27, 28, 29, 30, 31, 32] and the structural stability of fibril models. [33, 34, 35] However, all-atom simulations of the kinetics of fibril formation are beyond what can be done with modern computers.
Computer support for physiological cell modelling using an ontology on cell physiology.
Takao, Shimayoshi; Kazuhiro, Komurasaki; Akira, Amano; Takeshi, Iwashita; Masanori, Kanazawa; Tetsuya, Matsuda
2006-01-01
The development of electrophysiological whole cell models to support the understanding of biological mechanisms is increasing rapidly. Due to the complexity of biological systems, comprehensive cell models, which are composed of many imported sub-models of functional elements, can become quite complicated as well, making modification by computer difficult. Here, we propose computer-based support for making structural changes to cell models, employing the markup languages CellML and our original PMSML (physiological model structure markup language), in addition to a new ontology for cell physiological modelling. In particular, a method to make references from CellML files to the ontology and a method to assist manipulation of model structures using markup languages together with the ontology are reported. Using these methods, three software utilities, including a graphical model editor, are implemented. Experimental results showed that these methods are effective for the modification of electrophysiological models.
COMPARING SIMULATED AND EXPERIMENTAL HYSTERETIC TWO- PHASE TRANSIENT FLUID FLOW PHENOMENA
A hysteretic model for two-phase permeability (k)-saturation (S)-pressure (P) relations is outlined that accounts for effects of nonwetting fluid entrapment. The model can be employed in unsaturated fluid flow computer codes to predict temporal and spatial fluid distributions. Co...
Stochastic Approaches to Understanding Dissociations in Inflectional Morphology
ERIC Educational Resources Information Center
Plunkett, Kim; Bandelow, Stephan
2006-01-01
Computer modelling research has undermined the view that double dissociations in behaviour are sufficient to infer separability in the cognitive mechanisms underlying those behaviours. However, all these models employ "multi-modal" representational schemes, where functional specialisation of processing emerges from the training process.…
NASA Technical Reports Server (NTRS)
Bradley, D. B.; Irwin, J. D.
1974-01-01
A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth, and numbers and speeds of processors with aggregate job set characteristics. The model assumes an input work load of a set of recurrent jobs. The model includes a feedback scheduler/allocator which attempts to improve system performance through higher memory bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and provisions for specifying the execution of TMR (Triple Modular Redundant) and SIMPLEX (non-redundant) jobs.
NASA Astrophysics Data System (ADS)
Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.
2017-08-01
It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error. The employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that employs ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled in respective hierarchical scales by distinctive simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting its promise for use in the computer aided material design of high performance dielectrics.
Securing Secrets and Managing Trust in Modern Computing Applications
ERIC Educational Resources Information Center
Sayler, Andy
2016-01-01
The amount of digital data generated and stored by users increases every day. In order to protect this data, modern computing systems employ numerous cryptographic and access control solutions. Almost all such solutions, however, require the keeping of certain secrets as the basis of their security models. How best to securely store and control…
A Model for Minimizing Numeric Function Generator Complexity and Delay
2007-12-01
Numeric function generators (NFGs) allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise … Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods.
NASA Technical Reports Server (NTRS)
Yanosy, J. L.; Rowell, L. F.
1985-01-01
Efforts to make increasing use of suitable computer programs in the design of hardware have the potential to reduce expenditures. In this context, NASA has evaluated the benefits provided by software tools through an application to the Environmental Control and Life Support (ECLS) system. The present paper is concerned with the benefits obtained by employing simulation tools in the case of the Air Revitalization System (ARS) of a Space Station life support system. Attention is given to the ARS functions and components, a computer program overview, a SAND (solid amine water desorbed) bed model description, a model validation, and details regarding the simulation benefits.
Hierarchy of simulation models for a turbofan gas engine
NASA Technical Reports Server (NTRS)
Longenbaker, W. E.; Leake, R. J.
1977-01-01
Steady-state and transient performance of an F-100-like turbofan gas engine are modeled by a computer program, DYNGEN, developed by NASA. The model employs block data maps and includes about 25 states. Low-order nonlinear analytical and linear techniques are described in terms of their application to the model. Experimental comparisons illustrating the accuracy of each model are presented.
Adaptive Search through Constraint Violations
1990-01-01
… procedural) knowledge? Different methodologies are used to investigate these questions: psychological experiments, computer simulations, historical studies … learns control knowledge through adaptive search. Unlike most other psychological models of skill acquisition, HS is a model of analytical, or … (Newell, 1986; VanLehn, in press). Psychological models of skill acquisition employ different problem-solving mechanisms (forward search, backward …
A hybrid architecture for the implementation of the Athena neural net model
NASA Technical Reports Server (NTRS)
Koutsougeras, C.; Papachristou, C.
1989-01-01
The implementation of an earlier introduced neural net model for pattern classification is considered. Data flow principles are employed in the development of a machine that efficiently implements the model and can be useful for real time classification tasks. Further enhancement with optical computing structures is also considered.
Goodman, Thomas C.; Hardies, Stephen C.; Cortez, Carlos; Hillen, Wolfgang
1981-01-01
Computer programs are described that direct the collection, processing, and graphical display of numerical data obtained from high resolution thermal denaturation (1-3) and circular dichroism (4) studies. Besides these specific applications, the programs may also be useful, either directly or as programming models, in other types of spectrophotometric studies employing computers, programming languages, or instruments similar to those described here (see Materials and Methods). PMID:7335498
CSDMS2.0: Computational Infrastructure for Community Surface Dynamics Modeling
NASA Astrophysics Data System (ADS)
Syvitski, J. P.; Hutton, E.; Peckham, S. D.; Overeem, I.; Kettner, A.
2012-12-01
The Community Surface Dynamic Modeling System (CSDMS) is an NSF-supported, international and community-driven program that seeks to transform the science and practice of earth-surface dynamics modeling. CSDMS integrates a diverse community of more than 850 geoscientists representing 360 international institutions (academic, government, industry) from 60 countries and is supported by a CSDMS Interagency Committee (22 Federal agencies), and a CSDMS Industrial Consortia (18 companies). CSDMS presently distributes more than 200 Open Source models and modeling tools, access to high performance computing clusters in support of developing and running models, and a suite of products for education and knowledge transfer. CSDMS software architecture employs frameworks and services that convert stand-alone models into flexible "plug-and-play" components to be assembled into larger applications. CSDMS2.0 will support model applications within a web browser, on a wider variety of computational platforms, and on other high performance computing clusters to ensure robustness and sustainability of the framework. Conversion of stand-alone models into "plug-and-play" components will employ automated wrapping tools. Methods for quantifying model uncertainty are being adapted as part of the modeling framework. Benchmarking data is being incorporated into the CSDMS modeling framework to support model inter-comparison. Finally, a robust mechanism for ingesting and utilizing semantic mediation databases is being developed within the Modeling Framework. Six new community initiatives are being pursued: 1) an earth - ecosystem modeling initiative to capture ecosystem dynamics and ensuing interactions with landscapes, 2) a geodynamics initiative to investigate the interplay among climate, geomorphology, and tectonic processes, 3) an Anthropocene modeling initiative, to incorporate mechanistic models of human influences, 4) a coastal vulnerability modeling initiative, with emphasis on deltas and their multiple threats and stressors, 5) a continental margin modeling initiative, to capture extreme oceanic and atmospheric events generating turbidity currents in the Gulf of Mexico, and 6) a CZO Focus Research Group, to develop compatibility between CSDMS architecture and protocols and Critical Zone Observatory-developed models and data.
NASA Astrophysics Data System (ADS)
Latypov, Marat I.; Kalidindi, Surya R.
2017-10-01
There is a critical need for the development and verification of practically useful multiscale modeling strategies for simulating the mechanical response of multiphase metallic materials with heterogeneous microstructures. In this contribution, we present data-driven reduced order models for effective yield strength and strain partitioning in such microstructures. These models are built employing the recently developed framework of Materials Knowledge Systems that employ 2-point spatial correlations (or 2-point statistics) for the quantification of the heterostructures and principal component analyses for their low-dimensional representation. The models are calibrated to a large collection of finite element (FE) results obtained for a diverse range of microstructures with various sizes, shapes, and volume fractions of the phases. The performance of the models is evaluated by comparing the predictions of yield strength and strain partitioning in two-phase materials with the corresponding predictions from a classical self-consistent model as well as results of full-field FE simulations. The reduced-order models developed in this work show an excellent combination of accuracy and computational efficiency, and therefore present an important advance towards computationally efficient microstructure-sensitive multiscale modeling frameworks.
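A compact sketch of the featurization pipeline named here, under simplifying assumptions: FFT-based periodic 2-point autocorrelations of synthetic two-phase microstructures, reduced by principal component analysis; the calibration of the reduced-order linkage to finite element results is omitted.

```python
# Materials Knowledge Systems-style featurization: 2-point autocorrelation
# of a two-phase microstructure via FFT (periodic statistics), then PCA
# over an ensemble. Microstructures are synthetic random fields.
import numpy as np
from numpy.fft import fft2, ifft2
from sklearn.decomposition import PCA

def two_point_autocorr(m):
    """Periodic 2-point autocorrelation of a 0/1 indicator field m."""
    F = fft2(m)
    return np.real(ifft2(F * np.conj(F))) / m.size

rng = np.random.default_rng(0)
ensemble = []
for _ in range(30):
    vf = rng.uniform(0.2, 0.5)                      # phase volume fraction
    m = (rng.random((64, 64)) < vf).astype(float)   # toy two-phase microstructure
    ensemble.append(two_point_autocorr(m).ravel())

scores = PCA(n_components=3).fit_transform(np.array(ensemble))
print(scores.shape)   # (30, 3): low-dimensional microstructure descriptors
```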
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waters, E.C.; Holland, D.W.; Haynes, R.W.
1997-04-01
Traditional, fixed-price (input-output) economic models provide a useful framework for conceptualizing links in a regional economy. Apparent shortcomings in these models, however, severely restrict our ability to deduce valid prescriptions for public policy and economic development. A more efficient approach using regional computable general equilibrium (CGE) models, as well as a brief survey of relevant literature, is presented. Computable general equilibrium results under several different resource policy scenarios are examined and contrasted with a fixed-price analysis. In the most severe CGE scenario, elimination of Federal range programs caused the loss of 1,371 jobs (2.3 percent of regional employment) and $29 million (1.6 percent) of household income; and an 80-percent reduction in Federal log supplies resulted in the loss of 3,329 jobs (5.5 percent of regional employment) and $76 million (4.2 percent) of household income. These results do not include positive economic impacts associated with improvement in salmon runs. Economic counter scenarios indicate that increases in tourism and high-technology manufacturing and growth in the population of retirees can largely offset total employment and income losses.
Flywheel Propulsion Simulation
DOT National Transportation Integrated Search
1977-05-01
This report develops and describes the analytical models and digital computer simulations that can be used for the evaluation of flywheel-electric propulsion systems employed with urban transit vehicles operating over specified routes and with predet...
NASA Technical Reports Server (NTRS)
1973-01-01
A computer programmer's manual for a digital computer which will permit rapid and accurate parametric analysis of current and advanced attitude control propulsion systems is presented. The concept is for a cold helium pressurized, subcritical cryogen fluid supplied, bipropellant gas-fed attitude control propulsion system. The cryogen fluids are stored as liquids under low pressure and temperature conditions. The mathematical model provides a generalized form for the procedural technique employed in setting up the analysis program.
ELEMENT MASSES IN THE CRAB NEBULA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibley, Adam R.; Katz, Andrea M.; Satterfield, Timothy J.
Using our previously published element abundance or mass-fraction distributions in the Crab Nebula, we derived actual mass distributions and estimates for overall nebular masses of hydrogen, helium, carbon, nitrogen, oxygen and sulfur. As with the previous work, computations were carried out for photoionization models involving constant hydrogen density and also constant nuclear density. In addition, employing new flux measurements for [Ni ii] λ 7378, along with combined photoionization models and analytic computations, a nickel abundance distribution was mapped and a nebular stable nickel mass estimate was derived.
Amber Vanden Wymelenberg; Patrick Minges; Grzegorz Sabat; Diego Martinez; Andrea Aerts; Asaf Salamov; Igor Grigoriev; Harris Shapiro; Nik Putnam; Paula Belinky; Carlos Dosoretz; Jill Gaskell; Phil Kersten; Dan Cullen
2006-01-01
The white-rot basidiomycete Phanerochaete chrysosporium employs extracellular enzymes to completely degrade the major polymers of wood: cellulose, hemicellulose, and lignin. Analysis of a total of 10,048 v2.1 gene models predicts 769 secreted proteins, a substantial increase over the 268 models identified in the earlier database (v1.0). Within the v2.1 "computational...
Quanbeck, Andrew; Lang, Katharine; Enami, Kohei; Brown, Richard L
2010-02-01
A previous cost-benefit analysis found Screening, Brief Intervention, and Referral to Treatment (SBIRT) to be cost-beneficial from a societal perspective. This paper develops a cost-benefit model that includes the employer's perspective by considering the costs of absenteeism and impaired presenteeism due to problem drinking. We developed a Monte Carlo simulation model to estimate the costs and benefits of SBIRT implementation to an employer. We first presented the likely costs of problem drinking to a theoretical Wisconsin firm that does not currently provide SBIRT services. We then constructed a cost-benefit model in which the firm funds SBIRT for its employees. The net present value of SBIRT adoption was computed by comparing costs due to problem drinking both with and without the program. When absenteeism and impaired presenteeism costs were considered from the employer's perspective, the net present value of SBIRT adoption was $771 per employee. We concluded that implementing SBIRT is cost-beneficial from the employer's perspective and recommend that Wisconsin employers consider covering SBIRT services for their employees.
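A structure-only sketch of such a Monte Carlo cost-benefit calculation from the employer's perspective; every distribution and dollar figure below is hypothetical and not taken from the study.

```python
# Monte Carlo NPV comparison: draw uncertain per-employee drinking-related
# losses (absenteeism + impaired presenteeism) and the fraction averted by
# an SBIRT-like program, then report the NPV distribution of adoption.
import numpy as np

rng = np.random.default_rng(42)
n_draws, years, discount = 10_000, 3, 0.03

program_cost = rng.normal(150.0, 25.0, n_draws)        # per employee, paid in year 0
annual_loss_base = rng.normal(600.0, 150.0, n_draws)   # drinking-related loss without program
effect = rng.beta(4, 6, n_draws)                       # fraction of loss averted (mean 0.4)

disc = sum(1.0 / (1.0 + discount) ** t for t in range(1, years + 1))
benefit = annual_loss_base * effect * disc
npv = benefit - program_cost
print(f"mean NPV/employee: ${npv.mean():,.0f}; P(NPV > 0) = {(npv > 0).mean():.2f}")
```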
Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.
2017-12-01
Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the `true' numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.
Tandem internal models execute motor learning in the cerebellum.
Honda, Takeru; Nagao, Soichi; Hashimoto, Yuji; Ishikawa, Kinya; Yokota, Takanori; Mizusawa, Hidehiro; Ito, Masao
2018-06-25
In performing skillful movement, humans use predictions from internal models formed by repetition learning. However, the computational organization of internal models in the brain remains unknown. Here, we demonstrate that a computational architecture employing a tandem configuration of forward and inverse internal models enables efficient motor learning in the cerebellum. The model predicted learning adaptations observed in hand-reaching experiments in humans wearing a prism lens and explained the kinetic components of these behavioral adaptations. The tandem system also predicted a form of subliminal motor learning that was experimentally validated after training intentional misses of hand targets. Patients with cerebellar degeneration disease showed behavioral impairments consistent with tandemly arranged internal models. These findings validate computational tandemization of internal models in motor control and its potential uses in more complex forms of learning and cognition. Copyright © 2018 the Author(s). Published by PNAS.
Computational fluid dynamics challenges for hybrid air vehicle applications
NASA Astrophysics Data System (ADS)
Carrin, M.; Biava, M.; Steijl, R.; Barakos, G. N.; Stewart, D.
2017-06-01
This paper begins by comparing turbulence models for the prediction of hybrid air vehicle (HAV) flows. A 6:1 prolate spheroid is employed for validation of the computational fluid dynamics (CFD) method. An analysis of turbulent quantities is presented and the Shear Stress Transport (SST) k-ω model is compared against a k-ω Explicit Algebraic Stress model (EASM) within the unsteady Reynolds-Averaged Navier-Stokes (RANS) framework. Further comparisons involve Scale-Adaptive Simulation models and a local transition transport model. The results show that the flow around the vehicle at low pitch angles is sensitive to transition effects. At high pitch angles, the vortices generated on the suction side provide substantial lift augmentation and are better resolved by EASMs. The validated CFD method is employed for the flow around a shape similar to the Airlander aircraft of Hybrid Air Vehicles Ltd. The sensitivity of the transition location to the Reynolds number is demonstrated and the role of each of the vehicle's components is analyzed. It was found that the fins contributed the most to increasing the lift and drag.
ERIC Educational Resources Information Center
Duchin, Faye; Lange, Glenn-Marie
A study was conducted to describe the segments of the U.S. labor force that have been affected by the recent deterioration in U.S. trade. The methodology involved computer modeling of the effects of eliminating the 1987 merchandise trade deficit on employment by detailed industry and occupation, by geographic region, and by wage group. This was…
ERIC Educational Resources Information Center
Psycharis, Sarantos
2016-01-01
In this study, an instructional design model, based on the computational experiment approach, was employed in order to explore the effects of formative assessment strategies and scientific abilities rubrics on students' engagement in the development of an inquiry-based pedagogical scenario. In the following study, rubrics were used during the…
Mitsuhashi, Kenji; Poudel, Joemini; Matthews, Thomas P.; Garcia-Uribe, Alejandro; Wang, Lihong V.; Anastasio, Mark A.
2017-01-01
Photoacoustic computed tomography (PACT) is an emerging imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to an inverse source problem in which the initial pressure distribution is recovered from measurements of the radiated wavefield. A major challenge in transcranial PACT brain imaging is compensation for aberrations in the measured data due to the presence of the skull. Ultrasonic waves undergo absorption, scattering and longitudinal-to-shear wave mode conversion as they propagate through the skull. To properly account for these effects, a wave-equation-based inversion method should be employed that can model the heterogeneous elastic properties of the skull. In this work, a forward model based on a finite-difference time-domain discretization of the three-dimensional elastic wave equation is established and a procedure for computing the corresponding adjoint of the forward operator is presented. Massively parallel implementations of these operators employing multiple graphics processing units (GPUs) are also developed. The developed numerical framework is validated and investigated in computer-simulation and experimental phantom studies whose designs are motivated by transcranial PACT applications. PMID:29387291
NASA Astrophysics Data System (ADS)
Poudel, Joemini; Matthews, Thomas P.; Mitsuhashi, Kenji; Garcia-Uribe, Alejandro; Wang, Lihong V.; Anastasio, Mark A.
2017-03-01
Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to a time-domain inverse source problem, where the initial pressure distribution is recovered from the measurements recorded on an aperture outside the support of the source. A major challenge in transcranial PACT brain imaging is to compensate for aberrations in the measured data due to the propagation of the photoacoustic wavefields through the skull. To properly account for these effects, a wave equation-based inversion method should be employed that can model the heterogeneous elastic properties of the medium. In this study, an iterative image reconstruction method for 3D transcranial PACT is developed based on the elastic wave equation. To accomplish this, a forward model based on a finite-difference time-domain discretization of the elastic wave equation is established. Subsequently, gradient-based methods are employed for computing penalized least squares estimates of the initial source distribution that produced the measured photoacoustic data. The developed reconstruction algorithm is validated and investigated through computer-simulation studies.
Current Density and Continuity in Discretized Models
ERIC Educational Resources Information Center
Boykin, Timothy B.; Luisier, Mathieu; Klimeck, Gerhard
2010-01-01
Discrete approaches have long been used in numerical modelling of physical systems in both research and teaching. Discrete versions of the Schrodinger equation employing either one or several basis functions per mesh point are often used by senior undergraduates and beginning graduate students in computational physics projects. In studying…
ERIC Educational Resources Information Center
Blanco, Francesco; La Rocca, Paola; Petta, Catia; Riggi, Francesco
2009-01-01
An educational model simulation of the sound produced by lightning in the sky has been employed to demonstrate realistic signatures of thunder and its connection to the particular structure of the lightning channel. Algorithms used in the past have been revisited and implemented, making use of current computer techniques. The basic properties of…
Spin-neurons: A possible path to energy-efficient neuromorphic computers
NASA Astrophysics Data System (ADS)
Sharad, Mrigank; Fan, Deliang; Roy, Kaushik
2013-12-01
Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that "spin-neurons" (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.
Systems Biology in Immunology – A Computational Modeling Perspective
Germain, Ronald N.; Meier-Schellersheim, Martin; Nita-Lazar, Aleksandra; Fraser, Iain D. C.
2011-01-01
Systems biology is an emerging discipline that combines high-content, multiplexed measurements with informatic and computational modeling methods to better understand biological function at various scales. Here we present a detailed review of the methods used to create computational models and conduct simulations of immune function. We provide descriptions of the key data-gathering techniques employed to generate the quantitative and qualitative data required for such modeling and simulation and summarize the progress to date in applying these tools and techniques to questions of immunological interest, including infectious disease. We include comments on what insights modeling can provide that complement information obtained from the more familiar experimental discovery methods used by most investigators and why quantitative methods are needed to eventually produce a better understanding of immune system operation in health and disease. PMID:21219182
Turbulence modeling of free shear layers for high performance aircraft
NASA Technical Reports Server (NTRS)
Sondak, Douglas
1993-01-01
In many flowfield computations, accuracy of the turbulence model employed is frequently a limiting factor in the overall accuracy of the computation. This is particularly true for complex flowfields such as those around full aircraft configurations. Free shear layers such as wakes, impinging jets (in V/STOL applications), and mixing layers over cavities are often part of these flowfields. Although flowfields have been computed for full aircraft, the memory and CPU requirements for these computations are often excessive. Additional computer power is required for multidisciplinary computations such as coupled fluid dynamics and conduction heat transfer analysis. Massively parallel computers show promise in alleviating this situation, and the purpose of this effort was to adapt and optimize CFD codes to these new machines. The objective of this research effort was to compute the flowfield and heat transfer for a two-dimensional jet impinging normally on a cool plate. The results of this research effort were summarized in an AIAA paper titled 'Parallel Implementation of the k-epsilon Turbulence Model'. Appendix A contains the full paper.
Adiabatic quantum computation with neutral atoms via the Rydberg blockade
NASA Astrophysics Data System (ADS)
Goyal, Krittika; Deutsch, Ivan
2011-05-01
We study a trapped-neutral-atom implementation of the adiabatic model of quantum computation whereby the Hamiltonian of a set of interacting qubits is changed adiabatically so that its ground state evolves to the desired output of the algorithm. We employ the ``Rydberg blockade interaction,'' which previously has been used to implement two-qubit entangling gates in the quantum circuit model. Here it is employed via off-resonant virtual dressing of the excited levels, so that atoms always remain in the ground state. The resulting dressed-Rydberg interaction is insensitive to the distance between the atoms within a certain blockade radius, making this process robust to temperature and vibrational fluctuations. Single qubit interactions are implemented with global microwaves and atoms are locally addressed with light shifts. With these ingredients, we study a protocol to implement the two-qubit Quadratic Unconstrained Binary Optimization (QUBO) problem. We model atom trapping, addressing, coherent evolution, and decoherence. We also explore collective control of the many-atom system and generalize the QUBO problem to multiple qubits. We acknowledge funding from the AQUARIUS project, Sandia National Laboratories.
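For context, the QUBO problem named above asks for the binary vector x minimizing x^T Q x; for the few-qubit instances discussed, brute force makes the target of the adiabatic evolution explicit. The Q matrix below is an arbitrary example.

```python
# Brute-force QUBO: minimize x^T Q x over binary vectors x.
import itertools
import numpy as np

Q = np.array([[-1.0,  2.0],
              [ 0.0, -1.0]])      # 2-qubit example: E(x) = -x0 - x1 + 2*x0*x1

best = min(itertools.product([0, 1], repeat=Q.shape[0]),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best, float(np.array(best) @ Q @ np.array(best)))   # (0, 1) or (1, 0), energy -1.0
```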
Particle Hydrodynamics with Material Strength for Multi-Layer Orbital Debris Shield Design
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
1999-01-01
Three dimensional simulation of oblique hypervelocity impact on orbital debris shielding places extreme demands on computer resources. Research to date has shown that particle models provide the most accurate and efficient means for computer simulation of shield design problems. In order to employ a particle based modeling approach to the wall plate impact portion of the shield design problem, it is essential that particle codes be augmented to represent strength effects. This report describes augmentation of a Lagrangian particle hydrodynamics code developed by the principal investigator, to include strength effects, allowing for the entire shield impact problem to be represented using a single computer code.
Development of a table tennis robot for ball interception using visual feedback
NASA Astrophysics Data System (ADS)
Parnichkun, Manukid; Thalagoda, Janitha A.
2016-07-01
This paper presents a concept for intercepting a moving table tennis ball using a robot. The robot has four degrees of freedom (DOF), simplified in such a way that the system is able to perform the task within the bounded limit. It employs computer vision to localize the ball. For ball identification, Colour Based Threshold Segmentation (CBTS) and Background Subtraction (BS) methodologies are used. Coordinate Transformation (CT) is employed to transform the data from the camera coordinate frame to the general coordinate frame. The sensory system consists of two HD web cameras. Because the computation time of image processing from the web cameras is long, it is not possible to intercept the ball using image processing alone. Therefore, a projectile motion model is employed to predict the final destination of the ball.
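The projectile-motion prediction step can be sketched as follows; the function name, the interception plane, and the numerical values are assumptions for illustration, not the paper's parameters.

    import numpy as np

    G_Z = -9.81  # gravity [m/s^2]

    def predict_interception(p0, v0, z_hit):
        """Given ball position p0 and velocity v0 estimated from vision,
        solve p0_z + v0_z*t + 0.5*g*t^2 = z_hit and return the (x, y)
        interception point and time of flight."""
        a, b, c = 0.5 * G_Z, v0[2], p0[2] - z_hit
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None  # the ball never reaches the plane
        t = (-b - np.sqrt(disc)) / (2.0 * a)  # later root, since a < 0
        return p0[:2] + v0[:2] * t, t

    p0 = np.array([0.0, 0.0, 0.3])  # position from vision [m]
    v0 = np.array([2.0, 0.1, 1.5])  # velocity from two frames [m/s]
    print(predict_interception(p0, v0, z_hit=0.2))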
NASA Astrophysics Data System (ADS)
Chu, A.
2016-12-01
Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work implements three of the homogeneous ETAS models described in Ogata (1998). With a model's log-likelihood function, my software finds the Maximum-Likelihood Estimates (MLEs) of the model's parameters to estimate the homogeneous background rate and the temporal and spatial parameters that govern triggering effects. The EM algorithm is employed for its advantages of stability and robustness (Veen and Schoenberg, 2008). My work also presents comparisons among the three models in robustness, convergence speed, and implementations from theory to computing practice. Up-to-date regional seismic data of seismically active areas such as Southern California and Japan are used to demonstrate the comparisons. Data analysis has been done using the computer languages Java and R. Java has the advantages of strong typing and ease of controlling memory resources, while R has the advantage of numerous available functions in statistical computing. Comparisons are also made between the two programming languages in convergence and stability, computational speed, and ease of implementation. Issues that may affect convergence, such as spatial shapes, are discussed.
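For concreteness, a purely temporal ETAS conditional intensity of the standard Ogata form is sketched below; the parameter values are illustrative placeholders, not the MLEs the software estimates, and the spatial terms of the full models are omitted.

    import numpy as np

    def etas_intensity(t, times, mags, mu=0.2, K=0.05, alpha=1.5,
                       c=0.01, p=1.2, M0=3.0):
        """lambda(t) = mu + sum over past events i of
        K * exp(alpha * (m_i - M0)) * (t - t_i + c)^(-p)."""
        past = times < t
        dt = t - times[past]
        return mu + np.sum(K * np.exp(alpha * (mags[past] - M0)) * (dt + c) ** (-p))

    times = np.array([0.0, 1.2, 3.5])  # event times [days]
    mags = np.array([4.0, 3.2, 5.1])   # magnitudes
    print(etas_intensity(4.0, times, mags))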
Plate refractive camera model and its applications
NASA Astrophysics Data System (ADS)
Huang, Longxiang; Zhao, Xu; Cai, Shen; Liu, Yuncai
2017-03-01
In real applications, a pinhole camera capturing objects through a planar parallel transparent plate is frequently employed. Due to the refractive effects of the plate, such an imaging system does not comply with the conventional pinhole camera model. Although the system is ubiquitous, it has not been thoroughly studied. This paper aims to present a simple virtual camera model, called a plate refractive camera model, which has a form similar to a pinhole camera model and can efficiently model refractions through a plate. The key idea is to employ a pixel-wise viewpoint concept to encode the refraction effects into a pixel-wise pinhole camera model. The proposed camera model realizes an efficient forward projection computation method and has some advantages in applications. First, the model can help to compute the caustic surface to represent the changes of the camera viewpoints. Second, the model has strengths in analyzing and rectifying the image caustic distortion caused by the plate refraction effects. Third, the model can be used to calibrate the camera's intrinsic parameters without removing the plate. Last but not least, the model supports plate refractive triangulation methods that solve the plate refractive triangulation problem easily in multiple views. We verify our theory in both synthetic and real experiments.
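The physical effect the model encodes per pixel is the Snell-law lateral shift of a ray crossing a parallel plate; a hedged sketch of that shift (not the authors' pixel-wise formulation) is:

    import numpy as np

    def plate_lateral_shift(theta1, h, n):
        """Displacement of a ray incident at angle theta1 [rad] after
        crossing a plate of thickness h and relative refractive index n:
        d = h * sin(theta1 - theta2) / cos(theta2), Snell: sin t1 = n sin t2."""
        theta2 = np.arcsin(np.sin(theta1) / n)
        return h * np.sin(theta1 - theta2) / np.cos(theta2)

    for deg in (0.0, 10.0, 30.0):
        d = plate_lateral_shift(np.radians(deg), h=5e-3, n=1.5)
        print("%5.1f deg -> shift %.4f mm" % (deg, 1e3 * d))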
Adding a solar-radiance function to the Hošek-Wilkie skylight model.
Hošek, Lukáš; Wilkie, Alexander
2013-01-01
One prerequisite for realistic renderings of outdoor scenes is the proper capturing of the sky's appearance. Currently, an explicit simulation of light scattering in the atmosphere isn't computationally feasible, and won't be in the foreseeable future. Captured luminance patterns have proven their usefulness in practice but can't meet all user needs. To fill this capability gap, computer graphics technology has employed analytical models of sky-dome luminance patterns for more than two decades. For technical reasons, such models deal with only the sky dome's appearance, though, and exclude the solar disc. The widely used model proposed by Arcot Preetham and colleagues employed a separately derived analytical formula for adding a solar emitter of suitable radiant intensity. Although this yields reasonable results, the formula is derived in a manner that doesn't exactly match the conditions in their sky-dome model. But the more sophisticated a skylight model is and the more subtly it can represent different conditions, the more the solar radiance should exactly match the skylight's conditions. Toward that end, researchers propose a solar-radiance function that exactly matches a recently published high-quality analytical skylight model.
NASA Technical Reports Server (NTRS)
Hah, C.; Lakshminarayana, B.
1982-01-01
Turbulent wakes of turbomachinery rotor blades, isolated airfoils, and a cascade of airfoils were investigated both numerically and experimentally. Low subsonic and incompressible wake flows were examined. A finite difference procedure was employed in the numerical analysis utilizing the continuity, momentum, and turbulence closure equations in the rotating, curvilinear, and nonorthogonal coordinate system. A nonorthogonal curvilinear coordinate system was developed to improve the accuracy and efficiency of the numerical calculation. Three turbulence models were employed to obtain closure of the governing equations. The first model comprised transport equations for the turbulent kinetic energy and the rate of energy dissipation, and the second and third models comprised equations for the rate of turbulent kinetic energy dissipation and Reynolds stresses, respectively. The second model handles the convection and diffusion terms in the Reynolds stress transport equation collectively, while the third model handles them individually. The numerical results demonstrate that the second and third models provide accurate predictions, while the second model requires considerably less computer time and memory storage.
Analysis of thermo-chemical nonequilibrium models for carbon dioxide flows
NASA Technical Reports Server (NTRS)
Rock, Stacey G.; Candler, Graham V.; Hornung, Hans G.
1992-01-01
The aerothermodynamics of thermochemical nonequilibrium carbon dioxide flows is studied. The chemical kinetics models of McKenzie and Park are implemented in separate three-dimensional computational fluid dynamics codes. The codes incorporate a five-species gas model characterized by a translational-rotational and a vibrational temperature. Solutions are obtained for flow over finite length elliptical and circular cylinders. The computed flowfields are then employed to calculate Mach-Zehnder interferograms for comparison with experimental data. The accuracy of the chemical kinetics models is determined through this comparison. Also, the methodology of the three-dimensional thermochemical nonequilibrium code is verified by the reproduction of the experiments.
Pârvu, Ovidiu; Gilbert, David
2016-01-01
Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems.
NASA Astrophysics Data System (ADS)
Pilz, Tobias; Francke, Till; Bronstert, Axel
2016-04-01
Until today a large number of competing computer models has been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology, due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received little attention in previous investigations. First, the impact of model structural uncertainty is estimated by employing several alternative representations for each simulated process. Second, the influence of landscape discretization and of parameterization from multiple datasets and user decisions is explored. Third, several numerical solvers are employed for the integration of the governing ordinary differential equations to study their effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty are compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
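The third question, sensitivity to the choice of numerical solver, can be illustrated on a toy problem; here a single linear-reservoir ODE stands in for the hydrological model, and scipy's explicit (RK45) and implicit (BDF) solvers are compared. All values are assumptions for illustration.

    import numpy as np
    from scipy.integrate import solve_ivp

    P, k = 2.0, 5.0  # inflow [mm/d] and storage residence time [d]
    rhs = lambda t, S: P - S / k  # linear reservoir: dS/dt = P - S/k

    for method in ("RK45", "BDF"):  # explicit vs implicit integration
        sol = solve_ivp(rhs, (0.0, 50.0), [0.0], method=method, rtol=1e-6)
        print(method, "steps:", sol.t.size, "final storage:", sol.y[0, -1])
    # Both should approach the analytic steady state P*k = 10 mm.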
NASA Astrophysics Data System (ADS)
Fast, J. D.; Ma, P.; Easter, R. C.; Liu, X.; Zaveri, R. A.; Rasch, P.
2012-12-01
Predictions of aerosol radiative forcing in climate models still contain large uncertainties, resulting from a poor understanding of certain aerosol processes, the level of complexity of aerosol processes represented in models, and the ability of models to account for sub-grid scale variability of aerosols and processes affecting them. In addition, comparing the performance and computational efficiency of new aerosol process modules used in various studies is problematic because different studies often employ different grid configurations, meteorology, trace gas chemistry, and emissions that affect the temporal and spatial evolution of aerosols. To address this issue, we have developed an Aerosol Modeling Testbed (AMT) to systematically and objectively evaluate aerosol process modules. The AMT consists of the modular Weather Research and Forecasting (WRF) model, a series of testbed cases for which extensive in situ and remote sensing measurements of meteorological, trace gas, and aerosol properties are available, and a suite of tools to evaluate the performance of meteorological, chemical, aerosol process modules. WRF contains various parameterizations of meteorological, chemical, and aerosol processes and includes interactive aerosol-cloud-radiation treatments similar to those employed by climate models. In addition, the physics suite from a global climate model, Community Atmosphere Model version 5 (CAM5), has also been ported to WRF so that these parameterizations can be tested at various spatial scales and compared directly with field campaign data and other parameterizations commonly used by the mesoscale modeling community. In this study, we evaluate simple and complex treatments of the aerosol size distribution and secondary organic aerosols using the AMT and measurements collected during three field campaigns: the Megacities Initiative Local and Global Observations (MILAGRO) campaign conducted in the vicinity of Mexico City during March 2006, the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento California during June 2010, and the California Nexus (CalNex) campaign conducted in southern California during May and June of 2010. For the aerosol size distribution, we compare the predictions from the GOCART bulk aerosol model, the MADE/SORGAM modal aerosol model, the Modal Aerosol Model (MAM) employed by CAM5, and the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) which uses a sectional representation. For secondary organic aerosols, we compare simple fixed mass yield approaches with the numerically complex volatility basis set approach. All simulations employ the same emissions, meteorology, trace gas chemistry (except for that involving condensable organic species), and initial and boundary conditions. Performance metrics from the AMT are used to assess performance in terms of simulated mass, composition, size distribution (except for GOCART), and aerosol optical properties in relation to computational expense. In addition to statistical measures, qualitative differences among the different aerosol models over the computational domain are presented to examine variations in how aerosols age among the aerosol models.
Williams, Kent E; Voigt, Jeffrey R
2004-01-01
The research reported herein presents the results of an empirical evaluation that focused on the accuracy and reliability of cognitive models created using a computerized tool: the cognitive analysis tool for human-computer interaction (CAT-HCI). A sample of participants, expert in interacting with a newly developed tactical display for the U.S. Army's Bradley Fighting Vehicle, individually modeled their knowledge of 4 specific tasks employing the CAT-HCI tool. Measures of the accuracy and consistency of task models created by these task domain experts using the tool were compared with task models created by a double expert. The findings indicated a high degree of consistency and accuracy between the different "single experts" in the task domain in terms of the resultant models generated using the tool. Actual or potential applications of this research include assessing human-computer interaction complexity, determining the productivity of human-computer interfaces, and analyzing an interface design to determine whether methods can be automated.
Quantum Iterative Deepening with an Application to the Halting Problem
Tarrataca, Luís; Wichert, Andreas
2013-01-01
Classical models of computation traditionally resort to halting schemes in order to enquire about the state of a computation. In such schemes, a computational process is responsible for signaling an end of a calculation by setting a halt bit, which needs to be systematically checked by an observer. The capacity of quantum computational models to operate on a superposition of states requires an alternative approach. From a quantum perspective, any measurement of an equivalent halt qubit would have the potential to inherently interfere with the computation by provoking a random collapse amongst the states. This issue is exacerbated by undecidable problems such as the Entscheidungsproblem which require universal computational models, e.g. the classical Turing machine, to be able to proceed indefinitely. In this work we present an alternative view of quantum computation based on production system theory in conjunction with Grover's amplitude amplification scheme that allows for (1) a detection of halt states without interfering with the final result of a computation; (2) the possibility of non-terminating computation and (3) an inherent speedup to occur during computations susceptible of parallelization. We discuss how such a strategy can be employed in order to simulate classical Turing machines. PMID:23520465
Particle size and shape distributions of hammer milled pine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westover, Tyler Lott; Matthews, Austin Colter; Williams, Christopher Luke
2015-04-01
Particle size and shape distributions impact particle heating rates and diffusion of volatized gases out of particles during fast pyrolysis conversion, and consequently must be modeled accurately in order for computational pyrolysis models to produce reliable results for bulk solid materials. For this milestone, lodge pole pine chips were ground using a Thomas-Wiley #4 mill using two screen sizes in order to produce two representative materials that are suitable for fast pyrolysis. For the first material, a 6 mm screen was employed in the mill and for the second material, a 3 mm screen was employed in the mill. Both materials were subjected to RoTap sieve analysis, and the distributions of the particle sizes and shapes were determined using digital image analysis. The results of the physical analysis will be fed into computational pyrolysis simulations to create models of materials with realistic particle size and shape distributions. This milestone was met on schedule.
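Sieve data of this kind are commonly summarized by a mass-weighted geometric mean diameter and geometric standard deviation, the two parameters of a log-normal size distribution; the sketch below uses illustrative numbers, not the report's data.

    import numpy as np

    d = np.array([0.25, 0.5, 1.0, 2.0, 4.0])      # sieve-class midpoints [mm]
    w = np.array([0.05, 0.20, 0.40, 0.25, 0.10])  # mass fraction retained

    ln_d = np.log(d)
    mu = np.sum(w * ln_d) / np.sum(w)             # mass-weighted mean of ln(d)
    sigma = np.sqrt(np.sum(w * (ln_d - mu) ** 2) / np.sum(w))
    print("geometric mean diameter: %.2f mm" % np.exp(mu))
    print("geometric standard deviation: %.2f" % np.exp(sigma))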
NASA Astrophysics Data System (ADS)
Arendt, V.; Shalchi, A.
2018-06-01
We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
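The running diffusion coefficient used as a diagnostic above can be sketched as d_xx(t) = <(x(t) - x(0))^2> / (2t) over an ensemble of trajectories; here simple random walks stand in for the test-particle trajectories of the turbulence simulation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_steps, dt = 2000, 500, 0.1
    steps = rng.normal(0.0, 1.0, (n_particles, n_steps)) * np.sqrt(dt)
    x = np.cumsum(steps, axis=1)  # ensemble of trajectories x(t), x(0) = 0

    t = dt * np.arange(1, n_steps + 1)
    d_xx = np.mean(x ** 2, axis=0) / (2.0 * t)  # running diffusion coefficient
    print("late-time d_xx = %.3f (0.5 expected for this walk)" % d_xx[-1])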
Influence of an asymmetric ring on the modeling of an orthogonally stiffened cylindrical shell
NASA Technical Reports Server (NTRS)
Rastogi, Naveen; Johnson, Eric R.
1994-01-01
Structural models are examined for the influence of a ring with an asymmetrical cross section on the linear elastic response of an orthogonally stiffened cylindrical shell subjected to internal pressure. The first structural model employs classical theory for the shell and stiffeners. The second model employs transverse shear deformation theories for the shell and stringer and classical theory for the ring. Closed-end pressure vessel effects are included. Interacting line load intensities are computed in the stiffener-to-skin joints for an example problem having the dimensions of the fuselage of a large transport aircraft. Classical structural theory is found to exaggerate the asymmetric response compared to the transverse shear deformation theory.
Navier-Stokes computations for circulation control airfoils
NASA Technical Reports Server (NTRS)
Pulliam, Thomas H.; Jespersen, Dennis C.; Barth, Timothy J.
1987-01-01
Navier-Stokes computations of subsonic to transonic flow past airfoils with augmented lift due to rearward jet blowing over a curved trailing edge are presented. The approach uses a spiral grid topology. Solutions are obtained using a Navier-Stokes code which employs an implicit finite difference method, an algebraic turbulence model, and developments which improve stability, convergence, and accuracy. Results are compared against experiments for no jet blowing and moderate jet pressures and demonstrate the capability to compute these complicated flows.
Navier-Stokes computations for circulation controlled airfoils
NASA Technical Reports Server (NTRS)
Pulliam, T. H.; Jespersen, D. C.; Barth, T. J.
1986-01-01
Navier-Stokes computations of subsonic to transonic flow past airfoils with augmented lift due to rearward jet blowing over a curved trailing edge are presented. The approach uses a spiral grid topology. Solutions are obtained using a Navier-Stokes code which employs an implicit finite difference method, an algebraic turbulence model, and developments which improve stability, convergence, and accuracy. Results are compared against experiments for no jet blowing and moderate jet pressures and demonstrate the capability to compute these complicated flows.
NASA Technical Reports Server (NTRS)
Levison, W. H.; Baron, S.
1984-01-01
Preliminary results in the application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues are discussed in the context of an air to air target tracking task. The closed loop model is described briefly. Then, problem simplifications that are employed to reduce computational costs are discussed. Finally, model results showing sensitivity of performance to various assumptions concerning the simulator and/or the pilot are presented.
A fast analytical undulator model for realistic high-energy FEL simulations
NASA Astrophysics Data System (ADS)
Tatchyn, R.; Cremer, T.
1997-02-01
A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.
NASA Astrophysics Data System (ADS)
Fujitani, Y.; Sumino, Y.
2018-04-01
A classically scale invariant extension of the standard model predicts large anomalous Higgs self-interactions. We compute missing contributions in previous studies for probing the Higgs triple coupling of a minimal model using the process e+e- → Zhh. Employing a proper order counting, we compute the total and differential cross sections at the leading order, which incorporate the one-loop corrections between zero external momenta and their physical values. Discovery/exclusion potential of a future e+e- collider for this model is estimated. We also find a unique feature in the momentum dependence of the Higgs triple vertex for this class of models.
NASA Astrophysics Data System (ADS)
Sharpanskykh, Alexei; Treur, Jan
Employing rich internal agent models of actors in large-scale socio-technical systems often results in scalability issues. The problem addressed in this paper is how to improve computational properties of a complex internal agent model, while preserving its behavioral properties. The problem is addressed for the case of an existing affective-cognitive decision making model instantiated for an emergency scenario. For this internal decision model an abstracted behavioral agent model is obtained, which ensures a substantial increase of the computational efficiency at the cost of approximately 1% behavioural error. The abstraction technique used can be applied to a wide range of internal agent models with loops, for example, involving mutual affective-cognitive interactions.
A LabVIEW model incorporating an open-loop arterial impedance and a closed-loop circulatory system.
Cole, R T; Lucas, C L; Cascio, W E; Johnson, T A
2005-11-01
While numerous computer models exist for the circulatory system, many are limited in scope, contain unwanted features or incorporate complex components specific to unique experimental situations. Our purpose was to develop a basic, yet multifaceted, computer model of the left heart and systemic circulation in LabVIEW having universal appeal without sacrificing crucial physiologic features. The program we developed employs Windkessel-type impedance models in several open-loop configurations and a closed-loop model coupling a lumped impedance and ventricular pressure source. The open-loop impedance models demonstrate afterload effects on arbitrary aortic pressure/flow inputs. The closed-loop model catalogs the major circulatory waveforms with changes in afterload, preload, and left heart properties. Our model provides an avenue for expanding the use of the ventricular equations through closed-loop coupling that includes a basic coronary circuit. Tested values used for the afterload components and the effects of afterload parameter changes on various waveforms are consistent with published data. We conclude that this model offers the ability to alter several circulatory factors and digitally catalog the most salient features of the pressure/flow waveforms employing a user-friendly platform. These features make the model a useful instructional tool for students as well as a simple experimental tool for cardiovascular research.
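The simplest of the Windkessel-type afterloads mentioned above, the two-element model C dP/dt = Q(t) - P/R, can be sketched in a few lines; the parameter values and inflow waveform are illustrative, not the paper's tested values.

    import numpy as np
    from scipy.integrate import solve_ivp

    R, C = 1.0, 1.5   # peripheral resistance [mmHg*s/ml], compliance [ml/mmHg]
    T, Ts = 0.8, 0.3  # cardiac period and systolic duration [s]

    def q_in(t):
        """Pulsatile aortic inflow: half-sine in systole, zero in diastole."""
        tc = t % T
        return 200.0 * np.sin(np.pi * tc / Ts) if tc < Ts else 0.0

    sol = solve_ivp(lambda t, P: [(q_in(t) - P[0] / R) / C],
                    (0.0, 10 * T), [80.0], max_step=1e-3)
    last = sol.y[0][sol.t > 9 * T]  # pressure over the final beat
    print("pressure range: %.1f to %.1f mmHg" % (last.min(), last.max()))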
Radar Detection Models in Computer Supported Naval War Games
1979-06-08
revealed a requirement for the effective centralized management of computer supported war game development and employment in the U.S. Navy. A ... considerations and supports the requirement for centralized management of computerized war game development. Therefore it is recommended that a central ... managerial and fiscal authority be established for computerized tactical war game development. This central authority should ensure that new games
A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation
NASA Astrophysics Data System (ADS)
Qiang, Z.; Zeng, L.; Wu, L.
2016-12-01
Due to the strong spatial heterogeneity of landfill, uncertainty is ubiquitous in the gas transport process in landfill. To accurately characterize the landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements, e.g., the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first order ANOVA components are retained in the PCE. Illustrated with numerical case studies, the proposed method shows significant superiority in computational efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.
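For reference, the stochastic EnKF analysis step that the proposed PCIKF is compared against looks schematically like this; the state dimension, observation operator, and values are illustrative assumptions, not the landfill model's.

    import numpy as np

    rng = np.random.default_rng(1)
    n_ens, n_state, n_obs = 50, 10, 3

    X = rng.normal(1.0, 0.5, (n_state, n_ens))  # prior ensemble (e.g. log-permeability)
    H = np.zeros((n_obs, n_state))              # observe states 2, 5, 8
    H[0, 2] = H[1, 5] = H[2, 8] = 1.0
    R = 0.05 * np.eye(n_obs)                    # observation error covariance
    y = np.array([1.2, 0.8, 1.1])               # gas-pressure-like measurements

    # Kalman gain from the ensemble covariance, then perturbed-observation update.
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    X_a = X + K @ (Y - H @ X)
    print("posterior mean:", X_a.mean(axis=1).round(2))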
ERIC Educational Resources Information Center
Tudela, Ignacio; Bonete, Pedro; Fullana, Andres; Conesa, Juan Antonio
2011-01-01
The unreacted-core shrinking (UCS) model is employed to characterize fluid-particle reactions that are important in industry and research. An approach to understand the UCS model by numerical methods is presented, which helps the visualization of the influence of the variables that control the overall heterogeneous process. Use of this approach in…
Model Checking Temporal Logic Formulas Using Sticker Automata
Feng, Changwei; Wu, Huanmei
2017-01-01
As an important complex problem, the temporal logic model checking problem is still far from being fully resolved under the circumstance of DNA computing, especially Computation Tree Logic (CTL), Interval Temporal Logic (ITL), and Projection Temporal Logic (PTL), because there is still a lack of approaches for DNA model checking. To address this challenge, a model checking method is proposed for checking the basic formulas in the above three temporal logic types with DNA molecules. First, one-type single-stranded DNA molecules are employed to encode the Finite State Automaton (FSA) model of the given basic formula so that a sticker automaton is obtained. On the other hand, other single-stranded DNA molecules are employed to encode the given system model so that the input strings of the sticker automaton are obtained. Next, a series of biochemical reactions are conducted between the above two types of single-stranded DNA molecules. It can then be decided whether the system satisfies the formula or not. As a result, we have developed a DNA-based approach for checking all the basic formulas of CTL, ITL, and PTL. The simulated results demonstrate the effectiveness of the new method. PMID:29119114
FAST Simulation Tool Containing Methods for Predicting the Dynamic Response of Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonkman, Jason
2015-08-12
FAST is a simulation tool (computer software) for modeling the dynamic response of horizontal-axis wind turbines. FAST employs a combined modal and multibody structural-dynamics formulation in the time domain.
NASA Astrophysics Data System (ADS)
Pei, Zongrui; Eisenbach, Markus
2017-06-01
Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation is how to effectively avoid the local minima in the energy landscape of a dislocation core. Among the other methods to optimize the dislocation core structures, we choose the algorithm of Particle Swarm Optimization, an algorithm that simulates the social behaviors of organisms. By employing more particles (bigger swarm) and more iterative steps (allowing them to explore for longer time), the local minima can be effectively avoided. But this would require more computational cost. The advantage of this algorithm is that it is readily parallelized in modern high computing architecture. We demonstrate the performance of our parallelized algorithm scales linearly with the number of employed cores.
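A generic serial PSO of the kind described (not the authors' parallel implementation) fits in a few lines; here the Rastrigin function stands in for the multi-minimum energy landscape of a dislocation core.

    import numpy as np

    rng = np.random.default_rng(2)

    def pso(f, dim, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Each particle remembers its best position; the swarm shares a global best."""
        x = rng.uniform(-5.0, 5.0, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
        g = pbest[np.argmin(pbest_f)].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            fx = np.apply_along_axis(f, 1, x)
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            g = pbest[np.argmin(pbest_f)].copy()
        return g, pbest_f.min()

    rastrigin = lambda z: 10.0 * z.size + np.sum(z**2 - 10.0 * np.cos(2.0 * np.pi * z))
    print(pso(rastrigin, dim=2))  # global minimum is 0 at the origin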
Using the β2-Adrenoceptor for Structure-Based Drug Design
ERIC Educational Resources Information Center
Manallack, David T.; Chalmers, David K.; Yuriev, Elizabeth
2010-01-01
The topics of molecular modeling and drug design are studied in a medicinal chemistry course. The recently reported structures of several G protein-coupled receptors (GPCR) with bound ligands have been used to develop a simple computer-based experiment employing molecular-modeling software. Knowledge of the specific interactions between a ligand…
Domain-Specific QSAR Models for Identifying Potential Estrogenic Activity of Phenols (FutureTox III)
Computational tools can be used for efficient evaluation of untested chemicals for their ability to disrupt the endocrine system. We have employed previously developed global QSAR models that were trained and validated on the ToxCast/Tox21 ER assay data for virtual screening of a...
A New Tradition To Fit the Model.
ERIC Educational Resources Information Center
Darnell, D. Roe; Rosenthal, Donna McCrohan
2001-01-01
Discusses Cerro Coso Community College in Ridgecrest (California), where 80-85% of all local jobs are with one employer, the China Lake Naval Air Weapons Station (NAWS). States that massive layoffs at NAWS inspired creative ways of rethinking the community college model at Cerro Coso, such as creating the nation's first computer graphics imagery…
Computers Launch Faster, Better Job Matching
ERIC Educational Resources Information Center
Stevenson, Gloria
1976-01-01
Employment Security Automation Project (ESAP), a five-year program sponsored by the Employment and Training Administration, features an innovative computer-assisted job matching system and instantaneous computer-assisted service for unemployment insurance claimants. ESAP will also consolidate existing automated employment security systems to…
Page, Tessa; Nguyen, Huong Thi Huynh; Hilts, Lindsey; Ramos, Lorena; Hanrahan, Grady
2012-06-01
This work reveals a computational framework for parallel electrophoretic separation of complex biological macromolecules and model urinary metabolites. More specifically, the implementation of a particle swarm optimization (PSO) algorithm on a neural network platform for multiparameter optimization of multiplexed 24-capillary electrophoresis technology with UV detection is highlighted. Two experimental systems were examined: (1) separation of purified rabbit metallothioneins and (2) separation of model toluene urinary metabolites and selected organic acids. Results proved superior to the use of neural networks employing standard back propagation when examining training error, fitting response, and predictive abilities. Simulation runs were obtained as a result of metaheuristic examination of the global search space with experimental responses in good agreement with predicted values. Full separation of selected analytes was realized after employing optimal model conditions. This framework provides guidance for the application of metaheuristic computational tools to aid in future studies involving parallel chemical separation and screening. Adaptable pseudo-code is provided to enable users of varied software packages and modeling framework to implement the PSO algorithm for their desired use.
NASA Astrophysics Data System (ADS)
Bonelli, Francesco; Tuttafesta, Michele; Colonna, Gianpiero; Cutrone, Luigi; Pascazio, Giuseppe
2017-10-01
This paper describes the most advanced results obtained in the context of fluid dynamic simulations of high-enthalpy flows using detailed state-to-state air kinetics. Thermochemical non-equilibrium, typical of supersonic and hypersonic flows, was modeled by using both the accurate state-to-state approach and the multi-temperature model proposed by Park. The accuracy of the two thermochemical non-equilibrium models was assessed by comparing the results with experimental findings, showing better predictions provided by the state-to-state approach. To overcome the huge computational cost of the state-to-state model, a multiple-nodes GPU implementation, based on an MPI-CUDA approach, was employed and a comprehensive code performance analysis is presented. Both the pure MPI-CPU and the MPI-CUDA implementations exhibit excellent scalability performance. GPUs outperform CPUs computing especially when the state-to-state approach is employed, showing speed-ups, of the single GPU with respect to the single-core CPU, larger than 100 in both the case of one MPI process and multiple MPI process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, M; Kissel, L
2002-01-29
We are experimenting with a new computing model to be applied to a new computer dedicated to that model. Several LLNL science teams now have computational requirements, evidenced by the mature scientific applications that have been developed over the past five plus years, that far exceed the capability of the institution's computing resources. Thus, there is increased demand for dedicated, powerful parallel computational systems. Computation can, in the coming year, potentially field a capability system that is low cost because it will be based on a model that employs open source software and because it will use PC (IA32-P4) hardware. This incurs significant computer science risk regarding stability and system features but also presents great opportunity. We believe the risks can be managed, but the existence of risk cannot be ignored. In order to justify the budget for this system, we need to make the case that it serves science and, through serving science, serves the institution. That is the point of the meeting and the White Paper that we are proposing to prepare. The questions are listed and the responses received are in this report.
A study of reacting free and ducted hydrogen/air jets
NASA Technical Reports Server (NTRS)
Beach, H. L., Jr.
1975-01-01
The mixing and reaction of a supersonic jet of hydrogen in coaxial free and ducted high temperature test gases were investigated. The importance of chemical kinetics on computed results, and the utilization of free-jet theoretical approaches to compute enclosed flow fields were studied. Measured pitot pressure profiles were correlated by use of a parabolic mixing analysis employing an eddy viscosity model. All computations, including free, ducted, reacting, and nonreacting cases, use the same value of the empirical constant in the viscosity model. Equilibrium and finite rate chemistry models were utilized. The finite rate assumption allowed prediction of observed ignition delay, but the equilibrium model gave the best correlations downstream from the ignition location. Ducted calculations were made with finite rate chemistry; correlations were, in general, as good as the free-jet results until problems with the boundary conditions were encountered.
NASA Technical Reports Server (NTRS)
Hollis, Brian R.
1996-01-01
A computational algorithm has been developed which can be employed to determine the flow properties of an arbitrary real (virial) gas in a wind tunnel. A multiple-coefficient virial gas equation of state and the assumption of isentropic flow are used to model the gas and to compute flow properties throughout the wind tunnel. This algorithm has been used to calculate flow properties for the wind tunnels of the Aerothermodynamics Facilities Complex at the NASA Langley Research Center, in which air, CF4, He, and N2 are employed as test gases. The algorithm is detailed in this paper and sample results are presented for each of the Aerothermodynamics Facilities Complex wind tunnels.
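The real-gas behaviour such an algorithm accounts for can be illustrated with a density-explicit virial equation of state truncated at the third coefficient; the coefficient values below are assumptions for illustration, not the facility's fitted data.

    import numpy as np

    R_N2 = 296.8  # specific gas constant for N2 [J/(kg*K)]

    def virial_pressure(rho, T, B=-5.0e-4, C=2.0e-7):
        """p = rho*R*T*(1 + B*rho + C*rho^2), with B [m^3/kg], C [m^6/kg^2]."""
        return rho * R_N2 * T * (1.0 + B * rho + C * rho ** 2)

    rho = np.array([1.0, 50.0, 200.0])  # densities [kg/m^3]
    T = 300.0                           # temperature [K]
    ratio = virial_pressure(rho, T) / (rho * R_N2 * T)
    print("real/ideal pressure ratio:", ratio.round(4))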
A manifold learning approach to data-driven computational materials and processes
NASA Astrophysics Data System (ADS)
Ibañez, Ruben; Abisset-Chavanne, Emmanuelle; Aguado, Jose Vicente; Gonzalez, David; Cueto, Elias; Duval, Jean Louis; Chinesta, Francisco
2017-10-01
Standard simulation in classical mechanics is based on the use of two very different types of equations. The first one, of axiomatic character, is related to balance laws (momentum, mass, energy, …), whereas the second one consists of models that scientists have extracted from collected, natural or synthetic data. In this work we propose a new method, able to directly link data to computers in order to perform numerical simulations. These simulations will employ universal laws while minimizing the need of explicit, often phenomenological, models. They are based on manifold learning methodologies.
TRAC posttest calculations of Semiscale Test S-06-3. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ireland, J.R.; Bleiweis, P.B.
A comparison of Transient Reactor Analysis Code (TRAC) steady-state and transient results with Semiscale Test S-06-3 (US Standard Problem 8) experimental data is discussed. The TRAC model used employs fewer mesh cells than normal data comparison models so that TRAC's ability to obtain reasonable results with less computer time can be assessed. In general, the TRAC results are in good agreement with the data and the major phenomena found in the experiment are reproduced by the code with a substantial reduction in computing times.
Numerical simulation of supersonic flow using a new analytical bleed boundary condition
NASA Technical Reports Server (NTRS)
Harloff, G. J.; Smith, G. E.
1995-01-01
A new analytical bleed boundary condition is used to compute flowfields for a strong oblique shock wave/boundary layer interaction with a baseline and three bleed rates at a freestream Mach number of 2.47 with an 8 deg shock generator. The computational results are compared to experimental Pitot pressure profiles and wall static pressures through the interaction region. An algebraic turbulence model is employed for the bleed and baseline cases, and a one equation model is also used for the baseline case where the boundary layer is separated.
Computational comparison of quantum-mechanical models for multistep direct reactions
NASA Astrophysics Data System (ADS)
Koning, A. J.; Akkermans, J. M.
1993-02-01
We have carried out a computational comparison of all existing quantum-mechanical models for multistep direct (MSD) reactions. The various MSD models, including the so-called Feshbach-Kerman-Koonin, Tamura-Udagawa-Lenske and Nishioka-Yoshida-Weidenmüller models, have been implemented in a single computer system. All model calculations thus use the same set of parameters and the same numerical techniques; only one adjustable parameter is employed. The computational results have been compared with experimental energy spectra and angular distributions for several nuclear reactions, namely, 90Zr(p,p') at 80 MeV, 209Bi(p,p') at 62 MeV, and 93Nb(n,n') at 25.7 MeV. In addition, the results have been compared with the Kalbach systematics and with semiclassical exciton model calculations. All quantum MSD models provide a good fit to the experimental data. In addition, they reproduce the systematics very well and are clearly better than semiclassical model calculations. We furthermore show that the calculated predictions do not differ very strongly between the various quantum MSD models, leading to the conclusion that the simplest MSD model (the Feshbach-Kerman-Koonin model) is adequate for the analysis of experimental data.
Modeling of a latent fault detector in a digital system
NASA Technical Reports Server (NTRS)
Nagel, P. M.
1978-01-01
Methods of modeling the detection time or latency period of a hardware fault in a digital system are proposed that explain how a computer detects faults in a computational mode. The objectives were to study how software reacts to a fault, to account for as many variables as possible affecting detection and to forecast a given program's detecting ability prior to computation. A series of experiments were conducted on a small emulated microprocessor with fault injection capability. Results indicate that the detecting capability of a program largely depends on the instruction subset used during computation and the frequency of its use and has little direct dependence on such variables as fault mode, number set, degree of branching and program length. A model is discussed which employs an analog with balls in an urn to explain the rate at which subsequent repetitions of an instruction or instruction set detect a given fault.
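The balls-in-an-urn analogy can be made concrete with a small Monte Carlo experiment: if a fixed fraction of instruction executions would expose the fault, the detection latency is geometrically distributed. The detection probability used here is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(3)
    p_detect = 0.08  # assumed fraction of executions that expose the fault

    # Number of instruction executions until first detection, per trial.
    latency = rng.geometric(p_detect, size=100_000)
    print("mean latency: %.1f instructions (theory %.1f)"
          % (latency.mean(), 1.0 / p_detect))
    print("detected within 10 instructions: %.2f"
          % np.mean(latency <= 10))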
Computational models of an inductive power transfer system for electric vehicle battery charge
NASA Astrophysics Data System (ADS)
Anele, A. O.; Hamam, Y.; Chassagne, L.; Linares, J.; Alayli, Y.; Djouani, K.
2015-09-01
One of the issues to be solved for electric vehicles (EVs) to become a success is the technical solution of its charging system. In this paper, computational models of an inductive power transfer (IPT) system for EV battery charge are presented. Based on the fundamental principles behind IPT systems, 3 kW single phase and 22 kW three phase IPT systems for Renault ZOE are designed in MATLAB/Simulink. The results obtained based on the technical specifications of the lithium-ion battery and charger type of Renault ZOE show that the models are able to provide the total voltage required by the battery. Also, considering the charging time for each IPT model, they are capable of delivering the electricity needed to power the ZOE. In conclusion, this study shows that the designed computational IPT models may be employed as a support structure needed to effectively power any viable EV.
Sensitivity of Age-of-Air Calculations to the Choice of Advection Scheme
NASA Technical Reports Server (NTRS)
Eluszkiewicz, Janusz; Hemler, Richard S.; Mahlman, Jerry D.; Bruhwiler, Lori; Takacs, Lawrence L.
2000-01-01
The age of air has recently emerged as a diagnostic of atmospheric transport unaffected by chemical parameterizations, and the features in the age distributions computed in models have been interpreted in terms of the models' large-scale circulation field. This study shows, however, that in addition to the simulated large-scale circulation, three-dimensional age calculations can also be affected by the choice of advection scheme employed in solving the tracer continuity equation. Specifically, using the 3.0 deg latitude x 3.6 deg longitude and 40 vertical level version of the Geophysical Fluid Dynamics Laboratory SKYHI GCM and six online transport schemes ranging from Eulerian through semi-Lagrangian to fully Lagrangian, it will be demonstrated that the oldest ages are obtained using the nondiffusive centered-difference schemes while the youngest ages are computed with a semi-Lagrangian transport (SLT) scheme. The centered-difference schemes are capable of producing ages older than 10 years in the mesosphere, thus eliminating the "young bias" found in previous age-of-air calculations. At this stage, only limited intuitive explanations can be advanced for this sensitivity of age-of-air calculations to the choice of advection scheme. In particular, age distributions computed online with the National Center for Atmospheric Research Community Climate Model (MACCM3) using different varieties of the SLT scheme are substantially older than the SKYHI SLT distribution. The different varieties, including a noninterpolating-in-the-vertical version (which is essentially centered-difference in the vertical), also produce a narrower range of age distributions than the suite of advection schemes employed in the SKYHI model. While additional MACCM3 experiments with a wider range of schemes would be necessary to provide more definitive insights, the older and less variable MACCM3 age distributions can plausibly be interpreted as being due to the semi-implicit semi-Lagrangian dynamics employed in the MACCM3. This type of dynamical core (employed with a 60-min time step) is likely to reduce SLT's interpolation errors that are compounded by the short-term variability characteristic of the explicit centered-difference dynamics employed in the SKYHI model (time step of 3 min). In the extreme case of a very slowly varying circulation, the choice of advection scheme has no effect on two-dimensional (latitude-height) age-of-air calculations, owing to the smooth nature of the transport circulation in 2D models. These results suggest that nondiffusive schemes may be the preferred choice for multiyear simulations of tracers not overly sensitive to the requirement of monotonicity (this category includes many greenhouse gases). At the same time, age-of-air calculations offer a simple quantitative diagnostic of a scheme's long-term diffusive properties and may help in the evaluation of dynamical cores in multiyear integrations. On the other hand, the sensitivity of the computed ages to the model numerics calls for caution in using age of air as a diagnostic of a GCM's large-scale circulation field.
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.
Tonutti, Michele; Gras, Gauthier; Yang, Guang-Zhong
2017-07-01
Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer an accuracy comparable to FEM models. A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied onto the brain's surface. Two machine learning algorithms, artificial neural networks (ANNs) and support vector regression (SVR), are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3 mm, surpassing the general threshold of surgical accuracy and suitable for high-fidelity AR systems. The SVR models perform better than the ANNs, with positional errors for SVR models reaching under 0.2 mm. The results represent an improvement over existing deformation models for real-time applications, providing smaller errors and high patient-specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue.
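Schematically, the SVR branch of this approach reduces to regression on pre-computed simulation samples; in this hedged sketch a synthetic function stands in for the FEM results, and the feature and target choices are assumptions.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(4)
    # Features: force magnitude [N] and inclination angle [rad];
    # target: displacement of one mesh node [mm] (synthetic stand-in for FEM).
    X = rng.uniform([0.0, 0.0], [5.0, np.pi], (500, 2))
    y = 0.1 * X[:, 0] * np.cos(X[:, 1]) + rng.normal(0.0, 0.002, 500)

    model = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X[:400], y[:400])
    err = np.abs(model.predict(X[400:]) - y[400:])
    print("mean |error| on held-out samples: %.4f mm" % err.mean())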
Cscibox: A Software System for Age-Model Construction and Evaluation
NASA Astrophysics Data System (ADS)
Bradley, E.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; White, J. W. C.; Anderson, D. M.
2014-12-01
CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives, both directly dated and cross dated. The time has come to encourage cross-pollination between earth science and computer science in dating paleorecords. This project addresses that need. The CSciBox code, which is being developed by a team of computer scientists and geoscientists, is open source and freely available on GitHub. The system employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form. This makes it possible to do analysis on the whole core at once, in an interactive fashion, or to tailor the analysis to a subset of the core without loading the entire data file. CSciBox provides a number of 'components' that perform the common steps in age-model construction and evaluation: calibrations, reservoir-age correction, interpolations, statistics, and so on. The user employs these components via a graphical user interface (GUI) to go from raw data to finished age model in a single tool: e.g., an IntCal09 calibration of 14C data from a marine sediment core, followed by a piecewise-linear interpolation. CSciBox's GUI supports plotting of any measurement in the core against any other measurement, or against any of the variables in the calculation of the age model, with or without explicit error representations. Using the GUI, CSciBox's user can import a new calibration curve or other background data set and define a new module that employs that information. Users can also incorporate other software (e.g., Calib, BACON) as 'plug ins.' In the case of truly large data or significant computational effort, CSciBox is parallelizable across modern multicore processors, or clusters, or even the cloud. The next generation of the CSciBox code, currently in the testing stages, includes an automated reasoning engine that supports a more-thorough exploration of plausible age models and cross-dating scenarios.
Estimating the dust production rate of carbon stars in the Small Magellanic Cloud
NASA Astrophysics Data System (ADS)
Nanni, Ambra; Marigo, Paola; Girardi, Léo; Rubele, Stefano; Bressan, Alessandro; Groenewegen, Martin A. T.; Pastorelli, Giada; Aringer, Bernhard
2018-02-01
We employ newly computed grids of spectra reprocessed by dust for estimating the total dust production rate (DPR) of carbon stars in the Small Magellanic Cloud (SMC). For the first time, the grids of spectra are computed as a function of the main stellar parameters, i.e. mass-loss rate, luminosity, effective temperature, current stellar mass and element abundances at the photosphere, following a consistent, physically grounded scheme of dust growth coupled with stationary wind outflow. The model accounts for the dust growth of various dust species formed in the circumstellar envelopes of carbon stars, such as carbon dust, silicon carbide and metallic iron. In particular, we employ some selected combinations of optical constants and grain sizes for carbon dust that have been shown to reproduce simultaneously the most relevant colour-colour diagrams in the SMC. By employing our grids of models, we fit the spectral energy distributions of ≈3100 carbon stars in the SMC, consistently deriving some important dust and stellar properties, i.e. luminosities, mass-loss rates, gas-to-dust ratios, expansion velocities and dust chemistry. We discuss these properties and we compare some of them with observations in the Galaxy and Large Magellanic Cloud. We compute the DPR of carbon stars in the SMC, finding that the estimates provided by our method can differ significantly, by a factor of ≈2-5, from those available in the literature. Our grids of models, including the spectra and other relevant dust and stellar quantities, are publicly available at http://starkey.astro.unipd.it/web/guest/dustymodels.
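To make the grid-fitting step concrete, the sketch below scores an observed spectral energy distribution against every model spectrum in a pre-computed grid by chi-square and selects the minimum. The array shapes, band count, and random placeholder grid are assumptions, not the authors' pipeline.

```python
# Hedged sketch of fitting an observed SED against a pre-computed grid of
# model spectra by chi-square minimisation; all values are placeholders.
import numpy as np

obs_flux = np.array([1.2, 3.4, 2.8, 1.1])        # fluxes in a few photometric bands
obs_err  = np.array([0.1, 0.3, 0.2, 0.1])
grid_flux = np.random.rand(1000, 4)              # 1000 model spectra, same bands
grid_params = np.random.rand(1000, 3)            # e.g. (mass-loss rate, luminosity, T_eff)

chi2 = np.sum(((grid_flux - obs_flux) / obs_err) ** 2, axis=1)
best = np.argmin(chi2)                           # best-fitting grid point
print("best-fit parameters:", grid_params[best], "chi2 =", chi2[best])
```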
Numerical simulation of steady supersonic flow over spinning bodies of revolution
NASA Technical Reports Server (NTRS)
Sturek, W. B.; Schiff, L. B.
1982-01-01
A recently reported parabolized Navier-Stokes code has been employed to compute the supersonic flowfield about a spinning cone and spinning and nonspinning ogive cylinder and boattailed bodies of revolution at moderate incidence. The computations were performed for flow conditions where extensive measurements for wall pressure, boundary-layer velocity profiles, and Magnus force had been obtained. Comparisons between the computational results and experiment indicate excellent agreement for angles of attack up to 6 deg. At angles greater than 6 deg discrepancies are noted which are tentatively attributed to turbulence modeling errors. The comparisons for Magnus effects show that the code accurately predicts the effects of body shape for the selected models.
Numerical investigation of airflow in an idealised human extra-thoracic airway: a comparison study
Chen, Jie; Gutmark, Ephraim
2013-01-01
The large eddy simulation (LES) technique is employed to numerically investigate the airflow through an idealised human extra-thoracic airway under different breathing conditions (10 l/min, 30 l/min, and 120 l/min). The computational results are compared with single and cross hot-wire measurements, and with the time-averaged flow field computed by standard k-ω and k-ω-SST Reynolds-averaged Navier-Stokes (RANS) models and the Lattice-Boltzmann method (LBM). The LES results are also compared to the root-mean-square (RMS) flow field computed by the Reynolds stress model (RSM) and the LBM. LES generally gives better prediction of the time-averaged flow field than the RANS models and the LBM. LES also provides better estimation of the RMS flow field than both the RSM and the LBM. PMID:23619907
Computational effects of inlet representation on powered hypersonic, airbreathing models
NASA Technical Reports Server (NTRS)
Huebner, Lawrence D.; Tatum, Kenneth E.
1993-01-01
Computational results are presented to illustrate the powered aftbody effects of representing the scramjet inlet on a generic hypersonic vehicle with a fairing, to divert the external flow, as compared to an operating flow-through scramjet inlet. This study is pertinent to the ground testing of hypersonic, airbreathing models employing scramjet exhaust flow simulation in typical small-scale hypersonic wind tunnels. The comparison of aftbody effects due to inlet representation is well-suited for computational study, since small model size typically precludes the ability to ingest flow into the inlet and perform exhaust simulation at the same time. Two-dimensional analysis indicates that, although flowfield differences exist for the two types of inlet representations, little, if any, difference in surface aftbody characteristics is caused by fairing over the inlet.
High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics
Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis
2014-07-28
The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.
Simulation of mixing in the quick quench region of a rich burn-quick quench mix-lean burn combustor
NASA Technical Reports Server (NTRS)
Shih, Tom I.-P.; Nguyen, H. Lee; Howe, Gregory W.; Li, Z.
1991-01-01
A computer program was developed to study the mixing process in the quick quench region of a rich burn-quick quench mix-lean burn combustor. The computer program developed was based on the density-weighted, ensemble-averaged conservation equations of mass, momentum (full compressible Navier-Stokes), total energy, and species, closed by a k-epsilon turbulence model with wall functions. The combustion process was modeled by a two-step global reaction mechanism, and NO(x) formation was modeled by the Zeldovich mechanism. The formulation employed in the computer program and the essence of the numerical method of solution are described. Some results obtained for nonreacting and reacting flows with different main-flow to dilution-jet momentum flux ratios are also presented.
Soft computing methods for geoidal height transformation
NASA Astrophysics Data System (ADS)
Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.
2009-07-01
Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Geodetic applications include the estimation of Earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or, in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN, and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.
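For orientation, the conventional polynomial baseline that the soft-computing models are compared against can be sketched as a low-order surface fitted to geoid undulations by least squares. The coordinates, placeholder heights, and polynomial terms below are illustrative assumptions, not the paper's actual model.

```python
# Sketch of a conventional polynomial surface fit for geoid heights N(phi, lam);
# all data are synthetic placeholders.
import numpy as np

phi, lam = np.random.rand(2, 100)             # normalised latitude/longitude of test points
N = 0.5 + 0.3*phi - 0.2*lam + 0.1*phi*lam     # placeholder geoid heights

# Design matrix of a full quadratic surface in (phi, lam)
A = np.column_stack([np.ones_like(phi), phi, lam, phi*lam, phi**2, lam**2])
coeff, *_ = np.linalg.lstsq(A, N, rcond=None)
print("fitted polynomial coefficients:", coeff)
```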
Computational analysis of an aortic valve jet
NASA Astrophysics Data System (ADS)
Shadden, Shawn C.; Astorino, Matteo; Gerbeau, Jean-Frédéric
2009-11-01
In this work we employ a coupled fluid-structure interaction (FSI) scheme using an immersed boundary method to simulate flow through a realistic deformable, 3D aortic valve model. The resulting flow data were used to compute Lagrangian coherent structures (LCS), which revealed flow separation from the valve leaflets during systole, and correspondingly, the boundary between the jet of ejected fluid and the regions of separated, recirculating flow. Advantages of computing LCS in multi-dimensional FSI models of the aortic valve are twofold. For one, the quality and effectiveness of existing clinical indices used to measure aortic jet size can be tested by taking advantage of the accurate measure of the jet area derived from LCS. Secondly, as an ultimate goal, a reliable computational framework for the assessment of aortic valve stenosis could be developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsen, R.J.; Westley, G.W.; Herzog, H.W. Jr.
This report documents the development of MULTIREGION, a computer model of regional and interregional socio-economic development. The MULTIREGION model interprets the economy of each BEA economic area as a labor market, measures all activity in terms of people as members of the population (labor supply) or as employees (labor demand), and simultaneously simulates or forecasts the demands and supplies of labor in all BEA economic areas at five-year intervals. In general the outputs of MULTIREGION are intended to resemble those of the Water Resource Council's OBERS projections and to be put to similar planning and analysis purposes. This report has been written at two levels to serve the needs of multiple audiences. The body of the report serves as a fairly nontechnical overview of the entire MULTIREGION project; a series of technical appendixes provide detailed descriptions of the background empirical studies of births, deaths, migration, labor force participation, natural resource employment, manufacturing employment location, and local service employment used to construct the model.
Modeling of an intelligent pressure sensor using functional link artificial neural networks.
Patra, J C; van den Bos, A
2000-01-01
A capacitive pressure sensor (CPS) is modeled for accurate readout of applied pressure using a novel artificial neural network (ANN). The proposed functional link ANN (FLANN) is a computationally efficient nonlinear network and is capable of complex nonlinear mapping between its input and output pattern space. The nonlinearity is introduced into the FLANN by passing the input pattern through a functional expansion unit. Three different polynomial expansions, namely Chebyshev, Legendre, and power series, have been employed in the FLANN. The FLANN offers computational advantage over a multilayer perceptron (MLP) for similar performance in modeling of the CPS. The prime aim of the present paper is to develop an intelligent model of the CPS involving less computational complexity, so that its implementation can be economical and robust. It is shown that, over a wide temperature variation ranging from -50 to 150 degrees C, the maximum error of estimation of pressure remains within +/- 3%. With the help of computer simulation, the performance of the three types of FLANN models has been compared to that of an MLP-based model.
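The functional-expansion idea is easy to illustrate: the input is mapped through a fixed polynomial basis, and only a linear output layer is trained. The sketch below uses a Chebyshev expansion on a single synthetic input; the data and expansion order are assumptions, not the paper's sensor model.

```python
# Sketch of the FLANN functional expansion: expand a scalar input with
# Chebyshev polynomials, then train a single linear output layer.
import numpy as np

def chebyshev_expand(x, order=4):
    """Map each scalar input to the features [T_0(x), ..., T_order(x)]."""
    feats = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        feats.append(2 * x * feats[-1] - feats[-2])   # T_n = 2x T_{n-1} - T_{n-2}
    return np.stack(feats, axis=1)

# Illustrative data: normalised sensor readout -> applied pressure
x = np.linspace(-1, 1, 200)
y = 0.8 * x + 0.3 * x**3 + 0.01 * np.random.randn(200)

Phi = chebyshev_expand(x)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # train the linear output layer
print("max fit error:", np.max(np.abs(Phi @ w - y)))
```

Because only the linear layer is learned, training and evaluation are far cheaper than for a multilayer perceptron, which is the computational advantage the abstract refers to.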
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClanahan, Richard; De Leon, Phillip L.
2014-08-20
The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probabilities and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM-UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
DAKOTA Design Analysis Kit for Optimization and Terascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
NASA Technical Reports Server (NTRS)
Tanner, J. A.; Stubbs, S. M.; Dreher, R. C.; Smith, E. G.
1982-01-01
A computer study was performed to assess the accuracy of three brake pressure-torque mathematical models. The investigation utilized one main gear wheel, brake, and tire assembly of a McDonnell Douglas DC-9 series 10 airplane. The investigation indicates that the performance of aircraft antiskid braking systems is strongly influenced by tire characteristics, dynamic response of the antiskid control valve, and pressure-torque response of the brake. The computer study employed an average torque error criterion to assess the accuracy of the models. The results indicate that a variable nonlinear spring with hysteresis memory function models the pressure-torque response of the brake more accurately than currently used models.
Simulation of charge exchange plasma propagation near an ion thruster propelled spacecraft
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Kaufman, H. R.; Winder, D. R.
1981-01-01
A model describing the charge exchange plasma and its propagation is discussed, along with a computer code based on the model. The geometry of an idealized spacecraft having an ion thruster is outlined, with attention given to the assumptions used in modeling the ion beam. Also presented is the distribution function describing charge exchange production. The barometric equation is used in relating the variation in plasma potential to the variation in plasma density. The numerical methods and approximations employed in the calculations are discussed, and comparisons are made between the computer simulation and experimental data. An analytical solution of a simple configuration is also used in verifying the model.
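The barometric relation mentioned here has a standard form for isothermal electrons in equilibrium; the following is a hedged reconstruction, since the abstract does not quote the expression:

\[
n \;=\; n_0 \exp\!\left(\frac{e\,(V - V_0)}{k\,T_e}\right),
\]

where \(n_0\) and \(V_0\) are reference plasma density and potential, \(T_e\) is the electron temperature, \(e\) the elementary charge, and \(k\) Boltzmann's constant. Inverting this relation is what lets a measured density variation be converted into a potential variation, and vice versa.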
Modal response of a computational vocal fold model with a substrate layer of adipose tissue.
Jones, Cameron L; Achuthan, Ajit; Erath, Byron D
2015-02-01
This study demonstrates the effect of a substrate layer of adipose tissue on the modal response of the vocal folds, and hence, on the mechanics of voice production. Modal analysis is performed on the vocal fold structure with a lateral layer of adipose tissue. A finite element model is employed, and the first six mode shapes and modal frequencies are studied. The results show significant changes in modal frequencies and substantial variation in mode shapes depending on the strain rate of the adipose tissue. These findings highlight the importance of considering adipose tissue in computational vocal fold modeling.
Modeling of High Speed Reacting Flows: Established Practices and Future Challenges
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2004-01-01
Computational fluid dynamics (CFD) has proven to be an invaluable tool for the design and analysis of high-speed propulsion devices. Massively parallel computing, together with the maturation of robust CFD codes, has made it possible to perform simulations of complete engine flowpaths. Steady-state Reynolds-Averaged Navier-Stokes simulations are now routinely used in the scramjet engine development cycle to determine optimal fuel injector arrangements, investigate trends noted during testing, and extract various measures of engine efficiency. Unfortunately, the turbulence and combustion models used in these codes have not changed significantly over the past decade. Hence, the CFD practitioner must often rely heavily on existing measurements (at similar flow conditions) to calibrate model coefficients on a case-by-case basis. This paper provides an overview of the modeled equations typically employed by commercial-quality CFD codes for high-speed combustion applications. Careful attention is given to the approximations employed for each of the unclosed terms in the averaged equation set. The salient features (and shortcomings) of common models used to close these terms are covered in detail, and several academic efforts aimed at addressing these shortcomings are discussed.
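As one example of the averaged equation set discussed, the Favre-averaged transport equation for a species mass fraction can be written in the generic textbook form below (not a formula quoted from the paper):

\[
\frac{\partial \bar{\rho}\,\tilde{Y}_i}{\partial t}
+ \frac{\partial}{\partial x_j}\!\left(\bar{\rho}\,\tilde{u}_j\,\tilde{Y}_i\right)
= \frac{\partial}{\partial x_j}\!\left(\overline{\rho D\,\frac{\partial Y_i}{\partial x_j}}
- \overline{\rho\,u_j''\,Y_i''}\right)
+ \overline{\dot{\omega}}_i ,
\]

where the turbulent flux \(\overline{\rho\,u_j''\,Y_i''}\) and the mean reaction rate \(\overline{\dot{\omega}}_i\) are the unclosed terms that turbulence and combustion models must supply.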
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Nan; Battaglia, Francine; Pannala, Sreekanth
2008-01-01
Simulations of fluidized beds are performed to determine the effect of coordinate-system and geometrical-configuration choices on modeling fluidized bed reactors. Computational fluid dynamics is employed for an Eulerian-Eulerian model, which represents each phase as an interspersed continuum. The transport equation for granular temperature is solved, and a hyperbolic tangent function is used to provide a smooth transition between the plastic and viscous regimes for the solid phase. The aim of the present work is to show the range of validity for employing simulations based on a 2D Cartesian coordinate system to approximate both cylindrical and rectangular fluidized beds. Three different fluidization regimes (bubbling, slugging, and turbulent) are investigated, and the results of 2D and 3D simulations are presented for both cylindrical and rectangular domains. The results demonstrate that a 2D Cartesian system can be used to successfully simulate and predict a bubbling regime. However, caution must be exercised when using 2D Cartesian coordinates for other fluidized regimes. A budget analysis that explains all the differences in detail is presented in Part II [N. Xie, F. Battaglia, S. Pannala, Effects of using two- versus three-dimensional computational modeling of fluidized beds: Part II, budget analysis, 182 (1) (2007) 14] to complement the hydrodynamic theory of this paper.
Higgs boson decay into b-quarks at NNLO accuracy
NASA Astrophysics Data System (ADS)
Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Tramontano, Francesco; Trócsányi, Zoltán
2015-04-01
We compute the fully differential decay rate of the Standard Model Higgs boson into b-quarks at next-to-next-to-leading order (NNLO) accuracy in αs. We employ a general subtraction scheme developed for the calculation of higher order perturbative corrections to QCD jet cross sections, which is based on the universal infrared factorization properties of QCD squared matrix elements. We show that the subtractions render the various contributions to the NNLO correction finite. In particular, we demonstrate analytically that the sum of integrated subtraction terms correctly reproduces the infrared poles of the two-loop double virtual contribution to this process. We present illustrative differential distributions obtained by implementing the method in a parton level Monte Carlo program. The basic ingredients of our subtraction scheme, used here for the first time to compute a physical observable, are universal and can be employed for the computation of more involved processes.
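The structure of such a subtraction scheme is easiest to display one order down. At NLO the generic pattern is (a textbook form, not the paper's NNLO-specific formulas):

\[
\sigma^{\mathrm{NLO}} \;=\; \int_{m+1}\!\big[\mathrm{d}\sigma^{\mathrm{R}} - \mathrm{d}\sigma^{\mathrm{A}}\big]
\;+\; \int_{m}\!\Big[\mathrm{d}\sigma^{\mathrm{V}} + \int_{1}\mathrm{d}\sigma^{\mathrm{A}}\Big],
\]

where the approximate cross section \(\mathrm{d}\sigma^{\mathrm{A}}\) matches the real-emission contribution \(\mathrm{d}\sigma^{\mathrm{R}}\) in all infrared limits, so each bracket is separately finite and can be integrated numerically. The NNLO scheme used here extends this pattern with double-real and real-virtual subtraction terms.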
Wind Energy Modeling and Simulation (NREL)
NREL's wind turbine modeling and simulation tools enable the analysis of a range of wind turbine configurations. Among them, the Simulator fOr Wind Farm Applications (SOWFA) employs computational fluid dynamics to allow users to investigate wind turbine and wind power plant performance.
The role of continuity in residual-based variational multiscale modeling of turbulence
NASA Astrophysics Data System (ADS)
Akkerman, I.; Bazilevs, Y.; Calo, V. M.; Hughes, T. J. R.; Hulshoff, S.
2008-02-01
This paper examines the role of continuity of the basis in the computation of turbulent flows. We compare standard finite elements and non-uniform rational B-splines (NURBS) discretizations that are employed in Isogeometric Analysis (Hughes et al. in Comput Methods Appl Mech Eng, 194:4135-4195, 2005). We make use of quadratic discretizations that are C^0-continuous across element boundaries in standard finite elements, and C^1-continuous in the case of NURBS. The variational multiscale residual-based method (Bazilevs in Isogeometric analysis of turbulence and fluid-structure interaction, PhD thesis, ICES, UT Austin, 2006; Bazilevs et al. in Comput Methods Appl Mech Eng, submitted, 2007; Calo in Residual-based multiscale turbulence modeling: finite volume simulation of bypass transition, PhD thesis, Department of Civil and Environmental Engineering, Stanford University, 2004; Hughes et al. in Proceedings of the XXI international congress of theoretical and applied mechanics (IUTAM), Kluwer, 2004; Scovazzi in Multiscale methods in science and engineering, PhD thesis, Department of Mechanical Engineering, Stanford University, 2004) is employed as a turbulence modeling technique. We find that C^1-continuous discretizations outperform their C^0-continuous counterparts on a per-degree-of-freedom basis. We also find that the effect of continuity is greater for higher Reynolds number flows.
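To make the continuity distinction concrete, the sketch below evaluates quadratic B-spline basis functions via the Cox-de Boor recursion. With the open knot vector shown, the basis is C^1 across interior knots, whereas a standard quadratic finite element basis is only C^0 at element boundaries. The knot vector and evaluation point are illustrative choices.

```python
# Sketch: Cox-de Boor recursion for B-spline basis functions, the kind of
# C^1-continuous quadratic basis that NURBS discretisations build on.
def bspline_basis(i, p, knots, x):
    """Value of the i-th degree-p B-spline basis function at x."""
    if p == 0:
        return float(knots[i] <= x < knots[i + 1])
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((x - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, knots, x))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, knots, x))
    return left + right

knots = [0, 0, 0, 1, 2, 3, 3, 3]   # open knot vector: quadratic, C^1 at interior knots
print([round(bspline_basis(i, 2, knots, 1.5), 3) for i in range(5)])
```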
3-D modeling of ductile tearing using finite elements: Computational aspects and techniques
NASA Astrophysics Data System (ADS)
Gullerud, Arne Stewart
This research focuses on the development and application of computational tools to perform large-scale, 3-D modeling of ductile tearing in engineering components under quasi-static to mild loading rates. Two standard models for ductile tearing, the computational cell methodology and crack growth controlled by the crack tip opening angle (CTOA), are described and their 3-D implementations are explored. For the computational cell methodology, quantification of the effects of several numerical issues (computational load step size, procedures for force release after cell deletion, and the porosity for cell deletion) enables construction of computational algorithms to remove the dependence of predicted crack growth on these issues. This work also describes two extensions of the CTOA approach into 3-D: a general 3-D method and a constant front technique. Analyses compare the characteristics of the extensions, and a validation study explores the ability of the constant front extension to predict crack growth in thin aluminum test specimens over a range of specimen geometries, absolute sizes, and levels of out-of-plane constraint. To provide a computational framework suitable for the solution of these problems, this work also describes the parallel implementation of a nonlinear, implicit finite element code. The implementation employs an explicit message-passing approach using the MPI standard to maintain portability, a domain decomposition of element data to provide parallel execution, and a master-worker organization of the computational processes to enhance future extensibility. A linear preconditioned conjugate gradient (LPCG) solver serves as the core of the solution process. The parallel LPCG solver utilizes an element-by-element (EBE) structure of the computations to permit a dual-level decomposition of the element data: domain decomposition of the mesh provides efficient coarse-grain parallel execution, while decomposition of the domains into blocks of similar elements (same type, constitutive model, etc.) provides fine-grain parallel computation on each processor. A major focus of the LPCG solver is a new implementation of the Hughes-Winget element-by-element (HW) preconditioner. The implementation employs a weighted dependency graph combined with a new coloring algorithm to provide load-balanced scheduling for the preconditioner and overlapped communication/computation. This approach enables efficient parallel application of the HW preconditioner for arbitrary unstructured meshes.
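The scheduling idea behind such a coloring algorithm can be sketched compactly: elements that share nodes conflict, and a graph coloring groups non-conflicting elements so that each color class can be processed concurrently. The greedy heuristic below is a generic illustration, not the thesis's weighted algorithm.

```python
# Hedged sketch of dependency-graph coloring for EBE-style scheduling:
# elements sharing a node (an edge) must never compute concurrently.
def greedy_coloring(adjacency):
    """adjacency: dict mapping element id -> set of conflicting element ids."""
    color = {}
    # Visit high-degree elements first, a common greedy heuristic
    for elem in sorted(adjacency, key=lambda e: -len(adjacency[e])):
        used = {color[n] for n in adjacency[elem] if n in color}
        c = 0
        while c in used:
            c += 1
        color[elem] = c
    return color

conflicts = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(greedy_coloring(conflicts))   # elements with the same color can run in parallel
```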
NASA Astrophysics Data System (ADS)
Xie, Ya-Ping; Chen, Xurong
2018-05-01
Photoproduction of vector mesons is computed with the dipole model in proton-proton ultraperipheral collisions (UPCs) at the CERN Large Hadron Collider (LHC). The dipole model framework is employed in the calculations of vector meson production in diffractive processes. Parameters of the bCGC model are refitted to the latest inclusive deep inelastic scattering experimental data. Employing the bCGC model and the boosted Gaussian light-cone wave function for vector mesons, we obtain predictions for the rapidity distributions of J/ψ and ψ(2S) mesons in proton-proton ultraperipheral collisions at the LHC. The predictions give a good description of the experimental data of LHCb. Predictions for ϕ and ω mesons are also evaluated in this paper.
Scalable File Systems for High Performance Computing Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, S A
2007-10-03
Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being employed to perform the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent the bond failures seen during penetration events was not established.
ERIC Educational Resources Information Center
Gibbs, Shirley; Steel, Gary; Kuiper, Alison
2011-01-01
The use of computers has become part of everyday life. The high prevalence of computer use appears to lead employers to assume that university graduates will have the good computing skills necessary in many graduate level jobs. This study investigates how well the expectations of employers match the perceptions of near-graduate students about the…
Characterization and Computational Modeling of Minor Phases in Alloy LSHR
NASA Technical Reports Server (NTRS)
Jou, Herng-Jeng; Olson, Gregory; Gabb, Timothy; Garg, Anita; Miller, Derek
2012-01-01
The minor phases of powder metallurgy disk superalloy LSHR were studied. Samples were consistently heat treated at three different temperatures for long times to approach equilibrium. Additional heat treatments were also performed for shorter times, to assess minor phase kinetics in non-equilibrium conditions. Minor phases including MC carbides, M23C6 carbides, M3B2 borides, and sigma were identified. Their average sizes and total area fractions were determined. CALPHAD thermodynamics databases and PrecipiCalc™, a computational precipitation modeling tool, were employed with Ni-base thermodynamics and diffusion databases to model and simulate the phase microstructural evolution observed in the experiments, with an objective to identify the model limitations and the directions of model enhancement.
Aeroelastic Analysis for Rotorcraft
NASA Technical Reports Server (NTRS)
Johnson, W.
1982-01-01
Aeroelastic-analysis computer program incorporates an analytical model of aeroelastic behavior of a wide range of rotorcraft. Such an analytical model is desirable for both pretest predictions and posttest correlations. Program can be applied in investigations of isolated rotor aeroelasticity and helicopter-flight dynamics and could be employed as a basis for more-extensive investigations of aeroelastic behavior, such as automatic control system design.
29 CFR 516.8 - Computations and reports.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Computations and reports. 516.8 Section 516.8 Labor... BE KEPT BY EMPLOYERS General Requirements § 516.8 Computations and reports. Each employer required to... and shall submit to the Wage and Hour Division such reports concerning persons employed and the wages...
NASA Technical Reports Server (NTRS)
Johnson, D. R.; Uccellini, L. W.
1983-01-01
In connection with the employment of the sigma coordinates introduced by Phillips (1957), problems can arise regarding an accurate finite-difference computation of the pressure gradient force. Over steeply sloped terrain, the calculation of the sigma-coordinate pressure gradient force involves computing the difference between two large terms of opposite sign which results in large truncation error. To reduce the truncation error, several finite-difference methods have been designed and implemented. The present investigation has the objective to provide another method of computing the sigma-coordinate pressure gradient force. Phillips' method is applied for the elimination of a hydrostatic component to a flux formulation. The new technique is compared with four other methods for computing the pressure gradient force. The work is motivated by the desire to use an isentropic and sigma-coordinate hybrid model for experiments designed to study flow near mountainous terrain.
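For reference, the cancellation at issue can be written out. With \(\sigma = p/p_s\) and hydrostatic balance, the horizontal pressure gradient force on a constant-\(\sigma\) surface takes the standard form (a textbook reconstruction; the paper's exact notation may differ):

\[
-\nabla_p \Phi \;=\; -\nabla_\sigma \Phi \;-\; R\,T\,\nabla \ln p_s ,
\]

where \(\Phi\) is the geopotential and \(p_s\) the surface pressure. Over steeply sloped terrain the two right-hand terms are individually large and of opposite sign, so even small relative truncation errors in either term produce large errors in their sum.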
Bindu, G; Semenov, S
2013-01-01
This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system having the transceivers modelled using thin wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction with the extremity imaging being done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method of solving the time domain Maxwell's equations with the regularisation parameter computed using a stochastic approach. The algorithm is tested with 10% noise inclusion and successful image reconstruction has been shown implying its robustness.
Mesoscale Climate Evaluation Using Grid Computing
NASA Astrophysics Data System (ADS)
Campos Velho, H. F.; Freitas, S. R.; Souto, R. P.; Charao, A. S.; Ferraz, S.; Roberti, D. R.; Streck, N.; Navaux, P. O.; Maillard, N.; Collischonn, W.; Diniz, G.; Radin, B.
2012-04-01
The CLIMARS project is focused to establish an operational environment for seasonal climate prediction for the Rio Grande do Sul state, Brazil. The dynamical downscaling will be performed with the use of several software platforms and hardware infrastructure to carry out the investigation on mesoscale of the global change impact. Grid computing takes advantage of geographically spread out computer systems, connected by the internet, for enhancing the power of computation. Ensemble climate prediction is an appropriate application for processing on grid computing, because the integration of each ensemble member does not have a dependency on information from other ensemble members. The grid processing is employed to compute the 20-year climatology and the long range simulations under ensemble methodology. BRAMS (Brazilian Regional Atmospheric Model) is a mesoscale model developed from a version of the RAMS (from the Colorado State University - CSU, USA). BRAMS model is the tool for carrying out the dynamical downscaling from the IPCC scenarios. Long range BRAMS simulations will provide data for some climate (data) analysis, and supply data for numerical integration of different models: (a) Regime of the extreme events for temperature and precipitation fields: statistical analysis will be applied on the BRAMS data, (b) CCATT-BRAMS (Coupled Chemistry Aerosol Tracer Transport - BRAMS) is an environmental prediction system that will be used to evaluate if the new standards of temperature, rain regime, and wind field have a significant impact on the pollutant dispersion in the analyzed regions, (c) MGB-IPH (Portuguese acronym for the Large Basin Model, developed by the Hydraulic Research Institute (IPH) of the Federal University of Rio Grande do Sul (UFRGS), Brazil) will be employed to simulate the alteration of the river flux under new climate patterns. Important meteorological input variables for the MGB-IPH are the precipitation (most relevant), temperature, and wind field, all provided by BRAMS. The Uruguay river basin will be analyzed in the scope of this proposal, (d) INFOCROP: this crop model has been calibrated for Southern Brazil; three agricultural crops will be analyzed: rice, soybean, and corn.
Collisional transport across the magnetic field in drift-fluid models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madsen, J., E-mail: jmad@fysik.dtu.dk; Naulin, V.; Nielsen, A. H.
2016-03-15
Drift ordered fluid models are widely applied in studies of low-frequency turbulence in the edge and scrape-off layer regions of magnetically confined plasmas. Here, we show how collisional transport across the magnetic field is self-consistently incorporated into drift-fluid models without altering the drift-fluid energy integral. We demonstrate that the inclusion of collisional transport in drift-fluid models gives rise to diffusion of particle density, momentum, and pressures in drift-fluid turbulence models and, thereby, obviates the customary use of artificial diffusion in turbulence simulations. We further derive a computationally efficient, two-dimensional model, which can be time integrated for several turbulence de-correlation times using only limited computational resources. The model describes interchange turbulence in a two-dimensional plane perpendicular to the magnetic field located at the outboard midplane of a tokamak. The model domain has two regions modeling open and closed field lines. The model employs a computationally expedient model for collisional transport. Numerical simulations show good agreement between the full and the simplified model for collisional transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DuPont, Bryony; Cagan, Jonathan; Moriarty, Patrick
This paper presents a system of modeling advances that can be applied in the computational optimization of wind plants. These modeling advances include accurate cost and power modeling, partial wake interaction, and the effects of varying atmospheric stability. To validate the use of this advanced modeling system, it is employed within an Extended Pattern Search (EPS)-Multi-Agent System (MAS) optimization approach for multiple wind scenarios. The wind farm layout optimization problem involves optimizing the position and size of wind turbines such that the aerodynamic effects of upstream turbines are reduced, which increases the effective wind speed and resultant power at each turbine. The EPS-MAS optimization algorithm employs a profit objective, and an overarching search determines individual turbine positions, with a concurrent EPS-MAS determining the optimal hub height and rotor diameter for each turbine. Two wind cases are considered: (1) constant, unidirectional wind, and (2) three discrete wind speeds and varying wind directions, each of which has a probability of occurrence. Results show the advantages of applying the series of advanced models compared to previous applications of an EPS with less advanced models to wind farm layout optimization, and imply best practices for computational optimization of wind farms with improved accuracy.
NASA Astrophysics Data System (ADS)
Rylander, Marissa N.; Feng, Yusheng; Zhang, Yongjie; Bass, Jon; Stafford, Roger J.; Hazle, John D.; Diller, Kenneth R.
2006-07-01
Thermal therapy efficacy can be diminished due to heat shock protein (HSP) induction in regions of a tumor where temperatures are insufficient to coagulate proteins. HSP expression enhances tumor cell viability and imparts resistance to chemotherapy and radiation treatments, which are generally employed in conjunction with hyperthermia. Therefore, an understanding of the thermally induced HSP expression within the targeted tumor must be incorporated into the treatment plan to optimize the thermal dose delivery and permit prediction of the overall tissue response. A treatment planning computational model capable of predicting the temperature, HSP27 and HSP70 expression, and damage fraction distributions associated with laser heating in healthy prostate tissue and tumors is presented. Measured thermally induced HSP27 and HSP70 expression kinetics and injury data for normal and cancerous prostate cells and prostate tumors are employed to create the first HSP expression predictive model and formulate an Arrhenius damage model. The correlation coefficients between measured and model predicted temperature, HSP27, and HSP70 were 0.98, 0.99, and 0.99, respectively, confirming the accuracy of the model. Utilization of the treatment planning model in the design of prostate cancer thermal therapies can enable optimization of the treatment outcome by controlling HSP expression and injury.
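The Arrhenius damage formulation referred to here has a standard form, shown generically below; the coefficients fitted to prostate tissue are in the paper and are not reproduced here:

\[
\Omega(t) \;=\; \int_0^{t} A\,\exp\!\left(-\frac{E_a}{R\,T(\tau)}\right)\mathrm{d}\tau ,
\qquad
F_D \;=\; 1 - e^{-\Omega(t)} ,
\]

where \(A\) is a frequency factor, \(E_a\) an activation energy, \(R\) the universal gas constant, \(T(\tau)\) the absolute temperature history at a point, and \(F_D\) the predicted damage fraction.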
Cheminformatic Analysis of the US EPA ToxCast Chemical Library
The ToxCast project is employing high throughput screening (HTS) technologies, along with chemical descriptors and computational models, to develop approaches for screening and prioritizing environmental chemicals for further toxicity testing. ToxCast Phase I generated HTS data f...
NASA Technical Reports Server (NTRS)
Bratanow, T.; Ecer, A.
1973-01-01
A general computational method for analyzing unsteady flow around pitching and plunging airfoils was developed. The finite element method was applied in developing an efficient numerical procedure for the solution of equations describing the flow around airfoils. The numerical results were employed in conjunction with computer graphics techniques to produce visualization of the flow. The investigation involved mathematical model studies of flow in two phases: (1) analysis of a potential flow formulation and (2) analysis of an incompressible, unsteady, viscous flow from Navier-Stokes equations.
Real-time human collaboration monitoring and intervention
Merkle, Peter B.; Johnson, Curtis M.; Jones, Wendell B.; Yonas, Gerold; Doser, Adele B.; Warner, David J.
2010-07-13
A method of and apparatus for monitoring and intervening in, in real time, a collaboration between a plurality of subjects comprising measuring indicia of physiological and cognitive states of each of the plurality of subjects, communicating the indicia to a monitoring computer system, with the monitoring computer system, comparing the indicia with one or more models of previous collaborative performance of one or more of the plurality of subjects, and with the monitoring computer system, employing the results of the comparison to communicate commands or suggestions to one or more of the plurality of subjects.
Computational Planning in Facial Surgery.
Zachow, Stefan
2015-10-01
This article reflects the research of the last two decades in computational planning for cranio-maxillofacial surgery. Model-guided and computer-assisted surgery planning has developed tremendously due to ever increasing computational capabilities. Simulators for education, planning, and training of surgery are often compared with flight simulators, where maneuvers are also trained to reduce a possible risk of failure. Meanwhile, digital patient models can be derived from medical image data with astonishing accuracy and thus can serve for model surgery to derive a surgical template model that represents the envisaged result. Computerized surgical planning approaches, however, are often still explorative, meaning that a surgeon tries to find a therapeutic concept based on his or her expertise using computational tools that mimic real procedures. A future perspective of improved computerized planning is that surgical objectives will be generated algorithmically by employing mathematical modeling, simulation, and optimization techniques. Planning systems would thus act as intelligent decision support systems. However, surgeons can still use the existing tools to vary the proposed approach, but they mainly focus on how to transfer objectives into reality. Such a development may result in a paradigm shift for future surgery planning.
Constructing a patient-specific computer model of the upper airway in sleep apnea patients.
Dhaliwal, Sandeep S; Hesabgar, Seyyed M; Haddad, Seyyed M H; Ladak, Hanif; Samani, Abbas; Rotenberg, Brian W
2018-01-01
The use of computer simulation to develop a high-fidelity model has been proposed as a novel and cost-effective alternative to help guide therapeutic intervention in sleep apnea surgery. We describe a computer model based on patient-specific anatomy of obstructive sleep apnea (OSA) subjects wherein the percentage and sites of upper airway collapse are compared to findings on drug-induced sleep endoscopy (DISE). Basic science computer model generation. Three-dimensional finite element techniques were undertaken for model development in a pilot study of four OSA patients. Magnetic resonance imaging was used to capture patient anatomy, and software was employed to outline critical anatomical structures. A finite-element mesh was applied to the volume enclosed by each structure. Linear and hyperelastic soft-tissue properties for various subsites (tonsils, uvula, soft palate, and tongue base) were derived using an inverse finite-element technique from surgical specimens. Each model underwent computer simulation to determine the degree of displacement of various structures within the upper airway, and these findings were compared to DISE exams performed on the four study patients. Computer simulation predictions for percentage of airway collapse and site of maximal collapse show agreement with observed results seen on endoscopic visualization. Modeling the upper airway in OSA patients is feasible and holds promise in aiding patient-specific surgical treatment. Laryngoscope, 128:277-282, 2018.
High temperature superconductors applications in telecommunications
NASA Technical Reports Server (NTRS)
Kumar, A. Anil; Li, Jiang; Zhang, Ming Fang
1995-01-01
The purpose of this paper is twofold: (1) to discuss high temperature superconductors with specific reference to their employment in telecommunications applications; and (2) to discuss a few of the limitations of the normally employed two-fluid model. While the debate on the actual usage of high temperature superconductors in the design of electronic and telecommunications devices (obvious advantages versus practical difficulties) needs to be settled in the near future, it is of great interest to investigate the parameters and the assumptions that will be employed in such designs. This paper deals with the issue of providing the microwave design engineer with performance data for such superconducting waveguides. The values of conductivity and surface resistance, which are the primary determining factors of waveguide performance, are computed based on the two-fluid model. A comparison between two models, a theoretical one in terms of microscopic parameters (termed Model A) and an experimental fit in terms of macroscopic parameters (termed Model B), shows the limitations and the resulting ambiguities of the two-fluid model at high frequencies and at temperatures close to the transition temperature. The validity of the two-fluid model is then discussed. Our preliminary results show that the electrical transport description in the normal and superconducting phases as formulated in the two-fluid model needs to be modified to incorporate the new and special features of high temperature superconductors. Parameters describing the waveguide performance (conductivity, surface resistance, and attenuation constant) will be computed. Potential applications in communications networks and large scale integrated circuits will be discussed. Some of the ongoing work will be reported. In particular, a brief proposal is made to investigate the effects of electromagnetic interference and the concomitant notion of electromagnetic compatibility (EMI/EMC) of high-T_c superconductors.
Economics of Employer-Sponsored Workplace Vaccination to Prevent Pandemic and Seasonal Influenza
Lee, Bruce Y.; Bailey, Rachel R.; Wiringa, Ann E.; Afriyie, Abena; Wateska, Angela R.; Smith, Kenneth J.; Zimmerman, Richard K.
2010-01-01
Employers may be loath to fund vaccination programs without understanding the economic consequences. We developed a decision analytic computational simulation model including dynamic transmission elements that determined the cost-benefit of employer-sponsored workplace vaccination from the employer's perspective. Implementing such programs was relatively inexpensive (<$35/vaccinated employee) and, in many cases, cost saving across diverse occupational groups in all seasonal influenza scenarios. Such programs were cost saving for a 20% serologic attack rate pandemic scenario (range −$15 to −$995 per vaccinated employee) and a 30% serologic attack rate pandemic scenario (range −$39 to −$1,494 per vaccinated employee) across all age and major occupational groups. PMID:20620168
Geostatistical applications in ground-water modeling in south-central Kansas
Ma, T.-S.; Sophocleous, M.; Yu, Y.-S.
1999-01-01
This paper emphasizes the supportive role of geostatistics in applying ground-water models. Field data of 1994 ground-water level, bedrock, and saltwater-freshwater interface elevations in south-central Kansas were collected and analyzed using the geostatistical approach. Ordinary kriging was adopted to estimate initial conditions for ground-water levels and topography of the Permian bedrock at the nodes of a finite difference grid used in a three-dimensional numerical model. Cokriging was used to estimate initial conditions for the saltwater-freshwater interface. An assessment of uncertainties in the estimated data is presented. The kriged and cokriged estimation variances were analyzed to evaluate the adequacy of data employed in the modeling. Although water levels and bedrock elevations are well described by spherical semivariogram models, additional data are required for better cokriging estimation of the interface data. The geostatistically analyzed data were employed in a numerical model of the Siefkes site in the project area. Results indicate that the computed chloride concentrations and ground-water drawdowns reproduced the observed data satisfactorily.
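The spherical semivariogram model mentioned above has a simple closed form, sketched below; the nugget, sill, and range values are illustrative assumptions, not the fitted values from the study.

```python
# Sketch of a spherical semivariogram model of the kind fitted to the
# water-level and bedrock data; parameter values are placeholders.
import numpy as np

def spherical(h, nugget=0.1, sill=1.0, rng=500.0):
    """Semivariance gamma(h) for lag distance h (same units as rng)."""
    h = np.asarray(h, dtype=float)
    gamma = np.where(
        h < rng,
        nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
        sill,                      # beyond the range, variance plateaus at the sill
    )
    return np.where(h == 0, 0.0, gamma)   # gamma(0) = 0 by definition

print(spherical([0.0, 100.0, 500.0, 800.0]))
```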
NASA Technical Reports Server (NTRS)
Suzen, Y. B.; Huang, P. G.; Ashpis, D. E.; Volino, R. J.; Corke, T. C.; Thomas, F. O.; Huang, J.; Lake, J. P.; King, P. I.
2007-01-01
A transport equation for the intermittency factor is employed to predict transitional flows in low-pressure turbines. The intermittent behavior of the transitional flows is taken into account and incorporated into computations by modifying the eddy viscosity, μ_t, with the intermittency factor, γ. Turbulent quantities are predicted using Menter's two-equation turbulence model (SST). The intermittency factor is obtained from a transport equation model which can produce both the experimentally observed streamwise variation of intermittency and a realistic profile in the cross-stream direction. The model had been previously validated against low-pressure turbine experiments with success. In this paper, the model is applied to predictions of three sets of recent low-pressure turbine experiments on the Pack B blade to further validate its predictive capabilities under various flow conditions. Comparisons of computational results with experimental data are provided. Overall, good agreement between the experimental data and computational results is obtained. The new model has been shown to be capable of accurately predicting transitional flows under a wide range of low-pressure turbine conditions.
Computing chemical organizations in biological networks.
Centler, Florian; Kaleta, Christoph; di Fenizio, Pietro Speroni; Dittrich, Peter
2008-07-15
Novel techniques are required to analyze computational models of intracellular processes as they increase steadily in size and complexity. The theory of chemical organizations has recently been introduced as such a technique that links the topology of biochemical reaction network models to their dynamical repertoire. The network is decomposed into algebraically closed and self-maintaining subnetworks called organizations. They form a hierarchy representing all feasible system states including all steady states. We present three algorithms to compute the hierarchy of organizations for network models provided in SBML format. Two of them compute the complete organization hierarchy, while the third one uses heuristics to obtain a subset of all organizations for large models. While the constructive approach computes the hierarchy starting from the smallest organization in a bottom-up fashion, the flux-based approach employs self-maintaining flux distributions to determine organizations. A runtime comparison on 16 different network models of natural systems showed that none of the two exhaustive algorithms is superior in all cases. Studying a 'genome-scale' network model with 762 species and 1193 reactions, we demonstrate how the organization hierarchy helps to uncover the model structure and allows to evaluate the model's quality, for example by detecting components and subsystems of the model whose maintenance is not explained by the model. All data and a Java implementation that plugs into the Systems Biology Workbench is available from http://www.minet.uni-jena.de/csb/prj/ot/tools.
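The closure test at the heart of organization computation is straightforward to sketch: a species set is closed if no reaction that can fire inside it produces a species outside it, and the closure is found by repeatedly adding reachable products. The sketch below is a generic illustration, not the authors' SBML-based implementation.

```python
# Hedged sketch of computing the algebraic closure of a species set under a
# reaction network, one building block of organization computation.
def closure(species, reactions):
    """reactions: list of (reactant_set, product_set) pairs. Returns the
    closure of `species`: keep firing applicable reactions until no new
    species appear."""
    closed = set(species)
    changed = True
    while changed:
        changed = False
        for reactants, products in reactions:
            if reactants <= closed and not products <= closed:
                closed |= products
                changed = True
    return closed

rxns = [({"a", "b"}, {"c"}), ({"c"}, {"a", "d"})]
print(closure({"a", "b"}, rxns))   # -> {'a', 'b', 'c', 'd'}
```

Self-maintenance is the complementary test: it additionally requires a flux distribution under which no species in the set is consumed faster than it is produced, which is typically checked with linear programming.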
Automatic computation of 2D cardiac measurements from B-mode echocardiography
NASA Astrophysics Data System (ADS)
Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin
2012-03-01
We propose a robust and fully automatic algorithm which computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies which learn expert knowledge from training images and expert annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the landmark points for the measurements by utilizing the structure of the left ventricle, including the mitral and aortic valves. It employs a pseudo anatomic M-mode image, generated by accumulating line images in the 2D parasternal long axis view over time, to refine the measurement landmark points. Experimental results with a large volume of data show that the algorithm runs fast and is robust, with performance comparable to that of experts.
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits their full effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in the small computing environments typically available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
Computational Modeling and Real-Time Control of Patient-Specific Laser Treatment of Cancer
Fuentes, D.; Oden, J. T.; Diller, K. R.; Hazle, J. D.; Elliott, A.; Shetty, A.; Stafford, R. J.
2014-01-01
An adaptive feedback control system is presented which employs a computational model of bioheat transfer in living tissue to guide, in real time, laser treatments of prostate cancer monitored by magnetic resonance thermal imaging (MRTI). The system is built on what can be referred to as cyberinfrastructure: a complex structure of high-speed networks, large-scale parallel computing devices, laser optics, imaging, visualization, inverse-analysis algorithms, mesh generation, and control systems that guide laser therapy to optimally control the ablation of cancerous tissue. The computational system has been successfully tested on in vivo canine prostate. Over the course of an 18-minute laser-induced thermal therapy (LITT) performed at the M.D. Anderson Cancer Center (MDACC) in Houston, Texas, the computational models were calibrated to intra-operative real-time thermal imaging treatment data, and the calibrated models controlled the bioheat transfer to within 5°C of the predetermined treatment plan. The computational arena is in Austin, Texas, managed at the Institute for Computational Engineering and Sciences (ICES). The system is designed to control the bioheat transfer remotely while simultaneously providing real-time remote visualization of the ongoing treatment. Post-operative histology of the canine prostate reveals that the damage region was within the targeted 1.2 cm diameter treatment objective. PMID:19148754
Computational modeling and real-time control of patient-specific laser treatment of cancer.
Fuentes, D; Oden, J T; Diller, K R; Hazle, J D; Elliott, A; Shetty, A; Stafford, R J
2009-04-01
An adaptive feedback control system is presented which employs a computational model of bioheat transfer in living tissue to guide, in real time, laser treatments of prostate cancer monitored by magnetic resonance thermal imaging. The system is built on what can be referred to as cyberinfrastructure: a complex structure of high-speed networks, large-scale parallel computing devices, laser optics, imaging, visualizations, inverse-analysis algorithms, mesh generation, and control systems that guide laser therapy to optimally control the ablation of cancerous tissue. The computational system has been successfully tested on in vivo canine prostate. Over the course of an 18 min laser-induced thermal therapy performed at the M.D. Anderson Cancer Center (MDACC) in Houston, Texas, the computational models were calibrated to intra-operative real-time thermal imaging treatment data, and the calibrated models controlled the bioheat transfer to within 5 degrees C of the predetermined treatment plan. The computational arena is in Austin, Texas, managed at the Institute for Computational Engineering and Sciences (ICES). The system is designed to control the bioheat transfer remotely while simultaneously providing real-time remote visualization of the ongoing treatment. Post-operative histology of the canine prostate reveals that the damage region was within the targeted 1.2 cm diameter treatment objective.
The NASA/MSFC global reference atmospheric model: MOD 3 (with spherical harmonic wind model)
NASA Technical Reports Server (NTRS)
Justus, C. G.; Fletcher, G. R.; Gramling, F. E.; Pace, W. B.
1980-01-01
Improvements to the global reference atmospheric model are described. The basic model includes monthly mean values of pressure, density, temperature, and geostrophic winds, as well as quasi-biennial and small- and large-scale random perturbations. A spherical harmonic wind model for the 25 to 90 km height range is included. Below 25 km and above 90 km, the GRAM program uses the geostrophic wind equations and pressure data to compute the mean wind. In the altitudes where the geostrophic wind relations are used, an interpolation scheme is employed for estimating winds at low latitudes, where the geostrophic wind relations begin to break down. Several sample wind profiles are given, as computed by the spherical harmonic model. User and programmer manuals are presented.
A catchment scale water balance model for FIFE
NASA Technical Reports Server (NTRS)
Famiglietti, J. S.; Wood, E. F.; Sivapalan, M.; Thongs, D. J.
1992-01-01
A catchment scale water balance model is presented and used to predict evaporation from the King's Creek catchment at the First ISLSCP Field Experiment site on the Konza Prairie, Kansas. The model incorporates spatial variability in topography, soils, and precipitation to compute the land surface hydrologic fluxes. A network of 20 rain gages was employed to measure rainfall across the catchment in the summer of 1987. These data were spatially interpolated and used to drive the model during storm periods. During interstorm periods the model was driven by the estimated potential evaporation, which was calculated using net radiation data collected at site 2. Model-computed evaporation is compared to that observed, both at site 2 (grid location 1916-BRS) and the catchment scale, for the simulation period from June 1 to October 9, 1987.
NASA Technical Reports Server (NTRS)
Lord, Steven D.
1992-01-01
This report describes a new software tool, ATRAN, which computes the transmittance of Earth's atmosphere at near- and far-infrared wavelengths. We compare the capabilities of this program with others currently available and demonstrate its utility for observational data calibration and reduction. The program employs current water-vapor and ozone models to produce fast and accurate transmittance spectra for wavelengths ranging from 0.8 microns to 10 mm.
Computational Design of Functionalized Metal–Organic Framework Nodes for Catalysis
2017-01-01
Recent progress in the synthesis and characterization of metal–organic frameworks (MOFs) has opened the door to an increasing number of possible catalytic applications. The great versatility of MOFs creates a large chemical space, whose thorough experimental examination becomes practically impossible. Therefore, computational modeling is a key tool to support, rationalize, and guide experimental efforts. In this outlook we survey the main methodologies employed to model MOFs for catalysis, and we review selected recent studies on the functionalization of their nodes. We pay special attention to catalytic applications involving natural gas conversion. PMID:29392172
NASA Technical Reports Server (NTRS)
Gallenstein, J.; Huston, R. L.
1973-01-01
This paper presents an analysis of swimming motion with specific attention given to the flutter kick, the breast-stroke kick, and the breast stroke. The analysis is completely theoretical. It employs a mathematical model of the human body consisting of frustums of elliptical cones. Dynamical equations are written for this model, including both viscous and inertial forces. These equations are then applied with approximated swimming strokes and solved numerically using a digital computer. The procedure is to specify the input swimming motion; the computer solution then provides the output displacement, velocity, and rotation or body roll of the swimmer.
Ngwuluka, Ndidi C; Choonara, Yahya E; Kumar, Pradeep; du Toit, Lisa C; Khan, Riaz A; Pillay, Viness
2015-03-01
This study was undertaken to synthesize an interpolyelectrolyte complex (IPEC) of polymethacrylate (E100) and sodium carboxymethylcellulose (NaCMC) to form a polymeric hydrogel material for application in specialized oral delivery of sensitive levodopa. Computational modeling was employed to provide insight into the interactions between the polymers. In addition, the reaction profile of NaCMC and polymethacrylate was elucidated using molecular mechanics energy relationships (MMER) and molecular dynamics simulations (MDS) by exploring the spatial disposition of NaCMC and E100 with respect to each other. Computational modeling revealed that the formation of the IPEC was due to strong ionic associations, hydrogen bonding, and hydrophilic interactions. The computational results corresponded well with the experimental and analytical data. © 2014 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David
1987-01-01
A capability was developed for rapidly producing visual representations of large, complex, multi-dimensional space and Earth sciences data sets by implementing computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS), a data-independent environment for computer graphics data display, to provide easy access for users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.
NASA Astrophysics Data System (ADS)
Ravishankar, Bharani
Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of the constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that by using adaptive sampling, the number of designs required to find the optimum was reduced drastically, while improving accuracy. System reliability of the ITPS was estimated using a Monte Carlo Simulation (MCS) based method. A separable Monte Carlo method was employed that allowed separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, and error in the finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was also reduced by employing surrogate models. In order to estimate the error in the probability-of-failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
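A plain (non-separable) Monte Carlo failure-probability estimate with a bootstrap error bar illustrates the last two ingredients; the limit-state function below is a hypothetical stand-in, not the dissertation's ITPS model:

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x):
    # Hypothetical limit state: failure when g < 0. This stands in for
    # the ITPS thermal/structural constraints, which are not given here.
    return 3.0 - x[..., 0] - 0.5 * x[..., 1]

n = 100_000
samples = rng.normal(size=(n, 2))        # two standardized random inputs
fails = limit_state(samples) < 0.0
pf = fails.mean()                        # Monte Carlo probability of failure

# Bootstrap the failure indicators to estimate the error in pf.
boot = rng.choice(fails, size=(200, n), replace=True).mean(axis=1)
print(f"pf = {pf:.4e} +/- {boot.std():.1e}")
```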
ERIC Educational Resources Information Center
Levin, Sidney
1984-01-01
Presents the listing (TRS-80) for a computer program which derives the relativistic equation (employing as a model the concept of a moving clock which emits photons at regular intervals) and calculates transformations of time, mass, and length with increasing velocities (Einstein-Lorentz transformations). (JN)
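The same computation is a few lines in a modern language; the sketch below (Python standing in for the original TRS-80 BASIC) evaluates the Lorentz factor and the three transformations mentioned:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(t0, v):       # a moving clock ticks slower
    return lorentz_gamma(v) * t0

def contracted_length(L0, v):  # a moving rod appears shorter
    return L0 / lorentz_gamma(v)

def relativistic_mass(m0, v):
    return lorentz_gamma(v) * m0

for frac in (0.1, 0.5, 0.9, 0.99):
    v = frac * C
    print(f"v = {frac:.2f}c  gamma = {lorentz_gamma(v):.3f}")
```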
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which results in expensive computational costs. Variable fidelity approximation-based design optimization approaches can realize effective simulation and efficiency optimization of the design space using approximation models with different levels of fidelity and have been widely used in different fields. As the foundations of variable fidelity approximation models, the selection of sample points of variable-fidelity approximation, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
NASA Technical Reports Server (NTRS)
Rubesin, M. W.; Okuno, A. F.; Levy, L. L., Jr.; Mcdevitt, J. B.; Seegmiller, H. L.
1976-01-01
A combined experimental and computational research program is described for testing and guiding turbulence modeling within regions of separation induced by shock waves incident on turbulent boundary layers. Specifically, studies are made of the separated flow over the rear portion of an 18%-thick circular-arc airfoil at zero angle of attack in high-Reynolds-number supercritical flow. The measurements include distributions of surface static pressure and local skin friction. The instruments employed include high-frequency-response pressure cells and a large array of surface hot-wire skin-friction gages. Computations at the experimental flow conditions are made using time-dependent solutions of the ensemble-averaged Navier-Stokes equations, plus additional equations for the turbulence modeling.
Comparison of FDNS liquid rocket engine plume computations with SPF/2
NASA Technical Reports Server (NTRS)
Kumar, G. N.; Griffith, D. O., II; Warsi, S. A.; Seaford, C. M.
1993-01-01
Prediction of a plume's shape and structure is essential to the evaluation of base region environments. The JANNAF standard plume flowfield analysis code SPF/2 predicts plumes well, but cannot analyze base regions. Full Navier-Stokes CFD codes can calculate both zones; however, before they can be used, they must be validated. The CFD code FDNS3D (Finite Difference Navier-Stokes Solver) was used to analyze the single plume of a Space Transportation Main Engine (STME) and comparisons were made with SPF/2 computations. Both frozen and finite rate chemistry models were employed as well as two turbulence models in SPF/2. The results indicate that FDNS3D plume computations agree well with SPF/2 predictions for liquid rocket engine plumes.
Computational Modeling of Cultural Dimensions in Adversary Organizations
2010-01-01
Nodes", in the Proceedings of the 9th Conference on Uncertainty in Artificial Intelligence, 1993. [8] Pearl, J., Probabilistic Reasoning in...the artificial life simulations; in contrast, models with only a few agents typically employ quite sophisticated cognitive agents capable of...decisions as to how to allocate scarce ISR assets (two Unmanned Air Systems, UAS) among the two Red activities while at the same
A Pythonic Approach for Computational Geosciences and Geo-Data Processing
NASA Astrophysics Data System (ADS)
Morra, G.; Yuen, D. A.; Lee, S. M.
2016-12-01
Computational methods and data analysis play a constantly increasing role in the Earth sciences; however, students and professionals must climb a steep learning curve before reaching a level that allows them to run effective models. Furthermore, the recent arrival of powerful new machine learning tools such as Torch and TensorFlow has opened new possibilities but also created a new realm of complications related to the completely different technology employed. We present here a series of examples written entirely in Python, a language that combines the simplicity of Matlab with the power and speed of compiled languages such as C, and apply them to a wide range of geological processes such as porous media flow, multiphase fluid dynamics, creeping flow and many-fault interaction. We also explore ways in which machine learning can be employed in combination with numerical modelling, from immediately interpreting a large number of modeling results to optimizing a set of modeling parameters to obtain a desired optimal simulation. We show that by using Python, undergraduate and graduate students can learn advanced numerical technologies with a minimum of dedicated effort, which in turn encourages them to develop more numerical tools and progress quickly in their computational abilities. We also show how Python allows modeling to be combined with machine learning like pieces of LEGO, simplifying the transition towards a new kind of scientific geo-modelling. The conclusion is that Python is an ideal tool for creating an infrastructure for the geosciences that allows users to quickly develop tools, reuse techniques and encourage collaborative efforts to interpret and integrate geo-data in profound new ways.
Experimental Identification of Non-Abelian Topological Orders on a Quantum Simulator.
Li, Keren; Wan, Yidun; Hung, Ling-Yan; Lan, Tian; Long, Guilu; Lu, Dawei; Zeng, Bei; Laflamme, Raymond
2017-02-24
Topological orders can be used as media for topological quantum computing, a promising quantum computation model due to its invulnerability against local errors. Conversely, a quantum simulator, often regarded as a quantum computing device for special purposes, also offers a way of characterizing topological orders. Here, we show how to identify distinct topological orders via measuring their modular S and T matrices. In particular, we employ a nuclear magnetic resonance quantum simulator to study the properties of three topologically ordered matter phases described by the string-net model with two string types, including the Z_{2} toric code, doubled semion, and doubled Fibonacci phases. The third, the non-Abelian Fibonacci order, is notably expected to be the simplest candidate for universal topological quantum computing. Our experiment serves as a basic module upon which one can simulate the braiding of non-Abelian anyons and, ultimately, topological quantum computation via braiding, and thus provides a new approach to investigating topological orders using quantum computers.
Computations of the Magnus effect for slender bodies in supersonic flow
NASA Technical Reports Server (NTRS)
Sturek, W. B.; Schiff, L. B.
1980-01-01
A recently reported Parabolized Navier-Stokes code has been employed to compute the supersonic flow field about spinning cone, ogive-cylinder, and boattailed bodies of revolution at moderate incidence. The computations were performed for flow conditions where extensive measurements for wall pressure, boundary layer velocity profiles and Magnus force had been obtained. Comparisons between the computational results and experiment indicate excellent agreement for angles of attack up to six degrees. The comparisons for Magnus effects show that the code accurately predicts the effects of body shape and Mach number for the selected models for Mach numbers in the range of 2-4.
A Comparison of Three Navier-Stokes Solvers for Exhaust Nozzle Flowfields
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Yoder, Dennis A.; Debonis, James R.
1999-01-01
A comparison of the NPARC, PAB, and WIND (previously known as NASTD) Navier-Stokes solvers is made for two flow cases with turbulent mixing as the dominant flow characteristic: a two-dimensional ejector nozzle and a Mach 1.5 elliptic jet. The objective of the work is to determine whether comparable predictions of nozzle flows can be obtained from different Navier-Stokes codes employed in a multiple-site research program. A single computational grid was constructed for each of the two flows and used for all of the Navier-Stokes solvers. In addition, similar k-ε based turbulence models were employed in each code, and boundary conditions were specified as similarly as possible across the codes. Comparisons of mass flow rates, velocity profiles, and turbulence model quantities are made between the computations and experimental data. The computational cost of obtaining converged solutions with each of the codes is also documented. Results indicate that all of the codes provided similar predictions for the two nozzle flows. Agreement of the Navier-Stokes calculations with experimental data was good for the ejector nozzle. However, for the Mach 1.5 elliptic jet, the calculations were unable to accurately capture the development of the three-dimensional elliptic mixing layer.
2010-01-01
Background: The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software package employed to analyse biological mass transport in the vasculature. A principal consideration for computational modelling of blood-side mass transport is the selection of the convection-diffusion discretisation scheme. Due to the numerous discretisation schemes available when developing a mass-transport numerical model, the results obtained should be validated against either benchmark theoretical solutions or experimentally obtained results. Methods: An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user: the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10^-10 m²/s in water, resulting in a Peclet number of 2,560,000, indicating strongly convection-dominated flow. Results: The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings. Average errors of 140% and 116% were demonstrated between the experimental results and those obtained from the First-Order Upwind and Power Law schemes, respectively. However, both the Second-Order Upwind and QUICK schemes accurately predict species concentration under high-Peclet-number, convection-dominated flow conditions. Conclusion: Convection-diffusion discretisation scheme selection has a strong influence on resultant species concentration fields, as determined by CFD. Furthermore, either the Second-Order Upwind or QUICK discretisation scheme should be implemented when numerically modelling convection-dominated mass-transport conditions. Finally, care should be taken not to utilize computationally inexpensive discretisation schemes at the cost of accuracy in the resultant species concentration. PMID:20642816
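The quoted Peclet number is consistent with Pe = UL/D; since the abstract reports only D and Pe, the velocity and length scales below are assumptions chosen to match the implied product UL:

```python
# Peclet number Pe = U * L / D for the reported diffusivity.
D = 3.125e-10            # species diffusivity in water, m^2/s (from the study)
Pe = 2_560_000           # reported Peclet number

# U and L are not stated in the abstract; any pair with U*L = Pe*D matches.
print(Pe * D)            # 8.0e-04 m^2/s, the implied U*L product
U = 0.1                  # hypothetical characteristic velocity, m/s
L = (Pe * D) / U
print(f"implied length scale for U = {U} m/s: {L * 1e3:.1f} mm")
```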
Burd, H J; Wilde, G S
2016-04-01
The use of a femtosecond laser to form planes of cavitation bubbles within the ocular lens has been proposed as a potential treatment for presbyopia. The intended purpose of these planes of cavitation bubbles (referred to in this paper as 'cutting planes') is to increase the compliance of the lens, with a consequential increase in the amplitude of accommodation. The current paper describes a computational modelling study, based on three-dimensional finite element analysis, to investigate the relationship between the geometric arrangement of the cutting planes and the resulting improvement in lens accommodation performance. The study is limited to radial cutting planes. The effectiveness of a variety of cutting plane geometries was investigated by means of modelling studies conducted on a 45-year human lens. The results obtained from the analyses depend on the particular modelling procedures that are employed. When the lens substance is modelled as an incompressible material, radial cutting planes are found to be ineffective. However, when a poroelastic model is employed for the lens substance, radial cuts are shown to cause an increase in the computed accommodation performance of the lens. In this case, radial cuts made in the peripheral regions of the lens have a relatively small influence on the accommodation performance of the lens; the lentotomy process is seen to be more effective when cuts are made near to the polar axis. When the lens substance is modelled as a poroelastic material, the computational results suggest that useful improvements in lens accommodation performance can be achieved, provided that the radial cuts are extended to the polar axis. Radial cuts are ineffective when the lens substance is modelled as an incompressible material. Significant challenges remain in developing a safe and effective surgical procedure based on this lentotomy technique.
NASA Technical Reports Server (NTRS)
Turon, A.; Davila, C. G.; Camanho, P. P.; Costa, J.
2007-01-01
This paper presents a methodology to determine the parameters to be used in the constitutive equations of Cohesive Zone Models employed in the simulation of delamination in composite materials by means of decohesion finite elements. A closed-form expression is developed to define the stiffness of the cohesive layer. A novel procedure that allows the use of coarser meshes of decohesion elements in large-scale computations is also proposed. The procedure ensures that the energy dissipated by the fracture process is computed correctly. It is shown that coarse-meshed models defined using the approach proposed here yield the same results as the models with finer meshes normally used for the simulation of fracture processes.
Modeling Temporal Crowd Work Quality with Limited Supervision
2015-11-11
crowdsourcing, human computation, prediction, uncertainty-aware learning, time-series modeling. Introduction: While crowdsourcing offers a cost...individual correctness. As discussed earlier, such a strategy is difficult to employ in a live setting because it is unrealistic to assume that all...et al. 2014). Finally, there are interesting opportunities to investigate at the intersection of live task-routing with active-learning techniques
NASA Astrophysics Data System (ADS)
Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2018-03-01
We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
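As a baseline for what gradient-based sensitivity indices mean, normalized local sensitivities can be estimated with finite differences; the Langmuir coverage function and parameter values below are illustrative assumptions, and the paper's non-parametric treatment of correlated parameters goes beyond this sketch:

```python
import numpy as np

def local_sensitivity(f, theta, rel_step=1e-6):
    """Normalized local sensitivity indices S_i = (theta_i / f) * df/dtheta_i,
    estimated with central finite differences (ignores parameter correlations)."""
    theta = np.asarray(theta, dtype=float)
    f0 = f(theta)
    s = np.empty_like(theta)
    for i in range(theta.size):
        h = rel_step * max(abs(theta[i]), 1.0)
        tp, tm = theta.copy(), theta.copy()
        tp[i] += h
        tm[i] -= h
        s[i] = theta[i] / f0 * (f(tp) - f(tm)) / (2.0 * h)
    return s

# Hypothetical Langmuir competitive-adsorption coverage of species A:
# theta = (K_A, K_B); coverage_A = K_A p_A / (1 + K_A p_A + K_B p_B).
pA, pB = 0.3, 0.7
cov = lambda k: k[0] * pA / (1.0 + k[0] * pA + k[1] * pB)
print(local_sensitivity(cov, [2.0, 5.0]))
```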
NASA Astrophysics Data System (ADS)
Huang, Chao; Nie, Liming; Schoonover, Robert W.; Guo, Zijian; Schirra, Carsten O.; Anastasio, Mark A.; Wang, Lihong V.
2012-06-01
A challenge in photoacoustic tomography (PAT) brain imaging is to compensate for aberrations in the measured photoacoustic data due to their propagation through the skull. By use of information regarding the skull morphology and composition obtained from adjunct x-ray computed tomography image data, we developed a subject-specific imaging model that accounts for such aberrations. A time-reversal-based reconstruction algorithm was employed with this model for image reconstruction. The image reconstruction methodology was evaluated in experimental studies involving phantoms and monkey heads. The results establish that our reconstruction methodology can effectively compensate for skull-induced acoustic aberrations and improve image fidelity in transcranial PAT.
A fortran program for Monte Carlo simulation of oil-field discovery sequences
Bohling, Geoffrey C.; Davis, J.C.
1993-01-01
We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log-gamma model for the distribution of field sizes and employs a two-parameter discovery process model, allowing the simulation of a wide range of scenarios. © 1993.
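A minimal sketch of such a discovery-process simulation, with size-biased sampling without replacement; a lognormal parent population stands in for the three-parameter log-gamma distribution the program actually employs:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_discovery(field_sizes, beta=1.0):
    """Sample fields without replacement with probability proportional to
    size**beta -- a common form of discovery-process model in which larger
    fields tend to be found earlier."""
    sizes = np.asarray(field_sizes, dtype=float)
    remaining = list(range(sizes.size))
    order = []
    while remaining:
        w = sizes[remaining] ** beta
        pick = rng.choice(len(remaining), p=w / w.sum())
        order.append(remaining.pop(pick))
    return sizes[order]

parent = rng.lognormal(mean=3.0, sigma=1.2, size=50)  # stand-in parent population
print(simulate_discovery(parent, beta=0.8)[:5])       # first five "discoveries"
```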
Variability simulations with a steady, linearized primitive equations model
NASA Technical Reports Server (NTRS)
Kinter, J. L., III; Nigam, S.
1985-01-01
Solutions of the steady primitive equations on a sphere, linearized about a zonally symmetric basic state, are computed for the purpose of simulating monthly mean variability in the troposphere. The basic states are observed winter monthly means: zonal means of zonal and meridional velocities, temperatures, and surface pressures computed from the 15-year NMC time series. A least squares fit to a series of Legendre polynomials is used to compute the basic states between 20 H and the equator, and the hemispheres are assumed symmetric. The model is spectral in the zonal direction, and centered differences are employed in the meridional and vertical directions. Since the model is steady and linear, the solution is obtained by inversion of a block penta-diagonal matrix. The model simulates the climatology of the GFDL nine-level spectral general circulation model quite closely, particularly in middle latitudes above the boundary layer. This experiment is an extension of that simulation to examine the variability of the steady, linear solution.
Verification of component mode techniques for flexible multibody systems
NASA Technical Reports Server (NTRS)
Wiens, Gloria J.
1990-01-01
Investigations were conducted into the modeling aspects of flexible multibodies undergoing large angular displacements. Models were to be generated and analyzed through application of computer simulation packages employing 'component mode synthesis' techniques. The Multibody Modeling, Verification and Control Laboratory (MMVC) plan was implemented, which includes running experimental tests on flexible multibody test articles. From these tests, data were to be collected for later correlation and verification of the theoretical results predicted by the modeling and simulation process.
The Preliminary Design of a Standardized Spacecraft Bus for Small Tactical Satellites (Volume 2)
1996-11-01
this requirement, conditions of the model need to be modified to provide some flexibility to the original solution set. In the business world this...time. The mission modules modeled in the Modsat computer model are necessarily "generic" in nature to provide both flexibility in design evaluation and...methods employed during the study, the scope of the problem, the value system used to evaluate alternatives, tradeoff studies performed, modeling tools
DOT National Transportation Integrated Search
1978-01-01
A system analysis was completed of the general deterrence of driving while intoxicated (DWI). Elements which influence DWI decisions were identified and interrelated in a system model; then, potential countermeasures which might be employed in DWI ge...
Cost-Effectiveness of Four Educational Interventions.
ERIC Educational Resources Information Center
Levin, Henry M.; And Others
This study employs meta-analysis and cost-effectiveness instruments to evaluate and compare cross-age tutoring, computer assistance, class size reductions, and instructional time increases for their utility in improving elementary school reading and math scores. Using intervention effect studies as replication models, researchers first estimate…
Jagiello, Karolina; Grzonkowska, Monika; Swirog, Marta; ...
2016-08-29
In this contribution, the advantages and limitations of two computational techniques that can be used for the investigation of nanoparticle activity and toxicity are briefly summarized: classic nano-QSAR (Quantitative Structure-Activity Relationships employed for nanomaterials) and 3D nano-QSAR (three-dimensional Quantitative Structure-Activity Relationships, such as Comparative Molecular Field Analysis (CoMFA) and Comparative Molecular Similarity Indices Analysis (CoMSIA) employed for nanomaterials). Both approaches were compared according to selected criteria, including efficiency, type of experimental data, class of nanomaterials, time required for calculations and computational cost, and difficulties in interpretation. Taking into account the advantages and limitations of each method, we provide recommendations for nano-QSAR modellers and QSAR model users to be able to determine a proper and efficient methodology to investigate the biological activity of nanoparticles, in order to describe the underlying interactions in the most reliable and useful manner.
Pei, Zongrui; Max-Planck-Inst. fur Eisenforschung, Duseldorf; Eisenbach, Markus
2017-02-06
Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation is how to effectively avoid the local minima in the energy landscape of a dislocation core. Among the methods available to optimize dislocation core structures, we choose the Particle Swarm Optimization algorithm, which simulates the social behavior of organisms. By employing more particles (a bigger swarm) and more iterative steps (allowing them to explore for a longer time), the local minima can be effectively avoided, though at greater computational cost. The advantage of this algorithm is that it is readily parallelized on modern high-performance computing architectures. We demonstrate that the performance of our parallelized algorithm scales linearly with the number of employed cores.
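A minimal particle swarm optimizer shows the algorithm's structure; the Rastrigin test function below is a toy stand-in for the dislocation-core energy landscape, not the Peierls-Nabarro functional:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each velocity mixes inertia, a pull
    toward the particle's own best point, and a pull toward the swarm best."""
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x += v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Rastrigin: a standard many-local-minima test landscape.
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
print(pso(rastrigin, dim=4))
```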
Production Level CFD Code Acceleration for Hybrid Many-Core Architectures
NASA Technical Reports Server (NTRS)
Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.
2012-01-01
In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.
Hierarchical specification of the SIFT fault tolerant flight control system
NASA Technical Reports Server (NTRS)
Melliar-Smith, P. M.; Schwartz, R. L.
1981-01-01
The specification and mechanical verification of the Software Implemented Fault Tolerance (SIFT) flight control system is described. The methodology employed in the verification effort is discussed, and a description of the hierarchical models of the SIFT system is given. To meet NASA's objective for the reliability of safety-critical flight control systems, the SIFT computer must achieve a reliability well beyond the levels at which reliability can actually be measured. The methodology employed to demonstrate rigorously that the SIFT computer meets its reliability requirements is described. The hierarchy of design specifications, from very abstract descriptions of system function down to the actual implementation, is explained. The most abstract design specifications can be used to verify that the system functions correctly and with the desired reliability, since almost all details of the realization were abstracted out. A succession of lower-level models refines these specifications to the level of the actual implementation, and can be used to demonstrate that the implementation has the properties claimed of the abstract design specifications.
NASA Astrophysics Data System (ADS)
Cheviakov, Alexei F.
2017-11-01
An efficient systematic procedure is provided for symbolic computation of Lie groups of equivalence transformations and generalized equivalence transformations of systems of differential equations that contain arbitrary elements (arbitrary functions and/or arbitrary constant parameters), using the software package GeM for Maple. Application of equivalence transformations to the reduction of the number of arbitrary elements in a given system of equations is discussed, and several examples are considered. The first computational example of generalized equivalence transformations where the transformation of the dependent variable involves an arbitrary constitutive function is presented. As a detailed physical example, a three-parameter family of nonlinear wave equations describing finite anti-plane shear displacements of an incompressible hyperelastic fiber-reinforced medium is considered. Equivalence transformations are computed and employed to radically simplify the model for an arbitrary fiber direction, invertibly reducing the model to a simple form that corresponds to a special fiber direction, and involves no arbitrary elements. The presented computation algorithm is applicable to wide classes of systems of differential equations containing arbitrary elements.
Computational models for the analysis of three-dimensional internal and exhaust plume flowfields
NASA Technical Reports Server (NTRS)
Dash, S. M.; Delguidice, P. D.
1977-01-01
This paper describes computational procedures developed for the analysis of three-dimensional supersonic ducted flows and multinozzle exhaust plume flowfields. The models/codes embodying these procedures cater to a broad spectrum of geometric situations via the use of multiple reference plane grid networks in several coordinate systems. Shock capturing techniques are employed to trace the propagation and interaction of multiple shock surfaces while the plume interface, separating the exhaust and external flows, and the plume external shock are discretely analyzed. The computational grid within the reference planes follows the trace of streamlines to facilitate the incorporation of finite-rate chemistry and viscous computational capabilities. Exhaust gas properties consist of combustion products in chemical equilibrium. The computational accuracy of the models/codes is assessed via comparisons with exact solutions, results of other codes and experimental data. Results are presented for the flows in two-dimensional convergent and divergent ducts, expansive and compressive corner flows, flow in a rectangular nozzle and the plume flowfields for exhausts issuing out of single and multiple rectangular nozzles.
Bindu, G.; Semenov, S.
2013-01-01
This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system having the transceivers modelled using thin wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction with the extremity imaging being done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method of solving the time domain Maxwell’s equations with the regularisation parameter computed using a stochastic approach. The algorithm is tested with 10% noise inclusion and successful image reconstruction has been shown implying its robustness. PMID:24058889
A computational model of in vitro angiogenesis based on extracellular matrix fibre orientation.
Edgar, Lowell T; Sibole, Scott C; Underwood, Clayton J; Guilkey, James E; Weiss, Jeffrey A
2013-01-01
Recent interest in the process of vascularisation within the biomedical community has motivated numerous new research efforts focusing on the process of angiogenesis. Although the role of chemical factors during angiogenesis has been well documented, the role of mechanical factors, such as the interaction between angiogenic vessels and the extracellular matrix, remains poorly understood. In vitro methods for studying angiogenesis exist; however, measurements available using such techniques often suffer from limited spatial and temporal resolutions. For this reason, computational models have been extensively employed to investigate various aspects of angiogenesis. This paper outlines the formulation and validation of a simple and robust computational model developed to accurately simulate angiogenesis based on length, branching and orientation morphometrics collected from vascularised tissue constructs. Microvessels were represented as a series of connected line segments. The morphology of the vessels was determined by a linear combination of the collagen fibre orientation, the vessel density gradient and a random walk component. Excellent agreement was observed between computational and experimental morphometric data over time. Computational predictions of microvessel orientation within an anisotropic matrix correlated well with experimental data. The accuracy of this modelling approach makes it a valuable platform for investigating the role of mechanical interactions during angiogenesis.
Yates, Christian A; Flegg, Mark B
2015-05-06
Spatial reaction-diffusion models have been employed to describe many emergent phenomena in biological systems. The modelling technique most commonly adopted in the literature implements systems of partial differential equations (PDEs), which assumes there are sufficient densities of particles that a continuum approximation is valid. However, owing to recent advances in computational power, the simulation, and therefore postulation, of computationally intensive individual-based models has become a popular way to investigate the effects of noise in reaction-diffusion systems in which regions of low copy numbers exist. The specific stochastic models with which we shall be concerned in this manuscript are referred to as 'compartment-based' or 'on-lattice'. These models are characterized by a discretization of the computational domain into a grid/lattice of 'compartments'. Within each compartment, particles are assumed to be well mixed and are permitted to react with other particles within their compartment or to transfer between neighbouring compartments. Stochastic models provide accuracy, but at the cost of significant computational resources. For models that have regions of both low and high concentrations, it is often desirable, for reasons of efficiency, to employ coupled multi-scale modelling paradigms. In this work, we develop two hybrid algorithms in which a PDE in one region of the domain is coupled to a compartment-based model in the other. Rather than attempting to balance average fluxes, our algorithms answer a more fundamental question: 'how are individual particles transported between the vastly different model descriptions?' First, we present an algorithm derived by carefully redefining the continuous PDE concentration as a probability distribution. While this first algorithm shows very strong convergence to analytical solutions of test problems, it can be cumbersome to simulate. Our second algorithm is a simplified and more efficient implementation of the first; it is derived in the continuum limit over the PDE region alone. We test our hybrid methods for functionality and accuracy in a variety of different scenarios by comparing the averaged simulations with analytical solutions of PDEs for mean concentrations. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
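The compartment-based half of such a hybrid can be sketched with a standard Gillespie (SSA) simulation of pure diffusion on a row of compartments; the coupling to the PDE region, which is the paper's contribution, is omitted here:

```python
import numpy as np

rng = np.random.default_rng(7)

def compartment_diffusion(n0, d, t_end):
    """Gillespie (SSA) simulation of diffusion on a 1D row of compartments.
    d = D / h**2 is the per-particle jump rate in each direction; the
    boundaries are reflecting (no jumps off the ends)."""
    n = np.array(n0, dtype=float)
    K = n.size
    t = 0.0
    while True:
        right = d * n; right = right.copy(); right[-1] = 0.0  # jumps k -> k+1
        left = d * n;  left = left.copy();   left[0] = 0.0    # jumps k -> k-1
        a = np.concatenate([right, left])                     # propensities
        a0 = a.sum()
        if a0 == 0.0:
            return n
        t += rng.exponential(1.0 / a0)                        # time to next jump
        if t > t_end:
            return n
        j = rng.choice(2 * K, p=a / a0)                       # which jump fires
        if j < K:
            n[j] -= 1; n[j + 1] += 1
        else:
            k = j - K
            n[k] -= 1; n[k - 1] += 1

n0 = np.zeros(10); n0[0] = 200   # all particles start in compartment 0
print(compartment_diffusion(n0, d=1.0, t_end=1.0))
```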
A simulation study of homogeneous ice nucleation in supercooled salty water
NASA Astrophysics Data System (ADS)
Soria, Guiomar D.; Espinosa, Jorge R.; Ramirez, Jorge; Valeriani, Chantal; Vega, Carlos; Sanz, Eduardo
2018-06-01
We use computer simulations to investigate the effect of salt on homogeneous ice nucleation. The melting point of the employed solution model was obtained both by direct coexistence simulations and by thermodynamic integration from previous calculations of the water chemical potential. Using a seeding approach, in which we simulate ice seeds embedded in a supercooled aqueous solution, we compute the nucleation rate as a function of temperature for a 1.85 NaCl mol per water kilogram solution at 1 bar. To improve the accuracy and reliability of our calculations, we combine seeding with the direct computation of the ice-solution interfacial free energy at coexistence using the Mold Integration method. We compare the results with previous simulation work on pure water to understand the effect caused by the solute. The model captures the experimental trend that the nucleation rate at a given supercooling decreases when adding salt. Despite the fact that the thermodynamic driving force for ice nucleation is higher for salty water for a given supercooling, the nucleation rate slows down with salt due to a significant increase of the ice-fluid interfacial free energy. The salty water model predicts an ice nucleation rate that is in good agreement with experimental measurements, bringing confidence in the predictive ability of the model. We expect that the combination of state-of-the-art simulation methods here employed to study ice nucleation from solution will be of much use in forthcoming numerical investigations of crystallization in mixtures.
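In the seeding approach, the nucleation rate is assembled from the seed size, the thermodynamic driving force, and an attachment rate via classical nucleation theory; the sketch below uses the standard CNT expressions, with purely illustrative inputs rather than the paper's values:

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def seeding_rate(N_c, dmu, f_plus, rho_f, T):
    """Classical-nucleation-theory rate from a seeding calculation.

    N_c    : critical cluster size (the inserted seed), in molecules
    dmu    : |chemical potential difference| fluid - ice, J per molecule
    f_plus : attachment rate of molecules to the critical cluster, 1/s
    rho_f  : number density of the metastable fluid, 1/m^3
    T      : temperature, K
    """
    dG_c = 0.5 * N_c * dmu                            # CNT barrier from the seed
    Z = np.sqrt(dmu / (6.0 * np.pi * kB * T * N_c))   # Zeldovich factor
    return rho_f * Z * f_plus * np.exp(-dG_c / (kB * T))

# Illustrative numbers only (not taken from the paper):
print(f"{seeding_rate(N_c=600, dmu=0.9e-21, f_plus=1e11, rho_f=3.3e28, T=230):.3e}")
```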
Cosmic Strings Stabilized by Quantum Fluctuations
NASA Astrophysics Data System (ADS)
Weigel, H.
2017-03-01
Fermion quantum corrections to the energy of cosmic strings are computed. A number of rather technical tools are needed to formulate this correction, and isospin and gauge invariance are employed to verify the consistency of these tools. These corrections must also be included when computing the energy of strings that are charged by populating fermion bound states in their background. It is found that charged strings are dynamically stabilized in theories similar to the standard model of particle physics.
Level-Set Simulation of Viscous Free Surface Flow Around a Commercial Hull Form
2005-04-15
The viscous free surface flow around a 3600 TEU KRISO Container Ship is computed using WAVIS, the finite volume based multi-block RANS code developed at KRISO. The free surface is captured with the level-set method, and the realizable k-ε model is employed for turbulence closure. The computations are done for a 3600 TEU container ship of the Korea Research Institute of Ships & Ocean Engineering, KORDI (hereafter, KRISO) selected as
Integration of Extended MHD and Kinetic Effects in Global Magnetosphere Models
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Wang, L.; Maynard, K. R. M.; Raeder, J.; Bhattacharjee, A.
2015-12-01
Computational models of Earth's geospace environment are an important tool to investigate the science of the coupled solar-wind -- magnetosphere -- ionosphere system, complementing satellite and ground observations with a global perspective. They are also crucial in understanding and predicting space weather, in particular under extreme conditions. Traditionally, global models have employed the one-fluid MHD approximation, which captures large-scale dynamics quite well. However, in Earth's nearly collisionless plasma environment it breaks down on small scales, where ion and electron dynamics and kinetic effects become important and greatly change the reconnection dynamics. A number of approaches have recently been taken to advance global modeling, e.g., including multiple ion species, adding Hall physics in a generalized Ohm's law, embedding local PIC simulations into a larger fluid domain, and also some work on simulating the entire system with hybrid or fully kinetic models, the latter however being too computationally expensive to be run at realistic parameters. We will present an alternate approach, i.e., a multi-fluid moment model that is derived rigorously from the Vlasov-Maxwell system. The advantage is that the computational cost remains manageable, as we are still solving fluid equations. While the evolution equation for each moment is exact, it depends on the next higher-order moment, so truncating the hierarchy and closing the system to capture the essential kinetic physics is crucial. We implement 5-moment (density, momentum, scalar pressure) and 10-moment (including the pressure tensor) versions of the model, and use local approximations for the heat flux to close the system. We test these closures with local simulations, where we can compare directly to PIC/hybrid codes, and employ them in global simulations using the next-generation OpenGGCM to contrast them with MHD/Hall-MHD results and compare with observations.
Toward A Simulation-Based Tool for the Treatment of Vocal Fold Paralysis
Mittal, Rajat; Zheng, Xudong; Bhardwaj, Rajneesh; Seo, Jung Hee; Xue, Qian; Bielamowicz, Steven
2011-01-01
Advances in high-performance computing are enabling a new generation of software tools that employ computational modeling for surgical planning. Surgical management of laryngeal paralysis is one area where such computational tools could have a significant impact. The current paper describes a comprehensive effort to develop a software tool for planning medialization laryngoplasty where a prosthetic implant is inserted into the larynx in order to medialize the paralyzed vocal fold (VF). While this is one of the most common procedures used to restore voice in patients with VF paralysis, it has a relatively high revision rate, and the tool being developed is expected to improve surgical outcomes. This software tool models the biomechanics of airflow-induced vibration in the human larynx and incorporates sophisticated approaches for modeling the turbulent laryngeal flow, the complex dynamics of the VFs, as well as the production of voiced sound. The current paper describes the key elements of the modeling approach, presents computational results that demonstrate the utility of the approach and also describes some of the limitations and challenges. PMID:21556320
Park, Seungman
2017-09-01
Interstitial flow (IF) is a creeping flow through the interstitial space of the extracellular matrix (ECM). IF plays a key role in diverse biological functions, such as tissue homeostasis, cell function and behavior. Currently, most studies that have characterized IF have focused on the permeability of the ECM or the shear stress distribution on cells, but less is known about predicting shear stress on individual fibers or fiber networks, despite its significance in the alignment of matrix fibers and cells observed in fibrotic or wound tissues. In this study, I developed a computational model to predict shear stress for differently structured fibrous networks. To generate isotropic models, a random growth algorithm and a second-order orientation tensor were employed. Then, a three-dimensional (3D) solid model was created using computer-aided design (CAD) software for the aligned models (i.e., parallel, perpendicular and cubic models). Subsequently, a tetrahedral unstructured mesh was generated and flow solutions were calculated by solving equations for mass and momentum conservation for all models. From the flow solutions, I estimated permeability using Darcy's law. Average shear stress (ASS) on the fibers was calculated by averaging the wall shear stress of the fibers. By using nonlinear surface fitting of permeability, viscosity, velocity, porosity and ASS, I devised new computational models. Overall, the developed models showed that higher porosity induced higher permeability, as previous empirical and theoretical models have shown. The permeability given by the present computational models matched previous models well, which justifies the computational approach. ASS tended to increase linearly with respect to inlet velocity and dynamic viscosity, whereas permeability remained almost unchanged. Finally, the developed model nicely predicted the ASS values that had been directly estimated from computational fluid dynamics (CFD). The present computational models will provide new tools for predicting accurate functional properties and designing fibrous porous materials, thereby significantly advancing tissue engineering. Copyright © 2017 Elsevier B.V. All rights reserved.
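A minimal sketch of the Darcy's-law permeability estimate mentioned in the abstract; the function and every numerical value below are illustrative placeholders, not the paper's geometries or data:

```python
# Estimate permeability of a fibrous network from a computed flow solution
# via Darcy's law; all inputs here are hypothetical example values.
def darcy_permeability(flow_rate, viscosity, length, area, delta_p):
    """k = Q * mu * L / (A * dP), Darcy's law for creeping flow."""
    return flow_rate * viscosity * length / (area * delta_p)

k = darcy_permeability(flow_rate=1e-12,   # volumetric flow rate, m^3/s
                       viscosity=1e-3,    # dynamic viscosity, Pa.s
                       length=100e-6,     # sample depth, m
                       area=1e-8,         # cross-sectional area, m^2
                       delta_p=10.0)      # pressure drop, Pa
print(f"permeability ~ {k:.3e} m^2")
```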
Up, Up, Up, and Away: Trends in Computer Occupations.
ERIC Educational Resources Information Center
Howard, H. Philip; Rothstein, Debra E.
1981-01-01
Discusses the technological changes that have occurred in computers and the recent and projected employment trends in the major computer occupations. Implications of growth in computer occupations are examined in five areas: education, recruiting techniques, salaries, competition between industry and education, and employment opportunities. (CT)
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1976-01-01
An iterative method for numerically solving the time-independent Navier-Stokes equations for viscous compressible flows is presented. The method is based upon partial application of the Gauss-Seidel principle in block form to the systems of nonlinear algebraic equations which arise in construction of finite element (Galerkin) models approximating solutions of fluid dynamic problems. The C⁰-cubic element on triangles is employed for function approximation. Computational results for a free shear flow at Re = 1,000 indicate significant achievement of economy in iterative convergence rate over finite element and finite difference models which employ the customary time-dependent equations and asymptotic time marching procedure to steady solution. Numerical results are in excellent agreement with those obtained for the same test problem employing time marching finite element and finite difference solution techniques.
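A minimal sketch of block Gauss-Seidel iteration in the spirit described above, shown for a generic two-block linear system (the partition, the dense sub-solves, and the tolerance are illustrative; the paper applies the principle to nonlinear Galerkin systems):

```python
import numpy as np

def block_gauss_seidel(A11, A12, A21, A22, b1, b2, x1, x2,
                       sweeps=50, tol=1e-10):
    """Solve [[A11, A12], [A21, A22]] @ [x1, x2] = [b1, b2] by sweeping
    block by block, always using the most recent value of the other block."""
    for _ in range(sweeps):
        x1_new = np.linalg.solve(A11, b1 - A12 @ x2)
        x2_new = np.linalg.solve(A22, b2 - A21 @ x1_new)
        converged = max(np.linalg.norm(x1_new - x1),
                        np.linalg.norm(x2_new - x2)) < tol
        x1, x2 = x1_new, x2_new
        if converged:
            break
    return x1, x2
```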
Validation of chemistry models employed in a particle simulation method
NASA Technical Reports Server (NTRS)
Haas, Brian L.; Mcdonald, Jeffrey D.
1991-01-01
The chemistry models employed in a statistical particle simulation method, as implemented on the Intel iPSC/860 multiprocessor computer, are validated and applied. Chemical relaxation of five-species air in adiabatic gas reservoirs involves 34 simultaneous dissociation, recombination, and atomic-exchange reactions. The reaction rates employed in the analytic solutions are obtained from Arrhenius experimental correlations as functions of temperature for reservoirs in thermal equilibrium. Favorable agreement with the analytic solutions validates the simulation when applied to relaxation of O2 toward equilibrium in reservoirs dominated by dissociation and recombination, respectively, and when applied to relaxation of air in the temperature range 5000 to 30,000 K. A flow of O2 over a circular cylinder at high Mach number is simulated to demonstrate application of the method to multidimensional reactive flows.
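A minimal sketch of an Arrhenius-form rate evaluation of the kind used for the analytic relaxation solutions; the constants below are order-of-magnitude placeholders, not the paper's five-species air rates:

```python
import numpy as np

def arrhenius_rate(T, A, eta, theta_d):
    """k(T) = A * T**eta * exp(-theta_d / T), with theta_d = Ea/kB in kelvin."""
    return A * T**eta * np.exp(-theta_d / T)

# Hypothetical O2-dissociation-like parameters, for illustration only.
for T in (5000.0, 10000.0, 30000.0):
    k = arrhenius_rate(T, A=2.0e21, eta=-1.5, theta_d=59500.0)
    print(f"T = {T:7.0f} K  k = {k:.3e}")
```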
Mid-frequency Band Dynamics of Large Space Structures
NASA Technical Reports Server (NTRS)
Coppolino, Robert N.; Adams, Douglas S.
2004-01-01
High and low intensity dynamic environments experienced by a spacecraft during launch and on-orbit operations, respectively, induce structural loads and motions, which are difficult to reliably predict. Structural dynamics in low- and mid-frequency bands are sensitive to component interface uncertainty and non-linearity as evidenced in laboratory testing and flight operations. Analytical tools for prediction of linear system response are not necessarily adequate for reliable prediction of mid-frequency band dynamics and analysis of measured laboratory and flight data. A new MATLAB toolbox, designed to address the key challenges of mid-frequency band dynamics, is introduced in this paper. Finite-element models of major subassemblies are defined following rational frequency-wavelength guidelines. For computational efficiency, these subassemblies are described as linear, component mode models. The complete structural system model is composed of component mode subassemblies and linear or non-linear joint descriptions. Computation and display of structural dynamic responses are accomplished employing well-established, stable numerical methods, modern signal processing procedures and descriptive graphical tools. Parametric sensitivity and Monte-Carlo based system identification tools are used to reconcile models with experimental data and investigate the effects of uncertainties. Models and dynamic responses are exported for employment in applications, such as detailed structural integrity and mechanical-optical-control performance analyses.
Lessons on electronic decoherence in molecules from exact modeling
NASA Astrophysics Data System (ADS)
Hu, Wenxiang; Gu, Bing; Franco, Ignacio
2018-04-01
Electronic decoherence processes in molecules and materials are usually conceptualized and modeled via schemes for the system-bath evolution in which the bath is treated either implicitly or approximately. Here we present computations of the electronic decoherence dynamics of a model many-body molecular system described by the Su-Schrieffer-Heeger Hamiltonian with Hubbard electron-electron interactions, using an exact method in which both electronic and nuclear degrees of freedom are taken into account explicitly and fully quantum mechanically. To represent the electron-nuclear Hamiltonian in matrix form and propagate the dynamics, the computations employ the Jordan-Wigner transformation for the fermionic creation/annihilation operators and the discrete variable representation for the nuclear operators. The simulations offer a standard for electronic decoherence that can be used to test approximations. They also provide a useful platform to answer fundamental questions about electronic decoherence that cannot be addressed through approximate or implicit schemes. Specifically, through simulations, we isolate basic mechanisms for electronic coherence loss and demonstrate that electronic decoherence is possible even for a one-dimensional nuclear bath. Furthermore, we show that (i) decreasing the mass of the bath generally leads to faster electronic decoherence; (ii) electron-electron interactions strongly affect the electronic decoherence when the electron-nuclear dynamics is not pure dephasing; (iii) classical bath models with initial conditions sampled from the Wigner distribution accurately capture the short-time electronic decoherence dynamics; (iv) model separable initial superpositions often used to understand decoherence after photoexcitation are only relevant in experiments that employ delta-like laser pulses to initiate the dynamics. These insights can be employed to interpret and properly model coherence phenomena in molecules.
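A minimal sketch of the Jordan-Wigner construction mentioned above, for a short spinless chain (the paper's SSH-Hubbard setting is richer; this only shows how the fermionic anticommutation relations are encoded in matrix form):

```python
import numpy as np
from functools import reduce

I = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0],
               [0.0, 0.0]])        # sigma^- = |0><1|

def annihilation(site, n_sites):
    """c_site = Z (x) ... (x) Z (x) sigma^- (x) I (x) ... (x) I,
    i.e. a string of Z's on all sites preceding `site` (Jordan-Wigner)."""
    ops = [Z] * site + [sm] + [I] * (n_sites - site - 1)
    return reduce(np.kron, ops)

c0, c1 = annihilation(0, 3), annihilation(1, 3)
print(np.allclose(c0 @ c1 + c1 @ c0, 0.0))              # {c0, c1} = 0
print(np.allclose(c0 @ c0.T + c0.T @ c0, np.eye(8)))    # {c0, c0^dag} = 1
```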
Boundary formulations for sensitivity analysis without matrix derivatives
NASA Technical Reports Server (NTRS)
Kane, J. H.; Guru Prasad, K.
1993-01-01
A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique economically computes the response of univariately perturbed models without factoring the perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
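A minimal sketch of the UPFD reanalysis idea under simplifying assumptions: a generic linear system, a callable `lu_solve` that applies the already-factored unperturbed matrix, and hypothetical helpers `assemble_A`/`assemble_b` standing in for the reused coefficient routines:

```python
import numpy as np

def upfd_sensitivity(A, u, p, dp, assemble_A, assemble_b, lu_solve, iters=30):
    """du/dp ~ (u(p + dp) - u(p)) / dp, where u(p + dp) is obtained by the
    stationary iteration  A u_new = b' - (A' - A) u_old,  so the perturbed
    matrix A' is never factored (converges for small perturbations)."""
    A_pert = assemble_A(p + dp)       # reuses existing coefficient subroutines
    b_pert = assemble_b(p + dp)
    u_pert = u.copy()
    for _ in range(iters):            # economical iterative reanalysis
        u_pert = lu_solve(b_pert - (A_pert - A) @ u_pert)
    return (u_pert - u) / dp
```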
VAMPnets for deep learning of molecular kinetics.
Mardt, Andreas; Pasquali, Luca; Wu, Hao; Noé, Frank
2018-01-02
There is an increasing demand for computing the relevant structures, equilibria, and long-timescale kinetics of biomolecular processes, such as protein-drug binding, from high-throughput molecular dynamics simulations. Current methods employ transformation of simulated coordinates into structural features, dimension reduction, clustering the dimension-reduced data, and estimation of a Markov state model or related model of the interconversion rates between molecular structures. This handcrafted approach demands a substantial amount of modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the variational approach for Markov processes (VAMP) to develop a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states, thus combining the whole data processing pipeline in a single end-to-end framework. Our method performs equally or better than state-of-the-art Markov modeling methods and provides easily interpretable few-state kinetic models.
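A minimal sketch of a VAMPnet-style architecture, assuming PyTorch; the layer widths, the five-state output, and the random stand-in data are illustrative, and the VAMP-score training loss is only indicated in a comment:

```python
import torch
import torch.nn as nn

class VAMPnetLobe(nn.Module):
    """Maps instantaneous coordinates to soft (fuzzy) Markov-state memberships."""
    def __init__(self, n_features, n_states=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ELU(),
            nn.Linear(64, 64), nn.ELU(),
            nn.Linear(64, n_states), nn.Softmax(dim=-1),
        )

    def forward(self, x):
        return self.net(x)

lobe = VAMPnetLobe(n_features=10)
x_t, x_tau = torch.randn(100, 10), torch.randn(100, 10)   # pairs lagged by tau
chi_t, chi_tau = lobe(x_t), lobe(x_tau)                   # one lobe, two times
# Training would maximize a VAMP score built from covariances of chi_t and
# chi_tau, so the whole coordinates-to-states mapping is learned end to end.
```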
Multi-Scale Computational Modeling of Two-Phased Metal Using GMC Method
NASA Technical Reports Server (NTRS)
Moghaddam, Masoud Ghorbani; Achuthan, A.; Bednarcyk, B. A.; Arnold, S. M.; Pineda, E. J.
2014-01-01
A multi-scale computational model for determining the plastic behavior of two-phased CMSX-4 Ni-based superalloys is developed in a finite element analysis (FEA) framework employing a crystal plasticity constitutive model that can capture the microstructural-scale stress field. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. First, stand-alone GMC is validated by analyzing a repeating unit cell (RUC) as a two-phased sample with a 72.9% volume fraction of gamma'-precipitate in the gamma-matrix phase and comparing the results with those predicted by FEA models incorporating the same crystal plasticity constitutive model. The global stress-strain behavior and the local field quantity distributions predicted by GMC demonstrated good agreement with FEA. Large computational savings, at the expense of some accuracy in the components of the local tensor field quantities, were obtained with GMC. Finally, the capability of the developed multi-scale model linking FEA and GMC to solve real-life-sized structures is demonstrated by analyzing an engine disc component and determining the microstructural-scale details of the field quantities.
solveME: fast and reliable solution of nonlinear ME models.
Yang, Laurence; Ma, Ding; Ebrahim, Ali; Lloyd, Colton J; Saunders, Michael A; Palsson, Bernhard O
2016-09-22
Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models using a quad-precision NLP solver (Quad MINOS). Our method was up to 45% faster than binary search for six significant digits in growth rate. We also develop a fast, quad-precision flux variability analysis that is accelerated (up to 60× speedup) via solver warm-starts. Finally, we employ the tools developed to investigate growth-coupled succinate overproduction, accounting for proteome constraints. Just as genome-scale metabolic reconstructions have become an invaluable tool for computational and systems biologists, we anticipate that these fast and numerically reliable ME solution methods will accelerate the widespread adoption of ME models by researchers in these fields.
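A minimal sketch of the binary-search baseline against which the solver was timed: bisect on the growth rate, solving a feasibility problem at each fixed trial rate (a toy predicate stands in for the actual fixed-mu ME optimization):

```python
def bisect_growth_rate(is_feasible, mu_lo=0.0, mu_hi=2.0, digits=6):
    """Return the largest feasible growth rate mu to ~`digits` significant digits."""
    while (mu_hi - mu_lo) > 10.0 ** (-digits) * max(mu_hi, 1e-16):
        mu_mid = 0.5 * (mu_lo + mu_hi)
        if is_feasible(mu_mid):     # in practice: solve the ME model at fixed mu
            mu_lo = mu_mid          # feasible -> true optimum lies above
        else:
            mu_hi = mu_mid          # infeasible -> true optimum lies below
    return mu_lo

print(bisect_growth_rate(lambda mu: mu <= 0.873421))   # toy feasibility rule
```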
Thermal modeling of lesion growth with radiofrequency ablation devices
Chang, Isaac A; Nguyen, Uyen D
2004-01-01
Background: Temperature is a frequently used parameter to describe the predicted size of lesions computed by computational models. In many cases, however, temperature correlates poorly with lesion size. Although many studies have been conducted to characterize the relationship between the time-temperature exposure of heated tissue and cell damage, to date these relationships have not been employed in a finite element model. Methods: We present an axisymmetric two-dimensional finite element model that calculates cell damage in tissues and compare lesion sizes computed using common tissue-damage and iso-temperature contour definitions. The model accounts for both temperature-dependent changes in the electrical conductivity of tissue and tissue damage-dependent changes in local tissue perfusion. The model is validated using excised porcine liver tissues. Results: The data demonstrate that the size of thermal lesions is grossly overestimated when calculated using traditional temperature isocontours of 42°C and 47°C. The computational model predicted lesion dimensions that were within 5% of the experimental measurements. Conclusion: When modeling radiofrequency ablation problems, temperature isotherms may not be representative of actual tissue damage patterns. PMID:15298708
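A minimal sketch of an Arrhenius-type cell-damage integral of the kind the abstract argues should replace temperature isocontours; the frequency factor and activation energy below are commonly quoted literature-style placeholders, not the coefficients fitted in this study:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def damage_integral(times, temps_kelvin, A=7.39e39, Ea=2.577e5):
    """Omega = integral of A * exp(-Ea / (R * T(t))) dt; Omega >= 1 is a
    common threshold for irreversible thermal damage."""
    rates = A * np.exp(-Ea / (R * np.asarray(temps_kelvin)))
    return np.trapz(rates, times)

t = np.linspace(0.0, 60.0, 601)                    # 60 s exposure
T = 273.15 + 37.0 + 20.0 * (1 - np.exp(-t / 10))   # heating from 37 C toward 57 C
print("Omega =", damage_integral(t, T))
```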
Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S
2013-01-01
Tinnitus is the perception of sound in the ears or in the head when no external source is present. Sound therapy is one of the most effective techniques proposed for tinnitus treatment. In order to investigate the mechanisms of tinnitus generation and the clinical effects of sound therapy, we have proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with simplified tonotopicity of the auditory system as a more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can reproduce the generation of oscillation and its cessation by external input. This suggests that the present framework is promising as a model of tinnitus generation and of the effects of sound therapy.
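A minimal sketch of a leaky integrate-and-fire unit like the one the model employs (generic textbook parameters, without the homeostatic plasticity or tonotopic network structure of the paper):

```python
import numpy as np

def lif_spike_times(I, dt=1e-4, tau=0.02, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Integrate dv/dt = (-(v - v_rest) + I(t)) / tau; spike and reset at threshold."""
    v, spikes = v_rest, []
    for step, drive in enumerate(I):
        v += dt * (-(v - v_rest) + drive) / tau
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Constant drive for 1 s produces a regular spike train (~45 Hz here).
print(len(lif_spike_times(np.full(10000, 1.5))))
```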
Efficient Conservative Reformulation Schemes for Lithium Intercalation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urisanga, PC; Rife, D; De, S
Porous electrode theory coupled with transport and reaction mechanisms is a widely used technique to model Li-ion batteries, employing an appropriate discretization or approximation for solid-phase diffusion within electrode particles. One of the major difficulties in simulating Li-ion battery models is the need to account for solid-phase diffusion in a second radial dimension r, which increases the computation time/cost to a great extent. Various methods that reduce the computational cost have been introduced to treat this phenomenon, but most of them do not guarantee mass conservation. The aim of this paper is to introduce an inherently mass-conserving yet computationally efficient method for solid-phase diffusion based on Lobatto IIIA quadrature. This paper also presents coupling of the new solid-phase reformulation scheme with a macro-homogeneous porous electrode theory based pseudo two-dimensional (P2D) model for a Li-ion battery. (C) The Author(s) 2015. Published by ECS. All rights reserved.
NASA Technical Reports Server (NTRS)
2001-01-01
Howmet Research Corporation was the first to commercialize an innovative cast metal technology developed at Auburn University, Auburn, Alabama. With funding assistance from NASA's Marshall Space Flight Center, Auburn University's Solidification Design Center (a NASA Commercial Space Center) developed accurate nickel-based superalloy data for casting molten metals. Through a contract agreement, Howmet used the data to develop computer model predictions of molten metals and molding materials in cast metal manufacturing. Howmet Metal Mold (HMM), part of Howmet Corporation Specialty Products, of Whitehall, Michigan, utilizes metal molds to manufacture net shape castings in various alloys and amorphous metal (metallic glass). By implementing the thermophysical property data developed by the Auburn researchers, Howmet employs its newly developed computer model predictions to offer customers high-quality, low-cost products with significantly improved mechanical properties. Components fabricated with this new process replace components originally made from forgings or billet. Compared with products manufactured through traditional casting methods, Howmet's computer-modeled castings come out on top.
NASA Technical Reports Server (NTRS)
Nesbitt, James A.
2001-01-01
A finite-difference computer program (COSIM) has been written which models the one-dimensional, diffusional transport associated with high-temperature oxidation and interdiffusion of overlay-coated substrates. The program predicts concentration profiles for up to three elements in the coating and substrate after various oxidation exposures. Surface recession due to solute loss is also predicted. Ternary cross terms and concentration-dependent diffusion coefficients are taken into account. The program also incorporates a previously-developed oxide growth and spalling model to simulate either isothermal or cyclic oxidation exposures. In addition to predicting concentration profiles after various oxidation exposures, the program can also be used to predict coating life based on a concentration dependent failure criterion (e.g., surface solute content drops to 2%). The computer code is written in FORTRAN and employs numerous subroutines to make the program flexible and easily modifiable to other coating oxidation problems.
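A minimal sketch of the kind of explicit, conservative finite-difference step COSIM takes for one-dimensional diffusion with a concentration-dependent coefficient; the grid, boundary values, and D(c) are illustrative, and the real code additionally handles ternary cross terms, oxide growth, and spalling:

```python
import numpy as np

def diffuse_step(c, dx, dt, D_of_c, c_surface=0.02):
    """One conservative explicit step of dc/dt = d/dx( D(c) dc/dx )."""
    D = D_of_c(c)
    D_face = 0.5 * (D[1:] + D[:-1])          # diffusivity at cell faces
    flux = -D_face * np.diff(c) / dx         # Fickian flux at faces
    c_new = c.copy()
    c_new[1:-1] -= dt * np.diff(flux) / dx   # interior update
    c_new[0] = c_surface                     # solute lost at the oxidizing surface
    c_new[-1] = c_new[-2]                    # zero-flux deep-substrate boundary
    return c_new

c = np.full(101, 0.10)                       # 10% solute initially
for _ in range(5000):
    c = diffuse_step(c, dx=1e-6, dt=1e-4,
                     D_of_c=lambda c: 1e-14 * (1.0 + 5.0 * c))
```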
Development of a distributed-parameter mathematical model for simulation of cryogenic wind tunnels
NASA Technical Reports Server (NTRS)
Tripp, J. S.
1983-01-01
A one-dimensional distributed-parameter dynamic model of a cryogenic wind tunnel was developed which accounts for internal and external heat transfer, viscous momentum losses, and slotted-test-section dynamics. Boundary conditions imposed by liquid-nitrogen injection, gas venting, and the tunnel fan were included. A time-dependent numerical solution to the resultant set of partial differential equations was obtained on a CDC CYBER 203 vector-processing digital computer at a usable computational rate. Preliminary computational studies were performed by using parameters of the Langley 0.3-Meter Transonic Cryogenic Tunnel. Studies were performed by using parameters from the National Transonic Facility (NTF). The NTF wind-tunnel model was used in the design of control loops for Mach number, total temperature, and total pressure and for determining interactions between the control loops. It was employed in the application of optimal linear-regulator theory and eigenvalue-placement techniques to develop Mach number control laws.
A computational substrate for incentive salience.
McClure, Samuel M; Daw, Nathaniel D; Montague, P Read
2003-08-01
Theories of dopamine function are at a crossroads. Computational models derived from single-unit recordings capture changes in dopaminergic neuron firing rate as a prediction error signal. These models employ the prediction error signal in two roles: learning to predict future rewarding events and biasing action choice. Conversely, pharmacological inhibition or lesion of dopaminergic neuron function diminishes the ability of an animal to motivate behaviors directed at acquiring rewards. These lesion experiments have raised the possibility that dopamine release encodes a measure of the incentive value of a contemplated behavioral act. The most complete psychological idea that captures this notion frames the dopamine signal as carrying 'incentive salience'. On the surface, these two competing accounts of dopamine function seem incommensurate. To the contrary, we demonstrate that both of these functions can be captured in a single computational model of the involvement of dopamine in reward prediction for the purpose of reward seeking.
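A minimal sketch of the temporal-difference prediction-error signal that underlies the account above: the same delta trains the value estimate and is available to bias action selection (states, rewards, and gains are illustrative):

```python
import numpy as np

def td_update(V, s, s_next, r, alpha=0.1, gamma=0.98):
    delta = r + gamma * V[s_next] - V[s]   # dopamine-like prediction error
    V[s] += alpha * delta                  # role 1: learn to predict reward
    return delta                           # role 2: available to bias action choice

V = np.zeros(5)
for _ in range(200):                       # repeatedly traverse a 5-state chain
    for s in range(4):
        td_update(V, s, s + 1, r=1.0 if s == 3 else 0.0)
print(np.round(V, 3))                      # predicted value propagates backward
```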
Development and analysis of a finite element model to simulate pulmonary emphysema in CT imaging.
Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo
2015-01-01
In CT imaging, pulmonary emphysema appears as lung regions with Low-Attenuation Areas (LAA). In this study we propose a finite element (FE) model of lung parenchyma, based on a 2-D grid of beam elements, which simulates smoking-related pulmonary emphysema in CT imaging. Simulated LAA images were generated through space sampling of the model output. We employed two measurements of emphysema extent: Relative Area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes. The model has been used to compare RA and D computed on the simulated LAA images with those computed on the model's output. Different mesh element sizes and various model parameters, simulating different physiological/pathological conditions, have been considered and analyzed. A proper mesh element size has been determined as the best trade-off between reliable results and reasonable computational cost. Both RA and D computed on simulated LAA images were underestimated with respect to those calculated on the model's output. Such underestimations were larger for RA (≈ -44 to -26%) than for D (≈ -16 to -2%). Our FE model could be useful to generate standard test images and to design realistic physical phantoms of LAA images for assessing the accuracy of descriptors for quantifying emphysema in CT imaging.
Biomechanics of the soft-palate in sleep apnea patients with polycystic ovarian syndrome.
Subramaniam, Dhananjay Radhakrishnan; Arens, Raanan; Wagshul, Mark E; Sin, Sanghun; Wootton, David M; Gutmark, Ephraim J
2018-05-17
Highly compliant tissue supporting the pharynx and low muscle tone enhance the possibility of upper airway occlusion in children with obstructive sleep apnea (OSA). The present study describes subject-specific computational modeling of flow-induced velopharyngeal narrowing in a female child with polycystic ovarian syndrome (PCOS) and OSA, and in a non-OSA control. Anatomically accurate three-dimensional geometries of the upper airway and soft palate were reconstructed for both subjects using magnetic resonance (MR) images. A fluid-structure interaction (FSI) shape registration analysis was performed using subject-specific values of flow rate to iteratively compute the biomechanical properties of the soft palate. The optimized shear modulus for the control was 38 percent higher than the corresponding value for the OSA patient. The proposed computational FSI model was then employed for planning surgical treatment for the apneic subject. A virtual surgery comprising a combined adenoidectomy, palatoplasty and genioglossus advancement was performed to estimate the resulting post-operative patterns of airflow and tissue displacement. Maximum flow velocity and velopharyngeal resistance decreased by 80 percent and 66 percent, respectively, following surgery. Post-operative flow-induced forces on the anterior and posterior faces of the soft palate were equilibrated and the resulting magnitude of tissue displacement was 63 percent lower compared to the pre-operative case. Results from this pilot study indicate that FSI computational modeling can be employed to characterize the mechanical properties of pharyngeal tissue and evaluate the effectiveness of various upper airway surgeries prior to their application. Copyright © 2018. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Santagati, C.; Inzerillo, L.; Di Paola, F.
2013-07-01
3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), the different techniques of image matching, feature extraction and mesh optimization are inside an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on their computer, whereas desktop systems demand long processing times and heavyweight hardware. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none address Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare the 3D models by Autodesk 123D Catch with 3D models by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 12 2010-04-01 2010-04-01 false Options available to farmers in computing net earnings from self-employment for taxable years ending after 1954 and before December 31, 1956. 1.1402(a... available to farmers in computing net earnings from self-employment for taxable years ending after 1954 and...
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillator and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data. PMID:23593445
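A minimal sketch of the general workflow the abstract describes: fit model parameters to noisy data by minimizing a sum-of-squares fitness with a population-based search. This is a plain random-perturbation swarm around the current best, not the paper's Swarm-based Chemical Reaction Optimization; the toy model and all settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(params, t):                 # toy stand-in for a biological model output
    a, b = params
    return a * np.exp(-b * t)

t = np.linspace(0.0, 5.0, 50)
data = model((2.0, 0.8), t) + rng.normal(0.0, 0.05, t.size)  # noisy observations

def fitness(p):                       # squared mismatch to be minimized
    return np.sum((model(p, t) - data) ** 2)

best = min(rng.uniform(0.1, 3.0, size=(30, 2)), key=fitness)
for _ in range(200):
    swarm = best + rng.normal(0.0, 0.1, size=(30, 2))        # explore around best
    candidate = min(swarm, key=fitness)
    if fitness(candidate) < fitness(best):
        best = candidate
print(best)                           # should land near the true (2.0, 0.8)
```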
1992-09-01
ease with which a model is employed, may depend on several factors, among them the users' past experience in modeling and preferences for menu-driven… partially on our knowledge of important logistics factors, partially on the past work of Diener (12), and partially on the assumption that comparison of… flexibility in output report selection. The minimum output was used in each instance to conserve computer storage and to minimize the consumption of paper.
InPRO: Automated Indoor Construction Progress Monitoring Using Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Hamledari, Hesam
In this research, an envisioned automated intelligent robotic solution for automated indoor data collection and inspection that employs a series of unmanned aerial vehicles (UAV), entitled "InPRO", is presented. InPRO consists of four stages, namely: 1) automated path planning; 2) autonomous UAV-based indoor inspection; 3) automated computer vision-based assessment of progress; and, 4) automated updating of 4D building information models (BIM). The works presented in this thesis address the third stage of InPRO. A series of computer vision-based methods that automate the assessment of construction progress using images captured at indoor sites are introduced. The proposed methods employ computer vision and machine learning techniques to detect the components of under-construction indoor partitions. In particular, framing (studs), insulation, electrical outlets, and different states of drywall sheets (installing, plastering, and painting) are automatically detected using digital images. High accuracy rates, real-time performance, and operation without a priori information are indicators of the methods' promising performance.
26 CFR 1.1402(a)-3 - Special rules for computing net earnings from self-employment.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 12 2010-04-01 2010-04-01 false Special rules for computing net earnings from....1402(a)-3 Special rules for computing net earnings from self-employment. For the purpose of computing... by a partnership of which he is a member shall be computed in accordance with the special rules set...
Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena
2010-09-30
Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next-generation high performance computing (HPC) resources will lead to significant reductions in execution times and leverage a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, the strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.
Closed-form solutions of performability. [in computer systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1982-01-01
It is noted that if computing system performance is degradable, then system evaluation must deal simultaneously with aspects of both performance and reliability. One approach is the evaluation of a system's performability, which, relative to a specified performance variable Y, generally requires solution of the probability distribution function of Y. The feasibility of closed-form solutions of performability when Y is continuous is examined. In particular, the modeling of a degradable buffer/multiprocessor system is considered whose performance Y is the (normalized) average throughput rate realized during a bounded interval of time. Employing an approximate decomposition of the model, it is shown that a closed-form solution can indeed be obtained.
High-Fidelity Computational Aerodynamics of the Elytron 4S UAV
NASA Technical Reports Server (NTRS)
Ventura Diaz, Patricia; Yoon, Seokkwan; Theodore, Colin R.
2018-01-01
High-fidelity Computational Fluid Dynamics (CFD) have been carried out for the Elytron 4S Unmanned Aerial Vehicle (UAV), also known as the converticopter "proto12". It is the scaled wind tunnel model of the Elytron 4S, an Urban Air Mobility (UAM) concept, a tilt-wing, box-wing rotorcraft capable of Vertical Take-Off and Landing (VTOL). The three-dimensional unsteady Navier-Stokes equations are solved on overset grids employing high-order accurate schemes, dual-time stepping, and a hybrid turbulence model using NASA's CFD code OVERFLOW. The Elytron 4S UAV has been simulated in airplane mode and in helicopter mode.
Quantum adiabatic computation and adiabatic conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei Zhaohui; Ying Mingsheng
2007-08-15
Recently, quantum adiabatic computation has attracted more and more attention in the literature. It is a novel quantum computation model based on adiabatic approximation, and the analysis of a quantum adiabatic algorithm depends highly on the adiabatic conditions. However, it has been pointed out that the traditional adiabatic conditions are problematic. Thus, results obtained previously should be checked and sufficient adiabatic conditions applicable to adiabatic computation should be proposed. Based on a result of Tong et al. [Phys. Rev. Lett. 98, 150402 (2007)], we propose a modified adiabatic criterion which is more applicable to the analysis of adiabatic algorithms. As an example, we prove the validity of the local adiabatic search algorithm by employing our criterion.
Parallel Computing for Brain Simulation.
Pastur-Romay, L A; Porto-Pazos, A B; Cedron, F; Pazos, A
2017-01-01
The human brain is the most complex system in the known universe and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities. However, it is not yet understood how and why most of these abilities are produced. For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data in a more efficient way than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have allowed the creation of the first simulation with a number of neurons similar to that of a human brain. This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital models, analog models and hybrid models. This review includes the current applications of these works, as well as future trends. It is focused on various works that look for advanced progress in Neuroscience and still others which seek new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
NASA Astrophysics Data System (ADS)
Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.
2015-08-01
A computational model is developed for retrieving the positions and emission rates of unknown pollution sources, under steady-state conditions, starting from measurements of pollutant concentration. The approach is based on the minimization of a fitness function employing a genetic algorithm paradigm. The model is tested considering both pollutant concentrations generated through a Gaussian model at 25 points in a 3-D test-case domain (1000 m × 1000 m × 50 m) and experimental data, such as the Prairie Grass field experiment data, in which about 600 receptors were located along five concentric semicircular arcs, and the Fusion Field Trials 2007. The results show that the computational model is capable of efficiently retrieving up to three different unknown sources.
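A minimal sketch of the fitness function such a genetic algorithm minimizes: the squared mismatch between measured concentrations and a Gaussian-plume forward model. The plume formula, its dispersion coefficients, and all parameters below are generic placeholders, not the paper's model:

```python
import numpy as np

def gaussian_plume(q, xs, ys, zs, rx, ry, rz, u=2.0):
    """Steady concentration at receptor (rx, ry, rz) from a point source at
    (xs, ys, zs) with emission rate q; crude neutral-stability dispersion."""
    dx = max(rx - xs, 1.0)                 # downwind distance, m
    sy, sz = 0.08 * dx, 0.06 * dx          # sigma_y, sigma_z grow downwind
    return (q / (2 * np.pi * u * sy * sz)) \
        * np.exp(-(ry - ys) ** 2 / (2 * sy ** 2)) \
        * (np.exp(-(rz - zs) ** 2 / (2 * sz ** 2))
           + np.exp(-(rz + zs) ** 2 / (2 * sz ** 2)))  # ground reflection

def fitness(candidate, receptors, measured):
    """The GA minimizes this; candidate = (rate, x, y, z) of one unknown source."""
    q, xs, ys, zs = candidate
    predicted = np.array([gaussian_plume(q, xs, ys, zs, *r) for r in receptors])
    return np.sum((predicted - measured) ** 2)
```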
26 CFR 31.3221-2 - Rates and computation of employer tax.
Code of Federal Regulations, 2010 CFR
2010-04-01
...-2 Rates and computation of employer tax. (a) Rates—(1)(i) Tier 1 tax. The Tier 1 employer tax rate... disability insurance, and section 3111(b), relating to hospital insurance. The Tier 1 employer tax rate is... Federal Insurance Contributions Act. (ii) Example. The rule in paragraph (a)(1)(i) of this section is...
A Coarse-Grained Protein Model in a Water-like Solvent
NASA Astrophysics Data System (ADS)
Sharma, Sumit; Kumar, Sanat K.; Buldyrev, Sergey V.; Debenedetti, Pablo G.; Rossky, Peter J.; Stanley, H. Eugene
2013-05-01
Simulations employing an explicit atom description of proteins in solvent can be computationally expensive. On the other hand, coarse-grained protein models in implicit solvent miss essential features of the hydrophobic effect, especially its temperature dependence, and have limited ability to capture the kinetics of protein folding. We propose a free space two-letter protein (``H-P'') model in a simple, but qualitatively accurate description for water, the Jagla model, which coarse-grains water into an isotropically interacting sphere. Using Monte Carlo simulations, we design protein-like sequences that can undergo a collapse, exposing the ``Jagla-philic'' monomers to the solvent, while maintaining a ``hydrophobic'' core. This protein-like model manifests heat and cold denaturation in a manner that is reminiscent of proteins. While this protein-like model lacks the details that would introduce secondary structure formation, we believe that these ideas represent a first step in developing a useful, but computationally expedient, means of modeling proteins.
RANS modeling of scalar dispersion from localized sources within a simplified urban-area model
NASA Astrophysics Data System (ADS)
Rossi, Riccardo; Capra, Stefano; Iaccarino, Gianluca
2011-11-01
The dispersion of a passive scalar downstream of a localized source within a simplified urban-like geometry is examined by means of RANS scalar-flux models. The computations are conducted under conditions of neutral stability and for three different incoming wind directions (0°, 45°, 90°) at a roughness Reynolds number of Reτ = 391. A Reynolds stress transport model is used to close the flow governing equations, whereas both the standard eddy-diffusivity closure and algebraic flux models are employed to close the transport equation for the passive scalar. The comparison with a DNS database shows improved reliability of algebraic scalar-flux models in predicting both the mean concentration and the plume structure. Since algebraic flux models do not increase the computational effort substantially, the results indicate that the use of a tensorial diffusivity can be a promising tool for dispersion simulations in the urban environment.
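The two levels of scalar-flux closure compared above can be summarized schematically (illustrative notation; the algebraic models in the study are more elaborate than this generalized-gradient form):

```latex
\begin{align}
  \overline{u_i' c'} &= -\frac{\nu_t}{Sc_t}\,
      \frac{\partial \overline{C}}{\partial x_i}
      && \text{(standard eddy diffusivity)}, \\
  \overline{u_i' c'} &= -D_{ij}\,
      \frac{\partial \overline{C}}{\partial x_j},
  \qquad D_{ij} \propto \frac{k}{\varepsilon}\,\overline{u_i' u_j'}
      && \text{(algebraic, tensorial diffusivity)}.
\end{align}
```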
SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows
NASA Astrophysics Data System (ADS)
Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu
2017-12-01
A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives an LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori, because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional or three-dimensional topography and compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected relative to traditional existing solvers.
Employment Trends in Computer Occupations. Bulletin 2101.
ERIC Educational Resources Information Center
Howard, H. Philip; Rothstein, Debra E.
In 1980 1,455,000 persons worked in computer occupations. Two in five were systems analysts or programmers; one in five was a keypunch operator; one in 20 was a computer service technician; and more than one in three were computer and peripheral equipment operators. Employment was concentrated in major urban centers in four major industry…
Hill, Mary C.; Faunt, Claudia C.; Belcher, Wayne; Sweetkind, Donald; Tiedeman, Claire; Kavetski, Dmitri
2013-01-01
This work demonstrates how available knowledge can be used to build more transparent and refutable computer models of groundwater systems. The Death Valley regional groundwater flow system, which surrounds a proposed site for a high-level nuclear waste repository of the United States of America and the Nevada National Security Site (NNSS), where nuclear weapons were tested, is used to explore model adequacy, identify parameters important to (and informed by) observations, and identify existing old and potential new observations important to predictions. Model development is pursued using a set of fundamental questions addressed with carefully designed metrics. Critical methods include using a hydrogeologic model, managing model nonlinearity by designing models that are robust while maintaining realism, using error-based weighting to combine disparate types of data, and identifying important and unimportant parameters and observations and optimizing parameter values with computationally frugal schemes. The frugal schemes employed in this study require relatively few (tens to thousands of) parallelizable model runs. This is beneficial because models able to approximate the complex site geology defensibly tend to have high computational cost. The issue of model defensibility is particularly important given the contentious political issues involved.
Critical assessment of Reynolds stress turbulence models using homogeneous flows
NASA Technical Reports Server (NTRS)
Shabbir, Aamir; Shih, Tsan-Hsing
1992-01-01
In modeling the rapid part of the pressure correlation term in the Reynolds stress transport equations, extensive use has been made of its exact properties, which were first suggested by Rotta. These, for example, have been employed in obtaining the widely used Launder, Reece and Rodi (LRR) model. Some recent proposals have dropped one of these properties to obtain new models. We demonstrate, by computing some simple homogeneous flows, that doing so does not lead to any significant improvements over the LRR model and is not the right direction for improving the performance of existing models. The reason for this, in our opinion, is that violation of one of the exact properties cannot bring any new physics into the model. We compute thirteen homogeneous flows using the LRR (with a recalibrated rapid-term constant), IP and SSG models. The flows computed include the flow through an axisymmetric contraction; an axisymmetric expansion; distortion by plane strain; and homogeneous shear flows with and without rotation. Results show that the most general representation for a model linear in the anisotropy tensor performs either better than or as well as the other two models of the same level.
Development of an Efficient CFD Model for Nuclear Thermal Thrust Chamber Assembly Design
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both the detailed thermo-fluid environment and the global characteristics of the internal ballistics for a hypothetical solid-core nuclear thermal thrust chamber assembly (NTTCA). Several numerical and multi-physics thermo-fluid models, such as real fluid, chemical reaction, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver as the underlying computational methodology. The numerical simulations of the detailed thermo-fluid environment of a single flow element provide a mechanism to estimate the thermal stress and the possible occurrence of mid-section corrosion of the solid core. In addition, the numerical results of the detailed simulation were employed to fine-tune the porosity model to mimic the pressure drop and thermal load of the coolant flow through a single flow element. The use of the tuned porosity model enables an efficient simulation of the entire NTTCA system and an evaluation of its performance during the design cycle.
Computer simulation of the carbon activity in austenite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murch, G.E.; Thorn, R.J.
1979-02-01
Carbon activity in austenite is described in terms of an Ising-like f.c.c. lattice-gas model in which carbon interstitials repel only at the nearest-neighbor distance. A Monte Carlo simulation method in the petit canonical ensemble is employed to calculate directly the carbon activity as a function of composition and temperature. The computed activities are in satisfactory agreement with the experimental data, as is the decomposition of the activity into the partial molar enthalpy and entropy.
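A minimal sketch of the lattice-gas Monte Carlo idea: interstitials with nearest-neighbor repulsion sampled by Metropolis particle moves, with a Widom-style test insertion to estimate the activity. A small 2-D square lattice with illustrative parameters stands in for the f.c.c. interstitial sublattice purely for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
L, n_occ, w, kT = 16, 40, 0.10, 0.08   # sites/side; particles; repulsion, kT (eV)
occ = np.zeros((L, L), dtype=int)
occ.flat[rng.choice(L * L, n_occ, replace=False)] = 1

def energy(grid):
    """Nearest-neighbor repulsion, periodic boundaries, each pair counted once."""
    return w * np.sum(grid * (np.roll(grid, 1, 0) + np.roll(grid, 1, 1)))

factors = []
for step in range(20000):
    trial = occ.copy()
    src = rng.choice(np.flatnonzero(occ == 1))       # move one particle...
    dst = rng.choice(np.flatnonzero(occ == 0))       # ...to an empty site
    trial.flat[[src, dst]] = 0, 1
    if rng.random() < np.exp(-(energy(trial) - energy(occ)) / kT):
        occ = trial                                  # Metropolis acceptance
    if step % 100 == 0:                              # Widom-style test insertion
        i, j = divmod(rng.choice(np.flatnonzero(occ == 0)), L)
        nn = (occ[(i+1) % L, j] + occ[(i-1) % L, j]
              + occ[i, (j+1) % L] + occ[i, (j-1) % L])
        factors.append(np.exp(-w * nn / kT))
print("excess activity coefficient ~", 1.0 / np.mean(factors))
```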
Numerical solutions of the Navier-Stokes equations for transonic afterbody flows
NASA Technical Reports Server (NTRS)
Swanson, R. C., Jr.
1980-01-01
The time-dependent Navier-Stokes equations in mass-averaged variables are solved for transonic flow over axisymmetric boattail plume simulator configurations. Numerical solution of these equations is accomplished with the unsplit explicit finite-difference algorithm of MacCormack. A grid subcycling procedure and computer code vectorization are used to improve computational efficiency. The two-layer algebraic turbulence models of Cebeci-Smith and Baldwin-Lomax are employed for investigating turbulence closure. Two relaxation models based on these baseline models are also considered. Results in the form of surface pressure distributions for three different circular-arc boattails at two free-stream Mach numbers are compared with experimental data. The pressures in the recirculating flow region for all separated cases are poorly predicted with the baseline turbulence models. Significant improvements in the predictions are usually obtained by using the relaxation models.
In most transportation studies, computer models that forecast travel behavior statistics for a future year use static projections of the spatial distribution of future population and employment growth as inputs. As a result, they are unable to account for the temporally dynamic a...
Challenging the Context: Perception, Polity, and Power.
ERIC Educational Resources Information Center
Hartfield, Ronne
1994-01-01
"Contextual areas" employ models, replicas, artwork, art materials, tools, interpretive panels, and interactive computer installations to help visitors explore the historical and cultural context of 6 of 12 works of art at the "Art Inside Out" exhibition in the Kraft General Foods Education Center of the Art Institute of Chicago. (MDH)
Agile Port and High Speed Ship Technologies, Vol 1: FY05 Projects 3-6 and 8-10
2008-07-02
Computational Fluid Dynamics; DTMB - David Taylor Model Basin; JVR - Jet Velocity Ratio; NSWCCD - Naval Surface Warfare Center, Carderock Division; SDD - Systems… immature current state of the technology employed for the reactor system (multiple closed Brayton cycle, helium-cooled gas reactors); (iii) several
Plank, Gernot; Zhou, Lufang; Greenstein, Joseph L; Cortassa, Sonia; Winslow, Raimond L; O'Rourke, Brian; Trayanova, Natalia A
2008-01-01
Computer simulations of electrical behaviour in the whole ventricles have become commonplace during the last few years. The goals of this article are (i) to review the techniques that are currently employed to model cardiac electrical activity in the heart, discussing the strengths and weaknesses of the various approaches, and (ii) to implement a novel modelling approach, based on physiological reasoning, that lifts some of the restrictions imposed by current state-of-the-art ionic models. To illustrate the latter approach, the present study uses a recently developed ionic model of the ventricular myocyte that incorporates an excitation–contraction coupling and mitochondrial energetics model. A paradigm to bridge the vastly disparate spatial and temporal scales, from subcellular processes to the entire organ, and from sub-microseconds to minutes, is presented. Achieving sufficient computational efficiency is the key to success in the quest to develop multiscale realistic models that are expected to lead to better understanding of the mechanisms of arrhythmia induction following failure at the organelle level, and ultimately to the development of novel therapeutic applications. PMID:18603526
Modeling compressible multiphase flows with dispersed particles in both dense and dilute regimes
NASA Astrophysics Data System (ADS)
McGrath, T.; St. Clair, J.; Balachandar, S.
2018-05-01
Many important explosives and energetics applications involve multiphase formulations employing dispersed particles. While considerable progress has been made toward developing mathematical models and computational methodologies for these flows, significant challenges remain. In this work, we apply a mathematical model for compressible multiphase flows with dispersed particles to existing shock and explosive dispersal problems from the literature. The model is cast in an Eulerian framework, treats all phases as compressible, is hyperbolic, and satisfies the second law of thermodynamics. It directly applies the continuous-phase pressure gradient as a forcing function for particle acceleration and thereby retains relaxed characteristics for the dispersed particle phase that remove the constituent material sound velocity from the eigenvalues. This is consistent with the expected characteristics of dispersed particle phases and can significantly improve the stable time-step size for explicit methods. The model is applied to test cases involving the shock and explosive dispersal of solid particles and compared to data from the literature. Computed results compare well with experimental measurements, providing confidence in the model and computational methods applied.
Cooley, Richard L.
1993-01-01
Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.
Local rules simulation of the kinetics of virus capsid self-assembly.
Schwartz, R; Shor, P W; Prevelige, P E; Berger, B
1998-12-01
A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed here is based on the local rules theory of Berger et al. (Proc. Natl. Acad. Sci. USA 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the concentration of subunits in the examples. These experiments demonstrate a trade-off within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.
Sayer, Martin D J; Azzopardi, Elaine; Sieber, Arne
2014-12-01
Dive computers are used in some occupational diving sectors to manage decompression, but there is little independent assessment of their performance. A significant proportion of occupational diving operations employ single square-wave pressure exposures in support of their work. Single examples of 43 models of dive computer were compressed to five simulated depths between 15 and 50 metres of sea water (msw) and maintained at those depths until they had registered over 30 minutes of decompression. At each depth, and for each model, downloaded data were used to collate the times at which the unit was still registering "no decompression" and the times at which various levels of decompression were indicated or exceeded. Each depth profile was replicated three times for most models. Decompression isopleths for no-stop dives indicated that computers tended to be more conservative than standard decompression tables at depths shallower than 30 msw but less conservative between 30-50 msw. For dives requiring decompression, computers were predominantly more conservative than tables across the whole depth range tested. There was considerable variation between models in the times permitted at all of the depth/decompression combinations. The present study would support the use of some dive computers for controlling single, square-wave diving by some occupational sectors. The choice of which makes and models to use would have to consider their specific dive management characteristics, which may additionally be affected by the intended operational depth and whether staged decompression is permitted.
Physics-based subsurface visualization of human tissue.
Sharp, Richard; Adams, Jacob; Machiraju, Raghu; Lee, Robert; Crane, Robert
2007-01-01
In this paper, we present a framework for simulating light transport in three-dimensional tissue with inhomogeneous scattering properties. Our approach employs a computational model to simulate light scattering in tissue through the finite element solution of the diffusion equation. Although our model handles both visible and nonvisible wavelengths, we especially focus on the interaction of near infrared (NIR) light with tissue. Since most human tissue is permeable to NIR light, tools to noninvasively image tumors, blood vasculature, and monitor blood oxygenation levels are being constructed. We apply this model to a numerical phantom to visually reproduce the images generated by these real-world tools. Therefore, in addition to enabling inverse design of detector instruments, our computational tools produce physically-accurate visualizations of subsurface structures.
Estimation of surface temperature in remote pollution measurement experiments
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.
Spatial distribution of nuclei in progressive nucleation: Modeling and application
NASA Astrophysics Data System (ADS)
Tomellini, Massimo
2018-04-01
Phase transformations ruled by non-simultaneous nucleation and growth do not lead to random distribution of nuclei. Since nucleation is only allowed in the untransformed portion of space, positions of nuclei are correlated. In this article an analytical approach is presented for computing pair-correlation function of nuclei in progressive nucleation. This quantity is further employed for characterizing the spatial distribution of nuclei through the nearest neighbor distribution function. The modeling is developed for nucleation in 2D space with power growth law and it is applied to describe electrochemical nucleation where correlation effects are significant. Comparison with both computer simulations and experimental data lends support to the model which gives insights into the transition from Poissonian to correlated nearest neighbor probability density.
Local and global Λ polarization in a vortical fluid
Li, Hui; Petersen, Hannah; Pang, Long-Gang; ...
2017-09-25
We compute the fermion spin distribution in the vortical fluid created in off-central high energy heavy-ion collisions. We employ the event-by-event (3+1)D viscous hydrodynamic model. The spin polarization density is proportional to the local fluid vorticity in quantum kinetic theory. As a result of strong collectivity, the spatial distribution of the local vorticity on the freeze-out hyper-surface strongly correlates to the rapidity and azimuthal angle distribution of fermion spins. We investigate the sensitivity of the local polarization to the initial fluid velocity in the hydrodynamic model and compute the global polarization of Λ hyperons by the AMPT model. The energy dependence of the global polarization agrees with the STAR data.
An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.
Ranganayaki, V; Deepa, S N
2016-01-01
Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural-model-based wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems; this paper aims to avoid both. The number of hidden neurons is selected by employing 102 criteria, and these criteria are verified against various computed error values. The proposed criteria for fixing the hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models available in the literature.
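To illustrate the core ensemble-averaging step, the sketch below trains several regressors on synthetic lagged wind-speed features and averages their forecasts. The data and member architectures are invented; the paper's MLP/Madaline/BPN/PNN mix is approximated here by MLPs of different widths.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))            # synthetic lagged wind-speed features
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + rng.normal(0, 0.1, 200)

# Train ensemble members of different hidden-layer widths
members = [MLPRegressor(hidden_layer_sizes=(h,), max_iter=2000,
                        random_state=h).fit(X, y) for h in (4, 8, 16, 32)]

# Ensemble forecast = average of the member forecasts
y_ens = np.mean([m.predict(X) for m in members], axis=0)
print("ensemble RMSE:", np.sqrt(np.mean((y_ens - y) ** 2)))
```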
Supercritical tests of a self-optimizing, variable-camber wind tunnel model
NASA Technical Reports Server (NTRS)
Levinsky, E. S.; Palko, R. L.
1979-01-01
A testing procedure was used in a 16-foot Transonic Propulsion Wind Tunnel which leads to optimum wing airfoil sections without stopping the tunnel for model changes. Because the optimization is experimental, the optimum shapes obtained incorporate various three-dimensional and nonlinear viscous and transonic effects not included in analytical optimization methods. The method is a closed-loop, computer-controlled, interactive procedure and employs a Self-Optimizing Flexible Technology wing semispan model that conformally adapts the airfoil section at two spanwise control stations to maximize or minimize various prescribed merit functions subject to both equality and inequality constraints. The model, which employed twelve independent hydraulic actuator systems and flexible skins, was also used for conventional testing. Although six of seven optimizations attempted were at least partially convergent, further improvements in model skin smoothness and hydraulic reliability are required to make the technique fully operational.
The Preliminary Design of a Standardized Spacecraft Bus for Small Tactical Satellites (Volume 1)
1996-11-01
characteristics, and not detailed design recommendations, the team decided to avoid modeling the interaction among the objective attributes. ... in the Modsat computer model are necessarily "generic" in nature to provide both flexibility in design evaluation and a foundation on which more ... the methods employed during the study, the scope of the problem, the value system used to evaluate alternatives, tradeoff studies performed, modeling
Employment of Geoscientists in the Private Sector
NASA Astrophysics Data System (ADS)
Russell, J. L.
2001-05-01
In the private sector, major employers of geoscientists engage in diverse activities ranging from resource exploration and extraction, assessment of geologic hazards, and determination of environmental impacts. These firms actively recruit, from the breadth of geoscience disciplines, technically qualified individuals with the ability to make pragmatic decisions in the context of multidisciplinary teams that commonly include non-scientists. Moreover, they expect applicants to communicate effectively verbally and in writing, as well as demonstrate skills and experience in integrating field investigations, conducting laboratory studies, and accomplishing computer modeling. These applicants should be capable of simultaneously working in multiple projects which are rapidly evolving. Successful recruiting and employment requires interactions between the job applicant and potential employer conducted with honesty and integrity. Resumes and associated transmittal letters should be directed to specific employers based on the applicant's review of information on the firm from the Internet and other sources. "Shotgun" or blanket approaches are seldom productive. Participation in pertinent professional societies, internships, and summer employment can provide valuable experiences and opportunities for networking with potential employers.
Reis, H; Papadopoulos, M G; Grzybowski, A
2006-09-21
This is the second part of a study to elucidate the local field effects on the nonlinear optical properties of p-nitroaniline (pNA) in three solvents of different multipolar character, that is, cyclohexane (CH), 1,4-dioxane (DI), and tetrahydrofuran (THF), employing a discrete description of the solutions. By the use of liquid structure information from molecular dynamics simulations and molecular properties computed by high-level ab initio methods, the local field and local field gradients on p-nitroaniline and the solvent molecules are computed in quadrupolar approximation. To validate the simulations and the induction model, static and dynamic (non)linear properties of the pure solvents are also computed. With the exception of the static dielectric constant of pure THF, a good agreement between computed and experimental refractive indices, dielectric constants, and third harmonic generation signals is obtained for the solvents. For the solutions, it is found that multipole moments up to two orders higher than quadrupole have a negligible influence on the local fields on pNA, if a simple distribution model is employed for the electric properties of pNA. Quadrupole effects are found to be nonnegligible in all three solvents but are especially pronounced in the 1,4-dioxane solvent, in which the local fields are similar to those in THF, although the dielectric constant of DI is 2.2 and that of the simulated THF is 5.4. The electric-field-induced second harmonic generation (EFISH) signal and the hyper-Rayleigh scattering signal of pNA in the solutions computed with the local field are in good to fair agreement with available experimental results. This confirms the effect of the "dioxane anomaly" also on nonlinear optical properties. Predictions based on an ellipsoidal Onsager model as applied by experimentalists are in very good agreement with the discrete model predictions. This is in contrast to a recent discrete reaction field calculation of pNA in 1,4-dioxane, which found that the predicted first hyperpolarizability of pNA deviated strongly from the predictions obtained using Onsager-Lorentz local field factors.
Efficient least angle regression for identification of linear-in-the-parameters models
Beach, Thomas H.; Rezgui, Yacine
2017-01-01
Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods, in that it is neither too greedy nor too slow. It is closely related to L1 norm optimization, which has the advantage of low prediction variance through sacrificing part of model bias property in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes. The direct involvement of matrix inversions is thereby relieved. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency, compared with the original approach where the well-known efficient Cholesky decomposition is involved in solving least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
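As a hands-on illustration of the underlying selection mechanism (generic LARS, not the authors' recursive formulation), scikit-learn's Lars recovers the influential terms of a sparse linear-in-the-parameters model; the data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))                  # candidate model terms
beta = np.zeros(10)
beta[[1, 4, 7]] = [3.0, -2.0, 1.5]              # only three terms are active
y = X @ beta + rng.normal(0, 0.1, 100)

model = Lars(n_nonzero_coefs=3).fit(X, y)
print("selected terms:", np.flatnonzero(model.coef_))   # expect [1, 4, 7]
```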
Defect Genome of Cubic Perovskites for Fuel Cell Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balachandran, Janakiraman; Lin, Lianshan; Anchell, Jonathan S.
Heterogeneities such as point defects, inherent to material systems, can profoundly influence material functionalities critical for numerous energy applications. This influence in principle can be identified and quantified through development of large defect data sets which we call the defect genome, employing high-throughput ab initio calculations. However, high-throughput screening of material models with point defects dramatically increases the computational complexity and chemical search space, creating major impediments toward developing a defect genome. In this paper, we overcome these impediments by employing computationally tractable ab initio models driven by highly scalable workflows, to study formation and interaction of various point defects (e.g., O vacancies, H interstitials, and Y substitutional dopants) in over 80 cubic perovskites, for potential proton-conducting ceramic fuel cell (PCFC) applications. The resulting defect data sets identify several promising perovskite compounds that can exhibit high proton conductivity. Furthermore, the data sets also enable us to identify and explain, insightful and novel correlations among defect energies, material identities, and defect-induced local structural distortions. Finally, such defect data sets and resultant correlations are necessary to build statistical machine learning models, which are required to accelerate discovery of new materials.
NASA Technical Reports Server (NTRS)
Shipman, D. L.
1972-01-01
The development of a model to simulate the information system of a program management type of organization is reported. The model statistically determines the following parameters: type of messages, destinations, delivery durations, type of processing, processing durations, communication channels, outgoing messages, and priorities. The total management information system of the program management organization is considered, including formal and informal information flows and both facilities and equipment. The model is written in the General Purpose System Simulation 2 computer programming language for use on the Univac 1108, Executive 8 computer. The model is simulated on a daily basis and collects queue and resource utilization statistics for each decision point. The statistics are then used by management to evaluate proposed resource allocations, to evaluate proposed changes to the system, and to identify potential problem areas. The model employs both empirical and theoretical distributions which are adjusted to simulate the information flow being studied.
Modulation of the error-related negativity by response conflict.
Danielmeier, Claudia; Wessel, Jan R; Steinhauser, Marco; Ullsperger, Markus
2009-11-01
An arrow version of the Eriksen flanker task was employed to investigate the influence of conflict on the error-related negativity (ERN). The degree of conflict was modulated by varying the distance between flankers and the target arrow (CLOSE and FAR conditions). Error rates and reaction time data from a behavioral experiment were used to adapt a connectionist model of this task. This model was based on the conflict monitoring theory and simulated behavioral and event-related potential data. The computational model predicted an increased ERN amplitude in FAR incompatible (the low-conflict condition) compared to CLOSE incompatible errors (the high-conflict condition). A subsequent ERP experiment confirmed the model predictions. The computational model explains this finding with larger post-response conflict in far trials. In addition, data and model predictions of the N2 and the LRP support the conflict interpretation of the ERN.
Computer simulations of austenite decomposition of microalloyed 700 MPa steel during cooling
NASA Astrophysics Data System (ADS)
Pohjonen, Aarne; Paananen, Joni; Mourujärvi, Juho; Manninen, Timo; Larkiola, Jari; Porter, David
2018-05-01
We present computer simulations of austenite decomposition to ferrite and bainite during cooling. The phase transformation model is based on Johnson-Mehl-Avrami-Kolmogorov type equations. The model is parameterized by numerical fitting to continuous cooling data obtained with a Gleeble thermo-mechanical simulator, and it can be used to calculate the transformation behavior along any cooling path. The phase transformation model has been coupled with heat conduction simulations. The model includes separate parameters to account for the incubation stage and for the kinetics after the transformation has started. The incubation time is calculated by inversion of the CCT transformation start time. For the heat conduction simulations we employed our own parallelized 2-dimensional finite difference code. In addition, the transformation model was implemented as a subroutine in the commercial finite-element software Abaqus, which allows for the use of the model in various engineering applications.
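A minimal sketch of the two model ingredients, isothermal JMAK kinetics plus a Scheil-type additivity rule for the incubation stage, follows; the τ(T) function, cooling rate, and rate constants are hypothetical, not the fitted parameters of the paper.

```python
import numpy as np

def jmak_fraction(t, k, n):
    """Isothermal JMAK transformed fraction: X(t) = 1 - exp(-(k t)^n)."""
    return 1.0 - np.exp(-(k * t) ** n)

def incubation_consumed(times, temps, tau_of_T):
    """Scheil additivity: transformation starts when sum(dt / tau(T)) reaches 1."""
    dt = np.diff(times)
    return np.cumsum(dt / tau_of_T(temps[:-1]))

tau = lambda T: 5.0 + (T - 600.0) ** 2 / 1000.0   # hypothetical incubation time tau(T), s
t = np.linspace(0.0, 60.0, 601)                   # time, s
T = 800.0 - 5.0 * t                               # linear cooling at 5 K/s
S = incubation_consumed(t, T, tau)
print("transformation starts near t =", t[1:][S >= 1.0][0], "s")
print("isothermal X after 10 s (k=0.05/s, n=2):", jmak_fraction(10.0, 0.05, 2.0))
```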
Darcy-Forchheimer flow with Cattaneo-Christov heat flux and homogeneous-heterogeneous reactions
Hayat, Tasawar; Haider, Farwa; Alsaedi, Ahmed
2017-01-01
Here Darcy-Forchheimer flow of viscoelastic fluids has been analyzed in the presence of Cattaneo-Christov heat flux and homogeneous-heterogeneous reactions. Results for two viscoelastic fluids are obtained and compared. A linear stretching surface has been used to generate the flow. Flow in the porous medium is characterized by the Darcy-Forchheimer model. A modified version of Fourier's law through the Cattaneo-Christov heat flux is employed. Equal diffusion coefficients are employed for both the reactants and the autocatalyst. An optimal homotopy scheme is employed to develop solutions of the nonlinear problems. Solution expressions for the velocity, temperature and concentration fields are provided. The skin friction coefficient and heat transfer rate are computed and analyzed. The temperature and thermal boundary layer thickness are lower for the Cattaneo-Christov heat flux model than for the classical Fourier law of heat conduction. Moreover, the homogeneous and heterogeneous reaction parameters have opposite behaviors for the concentration field. PMID:28380014
NASA Astrophysics Data System (ADS)
Fawzy, Diaa E.; Stȩpień, K.
2018-03-01
In the current study we present ab initio numerical computations of the generation and propagation of longitudinal waves in magnetic flux tubes embedded in the atmospheres of late-type stars. The interaction between convective turbulence and the magnetic structure is computed, and the obtained longitudinal wave energy flux is used in a self-consistent manner to excite the small-scale magnetic flux tubes. We reduce the number of assumptions made in our previous studies by considering the full magnetic wave energy fluxes and spectra as well as time-dependent ionization (TDI) of hydrogen, employing multi-level Ca II atomic models, and taking into account departures from local thermodynamic equilibrium. Our models employ the recently confirmed value of the mixing-length parameter α=1.8. Regions with strong magnetic fields (magnetic filling factors of up to 50%) are also considered. The computed Ca II emission fluxes show a strong dependence on the magnetic filling factors, and the effect of TDI turns out to be very important in the atmospheres of late-type stars heated by acoustic and magnetic waves. The emitted Ca II fluxes with TDI included in the model are decreased by factors that range from 1.4 to 5.5 for G0V and M0V stars, respectively, compared to models that do not consider TDI. The results of our computations are compared with observations. Excellent agreement between the observed and predicted basal flux is obtained. The predicted trend of Ca II emission flux with magnetic filling factor and stellar surface temperature also agrees well with the observations, but the calculated maximum fluxes for stars of different spectral types are about two times lower than observations. Though the longitudinal MHD waves considered here are important for chromosphere heating in high-activity stars, additional heating mechanism(s) are apparently present.
User’s Guide for the VTRPE (Variable Terrain Radio Parabolic Equation) Computer Model
1991-10-01
propagation effects and antenna characteristics in radar system performance calculations, the radar transmission equation is often employed. Following Kerr, ... electromagnetic wave equations for the complex electric and magnetic radiation fields. The model accounts for the effects of nonuniform atmospheric refractivity ... mission equation, that is used in the performance prediction and analysis of radar and communication systems. Optimized fast Fourier transform (FFT
An advanced technique for the prediction of decelerator system dynamics.
NASA Technical Reports Server (NTRS)
Talay, T. A.; Morris, W. D.; Whitlock, C. H.
1973-01-01
An advanced two-body six-degree-of-freedom computer model employing an indeterminate structures approach has been developed for the parachute deployment process. The program determines both vehicular and decelerator responses to aerodynamic and physical property inputs. A better insight into the dynamic processes that occur during parachute deployment has been developed. The model is of value in sensitivity studies to isolate important parameters that affect the vehicular response.
Terrestrial solar spectral modeling. [SOLTRAN, BRITE, and FLASH codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bird, R.E.
The utility of accurate computer codes for calculating the solar spectral irradiance under various atmospheric conditions was recognized. New absorption and extraterrestrial spectral data are introduced. Progress is made in radiative transfer modeling outside of the solar community, especially for space and military applications. Three rigorous radiative transfer codes SOLTRAN, BRITE, and FLASH are employed. The SOLTRAN and BRITE codes are described and results from their use are presented.
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-08
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation in raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration.
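The map/accumulate structure can be sketched in a few lines of plain Python; the scatterer list and cell keys are invented, and this stands in for, rather than reproduces, the Hadoop implementation.

```python
from functools import reduce

# Map: each scatterer emits a ((range_bin, pulse), amplitude) pair;
# Reduce: amplitudes landing in the same raw-data cell are accumulated,
# which is the irregular accumulation the MapReduce design targets.
def map_scatterer(scatterer):
    rng_bin, pulse, amp = scatterer
    return ((rng_bin, pulse), amp)

def reduce_cells(acc, kv):
    key, amp = kv
    acc[key] = acc.get(key, 0.0) + amp
    return acc

scatterers = [(0, 0, 1.0), (0, 0, 0.5), (3, 1, 2.0)]   # invented targets
raw_data = reduce(reduce_cells, map(map_scatterer, scatterers), {})
print(raw_data)   # {(0, 0): 1.5, (3, 1): 2.0}
```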
COMPUTATIONAL METHODOLOGIES for REAL-SPACE STRUCTURAL REFINEMENT of LARGE MACROMOLECULAR COMPLEXES
Goh, Boon Chong; Hadden, Jodi A.; Bernardi, Rafael C.; Singharoy, Abhishek; McGreevy, Ryan; Rudack, Till; Cassidy, C. Keith; Schulten, Klaus
2017-01-01
The rise of the computer as a powerful tool for model building and refinement has revolutionized the field of structure determination for large biomolecular systems. Despite the wide availability of robust experimental methods capable of resolving structural details across a range of spatiotemporal resolutions, computational hybrid methods have the unique ability to integrate the diverse data from multimodal techniques such as X-ray crystallography and electron microscopy into consistent, fully atomistic structures. Here, commonly employed strategies for computational real-space structural refinement are reviewed, and their specific applications are illustrated for several large macromolecular complexes: ribosome, virus capsids, chemosensory array, and photosynthetic chromatophore. The increasingly important role of computational methods in large-scale structural refinement, along with current and future challenges, is discussed. PMID:27145875
Cyberdyn supercomputer - a tool for imaging geodynamic processes
NASA Astrophysics Data System (ADS)
Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita
2014-05-01
More and more physical processes that develop within the deep interior of our planet, but have a significant impact on the Earth's shape and structure, are becoming subject to numerical modelling using high performance computing facilities. Nowadays, an increasing number of research centers worldwide decide to make use of such powerful and fast computers for simulating complex phenomena involving fluid dynamics and to gain deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department in the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible was jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted for about three years, ending October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a Qlogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds, employing only a fraction of the computing power (20%). After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the East Carpathians tectonic and geodynamic evolution, including the Neogene magmatic activity and the intriguing intermediate-depth seismicity within the so-called Vrancea zone. The CFD code for numerical modelling is CitcomS, a widely employed open-source package specifically developed for earth sciences. Several preliminary 3D geodynamic models simulating an assumed subduction or the effect of a mantle plume will be presented and discussed.
A collision scheme for hybrid fluid-particle simulation of plasmas
NASA Astrophysics Data System (ADS)
Nguyen, Christine; Lim, Chul-Hyun; Verboncoeur, John
2006-10-01
Desorption phenomena at the wall of a tokamak can lead to the introduction of impurities at the edge of a thermonuclear plasma. In particular, the use of carbon as a constituent of the tokamak wall, as planned for ITER, requires the study of carbon and hydrocarbon transport in the plasma, including understanding of collisional interaction with the plasma. These collisions can result in new hydrocarbons, hydrogen, secondary electrons and so on. Computational modeling is a primary tool for studying these phenomena. XOOPIC [1] and OOPD1 are widely used computer modeling tools for the simulation of plasmas. Both are particle type codes. Particle simulation gives more kinetic information than fluid simulation, but more computation time is required. In order to reduce this disadvantage, hybrid simulation has been developed and applied to the modeling of collisions. Present particle simulation tools such as XOOPIC and OOPD1 employ a Monte Carlo model for the collisions between particle species and a neutral background gas defined by its temperature and pressure. In fluid-particle hybrid plasma models, collisions include combinations of particle and fluid interactions categorized by projectile-target pairing: particle-particle, particle-fluid, and fluid-fluid. For verification of this hybrid collision scheme, we compare simulation results to analytic solutions for classical plasma models. [1] Verboncoeur et al. Comput. Phys. Comm. 87, 199 (1995).
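The particle-versus-background case common to both codes can be sketched with the standard Monte Carlo collision test, P_coll = 1 − exp(−n σ(v) v Δt); the gas density, cross-section, and time step below are illustrative values, not ITER parameters.

```python
import numpy as np

def collide_flags(v, n_gas, sigma, dt, rng):
    """Per-particle collision decision against a neutral background gas,
    using the standard Monte Carlo collision probability."""
    p = 1.0 - np.exp(-n_gas * sigma(v) * v * dt)
    return rng.random(v.shape) < p

rng = np.random.default_rng(2)
v = np.abs(rng.normal(1e5, 3e4, size=100000))    # particle speeds, m/s
sigma = lambda v: 5e-20 * np.ones_like(v)        # constant cross-section, m^2
hits = collide_flags(v, n_gas=1e20, sigma=sigma, dt=1e-9, rng=rng)
print("collision fraction per step:", hits.mean())
```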
Neural network feedforward control of a closed-circuit wind tunnel
NASA Astrophysics Data System (ADS)
Sutcliffe, Peter
Accurate control of wind-tunnel test conditions can be dramatically enhanced using feedforward control architectures which allow operating conditions to be maintained at a desired setpoint through the use of mathematical models as the primary source of prediction. However, as the desired accuracy of the feedforward prediction increases, the model complexity also increases, so that an ever increasing computational load is incurred. This drawback can be avoided by employing a neural network that is trained offline using the output of a high fidelity wind-tunnel mathematical model, so that the neural network can rapidly reproduce the predictions of the model with a greatly reduced computational overhead. A novel neural network database generation method, developed through the use of fractional factorial arrays, was employed such that a neural network can accurately predict wind-tunnel parameters across a wide range of operating conditions whilst trained upon a highly efficient database. The subsequent network was incorporated into a Neural Network Model Predictive Control (NNMPC) framework to allow an optimised output schedule capable of providing accurate control of the wind-tunnel operating parameters. Facilitation of an optimised path through the solution space is achieved through the use of a chaos optimisation algorithm such that a more globally optimum solution is likely to be found with less computational expense than the gradient descent method. The parameters associated with the NNMPC such as the control horizon are determined through the use of a Taguchi methodology enabling the minimum number of experiments to be carried out to determine the optimal combination. The resultant NNMPC scheme was employed upon the Hessert Low Speed Wind Tunnel at the University of Notre Dame to control the test-section temperature such that it follows a pre-determined reference trajectory during changes in the test-section velocity. Experimental testing revealed that the derived NNMPC controller provided an excellent level of control over the test-section temperature in adherence to a reference trajectory even when faced with unforeseen disturbances such as rapid changes in the operating environment.
Effects of walking in deep venous thrombosis: a new integrated solid and fluid mechanics model.
López, Josep M; Fortuny, Gerard; Puigjaner, Dolors; Herrero, Joan; Marimon, Francesc; Garcia-Bennett, Josep
2017-05-01
Deep venous thrombosis (DVT) is a common disease. Large thrombi in venous vessels cause bad blood circulation and pain; and when a blood clot detaches from a vein wall, it causes an embolism whose consequences range from mild to fatal. Walking is recommended to DVT patients as a therapeutic complement. In this study the mechanical effects of walking on a specific patient with DVT were simulated by means of an unprecedented integration of 3 elements: a real geometry, a biomechanical model of body tissues, and a computational fluid dynamics study. A set of computed tomography images of a patient's leg with a thrombus in the popliteal vein was employed to reconstruct a geometry model. Then a biomechanical model was used to compute the new deformed geometry of the vein as a function of the fiber stretch level of the semimembranosus muscle. Finally, a computational fluid dynamics study was performed to compute the blood flow and the wall shear stress (WSS) at the vein and thrombus walls. Calculations showed that either a lengthening or shortening of the semimembranosus muscle led to a decrease of WSS levels of up to 10%. Notwithstanding, changes in blood viscosity properties or blood flow rate may easily have a greater impact on WSS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Kunkun; Congedo, Pietro M.
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
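The third adaptivity level can be illustrated with a generic forward stepwise selection by residual correlation; the basis matrix and data below are synthetic, and this is a simplified stand-in for the authors' procedure.

```python
import numpy as np

def stepwise_select(Phi, y, n_terms):
    """Forward stepwise selection: repeatedly pick the basis column most
    correlated with the current residual, then refit by least squares."""
    selected, residual, coef = [], y.copy(), None
    for _ in range(n_terms):
        corr = np.abs(Phi.T @ residual)
        corr[selected] = -np.inf                 # skip already-chosen terms
        selected.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)
        residual = y - Phi[:, selected] @ coef
    return selected, coef

rng = np.random.default_rng(5)
Phi = rng.normal(size=(200, 30))                 # candidate PDD basis evaluations
y = 2.0 * Phi[:, 3] - 1.0 * Phi[:, 12] + rng.normal(0, 0.05, 200)
print(stepwise_select(Phi, y, 2)[0])             # expect [3, 12]
```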
Kaminski, George A.; Stern, Harry A.; Berne, B. J.; Friesner, Richard A.; Cao, Yixiang X.; Murphy, Robert B.; Zhou, Ruhong; Halgren, Thomas A.
2014-01-01
We present results of developing a methodology suitable for producing molecular mechanics force fields with explicit treatment of electrostatic polarization for proteins and other molecular systems of biological interest. The technique allows simulation of realistic-size systems. Employing high-level ab initio data as a target for fitting allows us to avoid the problem of the lack of detailed experimental data. Using fast and reliable quantum mechanical methods supplies robust fitting data for the resulting parameter sets. As a result, gas-phase many-body effects for dipeptides are captured within an average RMSD of 0.22 kcal/mol from their ab initio values, and conformational energies for the di- and tetrapeptides are reproduced within an average RMSD of 0.43 kcal/mol from their quantum mechanical counterparts. The latter is achieved in part because of the application of a novel torsional fitting technique recently developed in our group, which has already been used to greatly improve the accuracy of peptide conformational equilibrium prediction with the OPLS-AA force field. Finally, we have employed the newly developed first-generation model in computing gas-phase conformations of real proteins, as well as in molecular dynamics studies of the systems. The results show that, although the overall accuracy is no better than what can be achieved with a fixed-charges model, the methodology produces robust results, permits reasonably low computational cost, and avoids other computational problems typical for polarizable force fields. It can be considered as a solid basis for building a more accurate and complete second-generation model. PMID:12395421
Inverse Problems in Geodynamics Using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Shahnas, M. H.; Yuen, D. A.; Pysklywec, R. N.
2018-01-01
During the past few decades, numerical studies have been widely employed to explore the style of circulation and mixing in the mantle of Earth and other planets. However, in geodynamical studies, many properties from mineral physics, geochemistry, and petrology enter these numerical models. Machine learning, as a computational statistics-related technique and a subfield of artificial intelligence, has recently emerged rapidly in many fields of science and engineering. We focus here on the application of supervised machine learning (SML) algorithms to predictions of mantle flow processes. Specifically, we emphasize estimating mantle properties by employing machine learning techniques to solve an inverse problem. Using snapshots of numerical convection models as training samples, we enable machine learning models to determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. Employing support vector machine algorithms, we show that SML techniques can successfully predict the magnitude of mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex geodynamic problems in mantle dynamics by employing deep learning algorithms to put constraints on properties such as viscosity, elastic parameters, and the nature of thermal and chemical anomalies.
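A minimal sketch of the inverse-problem setup follows, with synthetic snapshot features standing in for real convection-model output: a support vector regressor is trained to map features back to the density-anomaly magnitude. The feature construction is invented for illustration.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
# Features that might be extracted from convection snapshots (e.g., mid-mantle
# velocity statistics); the forward mapping here is synthetic.
anomaly = rng.uniform(0.0, 5.0, 300)             # density anomaly magnitude, %
X = np.column_stack([np.exp(-anomaly) + rng.normal(0, 0.02, 300),
                     anomaly ** 0.5 + rng.normal(0, 0.02, 300)])

# Train on 200 snapshots, test the inverse prediction on the remaining 100
model = SVR(kernel="rbf", C=10.0).fit(X[:200], anomaly[:200])
pred = model.predict(X[200:])
print("test RMSE:", np.sqrt(np.mean((pred - anomaly[200:]) ** 2)))
```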
NASA Technical Reports Server (NTRS)
Beech, G. S.; Hampton, R. D.; Rupert, J. K.
2004-01-01
Many microgravity space-science experiments require vibratory acceleration levels that are unachievable without active isolation. The Boeing Corporation's active rack isolation system (ARIS) employs a novel combination of magnetic actuation and mechanical linkages to address these isolation requirements on the International Space Station. Effective model-based vibration isolation requires: (1) an isolation device, (2) an adequate dynamic (i.e., mathematical) model of that isolator, and (3) a suitable, corresponding controller. This Technical Memorandum documents the validation of that high-fidelity dynamic model of ARIS. The verification of this dynamics model was achieved by utilizing two commercial off-the-shelf (COTS) software tools: Deneb's ENVISION® and Online Dynamics' Autolev™. ENVISION is a robotics software package developed for the automotive industry that employs three-dimensional computer-aided design models to facilitate both forward and inverse kinematics analyses. Autolev is a DOS-based interpreter designed, in general, to solve vector-based mathematical problems and, specifically, to solve dynamics problems using Kane's method. The simplification of this model was achieved using the small-angle theorem for the joint angle of the ARIS actuators. This simplification has a profound effect on the overall complexity of the closed-form solution while yielding one easily employed using COTS control hardware.
A comparison of upwind schemes for computation of three-dimensional hypersonic real-gas flows
NASA Technical Reports Server (NTRS)
Gerbsch, R. A.; Agarwal, R. K.
1992-01-01
The method of Suresh and Liou (1992) is extended, and the resulting explicit noniterative upwind finite-volume algorithm is applied to the integration of 3D parabolized Navier-Stokes equations to model 3D hypersonic real-gas flowfields. The solver is second-order accurate in the marching direction and employs flux-limiters to make the algorithm second-order accurate, with total variation diminishing in the cross-flow direction. The algorithm is used to compute hypersonic flow over a yawed cone and over the Ames All-Body Hypersonic Vehicle. The solutions obtained agree well with other computational results and with experimental data.
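The TVD property in the cross-flow direction rests on slope limiting; a generic minmod limiter (not necessarily the specific limiter employed by the authors) behaves as sketched below.

```python
import numpy as np

def minmod(a, b):
    """Classic minmod slope limiter: zero at extrema (opposite-sign slopes),
    otherwise the one-sided slope of smaller magnitude."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

u = np.array([0.0, 0.1, 0.9, 1.0, 1.0])           # cell averages near a front
slopes = minmod(np.diff(u)[:-1], np.diff(u)[1:])  # limited slopes, interior cells
print(slopes)   # the steep jump is limited to the smaller one-sided difference
```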
Amplified crossflow disturbances in the laminar boundary layer on swept wings with suction
NASA Technical Reports Server (NTRS)
Dagenhart, J. R.
1981-01-01
Solution charts of the Orr-Sommerfeld equation for stationary crossflow disturbances are presented for 10 typical velocity profiles on a swept laminar flow control wing. The critical crossflow Reynolds number is shown to be a function of a boundary layer shape factor. Amplification rates for crossflow disturbances are shown to be proportional to the maximum crossflow velocity. A computer stability program called MARIA, employing the amplification rate data for the 10 crossflow velocity profiles, is constructed. This code is shown to adequately approximate more involved computer stability codes using less than two percent as much computer time while retaining the essential physical disturbance growth model.
Machine learning methods for classifying human physical activity from on-body accelerometers.
Mannini, Andrea; Sabatini, Angelo Maria
2010-01-01
The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems; here, some quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis devoted to the computational algorithms employed for this purpose. In particular, we motivate our current interest for classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series.
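To make the HMM-based classification concrete, a tiny Viterbi decoder over discretized acceleration levels is sketched below; the two activities and all probabilities are invented for illustration.

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Most likely hidden activity sequence for a discrete-emission HMM."""
    n_states, T = log_A.shape[0], len(obs)
    delta = log_pi + log_B[:, obs[0]]
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two activities (0=walk, 1=rest), three discretized acceleration levels.
log_A = np.log([[0.9, 0.1], [0.2, 0.8]])
log_B = np.log([[0.1, 0.3, 0.6], [0.7, 0.2, 0.1]])
log_pi = np.log([0.5, 0.5])
print(viterbi([2, 2, 1, 0, 0], log_A, log_B, log_pi))
```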
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
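A single-population special case makes the object concrete: under the standard neutral coalescent with constant population size, the expected SFS is E[ξ_i] = θ/i; momi's contribution is computing the joint, multi-population generalization efficiently. A sketch:

```python
import numpy as np

def expected_sfs(n, theta):
    """Expected site-frequency spectrum for a constant-size neutral
    population: E[xi_i] = theta / i for i = 1..n-1 (single population)."""
    i = np.arange(1, n)
    return theta / i

sfs = expected_sfs(n=10, theta=4.0)
print(np.round(sfs, 3))   # singletons are most common, decaying as 1/i
```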
Nam, Junghyun; Choo, Kim-Kwang Raymond; Han, Sangchul; Kim, Moonseong; Paik, Juryon; Won, Dongho
2015-01-01
A smart-card-based user authentication scheme for wireless sensor networks (hereafter referred to as a SCA-WSN scheme) is designed to ensure that only users who possess both a smart card and the corresponding password are allowed to gain access to sensor data and their transmissions. Despite many research efforts in recent years, it remains a challenging task to design an efficient SCA-WSN scheme that achieves user anonymity. The majority of published SCA-WSN schemes use only lightweight cryptographic techniques (rather than public-key cryptographic techniques) for the sake of efficiency, and have been demonstrated to suffer from the inability to provide user anonymity. Some schemes employ elliptic curve cryptography for better security but require sensors with strict resource constraints to perform computationally expensive scalar-point multiplications; despite the increased computational requirements, these schemes do not provide user anonymity. In this paper, we present a new SCA-WSN scheme that not only achieves user anonymity but also is efficient in terms of the computation loads for sensors. Our scheme employs elliptic curve cryptography but restricts its use only to anonymous user-to-gateway authentication, thereby allowing sensors to perform only lightweight cryptographic operations. Our scheme also enjoys provable security in a formal model extended from the widely accepted Bellare-Pointcheval-Rogaway (2000) model to capture the user anonymity property and various SCA-WSN specific attacks (e.g., stolen smart card attacks, node capture attacks, privileged insider attacks, and stolen verifier attacks).
NASA Astrophysics Data System (ADS)
Mergili, Martin; Fischer, Jan-Thomas; Krenn, Julia; Pudasaini, Shiva P.
2017-02-01
r.avaflow represents an innovative open-source computational tool for routing rapid mass flows, avalanches, or process chains from a defined release area down an arbitrary topography to a deposition area. In contrast to most existing computational tools, r.avaflow (i) employs a two-phase, interacting solid and fluid mixture model (Pudasaini, 2012); (ii) is suitable for modelling more or less complex process chains and interactions; (iii) explicitly considers both entrainment and stopping with deposition, i.e. the change of the basal topography; (iv) allows for the definition of multiple release masses and/or hydrographs; and (v) provides built-in functionalities for validation, parameter optimization, and sensitivity analysis. r.avaflow is freely available as a raster module of the GRASS GIS software, employing the programming languages Python and C along with the statistical software R. We exemplify the functionalities of r.avaflow by means of two sets of computational experiments: (1) generic process chains consisting of bulk mass and hydrograph release into a reservoir, with entrainment of the dam and impact downstream; (2) the prehistoric Acheron rock avalanche, New Zealand. The simulation results are generally plausible for (1) and, after the optimization of two key parameters, reasonably in line with the corresponding observations for (2). However, we identify some potential to enhance the analytic and numerical concepts. Further, thorough parameter studies will be necessary in order to make r.avaflow fit for reliable forward simulations of possible future mass flow events.
Parallel stochastic simulation of macroscopic calcium currents.
González-Vélez, Virginia; González-Vélez, Horacio
2007-06-01
This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca²⁺ currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca²⁺ channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca²⁺ channel to different voltage inputs to the cell. In order to provide an accurate systematic view of the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
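A minimal discrete-time sketch of a 3-state channel model in the spirit of MACACO (C ↔ O → I) follows; the rate constants are hypothetical and the voltage dependence is omitted.

```python
import numpy as np

def simulate_channels(n_ch, n_steps, dt, k_co, k_oc, k_oi, rng):
    """Stochastic 3-state channel ensemble: 0=closed, 1=open, 2=inactivated.
    The macroscopic current is proportional to the open-channel count."""
    state = np.zeros(n_ch, dtype=int)
    open_count = np.zeros(n_steps)
    for t in range(n_steps):
        u = rng.random(n_ch)
        closed, opened = state == 0, state == 1
        state[closed & (u < k_co * dt)] = 1                               # C -> O
        state[opened & (u < k_oi * dt)] = 2                               # O -> I
        state[opened & (u >= k_oi * dt) & (u < (k_oi + k_oc) * dt)] = 0   # O -> C
        open_count[t] = (state == 1).sum()
    return open_count

rng = np.random.default_rng(4)
po = simulate_channels(2000, 500, 1e-4, 3000.0, 1000.0, 500.0, rng)
print("mean open fraction:", po.mean() / 2000)
```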
Computation of Reacting Flows in Combustion Processes
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Chen, Kuo-Huey
1997-01-01
The main objective of this research was to develop an efficient three-dimensional computer code for chemically reacting flows. The main computer code developed is ALLSPD-3D, which calculates three-dimensional, chemically reacting flows with sprays. The ALLSPD code employs a coupled, strongly implicit solution procedure for turbulent spray combustion flows. A stochastic droplet model and an efficient method for the treatment of the spray source terms in the gas-phase equations are used to calculate the evaporating liquid sprays. The chemistry treatment in the code is general enough that an arbitrary number of reactions and species can be defined by the user. Also, it is written in generalized curvilinear coordinates with both multi-block and flexible internal blockage capabilities to handle complex geometries. In addition, for general industrial combustion applications, the code provides both dilution and transpiration cooling capabilities. The ALLSPD algorithm, which employs the preconditioning and eigenvalue rescaling techniques, is capable of providing efficient solutions for flows with a wide range of Mach numbers. Although written for three-dimensional flows in general, the code can be used for two-dimensional and axisymmetric flow computations as well. The code is written in such a way that it can be run on various computer platforms (supercomputers, workstations and parallel processors), and the GUI (Graphical User Interface) provides a user-friendly tool for setting up and running the code.
Flow effects of blood constitutive equations in 3D models of vascular anomalies
NASA Astrophysics Data System (ADS)
Neofytou, Panagiotis; Tsangaris, Sokrates
2006-06-01
The effects of different blood rheological models are investigated numerically utilizing two three-dimensional (3D) models of vascular anomalies, namely a stenosis and an abdominal aortic aneurysm model. The employed CFD code incorporates the SIMPLE scheme in conjunction with the finite-volume method with collocated arrangement of variables. The approximation of the convection terms is carried out using the QUICK differencing scheme, and the code also enables multi-block computations, which are needed to cope with the two-block grid structure of the current computational domain. Three non-Newtonian models are employed, namely the Casson, Power-Law and Quemada models, which have been introduced in the past for modelling the rheological behaviour of blood and cover both the viscous and the two-phase character of blood. In view of the haemodynamical mechanisms related to abnormalities in the vascular network and the role of the wall shear stress in the initiation and further development of arterial diseases, the present study focuses on the 3D flow field and in particular on the distribution as well as on both low and high values of the wall shear stress in the vicinity of the anomaly. Finally, a comparison is made between the effects of each rheological model on the aforementioned parameters. Results show marked differences between simulating blood as a Newtonian and as a non-Newtonian fluid. Furthermore, the Power-Law model behaves differently from the other models in all cases, whereas the Quemada and Casson models behave similarly for the stenosis but differently for the aneurysm.
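For reference, the three rheological closures compared above reduce to simple apparent-viscosity functions of the shear rate. The sketch below gives commonly cited forms and literature-style parameter values (not necessarily those used in the paper); gamma is the shear rate in 1/s and viscosities are in Pa·s.

    import numpy as np

    def power_law(gamma, k=0.017, n=0.708):
        # Power-Law: mu_app = k * gamma^(n-1), shear-thinning for n < 1.
        return k * np.power(gamma, n - 1.0)

    def casson(gamma, tau_y=0.005, mu_inf=0.0035):
        # Casson: sqrt(tau) = sqrt(tau_y) + sqrt(mu_inf * gamma); mu_app = tau / gamma.
        tau = (np.sqrt(tau_y) + np.sqrt(mu_inf * gamma))**2
        return tau / gamma

    def quemada(gamma, mu_p=0.0012, phi=0.45, k0=4.33, k_inf=2.07, gamma_c=1.88):
        # One common form of the Quemada model with a shear-dependent intrinsic viscosity k.
        s = np.sqrt(gamma / gamma_c)
        k = (k0 + k_inf * s) / (1.0 + s)
        return mu_p * (1.0 - 0.5 * k * phi)**-2

    gamma = np.logspace(-1, 3, 5)          # 0.1 ... 1000 1/s
    for model in (power_law, casson, quemada):
        print(model.__name__, model(gamma))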
Recognizing Spoken Words: The Neighborhood Activation Model
Luce, Paul A.; Pisoni, David B.
2012-01-01
Objective: A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition. Design: Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming. Results: The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing-impaired populations of children and adults. PMID:9504270
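The neighborhood probability rule lends itself to a compact sketch: in the Luce choice form, frequency-weighted evidence for the target competes with the frequency-weighted confusability of its neighbors. The inputs below are illustrative numbers, not values from the paper.

    def neighborhood_probability(p_stim, freq_stim, neighbors):
        # Luce-choice form: target intelligibility weighted by frequency, divided by
        # that same term plus the frequency-weighted neighbor confusabilities.
        target = p_stim * freq_stim
        competition = target + sum(p_n * f_n for p_n, f_n in neighbors)
        return target / competition

    # A word with confusable, higher-frequency neighbors is recognized less reliably:
    print(neighborhood_probability(0.8, 50, [(0.3, 200), (0.2, 120)]))   # dense neighborhood
    print(neighborhood_probability(0.8, 50, [(0.05, 10)]))               # sparse neighborhood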
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi
2016-06-01
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the curse of dimensionality, this work proposes variance-based adaptive strategies that build a cheap meta-model (i.e., a surrogate model) using the sparse PDD approach with coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure, the surrogate model retains only a few terms at any time, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The final sparse PDD representation is much smaller than the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
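A toy illustration of the third adaptivity level, assuming a greedy forward selection: candidate polynomials are added one at a time by residual correlation and the small least-squares system is refit at each step. The 1D Legendre basis here stands in for the multivariate PDD component functions.

    import numpy as np

    def stepwise_surrogate(x, y, max_terms=5, tol=1e-8):
        # Candidate basis: the first ten Legendre polynomials on [-1, 1].
        candidates = [np.polynomial.legendre.Legendre.basis(d) for d in range(10)]
        chosen, columns = [], []
        residual = y.copy()
        for _ in range(max_terms):
            # Pick the candidate most correlated with the current residual.
            scores = [abs(np.dot(p(x), residual)) for p in candidates]
            best = int(np.argmax(scores))
            chosen.append(candidates.pop(best))
            columns.append(chosen[-1](x))
            A = np.column_stack(columns)          # small design matrix: cheap refit
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            residual = y - A @ coef
            if np.linalg.norm(residual) < tol:
                break
        return chosen, coef

    x = np.linspace(-1.0, 1.0, 200)
    y = 0.5 + 2.0 * x**3                          # sparse in the Legendre basis
    terms, coef = stepwise_surrogate(x, y)
    print(len(terms), coef)                       # only the significant terms are retained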
Identification of quasi-steady compressor characteristics from transient data
NASA Technical Reports Server (NTRS)
Nunes, K. B.; Rock, S. M.
1984-01-01
The principal goal was to demonstrate that nonlinear compressor map parameters, which govern an in-stall response, can be identified from test data using parameter identification techniques. The tasks included developing and then applying an identification procedure to data generated by NASA LeRC on a hybrid computer. Two levels of model detail were employed: the first was a lumped compressor rig model; the second was a simplified turbofan model. The main outputs are the tools and procedures generated to accomplish the identification.
NASA Astrophysics Data System (ADS)
Blanco, Francesco; La Rocca, Paola; Petta, Catia; Riggi, Francesco
2009-01-01
An educational model simulation of the sound produced by lightning in the sky has been employed to demonstrate realistic signatures of thunder and its connection to the particular structure of the lightning channel. Algorithms used in the past have been revisited and implemented, making use of current computer techniques. The basic properties of the mathematical model, together with typical results and suggestions for additional developments are discussed. The paper is intended as a teaching aid for students and teachers in the context of introductory physics courses at university level.
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
The state of the art in multidimensional combustor modeling, as evidenced by the level of sophistication employed in modeling and numerical accuracy considerations, is also dictated by the available computer memory and turnaround times afforded by present-day computers. With the aim of advancing the current multi-dimensional computational tools used in the design of advanced technology combustors, a solution procedure is developed that combines the novelty of the coupled CFD/spray/scalar Monte Carlo PDF (Probability Density Function) computations on unstructured grids with the ability to run on parallel architectures. In this approach, the mean gas-phase velocity and turbulence fields are determined from a standard turbulence model, the joint composition of species and enthalpy from the solution of a modeled PDF transport equation, and a Lagrangian-based dilute spray model is used for the liquid-phase representation. Gas-turbine combustor flows are often characterized by a complex interaction between various physical processes associated with the interaction between the liquid and gas phases, droplet vaporization, turbulent mixing, heat release associated with chemical kinetics, and radiative heat transfer associated with highly absorbing and radiating species, among others. The rate-controlling processes often interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and liquid-phase evaporation in many practical combustion devices.
Learning-based computing techniques in geoid modeling for precise height transformation
NASA Astrophysics Data System (ADS)
Erol, B.; Erol, S.
2013-03-01
Precise determination of the local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical levelling technique is too laborious. A geoid model can be accurately obtained employing properly distributed benchmarks having GNSS and levelling observations using an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study attempts an evaluation of learning-based computing algorithms: artificial neural networks (ANNs), the adaptive network-based fuzzy inference system (ANFIS) and especially the wavelet neural networks (WNNs) approach in geoid surface approximation. These algorithms were developed in parallel with advances in computer technologies and have recently been used for solving complex nonlinear problems in many applications. However, they are rather new in dealing with the precise modeling of the Earth's gravity field. In the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed considering the validation results of the geoid models at the observation points. In conclusion, the ANFIS and WNN methods revealed higher prediction accuracies than the ANN and MPRE methods. Besides prediction capability, the methods were also compared and discussed from a practical point of view.
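As a sketch of the ANN variant under stated assumptions: geoid undulation N = h_GNSS - H_levelling at co-located benchmarks is fit as a function of position and then predicted at held-out points. The benchmark data below are synthetic stand-ins for a real network such as the one used in the study.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Synthetic benchmarks: positions and a smooth "geoid" surface plus noise [m].
    lat = rng.uniform(40.8, 41.3, 300)
    lon = rng.uniform(28.5, 29.5, 300)
    N = 36.0 + 0.8*(lat - 41.0) - 1.5*(lon - 29.0) + 0.05*rng.standard_normal(300)

    X = np.column_stack([lat - 41.0, lon - 29.0])   # centre inputs for better training
    model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                         random_state=0).fit(X[:250], N[:250])

    # Validate at the held-out benchmarks, as in the paper's assessment.
    rmse = np.sqrt(np.mean((model.predict(X[250:]) - N[250:])**2))
    print("validation RMSE [m]:", rmse)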
Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.
Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun
2018-01-01
Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. While the traditional Finite Element Method (FEM) promises accurate modeling of soft tissue deformation, it suffers from a slow computational process. This paper presents a Kalman filter finite element method (KF-FEM) to model soft tissue deformation in real time without sacrificing the traditional FEM accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately 10 times shorter than that of the traditional FEM, while remaining just as accurate. The normalized root-mean-square error of the proposed KF-FEM in reference to the traditional FEM is computed as 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing FEM accuracy. The proposed method also filters noises involved in the system state and measurement data.
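The filtering step at the core of such a formulation is the standard Kalman predict/update cycle, sketched below on a toy system; the matrices A, H, Q, and R here are illustrative stand-ins, not the paper's Newmark-discretized FEM operators.

    import numpy as np

    def kalman_step(x, P, z, A, H, Q, R):
        # Predict with the (discretized) dynamics model.
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Update with the measurement z.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    A = np.array([[1.0, 0.01], [-0.5, 0.99]])          # toy displacement/velocity dynamics
    H = np.array([[1.0, 0.0]])                         # only displacement is measured
    Q, R = 1e-6 * np.eye(2), np.array([[1e-3]])        # process and measurement noise
    x, P = np.zeros(2), np.eye(2)
    for z in [0.02, 0.05, 0.07]:                       # noisy displacement readings
        x, P = kalman_step(x, P, np.array([z]), A, H, Q, R)
    print(x)                                           # filtered state estimate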
A parallel computing engine for a class of time critical processes.
Nabhan, T M; Zomaya, A Y
1997-01-01
This paper focuses on the efficient parallel implementation of systems of a numerically intensive nature over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and near-optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
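A stripped-down sketch of the annealing idea, under illustrative assumptions: vertices of a small task graph are assigned to processors, and moves are accepted by the Metropolis rule against a cost mixing load imbalance and cut communication. The real PCE cost function also accounts for network topology and congestion.

    import math, random

    tasks = {0: 4.0, 1: 2.0, 2: 3.0, 3: 5.0}            # task -> compute weight
    edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.5)]     # (u, v, communication volume)
    n_proc = 2

    def cost(assign):
        loads = [0.0] * n_proc
        for t, w in tasks.items():
            loads[assign[t]] += w
        # Communication is paid only when an edge crosses processors.
        comm = sum(c for u, v, c in edges if assign[u] != assign[v])
        return max(loads) + 0.5 * comm                  # illustrative weighting

    random.seed(1)
    assign = {t: random.randrange(n_proc) for t in tasks}
    T = 5.0
    while T > 1e-3:
        t = random.choice(list(tasks))
        trial = dict(assign); trial[t] = random.randrange(n_proc)
        delta = cost(trial) - cost(assign)
        if delta < 0 or random.random() < math.exp(-delta / T):
            assign = trial                              # Metropolis acceptance
        T *= 0.995                                      # geometric cooling schedule
    print(assign, cost(assign))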
Vector computer memory bank contention
NASA Technical Reports Server (NTRS)
Bailey, D. H.
1985-01-01
A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
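The contention mechanism is easy to reproduce with a small Monte Carlo sketch: a stream of references to randomly chosen banks stalls whenever the target bank is still within its reservation time. Bank counts and the reservation time below are illustrative, but the trend matches the conclusion that more banks (or faster chips) reduce stalls.

    import random

    def run(n_banks, reserve=8, n_refs=100_000, seed=0):
        rng = random.Random(seed)
        free_at = [0] * n_banks          # CPU tick at which each bank becomes free
        tick = stalls = 0
        for _ in range(n_refs):
            bank = rng.randrange(n_banks)
            if free_at[bank] > tick:     # bank still reserved: stall until it clears
                stalls += free_at[bank] - tick
                tick = free_at[bank]
            free_at[bank] = tick + reserve
            tick += 1                    # otherwise one reference issues per tick
        return stalls / n_refs

    for n_banks in (8, 16, 64, 256):
        print(n_banks, "banks -> average stall per reference:", round(run(n_banks), 3))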
3D segmentation of annulus fibrosus and nucleus pulposus from T2-weighted magnetic resonance images
NASA Astrophysics Data System (ADS)
Castro-Mateos, Isaac; Pozo, Jose M.; Eltes, Peter E.; Del Rio, Luis; Lazary, Aron; Frangi, Alejandro F.
2014-12-01
Computational medicine aims at employing personalised computational models in diagnosis and treatment planning. The use of such models to help physicians in finding the best treatment for low back pain (LBP) is becoming popular. One of the challenges of creating such models is to derive, as a prior step, patient-specific anatomical and tissue models of the lumbar intervertebral discs (IVDs). This article presents a segmentation scheme that obtains accurate results irrespective of the degree of IVD degeneration, including pathological discs with protrusion or herniation. The segmentation algorithm, employing a novel feature selector, iteratively deforms an initial shape, which is projected first into a statistical shape model space and then into a B-spline space to improve accuracy. The method was tested on an MR dataset of 59 patients suffering from LBP. The images follow a standard T2-weighted protocol in coronal and sagittal acquisitions. These two image volumes were fused in order to overcome large inter-slice spacing. The agreement between expert-delineated structures, used here as the gold standard, and our automatic segmentation was evaluated using the Dice Similarity Index and surface-to-surface distances, obtaining a mean error of 0.68 mm in the annulus segmentation and 1.88 mm in the nucleus, which are the best results with respect to the image resolution in the current literature.
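The overlap metric used in the evaluation is simple to state: DSI = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal sketch on toy masks:

    import numpy as np

    def dice(seg_a, seg_b):
        # Dice Similarity Index between two binary segmentations.
        a, b = seg_a.astype(bool), seg_b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    auto = np.zeros((10, 10)); auto[2:7, 2:7] = 1      # toy automatic mask
    gold = np.zeros((10, 10)); gold[3:8, 3:8] = 1      # toy expert-delineated mask
    print(dice(auto, gold))                            # 0.64 for these toy masks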
Maleckar, Mary M; Edwards, Andrew G; Louch, William E; Lines, Glenn T
2017-01-01
Excitation-contraction coupling in cardiac myocytes requires calcium influx through L-type calcium channels in the sarcolemma, which gates calcium release through sarcoplasmic reticulum ryanodine receptors in a process known as calcium-induced calcium release, producing a myoplasmic calcium transient and enabling cardiomyocyte contraction. The spatio-temporal dynamics of calcium release, buffering, and reuptake into the sarcoplasmic reticulum play a central role in excitation-contraction coupling in both normal and diseased cardiac myocytes. However, further quantitative understanding of these cells' calcium machinery and the study of mechanisms that underlie both normal cardiac function and calcium-dependent etiologies in heart disease requires accurate knowledge of cardiac ultrastructure, protein distribution and subcellular function. As current imaging techniques are limited in spatial resolution, limiting insight into changes in calcium handling, computational models of excitation-contraction coupling have been increasingly employed to probe these structure-function relationships. This review will focus on the development of structural models of cardiac calcium dynamics at the subcellular level, orienting the reader broadly towards the development of models of subcellular calcium handling in cardiomyocytes. Specific focus will be given to progress in recent years in terms of multi-scale modeling employing resolved spatial models of subcellular calcium machinery. A review of the state-of-the-art will be followed by a review of emergent insights into calcium-dependent etiologies in heart disease and, finally, we will offer a perspective on future directions for related computational modeling and simulation efforts.
ERIC Educational Resources Information Center
Saavedra, Jose M.
This interactive module contains 33 windows of text and three graphics, in which Freud's topographical (unconscious, pre-conscious, and conscious) and structural (id, ego, and superego) models of the psyche are studied. Seventeen fill-in questions are interspersed within the text. The module stresses the importance of comprehending the concept of…
Computer Simulations of Epoxy Adhesive Monomer Interactions with Alumina Surfaces
1992-08-01
using Sybyl molecular modeling software on a Digital Equipment Corporation microVAX cluster. The Tripos force field was employed in the…
Towards a Sufficient Theory of Transition in Cognitive Development.
ERIC Educational Resources Information Center
Wallace, J. G.
The work reported aims at the construction of a sufficient theory of transition in cognitive development. The method of theory construction employed is computer simulation of cognitive process. The core of the model of transition presented comprises self-modification processes that, as a result of continuously monitoring an exhaustive record of…
A Simulation of AI Programming Techniques in BASIC.
ERIC Educational Resources Information Center
Mandell, Alan
1986-01-01
Explains the functions of and the techniques employed in expert systems. Offers the program "The Periodic Table Expert," as a model for using artificial intelligence techniques in BASIC. Includes the program listing and directions for its use on: Tandy 1000, 1200, and 2000; IBM PC; PC Jr; TRS-80; and Apple computers. (ML)
Employment of CB models for non-linear dynamic analysis
NASA Technical Reports Server (NTRS)
Klein, M. R. M.; Deloo, P.; Fournier-Sicre, A.
1990-01-01
The non-linear dynamic analysis of large structures is always demanding in time, effort, and CPU usage. Whenever possible, reducing the size of the mathematical model involved is of prime importance for speeding up the computational procedures. Such a reduction can be performed for the parts of the structure which behave linearly. Most of the time, the classical Guyan reduction process is used. For non-linear dynamic processes where the non-linearity is present at interfaces between different structures, Craig-Bampton models can provide very rich information and allow easy selection of the modes relevant to the phenomenon driving the non-linearity. The paper presents the employment of Craig-Bampton models combined with Newmark direct integration for solving non-linear friction problems appearing at the interface between the Hubble Space Telescope and its solar arrays during in-orbit maneuvers. Theory, implementation in the FEM code ASKA, and practical results are shown.
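A minimal Craig-Bampton reduction, assuming a toy spring-mass chain: interface DOFs are kept physically, and the interior is condensed onto static constraint modes plus a few fixed-interface normal modes. The matrices and partition below are illustrative.

    import numpy as np
    from scipy.linalg import eigh

    def craig_bampton(M, K, b, i, n_modes):
        Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
        Mii = M[np.ix_(i, i)]
        psi = -np.linalg.solve(Kii, Kib)               # static constraint modes
        w2, phi = eigh(Kii, Mii)                       # fixed-interface normal modes
        phi_k = phi[:, :n_modes]                       # keep the lowest n_modes
        nb = len(b)
        T = np.zeros((len(b) + len(i), nb + n_modes))  # CB transformation matrix
        T[np.ix_(b, range(nb))] = np.eye(nb)
        T[np.ix_(i, range(nb))] = psi
        T[np.ix_(i, range(nb, nb + n_modes))] = phi_k
        return T.T @ M @ T, T.T @ K @ T, T             # reduced M, K, and the map back

    # 4-DOF spring-mass chain; DOFs 0 and 3 are on the interface, 1 and 2 interior.
    K = np.array([[ 2., -1.,  0.,  0.],
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [ 0.,  0., -1.,  2.]])
    M = np.eye(4)
    Mr, Kr, T = craig_bampton(M, K, b=[0, 3], i=[1, 2], n_modes=1)
    print(Mr.shape, Kr.shape)                          # (3, 3) (3, 3): 4 DOFs -> 3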
Steindl, Theodora M; Crump, Carolyn E; Hayden, Frederick G; Langer, Thierry
2005-10-06
The development and application of a sophisticated virtual screening and selection protocol to identify potential, novel inhibitors of the human rhinovirus coat protein employing various computer-assisted strategies are described. A large commercially available database of compounds was screened using a highly selective, structure-based pharmacophore model generated with the program Catalyst. A docking study and a principal component analysis were carried out within the software package Cerius and served to validate and further refine the obtained results. These combined efforts led to the selection of six candidate structures, for which in vitro anti-rhinoviral activity could be shown in a biological assay.
NASA Astrophysics Data System (ADS)
Parvin, Salma; Sultana, Aysha
2017-06-01
The influence of High Intensity Focused Ultrasound (HIFU) on an obstacle in a blood vessel is studied numerically. A three-dimensional acoustics-thermal-fluid coupling model is employed to compute the temperature field around the obstacle. The model construction is based on the linear Westervelt and conjugate heat transfer equations for the obstacle and the blood vessel. The system of equations is solved using the Finite Element Method (FEM). We found from this three-dimensional numerical study that the rate of heat transfer from the obstacle increases and that both convective cooling and acoustic streaming can considerably change the temperature field.
Close to real life. [solving for transonic flow about lifting airfoils using supercomputers
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Bailey, F. Ron
1988-01-01
NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'connection machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has performed important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.
NASA Astrophysics Data System (ADS)
He, Xibing; Shinoda, Wataru; DeVane, Russell; Anderson, Kelly L.; Klein, Michael L.
2010-02-01
A coarse-grained (CG) forcefield for linear alkylbenzene sulfonates (LAS) was systematically parameterized. Thermodynamic data from experiments and structural data obtained from all-atom molecular dynamics were used as targets to parameterize CG potentials for the bonded and non-bonded interactions. The added computational efficiency permits one to employ computer simulation to probe the self-assembly of LAS aqueous solutions into different morphologies starting from a random configuration. The present CG model is shown to accurately reproduce the phase behavior of solutions of pure isomers of sodium dodecylbenzene sulfonate, despite the fact that phase behavior was not directly taken into account in the forcefield parameterization.
Daily pan evaporation modelling using a neuro-fuzzy computing technique
NASA Astrophysics Data System (ADS)
Kişi, Özgür
2006-10-01
Evaporation, as a major component of the hydrologic cycle, is important in water resources development and management. This paper investigates the ability of the neuro-fuzzy (NF) technique to improve the accuracy of daily evaporation estimation. Five different NF models comprising various combinations of daily climatic variables, that is, air temperature, solar radiation, wind speed, pressure and humidity, are developed to evaluate the effect of each of these variables on evaporation. A comparison is made between the estimates provided by the NF models and by artificial neural networks (ANNs). The Stephens-Stewart (SS) method is also considered for the comparison. Various statistical measures are used to evaluate the performance of the models. Based on the comparisons, it was found that the NF computing technique could be employed successfully in modelling the evaporation process from the available climatic data. The ANN was also found to perform better than the SS method.
QSAR modeling: where have you been? Where are you going to?
Cherkasov, Artem; Muratov, Eugene N; Fourches, Denis; Varnek, Alexandre; Baskin, Igor I; Cronin, Mark; Dearden, John; Gramatica, Paola; Martin, Yvonne C; Todeschini, Roberto; Consonni, Viviana; Kuz'min, Victor E; Cramer, Richard; Benigni, Romualdo; Yang, Chihae; Rathman, James; Terfloth, Lothar; Gasteiger, Johann; Richard, Ann; Tropsha, Alexander
2014-06-26
Quantitative structure-activity relationship modeling is one of the major computational tools employed in medicinal chemistry. However, throughout its entire history it has drawn both praise and criticism concerning its reliability, limitations, successes, and failures. In this paper, we discuss (i) the development and evolution of QSAR; (ii) the current trends, unsolved problems, and pressing challenges; and (iii) several novel and emerging applications of QSAR modeling. Throughout this discussion, we provide guidelines for QSAR development, validation, and application, which are summarized in best practices for building rigorously validated and externally predictive QSAR models. We hope that this Perspective will help communications between computational and experimental chemists toward collaborative development and use of QSAR models. We also believe that the guidelines presented here will help journal editors and reviewers apply more stringent scientific standards to manuscripts reporting new QSAR studies, as well as encourage the use of high quality, validated QSARs for regulatory decision making.
Veksler, Vladislav D; Buchler, Norbou; Hoffman, Blaine E; Cassenti, Daniel N; Sample, Char; Sugrim, Shridat
2018-01-01
Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences, such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions, (b) human factors of cyber tools to address human-system integration challenges, estimation of defender cognitive states, and opportunities for automation, (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene, and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may be initially constructed at the group level based on mean tendencies of each subject's subgroup, based on known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting.
Computational Aeroelastic Analysis of the Semi-Span Super-Sonic Transport (S4T) Wind-Tunnel Model
NASA Technical Reports Server (NTRS)
Sanetrik, Mark D.; Silva, Walter A.; Hur, Jiyoung
2012-01-01
A summary of the computational aeroelastic analysis for the Semi-Span Super-Sonic Transport (S4T) wind-tunnel model is presented. A broad range of analysis techniques, including linear, nonlinear and Reduced Order Models (ROMs) were employed in support of a series of aeroelastic (AE) and aeroservoelastic (ASE) wind-tunnel tests conducted in the Transonic Dynamics Tunnel (TDT) at NASA Langley Research Center. This research was performed in support of the ASE element in the Supersonics Program, part of NASA's Fundamental Aeronautics Program. The analysis concentrated on open-loop flutter predictions, which were in good agreement with experimental results. This paper is one in a series that comprise a special S4T technical session, which summarizes the S4T project.
Written and Computer-Mediated Accounting Communication Skills: An Employer Perspective
ERIC Educational Resources Information Center
Jones, Christopher G.
2011-01-01
Communication skills are a fundamental personal competency for a successful career in accounting. What is not so obvious is the specific written communication skill set employers look for and the extent those skills are computer mediated. Using survey research, this article explores the particular skills employers desire and their satisfaction…
ERIC Educational Resources Information Center
Adodo, S. O.; Adewole, Timothy
2013-01-01
This study investigated acquired and required competencies in interactive computer technology (ICT) in the labour market; data were collected from employers and employees. The study is a descriptive research of the survey type. The population of the study consisted of unemployed graduates, employed graduates and various parastatals where graduates seek for…
A fast recursive algorithm for molecular dynamics simulation
NASA Technical Reports Server (NTRS)
Jain, A.; Vaidehi, N.; Rodriguez, G.
1993-01-01
The present recursive algorithm for solving molecular systems' dynamical equations of motion employs internal variable models that reduce such simulations' computation time by an order of magnitude, relative to Cartesian models. Extensive use is made of spatial operator methods recently developed for analysis and simulation of the dynamics of multibody systems. A factor-of-450 speedup over the conventional O(N-cubed) algorithm is demonstrated for the case of a polypeptide molecule with 400 residues.
Trygve Haavelmo and the Emergence of Causal Calculus
2014-06-01
…Is this employer guilty of gender discrimination? Formally, each query Qi ∈ Q should be computable from a fully specified theoretical model M in… (Pearl, 2006) and cyclic (Phiromswad and Hoover, 2013) models. The instrumental inequality (Pearl, 2009a, p. 279) and tight bounds on the binary Roy…
Extended MHD modeling of nonlinear instabilities in fusion and space plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Germaschewski, Kai
A number of different sub-projects were pursued within this DOE early career project. The primary focus was on using fully nonlinear, curvilinear, extended MHD simulations of instabilities with applications to fusion and space plasmas. In particular, we performed comprehensive studies of the dynamics of the double tearing mode in different regimes and configurations, using Cartesian and cylindrical geometry and investigating both linear and non-linear dynamics. In addition to traditional extended MHD involving the Hall term and electron pressure gradient, we also employed a new multi-fluid moment model, which shows great promise for incorporating kinetic effects, in particular off-diagonal elements of the pressure tensor, in a fluid model, which is naturally much cheaper computationally than fully kinetic particle or Vlasov simulations. We used our Vlasov code for detailed studies of how weak collisions affect plasma echoes. In addition, we played an important supporting role working with the PPPL theory group around Will Fox and Amitava Bhattacharjee, providing simulation support for HED plasma experiments performed at high-powered laser facilities like OMEGA-EP in Rochester, NY. This project has supported a great number of computational advances in our fluid and kinetic plasma models, and has been crucial to winning multiple INCITE computer time awards that supported our computational modeling.
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
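A sketch of numerical STM propagation under simplifying assumptions: plain two-body gravity integrated together with the variational equations by SciPy's eighth-order Dormand-Prince integrator (DOP853). A high-fidelity tool would add third-body and solar-radiation-pressure accelerations and their partials; the initial state below is an illustrative near-circular orbit.

    import numpy as np
    from scipy.integrate import solve_ivp

    MU = 398600.4418  # Earth's gravitational parameter [km^3/s^2]

    def dynamics_with_stm(t, y):
        r, v = y[:3], y[3:6]
        Phi = y[6:].reshape(6, 6)
        rn = np.linalg.norm(r)
        a = -MU * r / rn**3
        # Gravity-gradient Jacobian d(a)/d(r) for the variational equations.
        G = MU * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)
        A = np.zeros((6, 6))
        A[:3, 3:] = np.eye(3)
        A[3:, :3] = G
        dPhi = A @ Phi                    # Phi' = A(t) * Phi
        return np.concatenate([v, a, dPhi.ravel()])

    y0 = np.concatenate([[7000.0, 0.0, 0.0],      # position [km]
                         [0.0, 7.546, 0.0],       # near-circular velocity [km/s]
                         np.eye(6).ravel()])      # STM starts as identity
    sol = solve_ivp(dynamics_with_stm, (0.0, 3600.0), y0,
                    method="DOP853", rtol=1e-10, atol=1e-10)
    Phi_tf = sol.y[6:, -1].reshape(6, 6)  # sensitivity of final state to initial state
    print(Phi_tf[0, :])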
The effective application of a discrete transition model to explore cell-cycle regulation in yeast
2013-01-01
Background: Bench biologists often do not take part in the development of computational models for their systems, and therefore they frequently employ them as "black boxes". Our aim was to construct and test a model that does not depend on the availability of quantitative data and can be directly used without a need for an intensive computational background. Results: We present a discrete transition model. We used the cell-cycle in budding yeast as a paradigm for a complex network, demonstrating phenomena such as sequential protein expression and activity, and cell-cycle oscillation. The structure of the network was validated by its response to computational perturbations such as mutations, and by its response to mating pheromone or nitrogen depletion. The model has a strong predictive capability, demonstrating how the activity of a specific transcription factor, Hcm1, is regulated, and what determines commitment of cells to enter and complete the cell-cycle. Conclusion: The model presented herein is intuitive, yet is expressive enough to elucidate the intrinsic structure and qualitative behavior of large and complex regulatory networks. Moreover, our model allowed us to examine multiple hypotheses in a simple and intuitive manner, giving rise to testable predictions. This methodology can be easily integrated as a useful approach for the study of networks, enriching experimental biology with computational insights. PMID:23915717
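The flavor of such a discrete transition model can be conveyed in a few lines: each node is ON or OFF, the next state follows from logic rules, and perturbations such as deletion mutants amount to pinning a node OFF. The 3-node toy network below is illustrative, not the paper's yeast cell-cycle network; note that it oscillates, which is the qualitative behavior of interest.

    # Toy discrete transition network: states are booleans, updated synchronously.
    rules = {
        "A": lambda s: not s["C"],          # A is repressed by C
        "B": lambda s: s["A"],              # B is activated by A
        "C": lambda s: s["B"],              # C is activated by B
    }

    def step(state, knockout=None):
        nxt = {node: rule(state) for node, rule in rules.items()}
        if knockout:
            nxt[knockout] = False           # simulate a deletion mutant
        return nxt

    state = {"A": True, "B": False, "C": False}
    for t in range(6):
        print(t, state)
        state = step(state)                 # the loop oscillates with period 6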
A new class of actuator surface models for wind turbines
NASA Astrophysics Data System (ADS)
Yang, Xiaolei; Sotiropoulos, Fotis
2018-05-01
The actuator line model has been widely employed in wind turbine simulations. However, the standard actuator line model does not include a model for the turbine nacelle, which can significantly impact turbine wake characteristics as shown in the literature. Another disadvantage of the standard actuator line model is that it cannot resolve more geometrical features of the turbine blades even on a finer mesh. To alleviate these disadvantages of the standard model, we develop a new class of actuator surface models for turbine blades and nacelle that takes into account more geometrical details of the blades and includes the effect of the nacelle. In the actuator surface model for the blade, the aerodynamic forces calculated using the blade element method are distributed from the surface formed by the foil chords at different radial locations. In the actuator surface model for the nacelle, the forces are distributed from the actual nacelle surface, with the normal force component computed in the same way as in the direct-forcing immersed boundary method and the tangential force component computed using a friction coefficient and a reference velocity of the incoming flow. The actuator surface model for the nacelle is evaluated by simulating the flow over periodically placed nacelles. Both the actuator surface simulation and a wall-resolved large-eddy simulation are carried out. The comparison shows that the actuator surface model is able to give acceptable results, especially at far-wake locations, on a very coarse mesh. It is noted that although this model is employed for the turbine nacelle in this work, it is also applicable to other bluff bodies. The capability of the actuator surface model in predicting turbine wakes is assessed by simulating the flow over the MEXICO (Model Experiments in Controlled Conditions) turbine and a hydrokinetic turbine.
NASA Astrophysics Data System (ADS)
Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.
2016-07-01
Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac-distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, does not account for the interaction between turbulence and chemical kinetics, nor for the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed-PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames and H2-air supersonic combustion.
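The presumed-PDF machinery for the mixture fraction reduces to a moment-matched integral, sketched below: given the mean and variance of Z from the turbulence model, any flamelet-tabulated quantity phi(Z) is averaged over a beta distribution with the same moments. The function phi here is an illustrative placeholder for a flamelet-table lookup.

    import numpy as np
    from scipy.stats import beta
    from scipy.integrate import quad

    def beta_average(phi, z_mean, z_var):
        # Moment-matched shape parameters of the beta distribution on [0, 1].
        g = z_mean * (1.0 - z_mean) / z_var - 1.0
        a, b = z_mean * g, (1.0 - z_mean) * g
        integrand = lambda z: phi(z) * beta.pdf(z, a, b)
        val, _ = quad(integrand, 0.0, 1.0)
        return val

    phi = lambda z: 4.0 * z * (1.0 - z)          # stand-in for a tabulated quantity
    print(beta_average(phi, z_mean=0.3, z_var=0.02))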
Neilson, Matthew P; Mackenzie, John A; Webb, Steven D; Insall, Robert H
2010-11-01
In this paper we present a computational tool that enables the simulation of mathematical models of cell migration and chemotaxis on an evolving cell membrane. Recent models require the numerical solution of systems of reaction-diffusion equations on the evolving cell membrane and then the solution state is used to drive the evolution of the cell edge. Previous work involved moving the cell edge using a level set method (LSM). However, the LSM is computationally very expensive, which severely limits the practical usefulness of the algorithm. To address this issue, we have employed the parameterised finite element method (PFEM) as an alternative method for evolving a cell boundary. We show that the PFEM is far more efficient and robust than the LSM. We therefore suggest that the PFEM potentially has an essential role to play in computational modelling efforts towards the understanding of many of the complex issues related to chemotaxis.
Computational simulation of the creep-rupture process in filamentary composite materials
NASA Technical Reports Server (NTRS)
Slattery, Kerry T.; Hackett, Robert M.
1991-01-01
A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.
Hu, Yi-Chung
2014-01-01
On the basis of fuzzy regression, computational models in intelligence such as neural networks have the capability to be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers for interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches are related to whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses multilayer perceptron to construct the robust nonlinear interval regression model using the genetic algorithm. Outliers beyond or beneath the data interval will impose slight effect on the determination of data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.
NASA Technical Reports Server (NTRS)
Nesbitt, James A.
2000-01-01
A finite-difference computer program (COSIM) has been written which models the one-dimensional, diffusional transport associated with high-temperature oxidation and interdiffusion of overlay-coated substrates. The program predicts concentration profiles for up to three elements in the coating and substrate after various oxidation exposures. Surface recession due to solute loss is also predicted. Ternary cross terms and concentration-dependent diffusion coefficients are taken into account. The program also incorporates a previously developed oxide growth and spalling model to simulate either isothermal or cyclic oxidation exposures. In addition to predicting concentration profiles after various oxidation exposures, the program can also be used to predict coating life based on a concentration-dependent failure criterion (e.g., the surface solute content drops to two percent). The computer code, written in an extension of FORTRAN 77, employs numerous subroutines to make the program flexible and easily modifiable to other coating oxidation problems.
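The core transport calculation can be sketched with an explicit one-dimensional finite-difference scheme: a solute-rich coating interdiffuses with the substrate while the surface loses solute to the growing oxide. The diffusivity, loss rate, and geometry below are illustrative constants; the actual code additionally handles ternary cross terms and concentration-dependent diffusivities.

    import numpy as np

    D = 1e-14                 # m^2/s, illustrative interdiffusion coefficient
    k_loss = 1e-6             # 1/s, illustrative first-order surface loss to the oxide
    dx, nx = 1e-6, 200        # 1-um grid over a 200-um coating + substrate slab
    dt = 0.4 * dx**2 / D      # stable explicit time step (dt <= 0.5 dx^2 / D)
    c = np.where(np.arange(nx) < 50, 0.20, 0.05)   # 20% solute coating on 5% substrate

    for _ in range(20000):    # roughly nine days of exposure at these scales
        lap = (c[2:] - 2.0*c[1:-1] + c[:-2]) / dx**2
        c[1:-1] += dt * D * lap
        c[0] = c[1] * (1.0 - dt * k_loss)   # zero-flux surface minus oxide-growth loss
        c[-1] = c[-2]                       # zero-flux far-field boundary

    print("surface solute fraction:", round(float(c[0]), 4))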
Versino, Daniele; Bronkhorst, Curt Allan
2018-01-31
The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second-order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution, and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce the accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high-density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.
Thermal radiation view factor: Methods, accuracy and computer-aided procedures
NASA Technical Reports Server (NTRS)
Kadaba, P. V.
1982-01-01
Computer-aided thermal analysis programs, which predict whether orbiting equipment will remain within a predetermined acceptable temperature range in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and the standard methods which form the basis for various digital computer methods and various numerical methods are presented. The physical models and the mathematical methods on which a number of available programs are built are summarized. The strengths and the weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. The situations where accuracy is important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for efficient use of digital computers are included in the recommendations.
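Monte Carlo ray casting is one of the numerical schemes alluded to above, and it fits in a few lines: shoot cosine-weighted rays from one surface and count the fraction hitting the other. The parallel-unit-square geometry below is illustrative; the same idea generalizes to complex spacecraft surfaces.

    import numpy as np

    def view_factor_parallel_squares(h=1.0, n=200_000, seed=0):
        rng = np.random.default_rng(seed)
        # Emission points, uniform on the unit square at z = 0.
        x0, y0 = rng.random(n), rng.random(n)
        # Cosine-weighted hemisphere directions: sin^2(theta) uniform on [0, 1].
        phi = 2.0 * np.pi * rng.random(n)
        sin_t = np.sqrt(rng.random(n))
        cos_t = np.sqrt(1.0 - sin_t**2)
        # Intersection of each ray with the receiving plane z = h.
        t = h / cos_t
        xh = x0 + t * sin_t * np.cos(phi)
        yh = y0 + t * sin_t * np.sin(phi)
        hits = (xh >= 0) & (xh <= 1) & (yh >= 0) & (yh <= 1)
        return hits.mean()

    print(view_factor_parallel_squares())   # ~0.2 for unit squares one side-length apart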
Intersecting surface defects and instanton partition functions
NASA Astrophysics Data System (ADS)
Pan, Yiwen; Peelaers, Wolfger
2017-07-01
We analyze intersecting surface defects inserted in interacting four-dimensional N=2 supersymmetric quantum field theories. We employ the realization of a class of such systems as the infrared fixed points of renormalization group flows from larger theories, triggered by perturbed Seiberg-Witten monopole-like configurations, to compute their partition functions. These results are cast into the form of a partition function of 4d/2d/0d coupled systems. Our computations provide concrete expressions for the instanton partition function in the presence of intersecting defects and we study the corresponding ADHM model.
Computation of turbulent pipe and duct flow using third order upwind scheme
NASA Technical Reports Server (NTRS)
Kawamura, T.
1986-01-01
The fully developed turbulence in a circular pipe and in a square duct is simulated directly, without turbulence models, from the Navier-Stokes equations. The method employs a third-order upwind scheme for the approximation of the nonlinear term and the second-order Adams-Bashforth method for the time derivative in the Navier-Stokes equations. The computational results appear to capture the large-scale turbulent structures at least qualitatively. The significance of the artificial viscosity inherent in the present scheme is discussed.
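For concreteness, a standard third-order upwind-biased four-point stencil for the convection term is sketched below (the exact coefficients of Kawamura's scheme may differ; this is the common textbook variant). For u > 0, df/dx ≈ (2f[i+1] + 3f[i] - 6f[i-1] + f[i-2]) / (6Δx), with the stencil mirrored for u < 0.

    import numpy as np

    def third_order_upwind(f, u, dx):
        # Convective term u * df/dx with an upwind-biased third-order stencil;
        # the leading truncation error is dissipative (proportional to f'''').
        dfdx = np.zeros_like(f)
        for i in range(2, len(f) - 2):
            if u[i] >= 0.0:
                dfdx[i] = (2*f[i+1] + 3*f[i] - 6*f[i-1] + f[i-2]) / (6*dx)
            else:
                dfdx[i] = (-f[i+2] + 6*f[i+1] - 3*f[i] - 2*f[i-1]) / (6*dx)
        return u * dfdx

    x = np.linspace(0.0, 2.0*np.pi, 129)
    f = np.sin(x)
    u = np.ones_like(x)
    err = np.abs(third_order_upwind(f, u, x[1]-x[0]) - u*np.cos(x))[2:-2].max()
    print("max error:", err)   # shrinks ~8x per grid doubling, confirming third order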
The database management system: A topic and a tool
NASA Technical Reports Server (NTRS)
Plummer, O. R.
1984-01-01
Data structures and data base management systems are common tools employed to deal with the administrative information of a university. An understanding of these topics is needed by a much wider audience, ranging from those interested in computer aided design and manufacturing to those using microcomputers. These tools are becoming increasingly valuable to academic programs as they develop comprehensive computer support systems. The wide use of these tools relies upon the relational data model as a foundation. Experience with the use of the IPAD RIM5.0 program is described.
Development of a support software system for real-time HAL/S applications
NASA Technical Reports Server (NTRS)
Smith, R. S.
1984-01-01
Methodologies employed in defining and implementing a software support system for the HAL/S computer language for real-time operations on the Shuttle are detailed. Attention is also given to the management and validation techniques used during software development and software maintenance. Utilities developed to support the real-time operating conditions are described. With the support system produced on Cyber computers and executable code processed through Cyber or PDP machines, the system has production-level status and can serve as a model for other software development projects.
Real-time data reduction capabilities at the Langley 7 by 10 foot high speed tunnel
NASA Technical Reports Server (NTRS)
Fox, C. H., Jr.
1980-01-01
The 7 by 10 foot high speed tunnel performs a wide range of tests employing a variety of model installation methods. To support the reduction of static data from this facility, a generalized wind tunnel data reduction program had been developed for use on the Langley central computer complex. The capabilities of a version of this generalized program adapted for real time use on a dedicated on-site computer are discussed. The input specifications, instructions for the console operator, and full descriptions of the algorithms are included.
A theoretical case study of type I and type II beta-turns.
Czinki, Eszter; Császár, Attila G; Perczel, András
2003-03-03
NMR chemical shielding anisotropy tensors have been computed by employing a medium-sized basis set and the GIAO-DFT(B3LYP) formalism of electronic structure theory for all of the atoms of type I and type II beta-turn models. The models contain all possible combinations of the amino acid residues Gly, Ala, Val, and Ser, with all possible side-chain orientations where applicable in a dipeptide. The several hundred structures investigated contain either constrained or optimized phi, psi, and chi dihedral angles. A statistical analysis of the resulting large database was performed and multidimensional (2D and 3D) chemical-shift/chemical-shift plots were generated. The (1)H(alpha)-(13)C(alpha), (13)C(alpha)-(1)H(alpha)-(13)C(beta), and (13)C(alpha)-(1)H(alpha)-(13)C' 2D and 3D plots have the notable feature that the conformers clearly cluster in distinct regions. This allows straightforward identification of the backbone and side-chain conformations of the residues forming beta-turns. Chemical shift calculations on larger For-(L-Ala)(n)-NH(2) (n = 4, 6, 8) models, containing a single type I or type II beta-turn, prove that the simple models employed are adequate. A limited number of chemical shift calculations performed at the highly correlated CCSD(T) level prove the adequacy of the computational method chosen. For all nuclei, statistically averaged theoretical and experimental shifts taken from the BioMagnetic Resonance Bank (BMRB) exhibit good correlation. These results confirm and extend our previous findings that chemical shift information from selected multiple-pulse NMR experiments could be employed directly to extract folding information for polypeptides and proteins.
Transonic Flow Field Analysis for Wing-Fuselage Configurations
NASA Technical Reports Server (NTRS)
Boppe, C. W.
1980-01-01
A computational method for simulating the aerodynamics of wing-fuselage configurations at transonic speeds is developed. The finite difference scheme is characterized by a multiple embedded mesh system coupled with a modified or extended small disturbance flow equation. This approach permits a high degree of computational resolution in addition to coordinate system flexibility for treating complex, realistic aircraft shapes. To augment the analysis method and permit applications to a wide range of practical engineering design problems, an arbitrary fuselage geometry modeling system is incorporated, as well as methodology for computing wing viscous effects. Configuration drag is broken down into its friction, wave, and lift-induced components. Typical computed results for isolated bodies, isolated wings, and wing-body combinations are presented. The results are correlated with experimental data. A computer code which employs this methodology is described.
Liu, Hui; Chen, Fu; Sun, Huiyong; Li, Dan; Hou, Tingjun
2017-04-11
By means of estimators based on non-equilibrium work, equilibrium free energy differences or potentials of mean force (PMFs) of a system of interest can be computed from biased molecular dynamics (MD) simulations. The approach, however, is often plagued by slow conformational sampling and poor convergence, especially when the solvent effects are taken into account. Here, as a possible way to alleviate the problem, several widely used implicit-solvent models, which are derived from the analytic generalized Born (GB) equation and implemented in the AMBER suite of programs, were employed in free energy calculations based on non-equilibrium work and evaluated for their abilities to emulate explicit water. As a test case, pulling MD simulations were carried out on an alanine polypeptide with different solvent models and protocols, followed by comparisons of the reconstructed PMF profiles along the unfolding coordinate. The results show that when employing the non-equilibrium work method, sampling with an implicit-solvent model is several times faster and, more importantly, converges more rapidly than that with explicit water due to reduction of dissipation. Among the assessed GB models, the Neck variants outperform the OBC and HCT variants in terms of accuracy, whereas their computational costs are comparable. In addition, for the best-performing models, the impact of the solvent-accessible surface area (SASA) dependent nonpolar solvation term was also examined. The present study highlights the advantages of implicit-solvent models for non-equilibrium sampling.
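The non-equilibrium work route referred to above is commonly realized through Jarzynski's equality, ΔF = -kT ln⟨exp(-W/kT)⟩. The sketch below applies that estimator to hypothetical work samples and is not the AMBER/GB pulling workflow itself; the work values and temperature are placeholders.

```python
# Hedged sketch: Jarzynski's non-equilibrium work estimator for a free energy
# difference, applied to hypothetical work samples from pulling simulations.
import numpy as np

kT = 0.593  # kcal/mol at ~298 K

def jarzynski_estimate(work, kT=kT):
    """Estimate Delta F = -kT ln < exp(-W/kT) > from work samples.

    Uses a log-sum-exp shift for numerical stability.
    """
    w = np.asarray(work) / kT
    log_avg = -w.min() + np.log(np.mean(np.exp(-(w - w.min()))))
    return -kT * log_avg

# Synthetic work values (kcal/mol); dissipation broadens this distribution,
# which is why slow convergence plagues explicit-solvent sampling.
rng = np.random.default_rng(0)
work_samples = rng.normal(loc=5.0, scale=1.0, size=200)
print(f"Estimated Delta F ~ {jarzynski_estimate(work_samples):.2f} kcal/mol")
```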
NASA Technical Reports Server (NTRS)
Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.
1981-01-01
A computer simulation code was employed to evaluate several generic types of solar power systems (up to 10 MWe). Details of the simulation methodology and the solar plant concepts are given along with cost and performance results. The Solar Energy Simulation computer code (SESII) was used, which optimizes the size of the collector field and energy storage subsystem for given engine-generator and energy-transport characteristics. Nine plant types were examined which employed combinations of different technology options, such as: distributed or central receivers with one- or two-axis tracking or no tracking; point- or line-focusing concentrators; central or distributed power conversion; Rankine, Brayton, or Stirling thermodynamic cycles; and thermal or electrical storage. Optimal cost curves were plotted as a function of levelized busbar energy cost and annualized plant capacity. Point-focusing distributed receiver systems were found to be most efficient (17-26 percent).
Modeling of Flow Transition Using an Intermittency Transport Equation
NASA Technical Reports Server (NTRS)
Suzen, Y. B.; Huang, P. G.
1999-01-01
A new transport equation for the intermittency factor is proposed to model transitional flows. The intermittent behavior of the transitional flows is incorporated into the computations by modifying the eddy viscosity, μ_t, obtainable from a turbulence model, with the intermittency factor, γ: μ_t* = γ μ_t. In this paper, Menter's SST model (Menter, 1994) is employed to compute μ_t and other turbulent quantities. The proposed intermittency transport equation can be considered as a blending of two models - Steelant and Dick (1996) and Cho and Chung (1992). The former was proposed for near-wall flows and was designed to reproduce the streamwise variation of the intermittency factor in the transition zone following the Dhawan and Narasimha correlation (Dhawan and Narasimha, 1958), and the latter was proposed for free shear flows and was used to provide a realistic cross-stream variation of the intermittency profile. The new model was used to predict the T3 series experiments assembled by Savill (1993a, 1993b), including flows with different freestream turbulence intensities and two pressure-gradient cases. For all test cases, good agreement between the computed results and the experimental data is observed.
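For illustration, the sketch below evaluates the algebraic Dhawan-Narasimha streamwise intermittency distribution that the near-wall branch is designed to reproduce, and applies it as the weighting μ_t* = γ μ_t. The onset location and length scale are placeholder values, and this is the classical correlation, not the transport equation proposed in the paper.

```python
# Illustrative sketch: Dhawan-Narasimha intermittency distribution gamma(x)
# and the resulting modified eddy viscosity mu_t* = gamma * mu_t.
import numpy as np

def intermittency_dhawan_narasimha(x, x_t, lam):
    """gamma = 1 - exp(-0.412 * ((x - x_t)/lam)^2) downstream of onset x_t;
    lam is a transition length scale fitted to data."""
    xi = np.maximum(x - x_t, 0.0) / lam
    return 1.0 - np.exp(-0.412 * xi**2)

x = np.linspace(0.0, 2.0, 5)           # streamwise stations (arbitrary units)
mu_t = 1e-3 * np.ones_like(x)          # eddy viscosity from a turbulence model
gamma = intermittency_dhawan_narasimha(x, x_t=0.5, lam=0.3)
mu_t_star = gamma * mu_t               # intermittency-weighted eddy viscosity
print(np.round(gamma, 3))              # 0 before onset, -> 1 when fully turbulent
```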
ASME V&V challenge problem: Surrogate-based V&V
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beghini, Lauren L.; Hough, Patricia D.
2015-12-18
The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
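A minimal sketch of the surrogate idea, with scikit-learn's Gaussian process regressor standing in for the report's tooling (an assumption) and a stub replacing the expensive simulation: a few costly runs train a cheap emulator that can then be queried thousands of times for UQ studies.

```python
# Gaussian-process surrogate sketch; the simulator, inputs, and kernel choice
# are illustrative placeholders, not the challenge-problem setup.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    """Stand-in for a long-running multi-core simulation."""
    return np.sin(3.0 * x) + 0.1 * x**2

# A handful of expensive runs...
X_train = np.linspace(0.0, 3.0, 8).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

# ...train a cheap surrogate, then query it densely for calibration/UQ.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)
X_query = np.linspace(0.0, 3.0, 1000).reshape(-1, 1)
mean, std = gp.predict(X_query, return_std=True)  # prediction + uncertainty
print(f"max predictive std: {std.max():.3f}")
```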
Vezér, Martin A
2016-04-01
To study climate change, scientists employ computer models, which approximate target systems with various levels of skill. Given the imperfection of climate models, how do scientists use simulations to generate knowledge about the causes of observed climate change? Addressing a similar question in the context of biological modelling, Levins (1966) proposed an account grounded in robustness analysis. Recent philosophical discussions dispute the confirmatory power of robustness, raising the question of how the results of computer modelling studies contribute to the body of evidence supporting hypotheses about climate change. Expanding on Staley's (2004) distinction between evidential strength and security, and Lloyd's (2015) argument connecting variety-of-evidence inferences and robustness analysis, I address this question with respect to recent challenges to the epistemology of robustness analysis. Applying this epistemology to case studies of climate change, I argue that, despite imperfections in climate models and epistemic constraints on variety-of-evidence reasoning and robustness analysis, this framework accounts for the strength and security of evidence supporting climatological inferences, including the finding that global warming is occurring and that its primary causes are anthropogenic. Copyright © 2016 Elsevier Ltd. All rights reserved.
Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun
2017-11-01
This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules implement typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports two modes, step-by-step analysis and an auto-computing process, for preliminary evaluation and real-time computation, respectively. The proposed model was evaluated by recomputing prior research on the epidemiological measurement of diseases caused by either heavy metal exposure in the environment or clinical complications in hospital. The validity of the simulations was verified against commercial statistics software. The model was installed on a stand-alone computer and on a cloud-server workstation to verify computing performance for more than 230K data sets. Both setups reached an efficiency of about 10⁵ sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.
A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.
Wu, Guanlin; Bao, Weidong; Zhu, Xiaomin; Zhang, Xiongtao
2018-05-23
The diversity of IoT services and applications brings enormous challenges to improving the performance of multiple-computer-task scheduling in cross-layer cloud computing systems. Unfortunately, commonly employed frameworks fail to adapt to the new patterns of the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and of computer tasks. Then, we design the scheduling framework based on this analysis and present detailed models to illustrate the procedures for using the framework. With the proposed framework, IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, algorithms are given based on the framework, and extensive experiments validate its effectiveness as well as its superiority.
Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors
NASA Astrophysics Data System (ADS)
Marti, Alejandro; Folch, Arnau
2018-03-01
Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at specified coupling intervals. Despite concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors using different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those of an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation-disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70% of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally efficient online dispersal models.
Petri net modelling of biological networks.
Chaouiya, Claudine
2007-07-01
Mathematical modelling is increasingly used to gain insight into the functioning of complex biological networks. In this context, Petri nets (PNs) have recently emerged as a promising tool among the various methods employed for the modelling and analysis of molecular networks. PNs come with a series of extensions, which allow different abstraction levels, from purely qualitative to more complex quantitative models. Noteworthily, each of these models preserves the underlying graph, which depicts the interactions between the biological components. This article presents the basics of the approach and aims to foster the potential role PNs could play in the development of computational systems biology.
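A minimal qualitative place/transition net sketch (generic Python, with a hypothetical enzymatic example) showing the token-firing semantics on which the extensions discussed above are built:

```python
# Minimal place/transition Petri net: places hold tokens, a transition fires
# when all its input places hold enough tokens, consuming and producing tokens.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= n for p, n in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Toy enzymatic step: E + S -> ES -> E + P (illustrative network only)
net = PetriNet({"E": 1, "S": 2, "ES": 0, "P": 0})
net.add_transition("bind", {"E": 1, "S": 1}, {"ES": 1})
net.add_transition("cat",  {"ES": 1},        {"E": 1, "P": 1})
net.fire("bind")
net.fire("cat")
print(net.marking)   # {'E': 1, 'S': 1, 'ES': 0, 'P': 1}
```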
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrington, David Bradley; Waters, Jiajia
KIVA-hpFE is a high performance computer software for solving the physics of multi-species and multiphase turbulent reactive flow in complex geometries having immersed moving parts. The code is written in Fortran 90/95 and can be used on any computer platform with any popular compiler. The code comes in two versions, a serial version and a parallel version utilizing MPICH2-type Message Passing Interface (MPI or Intel MPI) for solving distributed domains. The parallel version is at least 30x faster than the serial version and faster than our previous generation of parallel engine modeling software by many factors. The 5th-generation algorithm construction is a Galerkin-type Finite Element Method (FEM) solving conservative momentum, species, and energy transport equations along with a two-equation k-ω Reynolds-Averaged Navier-Stokes (RANS) turbulence model and a Vreman-type dynamic Large Eddy Simulation (LES) method. The LES method is capable of modeling transitional flow from laminar to fully turbulent; therefore, it does not require special hybrid or blending treatment near walls. The FEM projection method also uses a Petrov-Galerkin (P-G) stabilization along with pressure stabilization. We employ hierarchical basis sets, constructed on the fly, with enrichment in areas associated with relatively larger error as determined by error estimation methods. In addition, when not using the hp-adaptive module, the code employs Lagrangian basis or shape functions. The shape functions are constructed for hexahedral, prismatic, and tetrahedral elements. The software is designed to solve many types of reactive flow problems, from burners to internal combustion engines and turbines. In addition, the formulation allows for direct integration of solid bodies (conjugate heat transfer), as in heat transfer through housings, parts, and cylinders. It can also easily be extended to stress modeling of solids, as used in fluid-structure interaction problems, solidification, porous media modeling, and magnetohydrodynamics.
Biglino, Giovanni; Giardini, Alessandro; Hsia, Tain-Yen; Figliola, Richard; Taylor, Andrew M.; Schievano, Silvia
2013-01-01
First stage palliation of hypoplastic left heart syndrome, i.e., the Norwood operation, results in a complex physiological arrangement, involving different shunting options (modified Blalock-Taussig, RV-PA conduit, central shunt from the ascending aorta) and enlargement of the hypoplastic ascending aorta. Engineering techniques, both computational and experimental, can aid in the understanding of the Norwood physiology, and their correct implementation can potentially lead to refinement of the decision-making process by means of patient-specific simulations. This paper presents some of the available tools that can corroborate clinical evidence by providing detailed insight into the fluid dynamics of the Norwood circulation as well as alternative surgical scenarios (i.e., virtual surgery). Patient-specific anatomies can be manufactured by means of rapid prototyping, and such models can be inserted in experimental set-ups (mock circulatory loops) that provide a valuable source of validation data as well as hydrodynamic information. Such models can be tuned to respond to differing patient physiologies. Experimental set-ups can also be compatible with visualization techniques, like particle image velocimetry and cardiovascular magnetic resonance, further adding to the knowledge of the local fluid dynamics. Multi-scale computational models include detailed three-dimensional (3D) anatomical information coupled to a lumped parameter network representing the remainder of the circulation. These models output overall hemodynamic parameters while also enabling investigation of the local fluid dynamics of the aortic arch or the shunt. As an alternative, pure lumped parameter models can also be employed to model Stage 1 palliation, taking advantage of a much lower computational cost, albeit missing the 3D anatomical component. Finally, analytical techniques, such as wave intensity analysis, can be employed to study the Norwood physiology, providing a mechanistic perspective on the ventriculo-arterial coupling for this specific surgical scenario. PMID:24400277
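As a taste of the pure lumped-parameter route mentioned above, the sketch below integrates a two-element Windkessel (compliance C, resistance R) driven by a prescribed inflow waveform; all parameter values and the inflow shape are illustrative only, not patient-specific Norwood values.

```python
# Two-element Windkessel sketch: C dP/dt = Q_in(t) - P/R, integrated with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

R, C = 1.2, 1.1          # peripheral resistance, arterial compliance (WK units)
T = 0.8                  # cardiac period [s]

def inflow(t):
    """Half-sine ejection over the first 35% of each cycle (placeholder)."""
    tau = t % T
    return 80.0 * np.sin(np.pi * tau / (0.35 * T)) if tau < 0.35 * T else 0.0

def dpdt(t, p):
    return [(inflow(t) - p[0] / R) / C]

# Integrate several cycles so the pressure settles to a periodic state.
sol = solve_ivp(dpdt, (0.0, 10 * T), [70.0], max_step=1e-3)
tail = sol.y[0][-800:]
print(f"pressure range over late cycles: {tail.min():.1f} - {tail.max():.1f}")
```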
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-01
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a comprehensively data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation in raw data simulation, which otherwise greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward covering the programming model, the HDFS configuration, and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration. PMID:28075343
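As a rough illustration of the pattern (not the authors' Hadoop implementation), the sketch below uses Python's multiprocessing to "map" per-target echo contributions and "reduce" them into a raw-data matrix; the echo model and all sizes are hypothetical placeholders.

```python
# Conceptual MapReduce sketch for raw-data accumulation, with multiprocessing
# standing in for Hadoop; the per-target echo is a trivial placeholder.
import numpy as np
from multiprocessing import Pool

N_RANGE, N_AZIMUTH = 64, 64

def map_target(target):
    """Map step: simulate one point target's (placeholder) echo contribution."""
    r_idx, a_idx, amp = target
    echo = np.zeros((N_RANGE, N_AZIMUTH), dtype=complex)
    echo[int(r_idx), int(a_idx)] = amp * np.exp(1j * 0.1 * r_idx)
    return echo

def reduce_echoes(echoes):
    """Reduce step: accumulate all per-target contributions."""
    return np.sum(echoes, axis=0)

if __name__ == "__main__":
    targets = [(i % N_RANGE, (3 * i) % N_AZIMUTH, 1.0) for i in range(100)]
    with Pool(4) as pool:
        raw = reduce_echoes(pool.map(map_target, targets))
    print("raw data energy:", float(np.abs(raw).sum()))
```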
Validation of a RANS transition model using a high-order weighted compact nonlinear scheme
NASA Astrophysics Data System (ADS)
Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang
2013-04-01
A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational cost. The model employs only local variables, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations for the purpose of minimizing discretization errors, so as to mitigate the confusion between numerical errors and transition model errors. The high-order package is compared with a second-order TVD method on simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid convergence than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.
Studying the precision of ray tracing techniques with Szekeres models
NASA Astrophysics Data System (ADS)
Koksbang, S. M.; Hannestad, S.
2015-07-01
The simplest standard ray tracing scheme employing the Born and Limber approximations and neglecting lens-lens coupling is used for computing the convergence along individual rays in mock N-body data based on Szekeres swiss cheese and onion models. The results are compared with the exact convergence computed using the exact Szekeres metric combined with the Sachs formalism. A comparison is also made with an extension of the simple ray tracing scheme which includes the Doppler convergence. The exact convergence is reproduced very precisely as the sum of the gravitational and Doppler convergences along rays in Lemaitre-Tolman-Bondi swiss cheese and single void models. This is not the case when the swiss cheese models are based on nonsymmetric Szekeres models. For such models, there is a significant deviation between the exact and ray traced paths and hence also the corresponding convergences. There is also a clear deviation between the exact and ray tracing results obtained when studying both nonsymmetric and spherically symmetric Szekeres onion models.
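For reference, the gravitational convergence that such a Born/Limber-approximated scheme accumulates along an unperturbed ray to comoving distance χ_s is conventionally written as below (standard notation assumed; the paper's exact conventions may differ, and the Doppler term discussed above is a separate contribution):

```latex
\kappa_{\delta}(\chi_s) \;=\; \frac{3 H_0^2 \Omega_m}{2 c^2}
  \int_0^{\chi_s} \frac{\chi\,(\chi_s - \chi)}{\chi_s}\,
  \frac{\delta(\chi)}{a(\chi)}\,\mathrm{d}\chi
```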
A Computational Model of Multidimensional Shape
Liu, Xiuwen; Shi, Yonggang; Dinov, Ivo
2010-01-01
We develop a computational model of shape that extends existing Riemannian models of curves to multidimensional objects of general topological type. We construct shape spaces equipped with geodesic metrics that measure how costly it is to interpolate two shapes through elastic deformations. The model employs a representation of shape based on the discrete exterior derivative of parametrizations over a finite simplicial complex. We develop algorithms to calculate geodesics and geodesic distances, as well as tools to quantify local shape similarities and contrasts, thus obtaining a formulation that accounts for regional differences and integrates them into a global measure of dissimilarity. The Riemannian shape spaces provide a common framework to treat numerous problems such as the statistical modeling of shapes, the comparison of shapes associated with different individuals or groups, and modeling and simulation of shape dynamics. We give multiple examples of geodesic interpolations and illustrations of the use of the models in brain mapping, particularly, the analysis of anatomical variation based on neuroimaging data. PMID:21057668
Exploring the Use of Computer Simulations in Unraveling Research and Development Governance Problems
NASA Technical Reports Server (NTRS)
Balaban, Mariusz A.; Hester, Patrick T.
2012-01-01
Understanding Research and Development (R&D) enterprise relationships and processes at a governance level is not a simple task, but valuable decision-making insight and evaluation capabilities can be gained from their exploration through computer simulations. This paper discusses current Modeling and Simulation (M&S) methods, addressing their applicability to R&D enterprise governance. Specifically, the authors analyze advantages and disadvantages of the four methodologies used most often by M&S practitioners: System Dynamics (SD), Discrete Event Simulation (DES), Agent Based Modeling (ABM), and formal Analytic Methods (AM) for modeling systems at the governance level. Moreover, the paper describes nesting models using a multi-method approach. Guidance is provided to those seeking to employ modeling techniques in an R&D enterprise for the purposes of understanding enterprise governance. Further, an example is modeled and explored for potential insight. The paper concludes with recommendations regarding opportunities for concentration of future work in modeling and simulating R&D governance relationships and processes.
The application of SSADM to modelling the logical structure of proteins.
Saldanha, J; Eccles, J
1991-10-01
A logical design that describes the overall structure of proteins, together with a more detailed design describing secondary and some supersecondary structures, has been constructed using the computer-aided software engineering (CASE) tool, Auto-mate. Auto-mate embodies the philosophy of the Structured Systems Analysis and Design Method (SSADM) which enables the logical design of computer systems. Our design will facilitate the building of large information systems, such as databases and knowledgebases in the field of protein structure, by the derivation of system requirements from our logical model prior to producing the final physical system. In addition, the study has highlighted the ease of employing SSADM as a formalism in which to conduct the transferral of concepts from an expert into a design for a knowledge-based system that can be implemented on a computer (the knowledge-engineering exercise). It has been demonstrated how SSADM techniques may be extended for the purpose of modelling the constituent Prolog rules. This facilitates the integration of the logical system design model with the derived knowledge-based system.
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1990-01-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency was achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
Computational Modeling of Fluid–Structure–Acoustics Interaction during Voice Production
Jiang, Weili; Zheng, Xudong; Xue, Qian
2017-01-01
The paper presents a three-dimensional, first-principles-based fluid–structure–acoustics interaction computer model of voice production that employs realistic human laryngeal and vocal tract geometries. Self-sustained vibrations, the important convergent–divergent vibration pattern of the vocal folds, and entrainment of the two dominant vibratory modes were captured. Voice quality-associated parameters including the frequency, open quotient, skewness quotient, and flow rate of the glottal flow waveform were found to be well within the normal physiological ranges. The analogy between the vocal tract and a quarter-wave resonator was demonstrated. The acoustically perturbed flux and pressure inside the glottis were found to be of the same order as their incompressible counterparts, suggesting strong source–filter interactions during voice production. Such a high-fidelity computational model will be useful for investigating a variety of pathological conditions that involve complex vibrations, such as vocal fold paralysis, vocal nodules, and vocal polyps. The model is also an important step toward a patient-specific surgical planning tool that can serve as a no-risk trial-and-error platform for different procedures, such as injection of biomaterials and thyroplastic medialization. PMID:28243588
Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram
2014-01-10
Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs' structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., J. Control. Release 160 (2012) 147-157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-Nearest Neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used by us in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. © 2013.
Design Aspects of the Rayleigh Convection Code
NASA Astrophysics Data System (ADS)
Featherstone, N. A.
2017-12-01
Understanding the long-term generation of planetary or stellar magnetic fields requires complementary knowledge of the large-scale fluid dynamics pervading large fractions of the object's interior. Such large-scale motions are sensitive to the system's geometry which, in planets and stars, is spherical to a good approximation. As a result, computational models designed to study such systems often solve the MHD equations in spherical geometry, frequently employing a spectral approach involving spherical harmonics. We present computational and user-interface design aspects of one such modeling tool, the Rayleigh convection code, which is suitable for deployment on desktop and petascale HPC architectures alike. In this poster, we present an overview of this code's parallel design and its built-in diagnostics-output package. Rayleigh has been developed with NSF support through the Computational Infrastructure for Geodynamics and is expected to be released as open-source software in winter 2017/2018.
Computing resonant frequency of C-shaped compact microstrip antennas by using ANFIS
NASA Astrophysics Data System (ADS)
Akdagli, Ali; Kayabasi, Ahmet; Develi, Ibrahim
2015-03-01
In this work, the resonant frequency of C-shaped compact microstrip antennas (CCMAs) operating at UHF band is computed by using the adaptive neuro-fuzzy inference system (ANFIS). For this purpose, 144 CCMAs with various relative dielectric constants and different physical dimensions were simulated by the XFDTD software package based on the finite-difference time domain (FDTD) method. One hundred and twenty-nine CCMAs were employed for training, while the remaining 15 CCMAs were used for testing of the ANFIS model. Average percentage error (APE) values were obtained as 0.8413% and 1.259% for training and testing, respectively. In order to demonstrate its validity and accuracy, the proposed ANFIS model was also tested over the simulation data given in the literature, and APE was obtained as 0.916%. These results show that ANFIS can be successfully used to compute the resonant frequency of CCMAs.
A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.; Watson, Layne T.
1998-01-01
Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
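A sketch of the first of the two methods, a quadratic response surface fit by least squares in one variable, which also exposes the multiple-extrema limitation noted above; the sampled response is a synthetic placeholder.

```python
# Quadratic polynomial response surface via least squares (one variable).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2.0, 2.0, 15)
y = np.sin(2.0 * x) + 0.05 * rng.standard_normal(x.size)  # sampled response

# Design matrix for y ~ b0 + b1*x + b2*x^2
A = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

print("coefficients:", np.round(coef, 3))
print("RMS error   :", np.round(np.sqrt(np.mean((y - y_hat) ** 2)), 3))
# A quadratic cannot follow the multiple extrema of sin(2x) -- exactly the
# limitation that motivates the kriging (interpolating) alternative.
```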
Complete one-loop renormalization of the Higgs-electroweak chiral Lagrangian
NASA Astrophysics Data System (ADS)
Buchalla, G.; Catà, O.; Celis, A.; Knecht, M.; Krause, C.
2018-03-01
Employing the background-field method and the super-heat-kernel expansion, we compute the complete one-loop renormalization of the electroweak chiral Lagrangian with a light Higgs boson. Earlier results from purely scalar fluctuations are confirmed as a special case. We also recover the one-loop renormalization of the conventional Standard Model in the appropriate limit.
Toward a Theory of Variation in the Organization of the Word Reading System
ERIC Educational Resources Information Center
Rueckl, Jay G.
2016-01-01
The strategy underlying most computational models of word reading is to specify the organization of the reading system--its architecture and the processes and representations it employs--and to demonstrate that this organization would give rise to the behavior observed in word reading tasks. This approach fails to adequately address the variation…
Embedded, real-time UAV control for improved, image-based 3D scene reconstruction
Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul
2016-01-01
Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with the UAV and computationally demanding post-flight image processing. Insufficient feature overlap across images is a common shortcoming in post-flight image...
Measurement of Productivity and Quality in Non-Marketable Services: With Application to Schools
ERIC Educational Resources Information Center
Fare, R.; Grosskopf, S.; Forsund, F. R.; Hayes, K.; Heshmati, A.
2006-01-01
Purpose: This paper seeks to model and compute productivity, including a measure of quality, of a service which does not have marketable outputs--namely, public education at the micro level. This application is a case study of Swedish public schools. Design/methodology/approach: A Malmquist productivity index is employed which allows for multiple…
Users' Perceptions of the Web As Revealed by Transaction Log Analysis.
ERIC Educational Resources Information Center
Moukdad, Haidar; Large, Andrew
2001-01-01
Describes the results of a transaction log analysis of a Web search engine, WebCrawler, to analyze users' queries for information retrieval. Results suggest most users do not employ advanced search features, and the linguistic structure often resembles a human-human communication model that is not always successful in human-computer communication.…
Designing a Mobile Training System in Rural Areas with Bayesian Factor Models
ERIC Educational Resources Information Center
Omidi Najafabadi, Maryam; Mirdamadi, Seyed Mehdi; Payandeh Najafabadi, Amir Teimour
2014-01-01
The facts that wireless technologies (1) are more convenient and (2) require less skill than desktop computers play a crucial role in decreasing the digital gap in rural areas. This study employed Bayesian Confirmatory Factor Analysis (CFA) to design a mobile training system in rural areas of Iran. It categorized challenges, potential, and…
ERIC Educational Resources Information Center
Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet
2012-01-01
Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
Multiphysics Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen
2006-01-01
The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a hypothetical solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics methodology. Formulations for heat transfer in solids and porous media were implemented and anchored. A two-pronged approach was employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of hydrogen dissociation and recombination on heat transfer and thrust performance. The formulations and preliminary results on both aspects are presented.
System identification using Nuclear Norm & Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.
2018-01-01
In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, as of now, no work has been reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual - measured - output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. SI based on the TS algorithm is compared with NN-SI based on the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
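A sketch of the Hankel-matrix construction step common to subspace SI (generic form; the signal and block sizes are illustrative, not the paper's benchmarks):

```python
# Build a Hankel matrix from a 1-D data record: H[i, j] = signal[i + j].
import numpy as np

def block_hankel(signal, rows):
    """Stack delayed copies of a 1-D signal into a Hankel matrix with
    `rows` block rows."""
    signal = np.asarray(signal)
    cols = signal.size - rows + 1
    return np.array([signal[i:i + cols] for i in range(rows)])

u = np.arange(10.0)            # input samples u(0..9), placeholder data
H = block_hankel(u, rows=3)    # 3 x 8 Hankel matrix
print(H)
# The SVD of such input/output Hankel matrices yields the extended
# observability matrix, whose numerical rank fixes the model order --
# the quantity the nuclear-norm penalty targets.
print("numerical rank:", np.linalg.matrix_rank(H))
```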
A potential-energy scaling model to simulate the initial stages of thin-film growth
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.; Outlaw, R. A.; Walker, G. H.
1983-01-01
A solid on solid (SOS) Monte Carlo computer simulation employing a potential energy scaling technique was used to model the initial stages of thin film growth. The model monitors variations in the vertical interaction potential that occur due to the arrival or departure of selected adatoms or impurities at all sites in the 400-site array. Boltzmann ordered statistics are used to simulate fluctuations in vibrational energy at each site in the array, and the resulting site energy is compared with threshold levels of possible atomic events. In addition to adsorption, desorption, and surface migration, adatom incorporation and diffusion of a substrate atom to the surface are also included. The lateral interaction of nearest, second nearest, and third nearest neighbors is also considered. A series of computer experiments is conducted to illustrate the behavior of the model.
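A schematic solid-on-solid Monte Carlo loop with Boltzmann acceptance for desorption, as a generic illustration of the event selection described above; the lattice size, energies, and the neglect of migration and incorporation are simplifications, not the paper's scaled-potential model.

```python
# Schematic SOS Monte Carlo: random adsorption/desorption on a lattice of
# column heights, with desorption accepted via a Boltzmann factor.
import numpy as np

rng = np.random.default_rng(42)
L = 20                          # 20 x 20 array of columns (illustrative)
h = np.zeros((L, L), dtype=int) # column heights (adatoms per site)
kT, E_bond = 1.0, 0.8           # temperature and lateral bond energy (placeholders)

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

for step in range(20000):
    i, j = rng.integers(L, size=2)
    if rng.random() < 0.5:
        h[i, j] += 1                     # adsorption: always accepted here
    elif h[i, j] > 0:
        # desorption cost grows with lateral coordination of the top atom
        coord = sum(h[n] >= h[i, j] for n in neighbors(i, j))
        if rng.random() < np.exp(-coord * E_bond / kT):
            h[i, j] -= 1

print("mean coverage:", h.mean(), "roughness:", np.round(h.std(), 3))
```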
Network-based stochastic semisupervised learning.
Silva, Thiago Christiano; Zhao, Liang
2012-03-01
Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting low computational complexity order in comparison to other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
Ab initio results for intermediate-mass, open-shell nuclei
NASA Astrophysics Data System (ADS)
Baker, Robert B.; Dytrych, Tomas; Launey, Kristina D.; Draayer, Jerry P.
2017-01-01
A theoretical understanding of nuclei in the intermediate-mass region is vital to astrophysical models, especially for nucleosynthesis. Here, we employ the ab initio symmetry-adapted no-core shell model (SA-NCSM) in an effort to push first-principle calculations across the sd-shell region. The ab initio SA-NCSM's advantages come from its ability to control the growth of model spaces by including only physically relevant subspaces, which allows us to explore ultra-large model spaces beyond the reach of other methods. We report on calculations for 19Ne and 20Ne up through 13 harmonic oscillator shells using realistic interactions and discuss the underlying structure as well as implications for various astrophysical reactions. This work was supported by the U.S. NSF (OCI-0904874 and ACI -1516338) and the U.S. DOE (DE-SC0005248), and also benefitted from the Blue Waters sustained-petascale computing project and high performance computing resources provided by LSU.
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Maughmer, Mark D.
1988-01-01
The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated in the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting the prediction of the long bubble. Second, an integral boundary-layer method is believed to be more desirable than a finite-difference approach. While these two methods achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic energy integral equations, a short-bubble model compatible with these equations is most preferable.
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1993-01-01
The objective of this study is to benchmark a four-engine clustered nozzle base flowfield with a computational fluid dynamics (CFD) model. The CFD model is a three-dimensional pressure-based, viscous flow formulation. An adaptive upwind scheme is employed for the spatial discretization. The upwind scheme is based on second and fourth order central differencing with adaptive artificial dissipation. Qualitative base flow features such as the reverse jet, wall jet, recompression shock, and plume-plume impingement have been captured. The computed quantitative flow properties such as the radial base pressure distribution, model centerline Mach number and static pressure variation, and base pressure characteristic curve agreed reasonably well with those of the measurement. Parametric study on the effect of grid resolution, turbulence model, inlet boundary condition and difference scheme on convective terms has been performed. The results showed that grid resolution had a strong influence on the accuracy of the base flowfield prediction.
Idea Paper: The Lifecycle of Software for Scientific Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubey, Anshu; McInnes, Lois C.
The software lifecycle is a well researched topic that has produced many models to meet the needs of different types of software projects. However, one class of projects, software development for scientific computing, has received relatively little attention from lifecycle researchers. In particular, software for end-to-end computations for obtaining scientific results has received few lifecycle proposals and no formalization of a development model. An examination of development approaches employed by the teams implementing large multicomponent codes reveals a great deal of similarity in their strategies. This idea paper formalizes these related approaches into a lifecycle model for end-to-end scientific application software, featuring loose coupling between submodels for development of infrastructure and scientific capability. We also invite input from stakeholders to converge on a model that captures the complexity of this development process and provides needed lifecycle guidance to the scientific software community.
NASA Technical Reports Server (NTRS)
Dash, S. M.; Pergament, H. S.
1978-01-01
The development of a computational model (BOAT) for calculating nearfield jet entrainment, and its incorporation in an existing methodology for the prediction of nozzle boattail pressures, is discussed. The model accounts for the detailed turbulence and thermochemical processes occurring in the mixing layer formed between a jet exhaust and surrounding external stream while interfacing with the inviscid exhaust and external flowfield regions in an overlaid, interactive manner. The ability of the BOAT model to analyze simple free shear flows is assessed by comparisons with fundamental laboratory data. The overlaid procedure for incorporating variable pressures into BOAT and the entrainment correction employed to yield an effective plume boundary for the inviscid external flow are demonstrated. This is accomplished via application of BOAT in conjunction with the codes comprising the NASA/LRC patched viscous/inviscid methodology for determining nozzle boattail drag for subsonic/transonic external flows.
Evolution of the solar radiative forcing on climate during the Holocene
NASA Astrophysics Data System (ADS)
Vieira, Luis Eduardo; Solanki, Sami K.; Krivova, Natalie
The main external heating source of the Earth's coupled atmosphere-ocean system is the solar radiative energy input. The variability of this energy source produces corresponding changes in the coupled system. However, there is still significant uncertainty about the magnitude of these changes. One way to distinguish the influence of the Sun on the climate from other sources is to search for its influence in the pre-industrial period, when the influence of human activities on the atmosphere's composition and the Earth's surface properties can be neglected. Such studies require long time series of solar and geophysical parameters, ideally covering the whole Holocene. Here, we compute the total and spectral irradiance for the Holocene employing reconstructions of the open flux and sunspot number obtained from the cosmogenic isotope ¹⁴C. The model employed in this study is identical to the spectral and total irradiance reconstruction (SATIRE) models employed to study these parameters on time scales from days to centuries, but adapted to work with decadally averaged data. The model is tested by comparing to the total and spectral solar irradiance reconstructions from the sunspot number for the last 4 centuries. We also discuss limits and uncertainties of the model.
Fully-coupled analysis of jet mixing problems. Three-dimensional PNS model, SCIP3D
NASA Technical Reports Server (NTRS)
Wolf, D. E.; Sinha, N.; Dash, S. M.
1988-01-01
Numerical procedures formulated for the analysis of 3D jet mixing problems, as incorporated in the computer model SCIP3D, are described. The overall methodology closely parallels that developed in the earlier 2D axisymmetric jet mixing model, SCIPVIS. SCIP3D integrates the 3D parabolized Navier-Stokes (PNS) jet mixing equations, cast in mapped cartesian or cylindrical coordinates, employing the explicit MacCormack algorithm. A pressure-split variant of this algorithm is employed in subsonic regions, with a sublayer approximation utilized for treating the streamwise pressure component. SCIP3D contains both the k-ε and k-W turbulence models, and employs a two-component mixture approach to treat jet exhausts of arbitrary composition. Specialized grid procedures are used to adjust the grid growth in accordance with the growth of the jet, including a hybrid cartesian/cylindrical grid procedure for rectangular jets which moves the hybrid coordinate origin towards the flow origin as the jet transitions from a rectangular to a circular shape. Numerous calculations are presented for rectangular mixing problems, as well as for a variety of basic unit problems exhibiting the overall capabilities of SCIP3D.
Fast maximum likelihood estimation using continuous-time neural point process models.
Lepage, Kyle Q; MacDonald, Christopher J
2015-06-01
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np²) to O(qp²). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
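A sketch of the core idea: evaluating the continuous-time point-process log-likelihood, Σ_k log λ(t_k) − ∫₀ᵀ λ(t) dt, with a q-point Gauss-Legendre rule instead of fine time bins. The log-linear intensity model and the spike times below are hypothetical stand-ins, not the paper's hippocampal models.

```python
# Continuous-time point-process log-likelihood via Gauss-Legendre quadrature.
import numpy as np

def log_likelihood(beta, spike_times, T, q=60):
    # Hypothetical log-linear intensity; the paper's models differ.
    lam = lambda t: np.exp(beta[0] + beta[1] * np.cos(2 * np.pi * t / T))
    # q-point Gauss-Legendre rule mapped from [-1, 1] to [0, T]
    nodes, weights = np.polynomial.legendre.leggauss(q)
    t_q = 0.5 * T * (nodes + 1.0)
    integral = 0.5 * T * np.sum(weights * lam(t_q))
    return np.sum(np.log(lam(np.asarray(spike_times)))) - integral

spikes = [0.11, 0.35, 0.52, 0.77, 1.3, 1.9]   # synthetic spike times [s]
print(f"logL = {log_likelihood(np.array([1.0, 0.5]), spikes, T=2.0):.3f}")
# Cost scales with q (here 60, the order the paper finds adequate) rather
# than with the number of time bins n.
```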
ERIC Educational Resources Information Center
Ormerod, Dana E.
Kent State University (Ohio) Regional Campuses have conducted surveys of their applied business associate degree graduates in office management, accounting, business management, and their employers. Responses indicated the need for computer literacy appropriate to the employment situation. In addition, instructors of traditional liberal arts…
Novel approach for dam break flow modeling using computational intelligence
NASA Astrophysics Data System (ADS)
Seyedashraf, Omid; Mehrabi, Mohammad; Akhtari, Ali Akbar
2018-04-01
A new methodology based on computational intelligence (CI) systems is proposed and tested for modeling the classic 1D dam-break flow problem. The reason to seek a new solution lies in the shortcomings of the existing analytical and numerical models. These include the difficulty of using the exact solutions and the unwanted fluctuations which arise in the numerical results. In this research, the application of radial-basis-function (RBF) and multi-layer-perceptron (MLP) systems is detailed for the solution of twenty-nine dam-break scenarios. The models are developed using seven variables, i.e. the length of the channel, the depths of the up- and downstream sections, time, and distance as the inputs. Moreover, the depths and velocities of each computational node in the flow domain are considered as the model outputs. The models are validated against the analytical solution and the Lax-Wendroff and MacCormack FDM schemes. The findings indicate that the employed CI models are able to replicate the overall shape of the shock and rarefaction waves. Furthermore, the MLP system outperforms RBF and the tested numerical schemes. A new monolithic equation is proposed based on the best fitting model, which can be used as an efficient alternative to the existing piecewise analytic equations.
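A minimal sketch of the MLP variant under the stated input/output layout, with scikit-learn standing in for the authors' network (an assumption) and purely synthetic placeholder data in place of the twenty-nine scenarios:

```python
# MLP regression sketch: map (channel length, up/downstream depths, x, t)
# to a depth-like response; data and architecture are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.uniform(size=(500, 5))           # [L, h_up, h_down, x, t], scaled 0-1
y = 0.5 * (X[:, 1] + X[:, 2]) * np.exp(-X[:, 4])  # synthetic depth response

mlp = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
mlp.fit(X[:400], y[:400])                # train on 400 samples
print("holdout R^2:", round(mlp.score(X[400:], y[400:]), 3))
```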
GPU-accelerated element-free reverse-time migration with Gauss points partition
NASA Astrophysics Data System (ADS)
Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong
2018-06-01
An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, the costly computation and storage of large sparse matrices in the EFM, such as the mass matrix and the stiffness matrix, make the method difficult to apply to seismic modelling and RTM for a large velocity model. To address this storage and computation problem, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row (CSR) format to compress the intermediate large sparse matrices and simplify the operations by solving the linear equations with the CULA solver. To improve the computational efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
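The saving from the lumped mass matrix can be sketched in a few lines: row-sum lumping turns the consistent mass matrix into a diagonal, so the solve at each time step reduces to elementwise division. This toy SciPy example, with an invented 3x3 matrix and no GPU or CULA involvement, illustrates the idea only.

import numpy as np
import scipy.sparse as sp

# Toy consistent "mass matrix" assembled in COO form, then compressed
# to CSR, mimicking the storage of EFM system matrices.
rows = np.array([0, 0, 1, 1, 1, 2, 2])
cols = np.array([0, 1, 0, 1, 2, 1, 2])
vals = np.array([2.0, 0.5, 0.5, 2.0, 0.5, 0.5, 2.0])
M = sp.coo_matrix((vals, (rows, cols)), shape=(3, 3)).tocsr()

# Row-sum lumping: replace M by a diagonal matrix with the row sums.
# The solve M a = f then reduces to an elementwise division, avoiding
# a sparse linear solve at every time step.
m_lumped = np.asarray(M.sum(axis=1)).ravel()
f = np.array([1.0, 2.0, 3.0])
a = f / m_lumped
print(a)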
Computational analysis of high resolution unsteady airloads for rotor aeroacoustics
NASA Technical Reports Server (NTRS)
Quackenbush, Todd R.; Lam, C.-M. Gordon; Wachspress, Daniel A.; Bliss, Donald B.
1994-01-01
The study of helicopter aerodynamic loading for acoustics applications requires the application of efficient yet accurate simulations of the velocity field induced by the rotor's vortex wake. This report summarizes work to date on the development of such an analysis, which builds on the Constant Vorticity Contour (CVC) free wake model, previously implemented for the study of vibratory loading in the RotorCRAFT computer code. The present effort has focused on implementation of an airload reconstruction approach that computes high resolution airload solutions of rotor/rotor-wake interactions required for acoustics computations. Supplementary efforts on the development of improved vortex core modeling, unsteady aerodynamic effects, higher spatial resolution of rotor loading, and fast vortex wake implementations have substantially enhanced the capabilities of the resulting software, denoted RotorCRAFT/AA (AeroAcoustics). Results of validation calculations using recently acquired model rotor data show that by employing airload reconstruction it is possible to apply the CVC wake analysis with temporal and spatial resolution suitable for acoustics applications while reducing the computation time required by one to two orders of magnitude relative to that required by direct calculations. Promising correlation with this body of airload and noise data has been obtained for a variety of rotor configurations and operating conditions.
Dhar, Purbarun; Maganti, Lakshmi Sirisha; Harikrishnan, A R
2018-05-30
Electrorheological (ER) fluids are known to exhibit enhanced viscous effects under an electric field stimulus. The present article reports the hitherto unreported phenomenon of greatly enhanced thermal conductivity in such electro-active colloidal dispersions in the presence of an externally applied electric field. Typical ER fluids are synthesized employing dielectric fluids and nanoparticles, and experiments are performed employing an in-house designed setup. Greatly augmented thermal conductivity under the influence of the field was observed. Enhanced thermal conduction along the fibril structures formed under the field is theorized as the crux of the mechanism. The formation of fibril structures has also been verified experimentally employing microscopy. Based on classical models for ER fluids, a mathematical formalism has been developed to predict the propensity of chain formation and statistically feasible chain dynamics at given Mason numbers. Further, a thermal resistance network model is employed to computationally predict the enhanced thermal conduction across the fibrillary colloid microstructure. Good agreement between the mathematical model and the experimental observations is achieved. The dominant role of thermal conductivity over relative permittivity has been shown by proposing a modified Hashin-Shtrikman (HS) formalism. The findings have implications for the physical understanding and design of ER fluids as both 'smart' viscoelastic and thermally active materials.
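For orientation, the classical Hashin-Shtrikman bounds that the modified formalism builds on can be evaluated directly. The sketch below uses the standard two-phase bounds with invented fluid and particle conductivities, not the authors' modified expressions or measured values.

def hashin_shtrikman(k_f, k_p, phi):
    """Classical Hashin-Shtrikman bounds on the effective thermal
    conductivity of a two-phase composite: fluid k_f, particles k_p
    (k_p > k_f), particle volume fraction phi. The paper proposes a
    *modified* HS formalism; these are the standard bounds it builds on."""
    lower = k_f + phi / (1.0 / (k_p - k_f) + (1.0 - phi) / (3.0 * k_f))
    upper = k_p + (1.0 - phi) / (1.0 / (k_f - k_p) + phi / (3.0 * k_p))
    return lower, upper

# Illustrative values: oil-like fluid with ~5 vol% ceramic nanoparticles
print(hashin_shtrikman(k_f=0.15, k_p=30.0, phi=0.05))

Field-induced fibril formation drives the effective conductivity from near the lower (well-dispersed) bound towards the upper (chained) bound, which is the qualitative trend the experiments report.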
Numerical Simulations of Single Flow Element in a Nuclear Thermal Thrust Chamber
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both detailed and global thermo-fluid environments of a single flow element in a hypothetical solid-core nuclear thermal thrust chamber assembly. Several numerical and multi-physics thermo-fluid models, such as chemical reactions, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver. The numerical simulations of a single flow element provide a detailed thermo-fluid environment for thermal stress estimation and insight into the possible occurrence of mid-section corrosion. In addition, detailed conjugate heat transfer simulations were employed to develop the porosity models for efficient pressure drop and thermal load calculations.
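A porosity model of the type mentioned typically reduces the flow element to a Darcy-Forchheimer pressure-drop correlation. The sketch below shows that generic form with illustrative property values; the actual permeability and Forchheimer coefficients in the study were calibrated from the detailed conjugate heat transfer simulations.

from math import sqrt

def porous_pressure_drop(u, L, mu, rho, K, c_F):
    """Darcy-Forchheimer pressure drop across a porous flow element of
    length L: dp = L * (mu/K * u + rho * c_F / sqrt(K) * u**2).
    A generic porosity-model sketch; K (permeability) and c_F
    (Forchheimer coefficient) are placeholders here."""
    return L * (mu / K * u + rho * c_F / sqrt(K) * u ** 2)

# Illustrative (not actual design) numbers: hydrogen-like gas properties
print(porous_pressure_drop(u=50.0, L=0.9, mu=9e-6, rho=0.8,
                           K=1e-9, c_F=0.55))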
NASA Technical Reports Server (NTRS)
1988-01-01
Macrodyne, Inc.'s laser velocimeter (LV) is a system used in wind tunnel testing of aircraft, missiles and spacecraft. It employs electro-optical techniques to probe the flow field as the tunnel blows air over a model of the flight vehicle, determining the velocity and direction of the air at many points around the model. However, current state-of-the-art minicomputers cannot handle the massive flow of real-time data from several sources simultaneously. Langley therefore developed the Laser Velocimeter Autocovariance Buffer Interface (LVABI), an instrument that interconnects the LV and the computer. It acquires data from as many as six LV channels at high real-time data rates, stores the data in memory, and sends them to the computer on command. The LVABI has applications in a variety of research, industrial and defense functions requiring precise flow measurement.
NASA Technical Reports Server (NTRS)
Maskew, B.
1982-01-01
VSAERO is a computer program used to predict the nonlinear aerodynamic characteristics of arbitrary three-dimensional configurations in subsonic flow. Nonlinear effects of vortex separation and vortex surface interaction are treated in an iterative wake-shape calculation procedure, while the effects of viscosity are treated in an iterative loop coupling potential-flow and integral boundary-layer calculations. The program employs a surface singularity panel method using quadrilateral panels on which doublet and source singularities are distributed in a piecewise constant form. This user's manual provides a brief overview of the mathematical model, instructions for configuration modeling and a description of the input and output data. A listing of a sample case is included.
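The core of such a surface-singularity method is a dense influence matrix enforcing flow tangency at panel control points. The following toy 2D sketch reduces each "panel" to a point source on a circular body; it reproduces the matrix structure but none of VSAERO's piecewise-constant doublet and source panel integrals, and is purely illustrative.

import numpy as np

# Toy illustration of the influence-matrix structure of a surface-
# singularity method. Each "panel" is reduced to a point source at its
# control point; VSAERO itself uses piecewise-constant doublet and
# source distributions over quadrilateral panels, which this sketch
# does not reproduce.
theta = np.linspace(0, 2 * np.pi, 17)[:-1]        # circular "body"
ctrl = np.column_stack([np.cos(theta), np.sin(theta)])
normals = ctrl / np.linalg.norm(ctrl, axis=1, keepdims=True)
v_inf = np.array([1.0, 0.0])

n = len(ctrl)
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            A[i, j] = 0.5       # self-influence of a source sheet
            continue
        d = ctrl[i] - ctrl[j]
        vel = d / (2 * np.pi * (d @ d))   # 2D point-source velocity
        A[i, j] = vel @ normals[i]

# Flow tangency: source-induced normal velocity cancels the freestream
sigma = np.linalg.solve(A, -(normals @ v_inf))
print(sigma[:4])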
Wildfire simulation using LES with synthetic-velocity SGS models
NASA Astrophysics Data System (ADS)
McDonough, J. M.; Tang, Tingting
2016-11-01
Wildland fires are becoming more prevalent and intense worldwide as climate change leads to warmer, drier conditions, and large-eddy simulation (LES) is receiving increasing attention for fire spread predictions as computing power continues to improve. We report results from wildfire simulations over general terrain employing implicit LES for solution of the incompressible Navier-Stokes (N.-S.) and thermal energy equations with the Boussinesq approximation, altered with Darcy, Forchheimer and Brinkman extensions to represent forested regions as porous media with porosity and permeability varying in both space and time. We focus on subgrid-scale (SGS) behaviors computed with a synthetic-velocity model, a discrete dynamical system based on the poor man's N.-S. equations, and investigate the ability of this model to produce fire whirls (tornadoes of fire) at the (unresolved) SGS level.
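A discrete dynamical system of the flavor used for such synthetic-velocity SGS models can be sketched as two coupled logistic-type maps standing in for fluctuating SGS velocity components. The form, coefficients, and coupling below are schematic placeholders, not the published poor man's N.-S. equations or their bifurcation parameters.

# Schematic coupled-map sketch of a synthetic-velocity SGS model:
# a and b play the role of two fluctuating SGS velocity components.
# Parameter values are illustrative only.
def sgs_step(a, b, beta1=3.8, beta2=3.7, gamma1=0.05, gamma2=0.05):
    a_new = beta1 * a * (1.0 - a) - gamma1 * a * b
    b_new = beta2 * b * (1.0 - b) - gamma2 * a * b
    return a_new, b_new

a, b = 0.3, 0.6
traj = []
for _ in range(1000):
    a, b = sgs_step(a, b)
    traj.append((a, b))
print(traj[-3:])   # late-time chaotic "SGS fluctuation" samples

Because each map evaluation is a handful of arithmetic operations, such a model adds negligible cost per grid cell while still supplying temporally chaotic small-scale fluctuations.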
Acetylcholine-modulated plasticity in reward-driven navigation: a computational study.
Zannone, Sara; Brzosko, Zuzanna; Paulsen, Ole; Clopath, Claudia
2018-06-21
Neuromodulation plays a fundamental role in the acquisition of new behaviours. In previous experimental work, we showed that acetylcholine biases hippocampal synaptic plasticity towards depression, and the subsequent application of dopamine can retroactively convert depression into potentiation. We also demonstrated that incorporating this sequentially neuromodulated Spike-Timing-Dependent Plasticity (STDP) rule in a network model of navigation yields effective learning of changing reward locations. Here, we employ computational modelling to further characterize the effects of cholinergic depression on behaviour. We find that acetylcholine, by allowing learning from negative outcomes, enhances exploration over the action space. We show that this results in a variety of effects, depending on the structure of the model, the environment and the task. Interestingly, sequentially neuromodulated STDP also yields flexible learning, surpassing the performance of other reward-modulated plasticity rules.
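The sequential-neuromodulation idea can be caricatured with an eligibility-trace update: under acetylcholine (ACh), coincidence-driven changes are applied with a depressive sign, and dopamine (DA) arriving later reads the still-decaying trace out as potentiation. All time constants, gains, and stimulus schedules below are invented for illustration and are not the fitted values from the paper.

# Schematic sketch of a sequentially neuromodulated STDP rule.
TAU_E, ETA = 2.0, 0.05         # trace decay (s), learning rate
DT = 0.01

w, e = 0.5, 0.0
for step in range(int(10.0 / DT)):
    t = step * DT
    coincidence = 1.0 if 1.0 < t < 2.0 else 0.0   # pre/post pairing
    ach = 1.0 if t < 4.0 else 0.0                 # ACh present early
    da = 1.0 if 5.0 < t < 6.0 else 0.0            # DA arrives later

    e += DT * (-e / TAU_E + coincidence)          # eligibility trace
    if ach and not da:
        w -= ETA * DT * e                         # ACh: depression
    if da:
        w += 2.0 * ETA * DT * e                   # DA: retroactive LTP
print(round(w, 4))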
Carson, Anne; Troy, Douglas
2007-01-01
Nursing and computer science students and faculty worked with the American Red Cross to investigate the potential for information technology to provide Red Cross disaster services nurses with improved access to accurate community resources in times of disaster. Funded by a national three-year grant, this interdisciplinary partnership led to field testing of an information system to support local community disaster preparedness at seven Red Cross chapters across the United States. The field test results demonstrate the benefits of the technology and the value of interdisciplinary research. The work also created a sustainable learning and research model for the future. This paper describes the collaborative model employed in this interdisciplinary research and exemplifies the benefits to faculty and students of well-timed interdisciplinary and community collaboration. PMID:18600129
Electromagnetic Modeling of Human Body Using High Performance Computing
NASA Astrophysics Data System (ADS)
Ng, Cho-Kuen; Beall, Mark; Ge, Lixin; Kim, Sanghoek; Klaas, Ottmar; Poon, Ada
Realistic simulation of electromagnetic wave propagation in the actual human body can expedite the investigation of wirelessly powering implanted devices through coupling from external sources. The parallel electromagnetics code suite ACE3P, developed at SLAC National Accelerator Laboratory, is based on the finite element method for high-fidelity accelerator simulation and can be enhanced to model electromagnetic wave propagation in the human body. Starting with a CAD model of a human phantom characterized by a number of tissues, a finite element mesh representing the complex geometries of the individual tissues is built for simulation. Employing an optimal power source with a specific pattern of field distribution, the propagation and focusing of electromagnetic waves in the phantom have been demonstrated. Substantial speedup of the simulation is achieved by using multiple compute cores on supercomputers.
NASA Astrophysics Data System (ADS)
Vasefi, Fartash; MacKinnon, Nicholas; Farkas, Daniel L.
2014-03-01
We have developed a multimode imaging dermoscope that combines polarization and hyperspectral imaging with a computationally rapid analytical model. This approach employs specific spectral ranges of visible and near-infrared wavelengths for mapping the distribution of specific skin bio-molecules. It corrects for the melanin-hemoglobin misestimation common to other systems, without resorting to complex and computationally intensive tissue optical models that are prone to inaccuracies due to over-modeling. Various human skin measurements, including a melanocytic nevus and venous occlusion conditions, were investigated and compared with other ratiometric spectral imaging approaches. Access to a broad range of hyperspectral data in the visible and near-infrared range allows our algorithm to flexibly use different wavelength ranges for chromophore estimation while minimizing melanin-hemoglobin optical signature cross-talk.
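A minimal version of such chromophore mapping is linear spectral unmixing under a modified Beer-Lambert assumption, solving a small least-squares problem per pixel. The extinction spectra and reflectance values below are placeholders, not the calibrated coefficients of the instrument; real coefficients would come from published absorption spectra.

import numpy as np

# Sketch of chromophore mapping via linear spectral unmixing:
# attenuation -log(R) is modeled as a linear mix of melanin and
# hemoglobin extinction spectra plus a constant scattering offset.
wavelengths = np.array([540, 560, 580, 620, 660, 700, 740])   # nm
eps_mel = np.exp(-(wavelengths - 500) / 200.0)   # melanin-like decay
eps_hb = np.array([0.9, 0.7, 0.95, 0.2, 0.1, 0.05, 0.04])

reflectance = np.array([0.21, 0.26, 0.20, 0.48, 0.58, 0.64, 0.67])
attenuation = -np.log(reflectance)

# Design matrix: [melanin, hemoglobin, constant scattering offset]
A = np.column_stack([eps_mel, eps_hb, np.ones_like(eps_mel)])
c_mel, c_hb, offset = np.linalg.lstsq(A, attenuation, rcond=None)[0]
print(c_mel, c_hb, offset)

Restricting the fit to different wavelength subsets, as the abstract describes, amounts to selecting different rows of the design matrix before solving.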
Modelling brain emergent behaviours through coevolution of neural agents.
Maniadakis, Michail; Trahanias, Panos
2006-06-01
Recently, many research efforts have focused on modelling partial brain areas with the long-term goal of supporting the cognitive abilities of artificial organisms. Existing models usually suffer from heterogeneity, which makes their integration very difficult. The present work introduces a computational framework to address brain modelling tasks, with emphasis on the integrative performance of substructures. Moreover, the implemented models are embedded in a robotic platform to support its behavioural capabilities. We follow an agent-based approach in the design of substructures to support the autonomy of partial brain structures. Agents are formulated to allow the emergence of a desired behaviour after a certain amount of interaction with the environment. An appropriate collaborative coevolutionary algorithm, able to emphasize both the speciality of brain areas and their cooperative performance, is employed to support the design specification of agent structures. The effectiveness of the proposed approach is illustrated through the implementation of computational models of the motor cortex and hippocampus, which are successfully tested on a simulated mobile robot.
NASA Astrophysics Data System (ADS)
Fischer, T.; Naumov, D.; Sattler, S.; Kolditz, O.; Walther, M.
2015-11-01
We offer a versatile workflow to convert geological models built with the Paradigm GOCAD (Geological Object Computer Aided Design) software into the open-source VTU (Visualization Toolkit unstructured grid) format for use in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a means of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modeling, in our case employing the OpenGeoSys open-source numerical toolbox for groundwater flow simulations. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing the growing availability of computational power to simulate numerical models.
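The final step of a workflow like this, writing a VTU file that simulators can ingest, can be sketched with the open-source meshio library, used here as a stand-in for the authors' own converter. The points, tetrahedra, and the "MaterialIDs" layer tag below are invented for illustration.

import numpy as np
import meshio

# Minimal sketch: once the GOCAD surfaces have been meshed into
# tetrahedra (done upstream in the workflow), write a VTU file for an
# OpenGeoSys-style groundwater flow simulator.
points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
                   [1.0, 1.0, 1.0]])
tets = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])
mat_ids = np.array([0, 1])     # e.g. two geological layers

mesh = meshio.Mesh(points, cells=[("tetra", tets)],
                   cell_data={"MaterialIDs": [mat_ids]})
meshio.write("gocad_model.vtu", mesh)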
Zur, Hadas; Tuller, Tamir
2016-01-01
mRNA translation is the fundamental process of decoding the information encoded in mRNA molecules by the ribosome for the synthesis of proteins. The centrality of this process in various biomedical disciplines such as cell biology, evolution and biotechnology has encouraged the development of dozens of mathematical and computational models of translation in recent years. These models aim at capturing various biophysical aspects of the process. The objective of this review is to survey these models, focusing on those based on and/or validated against real large-scale genomic data. We consider aspects such as the complexity of the models, the biophysical aspects they capture and the predictions they may provide. Furthermore, we survey the central systems biology discoveries reported on their basis. This review demonstrates the fundamental advantages of employing computational biophysical translation models in general, and discusses the relative advantages of the different approaches and the challenges in the field. PMID:27591251
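The canonical biophysical model underlying many of the surveyed approaches is the totally asymmetric simple exclusion process (TASEP), in which ribosomes hop along codon sites under mutual exclusion. A minimal simulation with uniform, illustrative rates is sketched below; data-driven models of the kind the review emphasizes use codon-specific rates instead.

import numpy as np

# Minimal TASEP simulation of ribosome traffic. Lattice sites are
# codons; each ribosome hops forward only into an empty site.
rng = np.random.default_rng(1)
L, steps, alpha, beta = 100, 200_000, 0.3, 0.5   # illustrative rates
lattice = np.zeros(L, dtype=bool)
completed = 0

for _ in range(steps):
    i = rng.integers(-1, L)           # -1 triggers an initiation attempt
    if i == -1:
        if not lattice[0] and rng.random() < alpha:
            lattice[0] = True         # ribosome loads at the 5' end
    elif i == L - 1:
        if lattice[i] and rng.random() < beta:
            lattice[i] = False        # termination: protein released
            completed += 1
    elif lattice[i] and not lattice[i + 1]:
        lattice[i], lattice[i + 1] = False, True   # elongation hop

print("density:", lattice.mean(), "proteins:", completed)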
Zhang, Fan; Liu, Runsheng; Zheng, Jie
2016-12-23
Linking computational models of signaling pathways to predicted cellular responses such as gene expression regulation is a major challenge in computational systems biology. In this work, we present Sig2GRN, a Cytoscape plugin that is able to simulate time-course gene expression data given user-defined external stimuli to the signaling pathways. A generalized logical model is used in modeling the upstream signaling pathways. A Boolean model and a thermodynamics-based model are then employed to predict the downstream changes in gene expression based on the simulated dynamics of transcription factors in the signaling pathways. Our empirical case studies show that the simulation of Sig2GRN can predict changes in gene expression patterns induced by DNA damage signals and drug treatments. As a software tool for modeling cellular dynamics, Sig2GRN can facilitate studies in systems biology through hypothesis generation and wet-lab experimental design. http://histone.scse.ntu.edu.sg/Sig2GRN/.
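The Boolean layer of such a pipeline can be sketched as synchronous logical updates that switch a downstream gene on when its transcription-factor inputs become active. The node names and rules below are invented for illustration and are not Sig2GRN's curated pathway logic.

# Sketch of a Boolean signaling layer driving a downstream gene.
rules = {
    "DNA_damage": lambda s: s["DNA_damage"],          # external stimulus
    "ATM":        lambda s: s["DNA_damage"],
    "p53":        lambda s: s["ATM"] and not s["MDM2"],
    "MDM2":       lambda s: s["p53"],
    "p21_gene":   lambda s: s["p53"],                 # downstream target
}

state = {n: False for n in rules}
state["DNA_damage"] = True                            # apply stimulus

trajectory = []
for _ in range(6):                                    # synchronous updates
    state = {n: f(state) for n, f in rules.items()}
    trajectory.append(state["p21_gene"])
print(trajectory)   # time-course expression of the target gene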