Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio
2017-10-01
Respiratory assessment can be carried out by using motion capture systems. A geometrical model is mandatory in order to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters by using a motion capture system. The novel method, ie, prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on the tetrahedron's decomposition of the chest wall and integrated in a commercial motion capture system. Eight healthy volunteers were enrolled and 30 seconds of quiet breathing data collected from each of them. Results show a better agreement between volumes computed by the prism-based method and the spirometry (discrepancy of 2.23%, R 2 = .94) compared to the agreement between volumes computed by the conventional method and the spirometry (discrepancy of 3.56%, R 2 = .92). The proposed method also showed better performances in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in the breathing assessment via motion capture systems.
An Isopycnal Box Model with predictive deep-ocean structure for biogeochemical cycling applications
NASA Astrophysics Data System (ADS)
Goodwin, Philip
2012-07-01
To simulate global ocean biogeochemical tracer budgets a model must accurately determine both the volume and surface origins of each water-mass. Water-mass volumes are dynamically linked to the ocean circulation in General Circulation Models, but at the cost of high computational load. In computationally efficient Box Models the water-mass volumes are simply prescribed and do not vary when the circulation transport rates or water mass densities are perturbed. A new computationally efficient Isopycnal Box Model is presented in which the sub-surface box volumes are internally calculated from the prescribed circulation using a diffusive conceptual model of the thermocline, in which upwelling of cold dense water is balanced by a downward diffusion of heat. The volumes of the sub-surface boxes are set so that the density stratification satisfies an assumed link between diapycnal diffusivity, κd, and buoyancy frequency, N: κd = c/(Nα), where c and α are user prescribed parameters. In contrast to conventional Box Models, the volumes of the sub-surface ocean boxes in the Isopycnal Box Model are dynamically linked to circulation, and automatically respond to circulation perturbations. This dynamical link allows an important facet of ocean biogeochemical cycling to be simulated in a highly computationally efficient model framework.
Estimation of the fractional coverage of rainfall in climate models
NASA Technical Reports Server (NTRS)
Eltahir, E. A. B.; Bras, R. L.
1993-01-01
The fraction of the grid cell area covered by rainfall, mu, is an essential parameter in descriptions of land surface hydrology in climate models. A simple procedure is presented for estimating this fraction, based on extensive observations of storm areas and rainfall volumes. Storm area and rainfall volume are often linearly related; this relation can be used to compute the storm area from the volume of rainfall simulated by a climate model. A formula is developed for computing mu, which describes the dependence of the fractional coverage of rainfall on the season of the year, the geographical region, rainfall volume, and the spatial and temporal resolution of the model. The new formula is applied in computing mu over the Amazon region. Significant temporal variability in the fractional coverage of rainfall is demonstrated. The implications of this variability for the modeling of land surface hydrology in climate models are discussed.
Du, Fengzhou; Li, Binghang; Yin, Ningbei; Cao, Yilin; Wang, Yongqian
2017-03-01
Knowing the volume of a graft is essential in repairing alveolar bone defects. This study investigates the 2 advanced preoperative volume measurement methods: three-dimensional (3D) printing and computer-aided engineering (CAE). Ten unilateral alveolar cleft patients were enrolled in this study. Their computed tomographic data were sent to 3D printing and CAE software. A simulated graft was used on the 3D-printed model, and the graft volume was measured by water displacement. The volume calculated by CAE software used mirror-reverses technique. The authors compared the actual volumes of the simulated grafts with the CAE software-derived volumes. The average volume of the simulated bone grafts by 3D-printed models was 1.52 mL, higher than the mean volume of 1.47 calculated by CAE software. The difference between the 2 volumes was from -0.18 to 0.42 mL. The paired Student t test showed no statistically significant difference between the volumes derived from the 2 methods. This study demonstrated that the mirror-reversed technique by CAE software is as accurate as the simulated operation on 3D-printed models in unilateral alveolar cleft patients. These findings further validate the use of 3D printing and CAE technique in alveolar defect repairing.
Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
1997-01-01
The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.
Prediction of resource volumes at untested locations using simple local prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2006-01-01
This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. ?? Springer Science+Business Media, LLC 2007.
Material point method modeling in oil and gas reservoirs
Vanderheyden, William Brian; Zhang, Duan
2016-06-28
A computer system and method of simulating the behavior of an oil and gas reservoir including changes in the margins of frangible solids. A system of equations including state equations such as momentum, and conservation laws such as mass conservation and volume fraction continuity, are defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point model technique for numerically solving the system of discretized equations, to derive fluid flow at each of a plurality of mesh nodes in the modeled volume, and the velocity of at each of a plurality of particles representing the frangible material in the modeled volume. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume scale simulations.
Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.
2001-01-16
Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time" which especially enhances clinical use for in vivo applications. The real-time is achieved because of the novel geometric model constructed for the planned treatment volume which, in turn, allows for rapid calculations to be performed for simulated movements of particles along particle tracks there through. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume of the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer based increments along the primary direction of movement until a position of intersection occurs that represents a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. The foregoing represents an advance in computational times by multiple factors of time magnitudes.
The Voronoi volume and molecular representation of molar volume: equilibrium simple fluids.
Hunjan, Jagtar Singh; Eu, Byung Chan
2010-04-07
The Voronoi volume of simple fluids was previously made use of in connection with volume transport phenomena in nonequilibrium simple fluids. To investigate volume transport phenomena, it is important to develop a method to compute the Voronoi volume of fluids in nonequilibrium. In this work, as a first step to this goal, we investigate the equilibrium limit of the nonequilibrium Voronoi volume together with its attendant related molar (molal) and specific volumes. It is proved that the equilibrium Voronoi volume is equivalent to the molar (molal) volume. The latter, in turn, is proved equivalent to the specific volume. This chain of equivalences provides an alternative procedure of computing the equilibrium Voronoi volume from the molar volume/specific volume. We also show approximate methods of computing the Voronoi and molar volumes from the information on the pair correlation function. These methods may be employed for their quick estimation, but also provide some aspects of the fluid structure and its relation to the Voronoi volume. The Voronoi volume obtained from computer simulations is fitted to a function of temperature and pressure in the region above the triple point but below the critical point. Since the fitting function is given in terms of reduced variables for the Lennard-Jones (LJ) model and the kindred volumes (i.e., specific and molar volumes) are in essence equivalent to the equation of state, the formula obtained is a reduced equation state for simple fluids obeying the LJ model potential in the range of temperature and pressure examined and hence can be used for other simple fluids.
NASA Astrophysics Data System (ADS)
Hegedűs, Árpád
2018-03-01
In this paper, using the light-cone lattice regularization, we compute the finite volume expectation values of the composite operator \\overline{Ψ}Ψ between pure fermion states in the Massive Thirring Model. In the light-cone regularized picture, this expectation value is related to 2-point functions of lattice spin operators being located at neighboring sites of the lattice. The operator \\overline{Ψ}Ψ is proportional to the trace of the stress-energy tensor. This is why the continuum finite volume expectation values can be computed also from the set of non-linear integral equations (NLIE) governing the finite volume spectrum of the theory. Our results for the expectation values coming from the computation of lattice correlators agree with those of the NLIE computations. Previous conjectures for the LeClair-Mussardo-type series representation of the expectation values are also checked.
NASA Astrophysics Data System (ADS)
Liu, George S.; Kim, Jinkyung; Applegate, Brian E.; Oghalai, John S.
2017-07-01
Diseases that cause hearing loss and/or vertigo in humans such as Meniere's disease are often studied using animal models. The volume of endolymph within the inner ear varies with these diseases. Here, we used a mouse model of increased endolymph volume, endolymphatic hydrops, to develop a computer-aided objective approach to measure endolymph volume from images collected in vivo using optical coherence tomography. The displacement of Reissner's membrane from its normal position was measured in cochlear cross sections. We validated our computer-aided measurements with manual measurements and with trained observer labels. This approach allows for computer-aided detection of endolymphatic hydrops in mice, with test performance showing sensitivity of 91% and specificity of 87% using a running average of five measurements. These findings indicate that this approach is accurate and reliable for classifying endolymphatic hydrops and quantifying endolymph volume.
NASA Technical Reports Server (NTRS)
Louis, P.; Gokhale, A. M.
1996-01-01
Computer simulation is a powerful tool for analyzing the geometry of three-dimensional microstructure. A computer simulation model is developed to represent the three-dimensional microstructure of a two-phase particulate composite where particles may be in contact with one another but do not overlap significantly. The model is used to quantify the "connectedness" of the particulate phase of a polymer matrix composite containing hollow carbon particles in a dielectric polymer resin matrix. The simulations are utilized to estimate the morphological percolation volume fraction for electrical conduction, and the effective volume fraction of the particles that actually take part in the electrical conduction. The calculated values of the effective volume fraction are used as an input for a self-consistent physical model for electrical conductivity. The predicted values of electrical conductivity are in very good agreement with the corresponding experimental data on a series of specimens having different particulate volume fraction.
4D-CT motion estimation using deformable image registration and 5D respiratory motion modeling.
Yang, Deshan; Lu, Wei; Low, Daniel A; Deasy, Joseph O; Hope, Andrew J; El Naqa, Issam
2008-10-01
Four-dimensional computed tomography (4D-CT) imaging technology has been developed for radiation therapy to provide tumor and organ images at the different breathing phases. In this work, a procedure is proposed for estimating and modeling the respiratory motion field from acquired 4D-CT imaging data and predicting tissue motion at the different breathing phases. The 4D-CT image data consist of series of multislice CT volume segments acquired in ciné mode. A modified optical flow deformable image registration algorithm is used to compute the image motion from the CT segments to a common full volume 3D-CT reference. This reference volume is reconstructed using the acquired 4D-CT data at the end-of-exhalation phase. The segments are optimally aligned to the reference volume according to a proposed a priori alignment procedure. The registration is applied using a multigrid approach and a feature-preserving image downsampling maxfilter to achieve better computational speed and higher registration accuracy. The registration accuracy is about 1.1 +/- 0.8 mm for the lung region according to our verification using manually selected landmarks and artificially deformed CT volumes. The estimated motion fields are fitted to two 5D (spatial 3D+tidal volume+airflow rate) motion models: forward model and inverse model. The forward model predicts tissue movements and the inverse model predicts CT density changes as a function of tidal volume and airflow rate. A leave-one-out procedure is used to validate these motion models. The estimated modeling prediction errors are about 0.3 mm for the forward model and 0.4 mm for the inverse model.
A computer model of the pediatric circulatory system for testing pediatric assist devices.
Giridharan, Guruprasad A; Koenig, Steven C; Mitchell, Michael; Gartner, Mark; Pantalos, George M
2007-01-01
Lumped parameter computer models of the pediatric circulatory systems for 1- and 4-year-olds were developed to predict hemodynamic responses to mechanical circulatory support devices. Model parameters, including resistance, compliance and volume, were adjusted to match hemodynamic pressure and flow waveforms, pressure-volume loops, percent systole, and heart rate of pediatric patients (n = 6) with normal ventricles. Left ventricular failure was modeled by adjusting the time-varying compliance curve of the left heart to produce aortic pressures and cardiac outputs consistent with those observed clinically. Models of pediatric continuous flow (CF) and pulsatile flow (PF) ventricular assist devices (VAD) and intraaortic balloon pump (IABP) were developed and integrated into the heart failure pediatric circulatory system models. Computer simulations were conducted to predict acute hemodynamic responses to PF and CF VAD operating at 50%, 75% and 100% support and 2.5 and 5 ml IABP operating at 1:1 and 1:2 support modes. The computer model of the pediatric circulation matched the human pediatric hemodynamic waveform morphology to within 90% and cardiac function parameters with 95% accuracy. The computer model predicted PF VAD and IABP restore aortic pressure pulsatility and variation in end-systolic and end-diastolic volume, but diminish with increasing CF VAD support.
Application of Local Discretization Methods in the NASA Finite-Volume General Circulation Model
NASA Technical Reports Server (NTRS)
Yeh, Kao-San; Lin, Shian-Jiann; Rood, Richard B.
2002-01-01
We present the basic ideas of the dynamics system of the finite-volume General Circulation Model developed at NASA Goddard Space Flight Center for climate simulations and other applications in meteorology. The dynamics of this model is designed with emphases on conservative and monotonic transport, where the property of Lagrangian conservation is used to maintain the physical consistency of the computational fluid for long-term simulations. As the model benefits from the noise-free solutions of monotonic finite-volume transport schemes, the property of Lagrangian conservation also partly compensates the accuracy of transport for the diffusion effects due to the treatment of monotonicity. By faithfully maintaining the fundamental laws of physics during the computation, this model is able to achieve sufficient accuracy for the global consistency of climate processes. Because the computing algorithms are based on local memory, this model has the advantage of efficiency in parallel computation with distributed memory. Further research is yet desirable to reduce the diffusion effects of monotonic transport for better accuracy, and to mitigate the limitation due to fast-moving gravity waves for better efficiency.
Network Aggregation in Transportation Planning : Volume I : Summary and Survey
DOT National Transportation Integrated Search
1978-04-01
Volume 1 summarizes research on network aggregation in transportation models. It includes a survey of network aggregation practices, definition of an extraction aggregation model, computational results on a heuristic implementation of the model, and ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiveland, W.A.; Oberjohn, W.J.; Cornelius, D.K.
1985-12-01
This report summarizes the work conducted during a 30-month contract with the United States Department of Energy (DOE) Pittsburgh Energy Technology Center (PETC). The general objective is to develop and verify a computer code capable of modeling the major aspects of pulverized coal combustion. Achieving this objective will lead to design methods applicable to industrial and utility furnaces. The combustion model (COMO) is based mainly on an existing Babcock and Wilcox (B and W) computer program. The model consists of a number of relatively independent modules that represent the major processes involved in pulverized coal combustion: flow, heterogeneous and homogeneousmore » chemical reaction, and heat transfer. As models are improved or as new ones are developed, this modular structure allows portions of the COMO model to be updated with minimal impact on the remainder of the program. The report consists of two volumes. This volume (Volume 1) contains a technical summary of the COMO model, results of predictions for gas phase combustion, pulverized coal combustion, and a detailed description of the COMO model. Volume 2 is the Users Guide for COMO and contains detailed instructions for preparing the input data and a description of the program output. Several example cases have been included to aid the user in usage of the computer program for pulverized coal applications. 66 refs., 41 figs., 21 tabs.« less
Salient regions detection using convolutional neural networks and color volume
NASA Astrophysics Data System (ADS)
Liu, Guang-Hai; Hou, Yingkun
2018-03-01
Convolutional neural network is an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues, and then we integrate depth cues and color volume to saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on MSRA1000 and ECSSD datasets.
Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods
NASA Astrophysics Data System (ADS)
Davis, A. D.
2015-12-01
The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI)---here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost.Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity analysis to help answer this question, and make the computation of sensitivity indices computationally tractable using a combination of polynomial chaos and Monte Carlo techniques.
Estimation of regional gas and tissue volumes of the lung in supine man using computed tomography.
Denison, D M; Morgan, M D; Millar, A B
1986-08-01
This study was intended to discover how well computed tomography could recover the volume and weight of lung like foams in a body like shell, and then how well it could recover the volume and weight of the lungs in supine man. Model thoraces were made with various loaves of bread submerged in water. Computed tomography scans recovered the volume of the model lungs (true volume range 250-12,500 ml) within +0.2 (SD 68) ml and their weights (true range 72-3125 g) within +30 (78) g. Scans also recovered successive injections of 50 ml of water, within +/- 5 ml. Scans in 12 healthy supine men recovered their vital capacities, total lung capacities (TLC), and predicted tissue volumes with comparable accuracy. At total lung capacity the mean tissue volume of single lungs was 431 (64) ml and at residual volume (RV) it was 427 (63) ml. Tissue volume was then used to match inspiratory and expiratory slices and calculate regional ventilation. Throughout the mid 90% of lung the RV/TLC ratio was fairly constant--mean 21% (5%). New methods of presenting such regional data graphically and automatically are also described.
Multi-Scale Computational Models for Electrical Brain Stimulation
Seo, Hyeon; Jun, Sung C.
2017-01-01
Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have proposed computational modeling studies for a decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we overview here recent multi-scale modeling studies; we focused on approaches that coupled a simplified or high-resolution volume conductor head model and multi-compartmental models of cortical neurons, and constructed realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476
NASA Technical Reports Server (NTRS)
Sadler, S. G.
1972-01-01
A mathematical model and computer program was implemented to study the main rotor free wake geometry effects on helicopter rotor blade air loads and response in steady maneuvers. Volume 1 (NASA CR-2110) contains the theoretical formulation and analysis of results. Volume 2 contains the computer program listing.
Preoperative computer simulation for planning of vascular access surgery in hemodialysis patients.
Zonnebeld, Niek; Huberts, Wouter; van Loon, Magda M; Delhaas, Tammo; Tordoir, Jan H M
2017-03-06
The arteriovenous fistula (AVF) is the preferred vascular access for hemodialysis patients. Unfortunately, 20-40% of all constructed AVFs fail to mature (FTM), and are therefore not usable for hemodialysis. AVF maturation importantly depends on postoperative blood volume flow. Predicting patient-specific immediate postoperative flow could therefore support surgical planning. A computational model predicting blood volume flow is available, but the effect of blood flow predictions on the clinical endpoint of maturation (at least 500 mL/min blood volume flow, diameter of the venous cannulation segment ≥4 mm) remains undetermined. A multicenter randomized clinical trial will be conducted in which 372 patients will be randomized (1:1 allocation ratio) between conventional healthcare and computational model-aided decision making. All patients are extensively examined using duplex ultrasonography (DUS) during preoperative assessment (12 venous and 11 arterial diameter measurements; 3 arterial volume flow measurements). The computational model will predict patient-specific immediate postoperative blood volume flows based on this DUS examination. Using these predictions, the preferred AVF configuration is recommended for the individual patient (radiocephalic, brachiocephalic, or brachiobasilic). The primary endpoint is FTM rate at six weeks in both groups, secondary endpoints include AVF functionality and patency rates at 6 and 12 months postoperatively. ClinicalTrials.gov (NCT02453412), and ToetsingOnline.nl (NL51610.068.14).
NASA Technical Reports Server (NTRS)
Nakazawa, S.
1987-01-01
This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes that permit more accurate and efficient three-dimensional analysis of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. This report is presented in two volumes. Volume 1 describes effort performed under Task 4B, Special Finite Element Special Function Models, while Volume 2 concentrates on Task 4C, Advanced Special Functions Models.
DOT National Transportation Integrated Search
1974-08-01
Volume 5 describes the DELTA Simulation Model. It includes all documentation of the DELTA (Determine Effective Levels of Task Automation) computer simulation developed by TRW for use in the Automation Applications Study. Volume 5A includes a user's m...
Computing volume potentials for noninvasive imaging of cardiac excitation.
van der Graaf, A W Maurits; Bhagirath, Pranav; van Driel, Vincent J H M; Ramanna, Hemanth; de Hooge, Jacques; de Groot, Natasja M S; Götte, Marco J W
2015-03-01
In noninvasive imaging of cardiac excitation, the use of body surface potentials (BSP) rather than body volume potentials (BVP) has been favored due to enhanced computational efficiency and reduced modeling effort. Nowadays, increased computational power and the availability of open source software enable the calculation of BVP for clinical purposes. In order to illustrate the possible advantages of this approach, the explanatory power of BVP is investigated using a rectangular tank filled with an electrolytic conductor and a patient specific three dimensional model. MRI images of the tank and of a patient were obtained in three orthogonal directions using a turbo spin echo MRI sequence. MRI images were segmented in three dimensional using custom written software. Gmsh software was used for mesh generation. BVP were computed using a transfer matrix and FEniCS software. The solution for 240,000 nodes, corresponding to a resolution of 5 mm throughout the thorax volume, was computed in 3 minutes. The tank experiment revealed that an increased electrode surface renders the position of the 4 V equipotential plane insensitive to mesh cell size and reduces simulated deviations. In the patient-specific model, the impact of assigning a different conductivity to lung tissue on the distribution of volume potentials could be visualized. Generation of high quality volume meshes and computation of BVP with a resolution of 5 mm is feasible using generally available software and hardware. Estimation of BVP may lead to an improved understanding of the genesis of BSP and sources of local inaccuracies. © 2014 Wiley Periodicals, Inc.
Airport Performance Model : Volume 2 - User's Manual and Program Documentation
DOT National Transportation Integrated Search
1978-10-01
Volume II contains a User's manual and program documentation for the Airport Performance Model. This computer-based model is written in FORTRAN IV for the DEC-10. The user's manual describes the user inputs to the interactive program and gives sample...
NASA Technical Reports Server (NTRS)
Burgin, G. H.; Fogel, L. J.; Phelps, J. P.
1975-01-01
A technique for computer simulation of air combat is described. Volume 1 decribes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers with performance models of specific fighter aircraft. In the batch processing version the flight paths of two aircraft engaged in interactive aerial combat and controlled by the same logic are computed. The realtime version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes were compared with those of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
DOT National Transportation Integrated Search
1973-02-01
The volume presents the models used to analyze basic features of the system, establish feasibility of techniques, and evaluate system performance. The models use analytical expressions and computer simulations to represent the relationship between sy...
NASA Technical Reports Server (NTRS)
Stocks, Dana R.
1986-01-01
The Dynamic Gas Temperature Measurement System compensation software accepts digitized data from two different diameter thermocouples and computes a compensated frequency response spectrum for one of the thermocouples. Detailed discussions of the physical system, analytical model, and computer software are presented in this volume and in Volume 1 of this report under Task 3. Computer program software restrictions and test cases are also presented. Compensated and uncompensated data may be presented in either the time or frequency domain. Time domain data are presented as instantaneous temperature vs time. Frequency domain data may be presented in several forms such as power spectral density vs frequency.
Computing Critical Properties with Yang-Yang Anomalies
NASA Astrophysics Data System (ADS)
Orkoulas, Gerassimos; Cerdeirina, Claudio; Fisher, Michael
2017-01-01
Computation of the thermodynamics of fluids in the critical region is a challenging task owing to divergence of the correlation length and lack of particle-hole symmetries found in Ising or lattice-gas models. In addition, analysis of experiments and simulations reveals a Yang-Yang (YY) anomaly which entails sharing of the specific heat singularity between the pressure and the chemical potential. The size of the YY anomaly is measured by the YY ratio Rμ =C μ /CV of the amplitudes of C μ = - T d2 μ /dT2 and of the total specific heat CV. A ``complete scaling'' theory, in which the pressure mixes into the scaling fields, accounts for the YY anomaly. In Phys. Rev. Lett. 116, 040601 (2016), compressible cell gas (CCG) models which exhibit YY and singular diameter anomalies, have been advanced for near-critical fluids. In such models, the individual cell volumes are allowed to fluctuate. The thermodynamics of CCGs can be computed through mapping onto the Ising model via the seldom-used great grand canonical ensemble. The computations indicate that local free volume fluctuations are the origins of the YY effects. Furthermore, local energy-volume coupling (to model water) is another crucial factor underlying the phenomena.
"Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; vanGelder, Allen
1999-01-01
During the four years of this grant (including the one year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple zone grids, and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchial data model to approximate samples in the regions covered by each node of the tree, and an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
SoftWAXS: a computational tool for modeling wide-angle X-ray solution scattering from biomolecules.
Bardhan, Jaydeep; Park, Sanghyun; Makowski, Lee
2009-10-01
This paper describes a computational approach to estimating wide-angle X-ray solution scattering (WAXS) from proteins, which has been implemented in a computer program called SoftWAXS. The accuracy and efficiency of SoftWAXS are analyzed for analytically solvable model problems as well as for proteins. Key features of the approach include a numerical procedure for performing the required spherical averaging and explicit representation of the solute-solvent boundary and the surface of the hydration layer. These features allow the Fourier transform of the excluded volume and hydration layer to be computed directly and with high accuracy. This approach will allow future investigation of different treatments of the electron density in the hydration shell. Numerical results illustrate the differences between this approach to modeling the excluded volume and a widely used model that treats the excluded-volume function as a sum of Gaussians representing the individual atomic excluded volumes. Comparison of the results obtained here with those from explicit-solvent molecular dynamics clarifies shortcomings inherent to the representation of solvent as a time-averaged electron-density profile. In addition, an assessment is made of how the calculated scattering patterns depend on input parameters such as the solute-atom radii, the width of the hydration shell and the hydration-layer contrast. These results suggest that obtaining predictive calculations of high-resolution WAXS patterns may require sophisticated treatments of solvent.
HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCann, R.A.; Lowery, P.S.
1987-10-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equationsmore » for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.« less
A multiscale MDCT image-based breathing lung model with time-varying regional ventilation
Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long
2012-01-01
A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749
HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCann, R.A.; Lowery, P.S.; Lessor, D.L.
1987-09-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations formore » conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.« less
Computing Incompressible Flows With Free Surfaces
NASA Technical Reports Server (NTRS)
Kothe, D.
1994-01-01
RIPPLE computer program models transient, two-dimensional flows of incompressible fluids with surface tension on free surfaces of general shape. Surface tension modeled as volume force derived from continuum-surface-force model, giving RIPPLE both robustness and accuracy in modeling surface-tension effects at free surface. Also models wall adhesion effects. Written in FORTRAN 77.
Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald
2016-01-01
Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 mm(3)) nor model-based (26.87 ± 2.99 mm(3)) measurements were significantly different from manual volume measurements (26.65 ± 4.0 mm(3)). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.
Gøthesen, Øystein; Slover, James; Havelin, Leif; Askildsen, Jan Erik; Malchau, Henrik; Furnes, Ove
2013-07-06
The use of Computer Assisted Surgery (CAS) for knee replacements is intended to improve the alignment of knee prostheses in order to reduce the number of revision operations. Is the cost effectiveness of computer assisted surgery influenced by patient volume and age? By employing a Markov model, we analysed the cost effectiveness of computer assisted surgery versus conventional arthroplasty with respect to implant survival and operation volume in two theoretical Norwegian age cohorts. We obtained mortality and hospital cost data over a 20-year period from Norwegian registers. We presumed that the cost of an intervention would need to be below NOK 500,000 per QALY (Quality Adjusted Life Year) gained, to be considered cost effective. The added cost of computer assisted surgery, provided this has no impact on implant survival, is NOK 1037 and NOK 1414 respectively for 60 and 75-year-olds per quality-adjusted life year at a volume of 25 prostheses per year, and NOK 128 and NOK 175 respectively at a volume of 250 prostheses per year. Sensitivity analyses showed that the 10-year implant survival in cohort 1 needs to rise from 89.8% to 90.6% at 25 prostheses per year, and from 89.8 to 89.9% at 250 prostheses per year for computer assisted surgery to be considered cost effective. In cohort 2, the required improvement is a rise from 95.1% to 95.4% at 25 prostheses per year, and from 95.10% to 95.14% at 250 prostheses per year. The cost of using computer navigation for total knee replacements may be acceptable for 60-year-old as well as 75-year-old patients if the technique increases the implant survival rate just marginally, and the department has a high operation volume. A low volume department might not achieve cost-effectiveness unless computer navigation has a more significant impact on implant survival, thus may defer the investments until such data are available.
Analytic models of ducted turbomachinery tone noise sources. Volume 2: Subprogram documentation
NASA Technical Reports Server (NTRS)
Clark, T. L.; Ganz, U. W.; Graf, G. A.; Westall, J. S.
1974-01-01
Analytical models were developed for computing the periodic sound pressures of subsonic fans in an infinite hardwall annular duct with uniform flow. The computer programs are described which are used for numerical computations of sound pressure mode amplitudes. The data are applied to the acoustic properties of turbomachinery.
The Sortie-Generation Model System. Volume 5. Maintenance Subsystem
1981-09-01
Compuger RoanutI f and moidel 11, Computer operatinS system 17, Proorammino largualviso IS. Numlier of .ugic proltsm Hoewl -3 CSCobol 600 stuscomentm...THE SORTIE-GENERATION MODEL SYSTEM OC’ VOLUME V MAINTENANCE SUBSYSTEM September 1981 Robert S. Greenberg 05$ Prepared pursuant to Department of...Generation Model System Volume V Maintenance Subsystem 6. PERFORMING ORG. REPORT NUMBER LMI Task- L102 7. AUTHOR(a) 8. CONTRACT OR GRANT NUMBER(a
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
This report describes the structure and operation of prototype computer programs developed for a Monte Carlo simulation model, GENESIS, and for two analytical models, OPCON and OPPLAN. It includes input data requirements and sample test cases.
In the EPA document Predicting Attenuation of Viruses During Percolation in Soils 1. Probabilistic Model the conceptual, theoretical, and mathematical foundations for a predictive screening model were presented. In this current volume we present a User's Guide for the computer mo...
A Pulsatile Cardiovascular Computer Model for Teaching Heart-Blood Vessel Interaction.
ERIC Educational Resources Information Center
Campbell, Kenneth; And Others
1982-01-01
Describes a model which gives realistic predictions of pulsatile pressure, flow, and volume events in the cardiovascular system. Includes computer oriented laboratory exercises for veterinary and graduate students; equations of the dynamic and algebraic models; and a flow chart for the cardiovascular teaching program. (JN)
The Marshall Engineering Thermosphere (MET) Model. Volume 1; Technical Description
NASA Technical Reports Server (NTRS)
Smith, R. E.
1998-01-01
Volume 1 presents a technical description of the Marshall Engineering Thermosphere (MET) model atmosphere and a summary of its historical development. Various programs developed to augment the original capability of the model are discussed in detail. The report also describes each of the individual subroutines developed to enhance the model. Computer codes for these subroutines are contained in four appendices.
Model implementation for dynamic computation of system cost
NASA Astrophysics Data System (ADS)
Levri, J.; Vaccari, D.
The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink' for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink' model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed, based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into an optimization search (non-derivative) algorithm to predict parameter combinations that result in reduced objective function values.
NASA Astrophysics Data System (ADS)
Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.
2017-03-01
Osteoporosis is associated with increased fracture risk. Recent advancement in the area of in vivo imaging allows segmentation of trabecular bone (TB) microstructures, which is a known key determinant of bone strength and fracture risk. An accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network, and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. Also, the method generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied on the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold-model over a cubical volume of interest (VOI), and its correlation with the YM, computed using micro-CT based conventional finite-element analysis over the same VOI, was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. This preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB, and demonstrate the feasibility of a new direct mechanical strain-strain analysis on a nonlinear manifold model of a highly complex biological structure.
Modeling dam-break flows using finite volume method on unstructured grid
USDA-ARS?s Scientific Manuscript database
Two-dimensional shallow water models based on unstructured finite volume method and approximate Riemann solvers for computing the intercell fluxes have drawn growing attention because of their robustness, high adaptivity to complicated geometry and ability to simulate flows with mixed regimes and di...
COAL PREPARATION PLANT COMPUTER MODEL: VOLUME I. USER DOCUMENTATION
The two-volume report describes a steady state modeling system that simulates the performance of coal preparation plants. The system was developed originally under the technical leadership of the U.S. Bureau of Mines and the sponsorship of the EPA. The modified form described in ...
Geometry modeling and grid generation using 3D NURBS control volume
NASA Technical Reports Server (NTRS)
Yu, Tzu-Yi; Soni, Bharat K.; Shih, Ming-Hsin
1995-01-01
The algorithms for volume grid generation using NURBS geometric representation are presented. The parameterization algorithm is enhanced to yield a desired physical distribution on the curve, surface and volume. This approach bridges the gap between CAD surface/volume definition and surface/volume grid generation. Computational examples associated with practical configurations have shown the utilization of these algorithms.
NASA Astrophysics Data System (ADS)
Tiwari, Vaibhav
2018-07-01
The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, sensitive volume, of a given search to a given simulated source population, is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. Sensitive volume is estimated, by using Monte-Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black holes (BBH) merger rates. However, as the scope of population analysis broaden in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models; thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, and the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single detector sensitivity, the method is accurate within statistical errors, comes at no added cost and requires minimal computational resources.
NASA Technical Reports Server (NTRS)
2000-01-01
Dr. Marc Pusey (seated) and Dr. Craig Kundrot use computers to analyze x-ray maps and generate three-dimensional models of protein structures. With this information, scientists at Marshall Space Flight Center can learn how proteins are made and how they work. The computer screen depicts a proten structure as a ball-and-stick model. Other models depict the actual volume occupied by the atoms, or the ribbon-like structures that are crucial to a protein's function.
Chvetsov, Alexei V; Dong, Lei; Palta, Jantinder R; Amdur, Robert J
2009-10-01
To develop a fast computational radiobiologic model for quantitative analysis of tumor volume during fractionated radiotherapy. The tumor-volume model can be useful for optimizing image-guidance protocols and four-dimensional treatment simulations in proton therapy that is highly sensitive to physiologic changes. The analysis is performed using two approximations: (1) tumor volume is a linear function of total cell number and (2) tumor-cell population is separated into four subpopulations: oxygenated viable cells, oxygenated lethally damaged cells, hypoxic viable cells, and hypoxic lethally damaged cells. An exponential decay model is used for disintegration and removal of oxygenated lethally damaged cells from the tumor. We tested our model on daily volumetric imaging data available for 14 head-and-neck cancer patients treated with an integrated computed tomography/linear accelerator system. A simulation based on the averaged values of radiobiologic parameters was able to describe eight cases during the entire treatment and four cases partially (50% of treatment time) with a maximum 20% error. The largest discrepancies between the model and clinical data were obtained for small tumors, which may be explained by larger errors in the manual tumor volume delineation procedure. Our results indicate that the change in gross tumor volume for head-and-neck cancer can be adequately described by a relatively simple radiobiologic model. In future research, we propose to study the variation of model parameters by fitting to clinical data for a cohort of patients with head-and-neck cancer and other tumors. The potential impact of other processes, like concurrent chemotherapy, on tumor volume should be evaluated.
Predictive Software Cost Model Study. Volume I. Final Technical Report.
1980-06-01
development phase to identify computer resources necessary to support computer programs after transfer of program manangement responsibility and system... classical model development with refinements specifically applicable to avionics systems. The refinements are the result of the Phase I literature search
Study of Two-Dimensional Compressible Non-Acoustic Modeling of Stirling Machine Type Components
NASA Technical Reports Server (NTRS)
Tew, Roy C., Jr.; Ibrahim, Mounir B.
2001-01-01
A two-dimensional (2-D) computer code was developed for modeling enclosed volumes of gas with oscillating boundaries, such as Stirling machine components. An existing 2-D incompressible flow computer code, CAST, was used as the starting point for the project. CAST was modified to use the compressible non-acoustic Navier-Stokes equations to model an enclosed volume including an oscillating piston. The devices modeled have low Mach numbers and are sufficiently small that the time required for acoustics to propagate across them is negligible. Therefore, acoustics were excluded to enable more time efficient computation. Background information about the project is presented. The compressible non-acoustic flow assumptions are discussed. The governing equations used in the model are presented in transport equation format. A brief description is given of the numerical methods used. Comparisons of code predictions with experimental data are then discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirkovic, D; Peeler, C; Grosshans, D
Purpose: To develop a model of the relative biological effectiveness (RBE) of protons as a function of dose and linear energy transfer (LET) for induction of brain necrosis using clinical data. Methods: In this study, treatment planning information was exported from a clinical treatment planning system (TPS) and used to construct a detailed Monte Carlo model of the patient and the beam delivery system. The physical proton dose and LET were computed in each voxel of the patient volume using Monte Carlo particle transport. A follow-up magnetic resonance imaging (MRI) study registered to the treatment planning CT was used tomore » determine the region of the necrosis in the brain volume. Both, the whole brain and the necrosis volumes were segmented from the computed tomography (CT) dataset using the contours drawn by a physician and the corresponding voxels were binned with respect to dose and LET. The brain necrosis probability was computed as a function of dose and LET by dividing the total volume of all necrosis voxels with a given dose and LET with the corresponding total brain volume resulting in a set of NTCP-like curves (probability as a function of dose parameterized by LET). Results: The resulting model shows dependence on both dose and LET indicating the weakness of the constant RBE model for describing the brain toxicity. To the best of our knowledge the constant RBE model is currently used in all clinical applications which may Result in increased rate of brain toxicities in patients treated with protons. Conclusion: Further studies are needed to develop more accurate brain toxicity models for patients treated with protons and other heavy ions.« less
A flow-simulation model of the tidal Potomac River
Schaffranek, Raymond W.
1987-01-01
A one-dimensional model capable of simulating flow in a network of interconnected channels has been applied to the tidal Potomac River including its major tributaries and embayments between Washington, D.C., and Indian Head, Md. The model can be used to compute water-surface elevations and flow discharges at any of 66 predetermined locations or at any alternative river cross sections definable within the network of channels. In addition, the model can be used to provide tidal-interchange flow volumes and to evaluate tidal excursions and the flushing properties of the riverine system. Comparisons of model-computed results with measured watersurface elevations and discharges demonstrate the validity and accuracy of the model. Tidal-cycle flow volumes computed by the calibrated model have been verified to be within an accuracy of ? 10 percent. Quantitative characteristics of the hydrodynamics of the tidal river are identified and discussed. The comprehensive flow data provided by the model can be used to better understand the geochemical, biological, and other processes affecting the river's water quality.
A novel method for the evaluation of uncertainty in dose-volume histogram computation.
Henríquez, Francisco Cutanda; Castrillón, Silvia Vargas
2008-03-15
Dose-volume histograms (DVHs) are a useful tool in state-of-the-art radiotherapy treatment planning, and it is essential to recognize their limitations. Even after a specific dose-calculation model is optimized, dose distributions computed by using treatment-planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose. This report presents a novel method to take them into account. To take into account the effect of associated uncertainties, a probabilistic approach using a new kind of histogram, a dose-expected volume histogram, is introduced. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found by using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a formulation that accounts for uncertainties associated with point dose is presented for practical computations. This method is applied to a set of DVHs for different regions of interest, including 6 brain patients, 8 lung patients, 8 pelvis patients, and 6 prostate patients planned for intensity-modulated radiation therapy. Results show a greater effect on planning target volume coverage than in organs at risk. In cases of steep DVH gradients, such as planning target volumes, this new method shows the largest differences with the corresponding DVH; thus, the effect of the uncertainty is larger.
Del Galdo, Sara; Amadei, Andrea
2016-10-12
In this paper we apply the computational analysis recently proposed by our group to characterize the solvation properties of a native protein in aqueous solution, and to four model aqueous solutions of globular proteins in their unfolded states thus characterizing the protein unfolded state hydration shell and quantitatively evaluating the protein unfolded state partial molar volumes. Moreover, by using both the native and unfolded protein partial molar volumes, we obtain the corresponding variations (unfolding partial molar volumes) to be compared with the available experimental estimates. We also reconstruct the temperature and pressure dependence of the unfolding partial molar volume of Myoglobin dissecting the structural and hydration effects involved in the process.
Computation of turbulence and dispersion of cork in the NETL riser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiradilok, Veeraya; Gidaspow, Dimitri; Breault, R.W.
The knowledge of dispersion coefficients is essential for reliable design of gasifiers. However, a literature review had shown that dispersion coefficients in fluidized beds differ by more than five orders of magnitude. This study presents a comparison of the computed axial solids dispersion coefficients for cork particles to the NETL riser cork data. The turbulence properties, the Reynolds stresses, the granular temperature spectra and the radial and axial gas and solids dispersion coefficients are computed. The standard kinetic theory model described in Gidaspow’s 1994 book, Multiphase Flow and Fluidization, Academic Press and the IIT and Fluent codes were used tomore » compute the measured axial solids volume fraction profiles for flow of cork particles in the NETL riser. The Johnson–Jackson boundary conditions were used. Standard drag correlations were used. This study shows that the computed solids volume fractions for the low flux flow are within the experimental error of those measured, using a two-dimensional model. At higher solids fluxes the simulated solids volume fractions are close to the experimental measurements, but deviate significantly at the top of the riser. This disagreement is due to use of simplified geometry in the two-dimensional simulation. There is a good agreement between the experiment and the three-dimensional simulation for a high flux condition. This study concludes that the axial and radial gas and solids dispersion coefficients in risers operating in the turbulent flow regime can be computed using a multiphase computational fluid dynamics model.« less
NASA Technical Reports Server (NTRS)
Lin, Shian-Jiann; Atlas, Robert (Technical Monitor)
2002-01-01
The Data Assimilation Office (DAO) has been developing a new generation of ultra-high resolution General Circulation Model (GCM) that is suitable for 4-D data assimilation, numerical weather predictions, and climate simulations. These three applications have conflicting requirements. For 4-D data assimilation and weather predictions, it is highly desirable to run the model at the highest possible spatial resolution (e.g., 55 km or finer) so as to be able to resolve and predict socially and economically important weather phenomena such as tropical cyclones, hurricanes, and severe winter storms. For climate change applications, the model simulations need to be carried out for decades, if not centuries. To reduce uncertainty in climate change assessments, the next generation model would also need to be run at a fine enough spatial resolution that can at least marginally simulate the effects of intense tropical cyclones. Scientific problems (e.g., parameterization of subgrid scale moist processes) aside, all three areas of application require the model's computational performance to be dramatically improved as compared to the previous generation. In this talk, I will present the current and future developments of the "finite-volume dynamical core" at the Data Assimilation Office. This dynamical core applies modem monotonicity preserving algorithms and is genuinely conservative by construction, not by an ad hoc fixer. The "discretization" of the conservation laws is purely local, which is clearly advantageous for resolving sharp gradient flow features. In addition, the local nature of the finite-volume discretization also has a significant advantage on distributed memory parallel computers. Together with a unique vertically Lagrangian control volume discretization that essentially reduces the dimension of the computational problem from three to two, the finite-volume dynamical core is very efficient, particularly at high resolutions. I will also present the computational design of the dynamical core using a hybrid distributed-shared memory programming paradigm that is portable to virtually any of today's high-end parallel super-computing clusters.
NASA Technical Reports Server (NTRS)
Lin, Shian-Jiann; Atlas, Robert (Technical Monitor)
2002-01-01
The Data Assimilation Office (DAO) has been developing a new generation of ultra-high resolution General Circulation Model (GCM) that is suitable for 4-D data assimilation, numerical weather predictions, and climate simulations. These three applications have conflicting requirements. For 4-D data assimilation and weather predictions, it is highly desirable to run the model at the highest possible spatial resolution (e.g., 55 kin or finer) so as to be able to resolve and predict socially and economically important weather phenomena such as tropical cyclones, hurricanes, and severe winter storms. For climate change applications, the model simulations need to be carried out for decades, if not centuries. To reduce uncertainty in climate change assessments, the next generation model would also need to be run at a fine enough spatial resolution that can at least marginally simulate the effects of intense tropical cyclones. Scientific problems (e.g., parameterization of subgrid scale moist processes) aside, all three areas of application require the model's computational performance to be dramatically improved as compared to the previous generation. In this talk, I will present the current and future developments of the "finite-volume dynamical core" at the Data Assimilation Office. This dynamical core applies modem monotonicity preserving algorithms and is genuinely conservative by construction, not by an ad hoc fixer. The "discretization" of the conservation laws is purely local, which is clearly advantageous for resolving sharp gradient flow features. In addition, the local nature of the finite-volume discretization also has a significant advantage on distributed memory parallel computers. Together with a unique vertically Lagrangian control volume discretization that essentially reduces the dimension of the computational problem from three to two, the finite-volume dynamical core is very efficient, particularly at high resolutions. I will also present the computational design of the dynamical core using a hybrid distributed- shared memory programming paradigm that is portable to virtually any of today's high-end parallel super-computing clusters.
Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.
Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz
2015-01-01
This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
Synfuel program analysis. Volume 2: VENVAL users manual
NASA Astrophysics Data System (ADS)
Muddiman, J. B.; Whelan, J. W.
1980-07-01
This volume is intended for program analysts and is a users manual for the VENVAL model. It contains specific explanations as to input data requirements and programming procedures for the use of this model. VENVAL is a generalized computer program to aid in evaluation of prospective private sector production ventures. The program can project interrelated values of installed capacity, production, sales revenue, operating costs, depreciation, investment, dent, earnings, taxes, return on investment, depletion, and cash flow measures. It can also compute related public sector and other external costs and revenues if unit costs are furnished.
Computing Reliabilities Of Ceramic Components Subject To Fracture
NASA Technical Reports Server (NTRS)
Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.
1992-01-01
CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate reliability of component in presence of inherent surface- and/or volume-type flaws. Computes measure of reliability by use of finite-element mathematical model applicable to multiple materials in sense model made function of statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs, obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
Computer Modeling to Evaluate the Impact of Technology Changes on Resident Procedural Volume.
Grenda, Tyler R; Ballard, Tiffany N S; Obi, Andrea T; Pozehl, William; Seagull, F Jacob; Chen, Ryan; Cohn, Amy M; Daskin, Mark S; Reddy, Rishindra M
2016-12-01
As resident "index" procedures change in volume due to advances in technology or reliance on simulation, it may be difficult to ensure trainees meet case requirements. Training programs are in need of metrics to determine how many residents their institutional volume can support. As a case study of how such metrics can be applied, we evaluated a case distribution simulation model to examine program-level mediastinoscopy and endobronchial ultrasound (EBUS) volumes needed to train thoracic surgery residents. A computer model was created to simulate case distribution based on annual case volume, number of trainees, and rotation length. Single institutional case volume data (2011-2013) were applied, and 10 000 simulation years were run to predict the likelihood (95% confidence interval) of all residents (4 trainees) achieving board requirements for operative volume during a 2-year program. The mean annual mediastinoscopy volume was 43. In a simulation of pre-2012 board requirements (thoracic pathway, 25; cardiac pathway, 10), there was a 6% probability of all 4 residents meeting requirements. Under post-2012 requirements (thoracic, 15; cardiac, 10), however, the likelihood increased to 88%. When EBUS volume (mean 19 cases per year) was concurrently evaluated in the post-2012 era (thoracic, 10; cardiac, 0), the likelihood of all 4 residents meeting case requirements was only 23%. This model provides a metric to predict the probability of residents meeting case requirements in an era of changing volume by accounting for unpredictable and inequitable case distribution. It could be applied across operations, procedures, or disease diagnoses and may be particularly useful in developing resident curricula and schedules.
Computer model of Raritan River Basin water-supply system in central New Jersey
Dunne, Paul; Tasker, Gary D.
1996-01-01
This report describes a computer model of the Raritan River Basin water-supply system in central New Jersey. The computer model provides a technical basis for evaluating the effects of alternative patterns of operation of the Raritan River Basin water-supply system during extended periods of below-average precipitation. The computer model is a continuity-accounting model consisting of a series of interconnected nodes. At each node, the inflow volume, outflow volume, and change in storage are determined and recorded for each month. The model runs with a given set of operating rules and water-use requirements including releases, pumpages, and diversions. The model can be used to assess the hypothetical performance of the Raritan River Basin water- supply system in past years under alternative sets of operating rules. It also can be used to forecast the likelihood of specified outcomes, such as the depletion of reservoir contents below a specified threshold or of streamflows below statutory minimum passing flows, for a period of up to 12 months. The model was constructed on the basis of current reservoir capacities and the natural, unregulated monthly runoff values recorded at U.S. Geological Survey streamflow- gaging stations in the basin.
NASA Astrophysics Data System (ADS)
Wróżyński, Rafał; Pyszny, Krzysztof; Sojka, Mariusz; Przybyła, Czesław; Murat-Błażejewska, Sadżide
2017-06-01
The article describes how the Structure-from-Motion (SfM) method can be used to calculate the volume of anthropogenic microtopography. In the proposed workflow, data is obtained using mass-market devices such as a compact camera (Canon G9) and a smartphone (iPhone5). The volume is computed using free open source software (VisualSFMv0.5.23, CMPMVSv0.6.0., MeshLab) on a PCclass computer. The input data is acquired from video frames. To verify the method laboratory tests on the embankment of a known volume has been carried out. Models of the test embankment were built using two independent measurements made with those two devices. No significant differences were found between the models in a comparative analysis. The volumes of the models differed from the actual volume just by 0.7‰ and 2‰. After a successful laboratory verification, field measurements were carried out in the same way. While building the model from the data acquired with a smartphone, it was observed that a series of frames, approximately 14% of all the frames, was rejected. The missing frames caused the point cloud to be less dense in the place where they had been rejected. This affected the model's volume differed from the volume acquired with a camera by 7%. In order to improve the homogeneity, the frame extraction frequency was increased in the place where frames have been previously missing. A uniform model was thereby obtained with point cloud density evenly distributed. There was a 1.5% difference between the embankment's volume and the volume calculated from the camera-recorded video. The presented method permits the number of input frames to be increased and the model's accuracy to be enhanced without making an additional measurement, which may not be possible in the case of temporary features.
NASA Astrophysics Data System (ADS)
Juhui, Chen; Yanjia, Tang; Dan, Li; Pengfei, Xu; Huilin, Lu
2013-07-01
Flow behavior of gas and particles is predicted by the large eddy simulation of gas-second order moment of solid model (LES-SOM model) in the simulation of flow behavior in CFB. This study shows that the simulated solid volume fractions along height using a two-dimensional model are in agreement with experiments. The velocity, volume fraction and second-order moments of particles are computed. The second-order moments of clusters are calculated. The solid volume fraction, velocity and second order moments are compared at the three different model constants.
Finite volume model for two-dimensional shallow environmental flow
Simoes, F.J.M.
2011-01-01
This paper presents the development of a two-dimensional, depth integrated, unsteady, free-surface model based on the shallow water equations. The development was motivated by the desire of balancing computational efficiency and accuracy by selective and conjunctive use of different numerical techniques. The base framework of the discrete model uses Godunov methods on unstructured triangular grids, but the solution technique emphasizes the use of a high-resolution Riemann solver where needed, switching to a simpler and computationally more efficient upwind finite volume technique in the smooth regions of the flow. Explicit time marching is accomplished with strong stability preserving Runge-Kutta methods, with additional acceleration techniques for steady-state computations. A simplified mass-preserving algorithm is used to deal with wet/dry fronts. Application of the model is made to several benchmark cases that show the interplay of the diverse solution techniques.
Dynamic interactions in neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arbib, M.A.; Amari, S.
The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.
REXOR 2 rotorcraft simulation model. Volume 1: Engineering documentation
NASA Technical Reports Server (NTRS)
Reaser, J. S.; Kretsinger, P. H.
1978-01-01
A rotorcraft nonlinear simulation called REXOR II, divided into three volumes, is described. The first volume is a development of rotorcraft mechanics and aerodynamics. The second is a development and explanation of the computer code required to implement the equations of motion. The third volume is a user's manual, and contains a description of code input/output as well as operating instructions.
Längkvist, Martin; Jendeberg, Johan; Thunberg, Per; Loutfi, Amy; Lidén, Mats
2018-06-01
Computed tomography (CT) is the method of choice for diagnosing ureteral stones - kidney stones that obstruct the ureter. The purpose of this study is to develop a computer aided detection (CAD) algorithm for identifying a ureteral stone in thin slice CT volumes. The challenge in CAD for urinary stones lies in the similarity in shape and intensity of stones with non-stone structures and how to efficiently deal with large high-resolution CT volumes. We address these challenges by using a Convolutional Neural Network (CNN) that works directly on the high resolution CT volumes. The method is evaluated on a large data base of 465 clinically acquired high-resolution CT volumes of the urinary tract with labeling of ureteral stones performed by a radiologist. The best model using 2.5D input data and anatomical information achieved a sensitivity of 100% and an average of 2.68 false-positives per patient on a test set of 88 scans. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
ERIC Educational Resources Information Center
McDonnell Douglas Astronautics Co. - East, St. Louis, MO.
This is the second volume of a two volume study. The first volume examined the literature to identify authoring aids for developing instructional materials, and to identify information clearing houses for existing materials. The purpose of this volume was to develop a means for assessing the cost versus expected benefits of innovations in…
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.
2012-01-01
Climate models is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric climate models to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation, atmospheric, oceanic and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models will estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more phenomena interactions than were possible with yesterday s computers. However, not every attempt to add to the computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose for publishing climate model research results is to present purported advances for evaluation by the scientific community.
Robust statistical reconstruction for charged particle tomography
Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W
2013-10-08
Systems and methods for charged particle detection including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data to determine the probability distribution of charged particle scattering using a statistical multiple scattering model and determine a substantially maximum likelihood estimate of object volume scattering density using expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence of and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.
A generalized threshold model for computing bed load grain size distribution
NASA Astrophysics Data System (ADS)
Recking, Alain
2016-12-01
For morphodynamic studies, it is important to compute not only the transported volumes of bed load, but also the size of the transported material. A few bed load equations compute fractional transport (i.e., both the volume and grain size distribution), but many equations compute only the bulk transport (a volume) with no consideration of the transported grain sizes. To fill this gap, a method is proposed to compute the bed load grain size distribution separately to the bed load flux. The method is called the Generalized Threshold Model (GTM), because it extends the flow competence method for threshold of motion of the largest transported grain size to the full bed surface grain size distribution. This was achieved by replacing dimensional diameters with their size indices in the standard hiding function, which offers a useful framework for computation, carried out for each indices considered in the range [1, 100]. New functions are also proposed to account for partial transport. The method is very simple to implement and is sufficiently flexible to be tested in many environments. In addition to being a good complement to standard bulk bed load equations, it could also serve as a framework to assist in analyzing the physics of bed load transport in future research.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 1 is the Analysis Description, and describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1981-10-29
This volume is the software description for the National Utility Regulatory Model (NUREG). This is the third of three volumes provided by ICF under contract number DEAC-01-79EI-10579. These three volumes are: a manual describing the NUREG methodology; a users guide; and a description of the software. This manual describes the software which has been developed for NUREG. This includes a listing of the source modules. All computer code has been written in FORTRAN.
ILS Glide Slope Performance Prediction. Volume B
1974-09-01
figures are identical in both volumes. . Abottec A mathematical model for predicting the performance of ILS glide slope arrays in the presence of...irregularities on the performance of ILS Glide Slope antenna systems, a mathematical -electromagnetic scattering computer model has been developed. This work was...Antenna ........... 4-4 9. Test Case Results ..................................... r-3 ix PART I. IEO -j 1.INTRODUCTION IA mathematical model has been
Micro-Mechanical Modeling of Ductile Fracture in Welded Aluminum-Lithium Alloys
NASA Technical Reports Server (NTRS)
Ibrahim, Ahmed
2002-01-01
This computation model for microscopic crack growth in welded aluminum-lithium alloys consists of a cavity with initial volume specified by the fraction f(sub 0), i.e. the void volume relative to the cell volume. Thus, cell size D and initial porosity f(sub 0) defines the key parameters in this model. The choice of cell size requires: 1) D must be representative of the large inclusion spacing. 2) Predicted R-curves scale almost proportionally with D for fixed f(sub 0). 3) mapping of one finite element per cell must provide adequate resolution of the stress-strain fields in the active layer and the adjacent material. For the ferritic steels studied thus far with this model, calibrated cell sizes range from 50-200 microns with f(sub 0) in the 0.0001 to 0.004 micron range. This range of values for D and f (sub 0) satisfies issues 1) and 3). This computational model employs the Gurson and Tvergaard constitutive model for porous plastic materials to describe the progressive damage of cells due to the growth of pre-existing voids. The model derives from a rigid-plastic limit analysis of a solid having a volume fraction (f) of voids approximated by a homogenous spherical body containing a spherical void.
Computational methods for vortex dominated compressible flows
NASA Technical Reports Server (NTRS)
Murman, Earll M.
1987-01-01
The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.
Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants.
Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna
2016-06-27
This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated.
Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants
Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna
2016-01-01
This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated. PMID:27355949
Locomotive crashworthiness report : volume 4 : additional freight locomotive calculations
DOT National Transportation Integrated Search
1995-07-01
Previously developed computer models (see volume 1) are used to carry out additional calculations for evaluation of road freight locomotive crashworthiness. The effect of fewer locomotives (as would be expected after transition from DC motor to highe...
An incompressible two-dimensional multiphase particle-in-cell model for dense particle flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snider, D.M.; O`Rourke, P.J.; Andrews, M.J.
1997-06-01
A two-dimensional, incompressible, multiphase particle-in-cell (MP-PIC) method is presented for dense particle flows. The numerical technique solves the governing equations of the fluid phase using a continuum model and those of the particle phase using a Lagrangian model. Difficulties associated with calculating interparticle interactions for dense particle flows with volume fractions above 5% have been eliminated by mapping particle properties to a Eulerian grid and then mapping back computed stress tensors to particle positions. This approach utilizes the best of Eulerian/Eulerian continuum models and Eulerian/Lagrangian discrete models. The solution scheme allows for distributions of types, sizes, and density of particles,more » with no numerical diffusion from the Lagrangian particle calculations. The computational method is implicit with respect to pressure, velocity, and volume fraction in the continuum solution thus avoiding courant limits on computational time advancement. MP-PIC simulations are compared with one-dimensional problems that have analytical solutions and with two-dimensional problems for which there are experimental data.« less
Less Daily Computer Use is Related to Smaller Hippocampal Volumes in Cognitively Intact Elderly.
Silbert, Lisa C; Dodge, Hiroko H; Lahna, David; Promjunyakul, Nutta-On; Austin, Daniel; Mattek, Nora; Erten-Lyons, Deniz; Kaye, Jeffrey A
2016-01-01
Computer use is becoming a common activity in the daily life of older individuals and declines over time in those with mild cognitive impairment (MCI). The relationship between daily computer use (DCU) and imaging markers of neurodegeneration is unknown. The objective of this study was to examine the relationship between average DCU and volumetric markers of neurodegeneration on brain MRI. Cognitively intact volunteers enrolled in the Intelligent Systems for Assessing Aging Change study underwent MRI. Total in-home computer use per day was calculated using mouse movement detection and averaged over a one-month period surrounding the MRI. Spearman's rank order correlation (univariate analysis) and linear regression models (multivariate analysis) examined hippocampal, gray matter (GM), white matter hyperintensity (WMH), and ventricular cerebral spinal fluid (vCSF) volumes in relation to DCU. A voxel-based morphometry analysis identified relationships between regional GM density and DCU. Twenty-seven cognitively intact participants used their computer for 51.3 minutes per day on average. Less DCU was associated with smaller hippocampal volumes (r = 0.48, p = 0.01), but not total GM, WMH, or vCSF volumes. After adjusting for age, education, and gender, less DCU remained associated with smaller hippocampal volume (p = 0.01). Voxel-wise analysis demonstrated that less daily computer use was associated with decreased GM density in the bilateral hippocampi and temporal lobes. Less daily computer use is associated with smaller brain volume in regions that are integral to memory function and known to be involved early with Alzheimer's pathology and conversion to dementia. Continuous monitoring of daily computer use may detect signs of preclinical neurodegeneration in older individuals at risk for dementia.
Roth, Christian J; Ismail, Mahmoud; Yoshihara, Lena; Wall, Wolfgang A
2017-01-01
In this article, we propose a comprehensive computational model of the entire respiratory system, which allows simulating patient-specific lungs under different ventilation scenarios and provides a deeper insight into local straining and stressing of pulmonary acini. We include novel 0D inter-acinar linker elements to respect the interplay between neighboring alveoli, an essential feature especially in heterogeneously distended lungs. The model is applicable to healthy and diseased patient-specific lung geometries. Presented computations in this work are based on a patient-specific lung geometry obtained from computed tomography data and composed of 60,143 conducting airways, 30,072 acini, and 140,135 inter-acinar linkers. The conducting airways start at the trachea and end before the respiratory bronchioles. The acini are connected to the conducting airways via terminal airways and to each other via inter-acinar linkers forming a fully coupled anatomically based respiratory model. Presented numerical examples include simulation of breathing during a spirometry-like test, measurement of a quasi-static pressure-volume curve using a supersyringe maneuver, and volume-controlled mechanical ventilation. The simulations show that our model incorporating inter-acinar dependencies successfully reproduces physiological results in healthy and diseased states. Moreover, within these scenarios, a deeper insight into local pressure, volume, and flow rate distribution in the human lung is investigated and discussed. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Effect of Age-Related Human Lens Sutures Growth on Its Fluid Dynamics.
Wu, Ho-Ting D; Howse, Louisa A; Vaghefi, Ehsan
2017-12-01
Age-related nuclear cataract is the opacification of the clear ocular lens due to oxidative damage as we age, and is the leading cause of blindness in the world. A lack of antioxidant supply to the core of ever-growing ocular lens could contribute to the cause of this condition. In this project, a computational model was developed to study the sutural fluid inflow of the aging human lens. Three different SOLIDWORKS computational fluid dynamics models of the human lens (7 years old; 28 years old; 46 years old) were created, based on available literature data. The fluid dynamics of the lens sutures were modelled using the Stokes flow equations, combined with realistic physiological boundary conditions and embedded in COMSOL Multiphysics. The flow rate, volume, and flow rate per volume of fluid entering the aging lens were examined, and all increased over the 40 years modelled. However, while the volume of the lens grew by ∼300% and the flow rate increased by ∼400%, the flow rate per volume increased only by very moderate ∼38%. Here, sutural information from humans of 7 to 46 years of age was obtained. In this modelled age range, an increase of flow rate per volume was observed, albeit at very slow rate. We hypothesize that with even further increasing age (60+ years old), the lens volume growth would outpace its flow rate increases, which would eventually lead to malnutrition of the lens nucleus and onset of cataracts.
Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds
ERIC Educational Resources Information Center
Sun, Shaohui
2013-01-01
Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though there is a huge volume of work that has been done, many problems still remain…
NASA Astrophysics Data System (ADS)
Roquet, F.; Madec, G.; McDougall, Trevor J.; Barker, Paul M.
2015-06-01
A new set of approximations to the standard TEOS-10 equation of state are presented. These follow a polynomial form, making it computationally efficient for use in numerical ocean models. Two versions are provided, the first being a fit of density for Boussinesq ocean models, and the second fitting specific volume which is more suitable for compressible models. Both versions are given as the sum of a vertical reference profile (6th-order polynomial) and an anomaly (52-term polynomial, cubic in pressure), with relative errors of ∼0.1% on the thermal expansion coefficients. A 75-term polynomial expression is also presented for computing specific volume, with a better accuracy than the existing TEOS-10 48-term rational approximation, especially regarding the sound speed, and it is suggested that this expression represents a valuable approximation of the TEOS-10 equation of state for hydrographic data analysis. In the last section, practical aspects about the implementation of TEOS-10 in ocean models are discussed.
Study of Automobile Market Dynamics : Volume 2. Analysis.
DOT National Transportation Integrated Search
1977-08-01
Volume II describes the work in providing statistical inputs to a computer model by examining the effects of various options on the number of automobiles sold; the distribution of sales among small, medium and large cars; the distribution between aut...
Hardin, Megan E.; Come, Carolyn E.; San José Estépar, Raúl; Ross, James C.; Kurugol, Sila; Okajima, Yuka; Han, MeiLan K.; Kim, Victor; Ramsdell, Joe; Silverman, Edwin K.; Crapo, James D.; Lynch, David A.; Make, Barry; Barr, R. Graham; Hersh, Craig P.; Washko, George R.
2014-01-01
Rationale and Objectives: Asthma is associated with chronic airflow obstruction. Our goal was to assess the association of computed tomographic measures of airway wall volume and lumen volume with the FEV1 and chronic airflow obstruction in smokers with childhood-onset asthma. Methods: We analyzed clinical, lung function, and volumetric computed tomographic airway volume data from 7,266 smokers, including 590 with childhood-onset asthma. Small wall volume and small lumen volume of segmental airways were defined as measures 1 SD below the mean. We assessed the association between small wall volume, small lumen volume, FEV1, and chronic airflow obstruction (post-bronchodilator FEV1/FVC ratio < 0.7) using linear and logistic models. Measurements and Main Results: Compared with subjects without childhood-onset asthma, those with childhood-onset asthma had smaller wall volume and lumen volume (P < 0.0001) of segmental airways. Among subjects with childhood-onset asthma, those with the smallest wall volume and lumen volume had the lowest FEV1 and greatest odds of chronic airflow obstruction. A similar tendency was seen in those without childhood-onset asthma. When comparing these two groups, both small wall volume and small lumen volume were more strongly associated with FEV1 and chronic airflow obstruction among subjects with childhood-asthma in multivariate models. Conclusion: In smokers with childhood-onset asthma, smaller airways are associated with reduced lung function and chronic airflow obstruction. Clinical trial registered with www.clinicaltrials.gov (NCT00608764). PMID:25296268
Diaz, Alejandro A; Hardin, Megan E; Come, Carolyn E; San José Estépar, Raúl; Ross, James C; Kurugol, Sila; Okajima, Yuka; Han, MeiLan K; Kim, Victor; Ramsdell, Joe; Silverman, Edwin K; Crapo, James D; Lynch, David A; Make, Barry; Barr, R Graham; Hersh, Craig P; Washko, George R
2014-11-01
Asthma is associated with chronic airflow obstruction. Our goal was to assess the association of computed tomographic measures of airway wall volume and lumen volume with the FEV1 and chronic airflow obstruction in smokers with childhood-onset asthma. We analyzed clinical, lung function, and volumetric computed tomographic airway volume data from 7,266 smokers, including 590 with childhood-onset asthma. Small wall volume and small lumen volume of segmental airways were defined as measures 1 SD below the mean. We assessed the association between small wall volume, small lumen volume, FEV1, and chronic airflow obstruction (post-bronchodilator FEV1/FVC ratio < 0.7) using linear and logistic models. Compared with subjects without childhood-onset asthma, those with childhood-onset asthma had smaller wall volume and lumen volume (P < 0.0001) of segmental airways. Among subjects with childhood-onset asthma, those with the smallest wall volume and lumen volume had the lowest FEV1 and greatest odds of chronic airflow obstruction. A similar tendency was seen in those without childhood-onset asthma. When comparing these two groups, both small wall volume and small lumen volume were more strongly associated with FEV1 and chronic airflow obstruction among subjects with childhood-asthma in multivariate models. In smokers with childhood-onset asthma, smaller airways are associated with reduced lung function and chronic airflow obstruction. Clinical trial registered with www.clinicaltrials.gov (NCT00608764).
NASA Astrophysics Data System (ADS)
Clementi, Enrico
2012-06-01
This is the introductory chapter to the AIP Proceedings volume "Theory and Applications of Computational Chemistry: The First Decade of the Second Millennium" where we discuss the evolution of "computational chemistry". Very early variational computational chemistry developments are reported in Sections 1 to 7, and 11, 12 by recalling some of the computational chemistry contributions by the author and his collaborators (from late 1950 to mid 1990); perturbation techniques are not considered in this already extended work. Present day's computational chemistry is partly considered in Sections 8 to 10 where more recent studies by the author and his collaborators are discussed, including the Hartree-Fock-Heitler-London method; a more general discussion on present day computational chemistry is presented in Section 14. The following chapters of this AIP volume provide a view of modern computational chemistry. Future computational chemistry developments can be extrapolated from the chapters of this AIP volume; further, in Sections 13 and 15 present an overall analysis on computational chemistry, obtained from the Global Simulation approach, by considering the evolution of scientific knowledge confronted with the opportunities offered by modern computers.
NASA Technical Reports Server (NTRS)
Lewandowski, B. E.; DeWitt, J. K.; Gallo, C. A.; Gilkey, K. M.; Godfrey, A. P.; Humphreys, B. T.; Jagodnik, K. M.; Kassemi, M.; Myers, J. G.; Nelson, E. S.;
2017-01-01
MOTIVATION: Spaceflight countermeasures mitigate the harmful effects of the space environment on astronaut health and performance. Exercise has historically been used as a countermeasure to physical deconditioning, and additional countermeasures including lower body negative pressure, blood flow occlusion and artificial gravity are being researched as countermeasures to spaceflight-induced fluid shifts. The NASA Digital Astronaut Project uses computational models of physiological systems to inform countermeasure design and to predict countermeasure efficacy.OVERVIEW: Computational modeling supports the development of the exercise devices that will be flown on NASAs new exploration crew vehicles. Biomechanical modeling is used to inform design requirements to ensure that exercises can be properly performed within the volume allocated for exercise and to determine whether the limited mass, volume and power requirements of the devices will affect biomechanical outcomes. Models of muscle atrophy and bone remodeling can predict device efficacy for protecting musculoskeletal health during long-duration missions. A lumped-parameter whole-body model of the fluids within the body, which includes the blood within the cardiovascular system, the cerebral spinal fluid, interstitial fluid and lymphatic system fluid, estimates compartmental changes in pressure and volume due to gravitational changes. These models simulate fluid shift countermeasure effects and predict the associated changes in tissue strain in areas of physiological interest to aid in predicting countermeasure effectiveness. SIGNIFICANCE: Development and testing of spaceflight countermeasure prototypes are resource-intensive efforts. Computational modeling can supplement this process by performing simulations that reduce the amount of necessary experimental testing. Outcomes of the simulations are often important for the definition of design requirements and the identification of factors essential in ensuring countermeasure efficacy.
AHPCRC (Army High Performance Computing Rsearch Center) Bulletin. Volume 1, Issue 4
2011-01-01
Computational and Mathematical Engineering, Stanford University esgs@stanford.edu (650) 723-3764 Molecular Dynamics Models of Antimicrobial ...simulations using low-fidelity Reynolds-av- eraged models illustrate the limited predictive capabili- ties of these schemes. The predictions for scalar and...driving force. The AHPCRC group has used their models to predict nonuniform concentra- tion profiles across small channels as a result of variations
M&S Journal. Volume 8, Issue 2, Summer 2013
2013-01-01
Modeling Notation ( BPMN ) [White and Miers, 2008], and the integration of the modeling notation with executable simulation engines [Anupindi 2005...activities and the supporting IT in BPMN and use that to compute MOE for a mission instance. Requirements for Modeling Missions To understand the...representation versus impact computation tradeoffs we selected BPMN , along with some proposed extensions to represent information dependencies, as the
Computational analysis of blood clot dissolution using a vibrating catheter tip.
Lee, Jeong Hyun; Oh, Jin Sun; Yoon, Bye Ri; Choi, Seung Hong; Rhee, Kyehan; Jho, Jae Young; Han, Moon Hee
2012-04-01
We developed a novel concept of endovascular thrombolysis that employs a vibrating electroactive polymer actuator. In order to predict the efficacy of thrombolysis using the developed vibrating actuator, enzyme (plasminogen activator) perfusion into a clot was analyzed by solving flow fields and species transport equations considering the fluid structure interaction. In vitro thrombolysis experiments were also performed. Computational results showed that plasminogen activator perfusion into a clot was enhanced by actuator vibration at frequencies of 1 and 5 Hz. Plasminogen activator perfusion was affected by the actuator oscillation frequencies and amplitudes that were determined by electromechanical characteristics of a polymer actuator. Computed plasminogen activator perfused volumes were compared with experimentally measured dissolved clot volumes. The computed plasminogen activator perfusion volumes with threshold concentrations of 16% of the initial plasminogen activator concentration agreed well with the in vitro experimental data. This study showed the effectiveness of actuator oscillation on thrombolysis and the validity of the computational plasminogen activator perfusion model for predicting thrombolysis in complex flow fields induced by an oscillating actuator.
Program Helps To Determine Chemical-Reaction Mechanisms
NASA Technical Reports Server (NTRS)
Bittker, D. A.; Radhakrishnan, K.
1995-01-01
General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase, chemical-kinetics problems. Provides for efficient and accurate chemical-kinetics computations and provides for sensitivity analysis for variety of problems, including problems involving honisothermal conditions. Incorporates mathematical models for static system, steady one-dimensional inviscid flow, reaction behind incident shock wave (with boundary-layer correction), and perfectly stirred reactor. Computations of equilibrium properties performed for following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with exception of NAMELIST extensions used for input.
A finite-volume module for all-scale Earth-system modelling at ECMWF
NASA Astrophysics Data System (ADS)
Kühnlein, Christian; Malardel, Sylvie; Smolarkiewicz, Piotr
2017-04-01
We highlight recent advancements in the development of the finite-volume module (FVM) (Smolarkiewicz et al., 2016) for the IFS at ECMWF. FVM represents an alternative dynamical core that complements the operational spectral dynamical core of the IFS with new capabilities. Most notably, these include a compact-stencil finite-volume discretisation, flexible meshes, conservative non-oscillatory transport and all-scale governing equations. As a default, FVM solves the compressible Euler equations in a geospherical framework (Szmelter and Smolarkiewicz, 2010). The formulation incorporates a generalised terrain-following vertical coordinate. A hybrid computational mesh, fully unstructured in the horizontal and structured in the vertical, enables efficient global atmospheric modelling. Moreover, a centred two-time-level semi-implicit integration scheme is employed with 3D implicit treatment of acoustic, buoyant, and rotational modes. The associated 3D elliptic Helmholtz problem is solved using a preconditioned Generalised Conjugate Residual approach. The solution procedure employs the non-oscillatory finite-volume MPDATA advection scheme that is bespoke for the compressible dynamics on the hybrid mesh (Kühnlein and Smolarkiewicz, 2017). The recent progress of FVM is illustrated with results of benchmark simulations of intermediate complexity, and comparison to the operational spectral dynamical core of the IFS. C. Kühnlein, P.K. Smolarkiewicz: An unstructured-mesh finite-volume MPDATA for compressible atmospheric dynamics, J. Comput. Phys. (2017), in press. P.K. Smolarkiewicz, W. Deconinck, M. Hamrud, C. Kühnlein, G. Mozdzynski, J. Szmelter, N.P. Wedi: A finite-volume module for simulating global all-scale atmospheric flows, J. Comput. Phys. 314 (2016) 287-304. J. Szmelter, P.K. Smolarkiewicz: An edge-based unstructured mesh discretisation in geospherical framework, J. Comput. Phys. 229 (2010) 4980-4995.
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 2: Appendices
NASA Technical Reports Server (NTRS)
Lee, F. C.; Radman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
The computer programs and derivations generated in support of the modeling and design optimization program are presented. Programs for the buck regulator, boost regulator, and buck-boost regulator are described. The computer program for the design optimization calculations is presented. Constraints for the boost and buck-boost converter were derived. Derivations of state-space equations and transfer functions are presented. Computer lists for the converters are presented, and the input parameters justified.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 2 is the User's Guide, and describes the program's general features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
Computer simulation studies in fluid and calcium regulation and orthostatic intolerance
NASA Technical Reports Server (NTRS)
1985-01-01
The systems analysis approach to physiological research uses mathematical models and computer simulation. Major areas of concern during prolonged space flight discussed include fluid and blood volume regulation; cardiovascular response during shuttle reentry; countermeasures for orthostatic intolerance; and calcium regulation and bone atrophy. Potential contributions of physiologic math models to future flight experiments are examined.
NASA Technical Reports Server (NTRS)
Chang, Katarina L.; Pennline, James A.
2013-01-01
During long-duration missions at the International Space Station, astronauts experience weightlessness leading to skeletal unloading. Unloading causes a lack of a mechanical stimulus that triggers bone cellular units to remove mass from the skeleton. A mathematical system of the cellular dynamics predicts theoretical changes to volume fractions and ash fraction in response to temporal variations in skeletal loading. No current model uses image technology to gather information about a skeletal site s initial properties to calculate bone remodeling changes and then to compare predicted bone strengths with the initial strength. The goal of this study is to use quantitative computed tomography (QCT) in conjunction with a computational model of the bone remodeling process to establish initial bone properties to predict changes in bone mechanics during bone loss and recovery with finite element (FE) modeling. Input parameters for the remodeling model include bone volume fraction and ash fraction, which are both computed from the QCT images. A non-destructive approach to measure ash fraction is also derived. Voxel-based finite element models (FEM) created from QCTs provide initial evaluation of bone strength. Bone volume fraction and ash fraction outputs from the computational model predict changes to the elastic modulus of bone via a two-parameter equation. The modulus captures the effect of bone remodeling and functions as the key to evaluate of changes in strength. Application of this time-dependent modulus to FEMs and composite beam theory enables an assessment of bone mechanics during recovery. Prediction of bone strength is not only important for astronauts, but is also pertinent to millions of patients with osteoporosis and low bone density.
Ambient occlusion effects for combined volumes and tubular geometry.
Schott, Mathias; Martin, Tobias; Grosset, A V Pascal; Smith, Sean T; Hansen, Charles D
2013-06-01
This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented the directional occlusion shading model to allow the rendering of those geometric shapes in combination with a context providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed.
Ambient Occlusion Effects for Combined Volumes and Tubular Geometry
Schott, Mathias; Martin, Tobias; Grosset, A.V. Pascal; Smith, Sean T.; Hansen, Charles D.
2013-01-01
This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented the directional occlusion shading model to allow the rendering of those geometric shapes in combination with a context providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed. PMID:23559506
Detailed model for practical pulverized coal furnaces and gasifiers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, P.J.; Smoot, L.D.
1989-08-01
This study has been supported by a consortium of nine industrial and governmental sponsors. Work was initiated on May 1, 1985 and completed August 31, 1989. The central objective of this work was to develop, evaluate and apply a practical combustion model for utility boilers, industrial furnaces and gasifiers. Key accomplishments have included: Development of an advanced first-generation, computer model for combustion in three dimensional furnaces; development of a new first generation fouling and slagging submodel; detailed evaluation of an existing NO{sub x} submodel; development and evaluation of an improved radiation submodel; preparation and distribution of a three-volume final report:more » (a) Volume 1: General Technical Report; (b) Volume 2: PCGC-3 User's Manual; (c) Volume 3: Data Book for Evaluation of Three-Dimensional Combustion Models; and organization of a user's workshop on the three-dimensional code. The furnace computer model developed under this study requires further development before it can be applied generally to all applications; however, it can be used now by specialists for many specific applications, including non-combusting systems and combusting geseous systems. A new combustion center was organized and work was initiated to continue the important research effort initiated by this study. 212 refs., 72 figs., 38 tabs.« less
SU-E-T-664: Radiobiological Modeling of Prophylactic Cranial Irradiation in Mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D; Debeb, B; Woodward, W
Purpose: Prophylactic cranial irradiation (PCI) is a clinical technique used to reduce the incidence of brain metastasis and improve overall survival in select patients with ALL and SCLC, and we have shown the potential of PCI in select breast cancer patients through a mouse model (manuscript in preparation). We developed a computational model using our experimental results to demonstrate the advantage of treating brain micro-metastases early. Methods: MATLAB was used to develop the computational model of brain metastasis and PCI in mice. The number of metastases per mouse and the volume of metastases from four- and eight-week endpoints were fitmore » to normal and log-normal distributions, respectively. Model input parameters were optimized so that model output would match the experimental number of metastases per mouse. A limiting dilution assay was performed to validate the model. The effect of radiation at different time points was computationally evaluated through the endpoints of incidence, number of metastases, and tumor burden. Results: The correlation between experimental number of metastases per mouse and the Gaussian fit was 87% and 66% at the two endpoints. The experimental volumes and the log-normal fit had correlations of 99% and 97%. In the optimized model, the correlation between number of metastases per mouse and the Gaussian fit was 96% and 98%. The log-normal volume fit and the model agree 100%. The model was validated by a limiting dilution assay, where the correlation was 100%. The model demonstrates that cells are very sensitive to radiation at early time points, and delaying treatment introduces a threshold dose at which point the incidence and number of metastases decline. Conclusion: We have developed a computational model of brain metastasis and PCI in mice that is highly correlated to our experimental data. The model shows that early treatment of subclinical disease is highly advantageous.« less
NASA Technical Reports Server (NTRS)
Nakazawa, S.
1988-01-01
This annual status report presents the results of work performed during the fourth year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes permitting more accurate and efficient 3-D analysis of selected hot section components, i.e., combustor liners, turbine blades and turbine vanes. The computer codes embody a progression of math models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. Volume 1 of this report discusses the special finite element models developed during the fourth year of the contract.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran
Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters.more » We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.« less
DOT National Transportation Integrated Search
1974-08-01
Volume 4 describes the automation requirements. A presentation of automation requirements is made for an advanced air traffic management system in terms of controller work for-e, computer resources, controller productivity, system manning, failure ef...
Locomotive crashworthiness research : volume 2 : design concept generation and evaluation
DOT National Transportation Integrated Search
1995-07-01
This is the second volume in a series of four that reports on a study in which computer models were developed and applied to evaluate whether various crashworthiness features, as defined in Public Law 102-365, can provide practical benefit to the occ...
A conceptually and computationally simple method for the definition, display, quantification, and comparison of the shapes of three-dimensional mathematical molecular models is presented. Molecular or solvent-accessible volume and surface area can also be calculated. Algorithms, ...
NASA Astrophysics Data System (ADS)
Figiel, Łukasz; Dunne, Fionn P. E.; Buckley, C. Paul
2010-01-01
Layered-silicate nanoparticles offer a cost-effective reinforcement for thermoplastics. Computational modelling has been employed to study large deformations in layered-silicate/poly(ethylene terephthalate) (PET) nanocomposites near the glass transition, as would be experienced during industrial forming processes such as thermoforming or injection stretch blow moulding. Non-linear numerical modelling was applied, to predict the macroscopic large deformation behaviour, with morphology evolution and deformation occurring at the microscopic level, using the representative volume element (RVE) approach. A physically based elasto-viscoplastic constitutive model, describing the behaviour of the PET matrix within the RVE, was numerically implemented into a finite element solver (ABAQUS) using an UMAT subroutine. The implementation was designed to be robust, for accommodating large rotations and stretches of the matrix local to, and between, the nanoparticles. The nanocomposite morphology was reconstructed at the RVE level using a Monte-Carlo-based algorithm that placed straight, high-aspect ratio particles according to the specified orientation and volume fraction, with the assumption of periodicity. Computational experiments using this methodology enabled prediction of the strain-stiffening behaviour of the nanocomposite, observed experimentally, as functions of strain, strain rate, temperature and particle volume fraction. These results revealed the probable origins of the enhanced strain stiffening observed: (a) evolution of the morphology (through particle re-orientation) and (b) early onset of stress-induced pre-crystallization (and hence lock-up of viscous flow), triggered by the presence of particles. The computational model enabled prediction of the effects of process parameters (strain rate, temperature) on evolution of the morphology, and hence on the end-use properties.
Computational Modeling of Blood Flow in the TrapEase Inferior Vena Cava Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, M A; Henshaw, W D; Wang, S L
To evaluate the flow hemodynamics of the TrapEase vena cava filter using three dimensional computational fluid dynamics, including simulated thrombi of multiple shapes, sizes, and trapping positions. The study was performed to identify potential areas of recirculation and stagnation and areas in which trapped thrombi may influence intrafilter thrombosis. Computer models of the TrapEase filter, thrombi (volumes ranging from 0.25mL to 2mL, 3 different shapes), and a 23mm diameter cava were constructed. The hemodynamics of steady-state flow at Reynolds number 600 was examined for the unoccluded and partially occluded filter. Axial velocity contours and wall shear stresses were computed. Flowmore » in the unoccluded TrapEase filter experienced minimal disruption, except near the superior and inferior tips where low velocity flow was observed. For spherical thrombi in the superior trapping position, stagnant and recirculating flow was observed downstream of the thrombus; the volume of stagnant flow and the peak wall shear stress increased monotonically with thrombus volume. For inferiorly trapped spherical thrombi, marked disruption to the flow was observed along the cava wall ipsilateral to the thrombus and in the interior of the filter. Spherically shaped thrombus produced a lower peak wall shear stress than conically shaped thrombus and a larger peak stress than ellipsoidal thrombus. We have designed and constructed a computer model of the flow hemodynamics of the TrapEase IVC filter with varying shapes, sizes, and positions of thrombi. The computer model offers several advantages over in vitro techniques including: improved resolution, ease of evaluating different thrombus sizes and shapes, and easy adaptation for new filter designs and flow parameters. Results from the model also support a previously reported finding from photochromic experiments that suggest the inferior trapping position of the TrapEase IVC filter leads to an intra-filter region of recirculating/stagnant flow with very low shear stress that may be thrombogenic.« less
AOPs & Biomarkers: Bridging High Throughput Screening and Regulatory Decision Making.
As high throughput screening (HTS) approaches play a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models for this purpose are becoming increasingly more sophisticated...
2000-04-19
Dr. Marc Pusey (seated) and Dr. Craig Kundrot use computers to analyze x-ray maps and generate three-dimensional models of protein structures. With this information, scientists at Marshall Space Flight Center can learn how proteins are made and how they work. The computer screen depicts a proten structure as a ball-and-stick model. Other models depict the actual volume occupied by the atoms, or the ribbon-like structures that are crucial to a protein's function.
Using management information systems to enhance health care quality assurance.
Rosser, L H; Kleiner, B H
1995-01-01
Examines how computers and quality assurance are being used to improve the quality of health care delivery. Traditional quality assurance methods have been limited in their ability to effectively manage the high volume of data generated by the health care process. Computers on the other hand are able to handle large volumes of data as well as monitor patient care activities in both the acute care and ambulatory care settings. Discusses the use of computers to collect and analyse patient data so that changes and problems can be identified. In addition, computer models for reminding physicians to order appropriate preventive health measures for their patients are presented. Concludes that the use of computers to augment quality improvement is essential if the quality of patient care and health promotion are to be improved.
A 4-cylinder Stirling engine computer program with dynamic energy equations
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1983-01-01
A computer program for simulating the steady state and transient performance of a four cylinder Stirling engine is presented. The thermodynamic model includes both continuity and energy equations and linear momentum terms (flow resistance). Each working space between the pistons is broken into seven control volumes. Drive dynamics and vehicle load effects are included. The model contains 70 state variables. Also included in the model are piston rod seal leakage effects. The computer program includes a model of a hydrogen supply system, from which hydrogen may be added to the system to accelerate the engine. Flow charts are provided.
Hahn, Wolfram; Fricke-Zech, Susanne; Fialka-Fricke, Julia; Dullin, Christian; Zapf, Antonia; Gruber, Rudolf; Sennhenn-kirchner, Sabine; Kubein-Meesenburg, Dietmar; Sadat-Khonsari, Reza
2009-09-01
An investigation was conducted to compare the image quality of prototype flat-panel volume computed tomography (fpVCT) and multislice computed tomography (MSCT) of suture structures. Bone samples were taken from the midpalatal suture of 5 young (16 weeks) and 5 old (200 weeks) Sus scrofa domestica and fixed in formalin solution. An fpVCT prototype and an MSCT were used to obtain images of the specimens. The facial reformations were assessed by 4 observers using a 1 (excellent) to 5 (poor) rating scale for the weighted criteria visualization of the suture structure. A linear mixed model was used for statistical analysis. Results with P < .05 were considered to be statistically significant. The visualization of the suture of young specimens was significantly better than that of older animals (P < .001). The visualization of the suture with fpVCT was significantly better than that with MSCT (P < .001). Compared with MSCT, fpVCT produces superior results in the visualization of the midpalatal suture in a Sus scrofa domestica model.
Bohme, Andrea; van Rienen, Ursula
2016-08-01
Computational modeling of the stimulating field distribution during Deep Brain Stimulation provides an opportunity to advance our knowledge of this neurosurgical therapy for Parkinson's disease. There exist several approaches to model the target region for Deep Brain Stimulation in Hemi-parkinson Rats with volume conductor models. We have described and compared the normalized mapping approach as well as the modeling with three-dimensional structures, which include curvilinear coordinates to assure an anatomically realistic conductivity tensor orientation.
NASA Technical Reports Server (NTRS)
Shankar, V.; Rowell, C.; Hall, W. F.; Mohammadian, A. H.; Schuh, M.; Taylor, K.
1992-01-01
Accurate and rapid evaluation of radar signature for alternative aircraft/store configurations would be of substantial benefit in the evolution of integrated designs that meet radar cross-section (RCS) requirements across the threat spectrum. Finite-volume time domain methods offer the possibility of modeling the whole aircraft, including penetrable regions and stores, at longer wavelengths on today's gigaflop supercomputers and at typical airborne radar wavelengths on the teraflop computers of tomorrow. A structured-grid finite-volume time domain computational fluid dynamics (CFD)-based RCS code has been developed at the Rockwell Science Center, and this code incorporates modeling techniques for general radar absorbing materials and structures. Using this work as a base, the goal of the CFD-based CEM effort is to define, implement and evaluate various code development issues suitable for rapid prototype signature prediction.
Modeling Endovascular Coils as Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Yadollahi Farsani, H.; Herrmann, M.; Chong, B.; Frakes, D.
2016-12-01
Minimally invasive surgeries are the stat-of-the-art treatments for many pathologies. Treating brain aneurysms is no exception; invasive neurovascular clipping is no longer the only option and endovascular coiling has introduced itself as the most common treatment. Coiling isolates the aneurysm from blood circulation by promoting thrombosis within the aneurysm. One approach to studying intra-aneurysmal hemodynamics consists of virtually deploying finite element coil models and then performing computational fluid dynamics. However, this approach is often computationally expensive and requires extensive resources to perform. The porous medium approach has been considered as an alternative to the conventional coil modeling approach because it lessens the complexities of computational fluid dynamics simulations by reducing the number of mesh elements needed to discretize the domain. There have been a limited number of attempts at treating the endovascular coils as homogeneous porous media. However, the heterogeneity associated with coil configurations requires a more accurately defined porous medium in which the porosity and permeability change throughout the domain. We implemented this approach by introducing a lattice of sample volumes and utilizing techniques available in the field of interactive computer graphics. We observed that the introduction of the heterogeneity assumption was associated with significant changes in simulated aneurysmal flow velocities as compared to the homogeneous assumption case. Moreover, as the sample volume size was decreased, the flow velocities approached an asymptotical value, showing the importance of the sample volume size selection. These results demonstrate that the homogeneous assumption for porous media that are inherently heterogeneous can lead to considerable errors. Additionally, this modeling approach allowed us to simulate post-treatment flows without considering the explicit geometry of a deployed endovascular coil mass, greatly simplifying computation.
Free Wake Techniques for Rotor Aerodynamic Analylis. Volume 2: Vortex Sheet Models
NASA Technical Reports Server (NTRS)
Tanuwidjaja, A.
1982-01-01
Results of computations are presented using vortex sheets to model the wake and test the sensitivity of the solutions to various assumptions used in the development of the models. The complete codings are included.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 3: Programmer's reference
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the 2-D or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating-direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 3 is the Programmer's Reference, and describes the program structure, the FORTRAN variables stored in common blocks, and the details of each subprogram.
Proceedings of the 3rd Annual Conference on Aerospace Computational Control, volume 1
NASA Technical Reports Server (NTRS)
Bernard, Douglas E. (Editor); Man, Guy K. (Editor)
1989-01-01
Conference topics included definition of tool requirements, advanced multibody component representation descriptions, model reduction, parallel computation, real time simulation, control design and analysis software, user interface issues, testing and verification, and applications to spacecraft, robotics, and aircraft.
Pinevol: a user's guide to a volume calculator for southern pines
Daniel J. Leduc
2006-01-01
Taper functions describe a model of the actual geometric shape of a tree. When this shape is assumed to be known, volume by any log rule and to any merchantability standard can be calculated. PINEVOL is a computer program for calculating the volume of the major southern pines using species-specific bole taper functions. It can use the Doyle, Scribner, or International...
The physics of volume rendering
NASA Astrophysics Data System (ADS)
Peters, Thomas
2014-11-01
Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics. Computer scientists use radiation transfer, among other things, for the visualization of complex data sets with direct volume rendering. In this article, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D. I show examples for the use of this module on analytical models and simulation data.
Software Surface Modeling and Grid Generation Steering Committee
NASA Technical Reports Server (NTRS)
Smith, Robert E. (Editor)
1992-01-01
It is a NASA objective to promote improvements in the capability and efficiency of computational fluid dynamics. Grid generation, the creation of a discrete representation of the solution domain, is an essential part of computational fluid dynamics. However, grid generation about complex boundaries requires sophisticated surface-model descriptions of the boundaries. The surface modeling and the associated computation of surface grids consume an extremely large percentage of the total time required for volume grid generation. Efficient and user friendly software systems for surface modeling and grid generation are critical for computational fluid dynamics to reach its potential. The papers presented here represent the state-of-the-art in software systems for surface modeling and grid generation. Several papers describe improved techniques for grid generation.
Konheim, Jeremy A; Kon, Zachary N; Pasrija, Chetan; Luo, Qingyang; Sanchez, Pablo G; Garcia, Jose P; Griffith, Bartley P; Jeudy, Jean
2016-04-01
Size matching for lung transplantation is widely accomplished using height comparisons between donors and recipients. This gross approximation allows for wide variation in lung size and, potentially, size mismatch. Three-dimensional computed tomography (3D-CT) volumetry comparisons could offer more accurate size matching. Although recipient CT scans are universally available, donor CT scans are rarely performed. Therefore, predicted donor lung volumes could be used for comparison to measured recipient lung volumes, but no such predictive equations exist. We aimed to use 3D-CT volumetry measurements from a normal patient population to generate equations for predicted total lung volume (pTLV), predicted right lung volume (pRLV), and predicted left lung volume (pLLV), for size-matching purposes. Chest CT scans of 400 normal patients were retrospectively evaluated. 3D-CT volumetry was performed to measure total lung volume, right lung volume, and left lung volume of each patient, and predictive equations were generated. The fitted model was tested in a separate group of 100 patients. The model was externally validated by comparison of total lung volume with total lung capacity from pulmonary function tests in a subset of those patients. Age, gender, height, and race were independent predictors of lung volume. In the test group, there were strong linear correlations between predicted and actual lung volumes measured by 3D-CT volumetry for pTLV (r = 0.72), pRLV (r = 0.72), and pLLV (r = 0.69). A strong linear correlation was also observed when comparing pTLV and total lung capacity (r = 0.82). We successfully created a predictive model for pTLV, pRLV, and pLLV. These may serve as reference standards and predict donor lung volume for size matching in lung transplantation. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
NONLINEAR SYSTEMS, LINEAR SYSTEMS, SUBROUTINES , SOIL MECHANICS, INTERFACES, DYNAMICS, LOADS(FORCES), FORCE(MECHANICS), DAMPING, ACCELERATION, ELASTIC...PROPERTIES, PLASTIC PROPERTIES, CRACKS , REINFORCING MATERIALS , COMPOSITE MATERIALS , FAILURE(MECHANICS), MECHANICAL PROPERTIES, INSTRUCTION MANUALS, DIGITAL COMPUTERS...STRESSES, *COMPUTER PROGRAMS), (*STRUCTURES, STRESSES), (*DATA PROCESSING, STRUCTURAL PROPERTIES), SOILS , STRAIN(MECHANICS), MATHEMATICAL MODELS
Computers in Life Science Education, Volume 7, Numbers 1-12.
ERIC Educational Resources Information Center
Computers in Life Science Education, 1990
1990-01-01
The 12 digests of Computers in Life Science Education from 1990 are presented. The articles found in chronological sequence are as follows: "The Computer as a Teaching Tool--How Far Have We Come? Where Are We Going?" (Modell); "Where's the Software--Part 1"; "Keeping Abreast of the Literature" (which appears quarterly); "Where's the Software--Part…
A Network Model for the Effective Thermal Conductivity of Rigid Fibrous Refractory Insulations
NASA Technical Reports Server (NTRS)
Marschall, Jochen; Cooper, D. M. (Technical Monitor)
1995-01-01
A procedure is described for computing the effective thermal conductivity of a rigid fibrous refractory insulation. The insulation is modeled as a 3-dimensional Cartesian network of thermal conductance. The values and volume distributions of the conductance are assigned to reflect the physical properties of the insulation, its constituent fibers, and any permeating gas. The effective thermal conductivity is computed by considering the simultaneous energy transport by solid conduction, gas conduction and radiation through a cubic volume of model insulation; thus the coupling between heat transfer modes is retained (within the simplifications inherent to the model), rather than suppressed by treating these heat transfer modes as independent. The model takes into account insulation composition, density and fiber anisotropy, as well as the geometric and material properties of the constituent fibers. A relatively good agreement, between calculated and experimentally derived thermal conductivity values, is obtained for a variety of rigid fibrous insulations.
Computational fluid dynamics study of viscous fingering in supercritical fluid chromatography.
Subraveti, Sai Gokul; Nikrityuk, Petr; Rajendran, Arvind
2018-01-26
Axi-symmetric numerical simulations are carried out to study the dynamics of a plug introduced through a mixed-stream injection in supercritical fluid chromatographic columns. The computational fluid dynamics model developed in this work takes into account both the hydrodynamics and adsorption equilibria to describe the phenomena of viscous fingering and plug effect that contribute to peak distortions in mixed-stream injections. The model was implemented into commercial computational fluid dynamics software using user-defined functions. The simulations describe the propagation of both the solute and modifier highlighting the interplay between the hydrodynamics and plug effect. The simulated peaks showed good agreement with experimental data published in the literature involving different injection volumes (5 μL, 50 μL, 1 mL and 2 mL) of flurbiprofen on Chiralpak AD-H column using a mobile phase of CO 2 and methanol. The study demonstrates that while viscous fingering is the main source of peak distortions for large-volume injections (1 mL and 2 mL) it has negligible impact on small-volume injections (5 μL and 50 μL). Band broadening in small-volume injections arise mainly due to the plug effect. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Davis, A. D.; Heimbach, P.; Marzouk, Y.
2017-12-01
We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variation. For example, a warming climate corresponds to increasing mean surface mass balance but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model is prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low dimensional quantity of interest (future ice sheet volume). Continual surrogate refinement guarantees asymptotic sampling from the predictive distribution. Directly characterizing the predictive distribution in this way allows us to assess the ice sheet's sensitivity to climate variability and change.
Commercial Digital/ADP Equipment in the Ocean Environment. Volume 2. User Appendices
1978-12-15
is that the LINDA system uses a mini computer with a time sharing system software which allows several terminals to be operated at the same time...Acquisition System (ODAS) consists of sensors, computer hardware and computer software . Certain sensors are interfaced to the computers for real time...on USNS KANE, USNS BENT, and USKS WILKES. Commercial automatic data processing equipment used in ODAS includes: Item Model Computer PDP-9 Tape
DOT National Transportation Integrated Search
1995-07-01
This is the third volume in a series of four that reports on a study in which computer models were developed and applied to evaluate whether various crashworthiness features, as defined in Public Law 102-365, can provide practical benefit to the occu...
Hu, Yang; Li, Decai; Shu, Shi; Niu, Xiaodong
2016-02-01
Based on the Darcy-Brinkman-Forchheimer equation, a finite-volume computational model with lattice Boltzmann flux scheme is proposed for incompressible porous media flow in this paper. The fluxes across the cell interface are calculated by reconstructing the local solution of the generalized lattice Boltzmann equation for porous media flow. The time-scaled midpoint integration rule is adopted to discretize the governing equation, which makes the time step become limited by the Courant-Friedricks-Lewy condition. The force term which evaluates the effect of the porous medium is added to the discretized governing equation directly. The numerical simulations of the steady Poiseuille flow, the unsteady Womersley flow, the circular Couette flow, and the lid-driven flow are carried out to verify the present computational model. The obtained results show good agreement with the analytical, finite-difference, and/or previously published solutions.
Toward an in-situ analytics and diagnostics framework for earth system models
NASA Astrophysics Data System (ADS)
Anantharaj, Valentine; Wolf, Matthew; Rasch, Philip; Klasky, Scott; Williams, Dean; Jacob, Rob; Ma, Po-Lun; Kuo, Kwo-Sen
2017-04-01
The development roadmaps for many earth system models (ESM) aim for a globally cloud-resolving model targeting the pre-exascale and exascale systems of the future. The ESMs will also incorporate more complex physics, chemistry and biology - thereby vastly increasing the fidelity of the information content simulated by the model. We will then be faced with an unprecedented volume of simulation output that would need to be processed and analyzed concurrently in order to derive the valuable scientific results. We are already at this threshold with our current generation of ESMs at higher resolution simulations. Currently, the nominal I/O throughput in the Community Earth System Model (CESM) via Parallel IO (PIO) library is around 100 MB/s. If we look at the high frequency I/O requirements, it would require an additional 1 GB / simulated hour, translating to roughly 4 mins wallclock / simulated-day => 24.33 wallclock hours / simulated-model-year => 1,752,000 core-hours of charge per simulated-model-year on the Titan supercomputer at the Oak Ridge Leadership Computing Facility. There is also a pending need for 3X more volume of simulation output . Meanwhile, many ESMs use instrument simulators to run forward models to compare model simulations against satellite and ground-based instruments, such as radars and radiometers. The CFMIP Observation Simulator Package (COSP) is used in CESM as well as the Accelerated Climate Model for Energy (ACME), one of the ESMs specifically targeting current and emerging leadership-class computing platforms These simulators can be computationally expensive, accounting for as much as 30% of the computational cost. Hence the data are often written to output files that are then used for offline calculations. Again, the I/O bottleneck becomes a limitation. Detection and attribution studies also use large volume of data for pattern recognition and feature extraction to analyze weather and climate phenomenon such as tropical cyclones, atmospheric rivers, blizzards, etc. It is evident that ESMs need an in-situ framework to decouple the diagnostics and analytics from the prognostics and physics computations of the models so that the diagnostic computations could be performed concurrently without limiting model throughput. We are designing a science-driven online analytics framework for earth system models. Our approach is to adopt several data workflow technologies, such as the Adaptable IO System (ADIOS), being developed under the U.S. Exascale Computing Project (ECP) and integrate these to allow for extreme performance IO, in situ workflow integration, science-driven analytics and visualization all in a easy to use computational framework. This will allow science teams to write data 100-1000 times faster and seamlessly move from post processing the output for validation and verification purposes to performing these calculations in situ. We can easily and knowledgeably envision a near-term future where earth system models like ACME and CESM will have to address not only the challenges of the volume of data but also need to consider the velocity of the data. The earth system model of the future in the exascale era, as they incorporate more complex physics at higher resolutions, will be able to analyze more simulation content without having to compromise targeted model throughput.
A modeling technique for STOVL ejector and volume dynamics
NASA Technical Reports Server (NTRS)
Drummond, C. K.; Barankiewicz, W. S.
1990-01-01
New models for thrust augmenting ejector performance prediction and feeder duct dynamic analysis are presented and applied to a proposed Short Take Off and Vertical Landing (STOVL) aircraft configuration. Central to the analysis is the nontraditional treatment of the time-dependent volume integrals in the otherwise conventional control-volume approach. In the case of the thrust augmenting ejector, the analysis required a new relationship for transfer of kinetic energy from the primary flow to the secondary flow. Extraction of the required empirical corrections from current steady-state experimental data is discussed; a possible approach for modeling insight through Computational Fluid Dynamics (CFD) is presented.
Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages.
Choi, Youn-Kyung; Kim, Jinmi; Yamaguchi, Tetsutaro; Maki, Koutaro; Ko, Ching-Chang; Kim, Yong-Il
2016-01-01
This study aimed to determine the correlation between the volumetric parameters derived from the images of the second, third, and fourth cervical vertebrae by using cone beam computed tomography with skeletal maturation stages and to propose a new formula for predicting skeletal maturation by using regression analysis. We obtained the estimation of skeletal maturation levels from hand-wrist radiographs and volume parameters derived from the second, third, and fourth cervical vertebrae bodies from 102 Japanese patients (54 women and 48 men, 5-18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the greatest R-square indicated the fourth-cervical-vertebra volume as an independent variable with a variance inflation factor less than ten. The explanation power was 81.76%. Volumetric parameters of cervical vertebrae using cone beam computed tomography are useful in regression models. The derived regression model has the potential for clinical application as it enables a simple and quantitative analysis to evaluate skeletal maturation level.
Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages
Choi, Youn-Kyung; Kim, Jinmi; Maki, Koutaro; Ko, Ching-Chang
2016-01-01
This study aimed to determine the correlation between the volumetric parameters derived from the images of the second, third, and fourth cervical vertebrae by using cone beam computed tomography with skeletal maturation stages and to propose a new formula for predicting skeletal maturation by using regression analysis. We obtained the estimation of skeletal maturation levels from hand-wrist radiographs and volume parameters derived from the second, third, and fourth cervical vertebrae bodies from 102 Japanese patients (54 women and 48 men, 5–18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the greatest R-square indicated the fourth-cervical-vertebra volume as an independent variable with a variance inflation factor less than ten. The explanation power was 81.76%. Volumetric parameters of cervical vertebrae using cone beam computed tomography are useful in regression models. The derived regression model has the potential for clinical application as it enables a simple and quantitative analysis to evaluate skeletal maturation level. PMID:27340668
Towards Probablistic Assessment of Hypobaric Decompression Sickness Treatment
NASA Technical Reports Server (NTRS)
Conkin, J.; Abercromby, A. F.; Feiveson, A. H.; Gernhardt, M. L.; Norcross, J. R.; Ploutz-Snyder, R.; Wessel, J. H., III
2013-01-01
INTRODUCTION: Pressure, oxygen (O2), and time are the pillars to effective treatment of decompression sickness (DCS). The NASA DCS Treatment Model links a decrease in computed bubble volume to the resolution of a symptom. The decrease in volume is realized in two stages: a) during the Boyle's Law compression and b) during subsequent dissolution of the gas phase by the O2 window. METHODS: The cumulative distribution of 154 symptoms that resolved during repressurization was described with a log-logistic density function of pressure difference (deltaP as psid) associated with symptom resolution and two other explanatory variables. The 154 symptoms originated from 119 cases of DCS during 969 exposures in 47 different altitude tests. RESULTS: The probability of symptom resolution [P(symptom resolution)] = 1 / (1+exp(- (ln(deltaP) - 1.682 + 1.089×AMB - 0.00395×SYMPTOM TIME) / 0.633)), where AMB is 1 when the subject ambulated as part of the altitude exposure or else 0 and SYMPTOM TIME is the elapsed time in min from start of the altitude exposure to recognition of a DCS symptom. The P(symptom resolution) was estimated from computed deltaP from the Tissue Bubble Dynamics Model based on the "effective" Boyle's Law change: P2 - P1 (deltaP, psid) = P1×V1/V2 - P1, where V1 is the computed volume of a spherical bubble in a unit volume of tissue at low pressure P1 and V2 is computed volume after a change to a higher pressure P2. V2 continues to decrease through time at P2, at a faster rate if 100% ground level O2 was breathed. The computed deltaP is the effective treatment pressure at any point in time as if the entire ?deltaP was just from Boyle's Law compression. DISCUSSION: Given the low probability of DCS during extravehicular activity and the prompt treatment of a symptom with options through the model it is likely that the symptom and gas phase will resolve with minimum resources and minimal impact on astronaut health, safety, and productivity.
NASA Astrophysics Data System (ADS)
Herrington, A. R.; Lauritzen, P. H.; Reed, K. A.
2017-12-01
The spectral element dynamical core of the Community Atmosphere Model (CAM) has recently been coupled to an approximately isotropic, finite-volume grid per implementation of the conservative semi-Lagrangian multi-tracer transport scheme (CAM-SE-CSLAM; Lauritzen et al. 2017). In this framework, the semi-Lagrangian transport of tracers are computed on the finite-volume grid, while the adiabatic dynamics are solved using the spectral element grid. The physical parameterizations are evaluated on the finite-volume grid, as opposed to the unevenly spaced Gauss-Lobatto-Legendre nodes of the spectral element grid. Computing the physics on the finite-volume grid reduces numerical artifacts such as grid imprinting, possibly because the forcing terms are no longer computed at element boundaries where the resolved dynamics are least smooth. The separation of the physics grid and the dynamics grid allows for a unique opportunity to understand the resolution sensitivity in CAM-SE-CSLAM. The observed large sensitivity of CAM to horizontal resolution is a poorly understood impediment to improved simulations of regional climate using global, variable resolution grids. Here, a series of idealized moist simulations are presented in which the finite-volume grid resolution is varied relative to the spectral element grid resolution in CAM-SE-CSLAM. The simulations are carried out at multiple spectral element grid resolutions, in part to provide a companion set of simulations, in which the spectral element grid resolution is varied relative to the finite-volume grid resolution, but more generally to understand if the sensitivity to the finite-volume grid resolution is consistent across a wider spectrum of resolved scales. Results are interpreted in the context of prior ideas regarding resolution sensitivity of global atmospheric models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chuanben; Fei, Zhaodong; Chen, Lisha
This study aimed to quantify dosimetric effects of weight loss for nasopharyngeal carcinoma (NPC) treated with intensity-modulated radiation therapy (IMRT). Overall, 25 patients with NPC treated with IMRT were enrolled. We simulated weight loss during IMRT on the computer. Weight loss model was based on the planning computed tomography (CT) images. The original external contour of head and neck was labeled plan 0, and its volume was regarded as pretreatment normal weight. We shrank the external contour with different margins (2, 3, and 5 mm) and generated new external contours of head and neck. The volumes of reconstructed external contoursmore » were regarded as weight during radiotherapy. After recontouring outlines, the initial treatment plan was mapped to the redefined CT scans with the same beam configurations, yielding new plans. The computer model represented a theoretical proportional weight loss of 3.4% to 13.7% during the course of IMRT. The dose delivered to the planning target volume (PTV) of primary gross tumor volume and clinical target volume significantly increased by 1.9% to 2.9% and 1.8% to 2.9% because of weight loss, respectively. The dose to the PTV of gross tumor volume of lymph nodes fluctuated from −2.0% to 1.0%. The dose to the brain stem and the spinal cord was increased (p < 0.001), whereas the dose to the parotid gland was decreased (p < 0.001). Weight loss may lead to significant dosimetric change during IMRT. Repeated scanning and replanning for patients with NPC with an obvious weight loss may be necessary.« less
NASA Technical Reports Server (NTRS)
Simanonok, K. E.; Srinivasan, R.; Charles, J. B.
1992-01-01
Fluid shifts in weightlessness may cause a central volume expansion, activating reflexes to reduce the blood volume. Computer simulation was used to test the hypothesis that preadaptation of the blood volume prior to exposure to weightlessness could counteract the central volume expansion due to fluid shifts and thereby attenuate the circulatory and renal responses resulting in large losses of fluid from body water compartments. The Guyton Model of Fluid, Electrolyte, and Circulatory Regulation was modified to simulate the six degree head down tilt that is frequently use as an experimental analog of weightlessness in bedrest studies. Simulation results show that preadaptation of the blood volume by a procedure resembling a blood donation immediately before head down bedrest is beneficial in damping the physiologic responses to fluid shifts and reducing body fluid losses. After ten hours of head down tilt, blood volume after preadaptation is higher than control for 20 to 30 days of bedrest. Preadaptation also produces potentially beneficial higher extracellular volume and total body water for 20 to 30 days of bedrest.
Bacterial molecular networks: bridging the gap between functional genomics and dynamical modelling.
van Helden, Jacques; Toussaint, Ariane; Thieffry, Denis
2012-01-01
This introductory review synthesizes the contents of the volume Bacterial Molecular Networks of the series Methods in Molecular Biology. This volume gathers 9 reviews and 16 method chapters describing computational protocols for the analysis of metabolic pathways, protein interaction networks, and regulatory networks. Each protocol is documented by concrete case studies dedicated to model bacteria or interacting populations. Altogether, the chapters provide a representative overview of state-of-the-art methods for data integration and retrieval, network visualization, graph analysis, and dynamical modelling.
The Simulation of a Jumbo Jet Transport Aircraft. Volume 2: Modeling Data
NASA Technical Reports Server (NTRS)
Hanke, C. R.; Nordwall, D. R.
1970-01-01
The manned simulation of a large transport aircraft is described. Aircraft and systems data necessary to implement the mathematical model described in Volume I and a discussion of how these data are used in model are presented. The results of the real-time computations in the NASA Ames Research Center Flight Simulator for Advanced Aircraft are shown and compared to flight test data and to the results obtained in a training simulator known to be satisfactory.
Initial retrieval sequence and blending strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pemwell, D.L.; Grenard, C.E.
1996-09-01
This report documents the initial retrieval sequence and the methodology used to select it. Waste retrieval, storage, pretreatment and vitrification were modeled for candidate single-shell tank retrieval sequences. Performance of the sequences was measured by a set of metrics (for example,high-level waste glass volume, relative risk and schedule).Computer models were used to evaluate estimated glass volumes,process rates, retrieval dates, and blending strategy effects.The models were based on estimates of component inventories and concentrations, sludge wash factors and timing, retrieval annex limitations, etc.
Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa; Ebert, Martin
2012-04-23
Light scattering by light absorbing carbon (LAC) aggregates encapsulated into sulfate shells is computed by use of the discrete dipole method. Computations are performed for a UV, visible, and IR wavelength, different particle sizes, and volume fractions. Reference computations are compared to three classes of simplified model particles that have been proposed for climate modeling purposes. Neither model matches the reference results sufficiently well. Remarkably, more realistic core-shell geometries fall behind homogeneous mixture models. An extended model based on a core-shell-shell geometry is proposed and tested. Good agreement is found for total optical cross sections and the asymmetry parameter. © 2012 Optical Society of America
Development of a New Arterial-Line Filter Design Using Computational Fluid Dynamics Analysis
Herbst, Daniel P.; Najm, Hani K.
2012-01-01
Abstract: Arterial-line filters used during extracorporeal circulation continue to rely on the physical properties of a wetted micropore and reductions in blood flow velocity to affect air separation from the circulating blood volume. Although problems associated with air embolism during cardiac surgery persist, a number of investigators have concluded that further improvements in filtration are needed to enhance air removal during cardiopulmonary bypass procedures. This article reviews theoretical principles of micropore filter technology and outlines the development of a new arterial-line filter concept using computational fluid dynamics analysis. Manufacturer-supplied data of a micropore screen and experimental results taken from an ex vivo test circuit were used to define the inputs needed for numerical modeling of a new filter design. Flow patterns, pressure distributions, and velocity profiles predicted with computational fluid dynamics softwarewere used to inform decisions on model refinements and how to achieve initial design goals of ≤225 mL prime volume and ≤500 cm2 of screen surface area. Predictions for optimal model geometry included a screen angle of 56° from the horizontal plane with a total surface area of 293.9 cm2 and a priming volume of 192.4 mL. This article describes in brief the developmental process used to advance a new filter design and supports the value of numerical modeling in this undertaking. PMID:23198394
Development of a new arterial-line filter design using computational fluid dynamics analysis.
Herbst, Daniel P; Najm, Hani K
2012-09-01
Arterial-line filters used during extracorporeal circulation continue to rely on the physical properties of a wetted micropore and reductions in blood flow velocity to affect air separation from the circulating blood volume. Although problems associated with air embolism during cardiac surgery persist, a number of investigators have concluded that further improvements in filtration are needed to enhance air removal during cardiopulmonary bypass procedures. This article reviews theoretical principles of micropore filter technology and outlines the development of a new arterial-line filter concept using computational fluid dynamics analysis. Manufacturer-supplied data of a micropore screen and experimental results taken from an ex vivo test circuit were used to define the inputs needed for numerical modeling of a new filter design. Flow patterns, pressure distributions, and velocity profiles predicted with computational fluid dynamics software were used to inform decisions on model refinements and how to achieve initial design goals of < or = 225 mL prime volume and < or = 500 cm2 of screen surface area. Predictions for optimal model geometry included a screen angle of 56 degrees from the horizontal plane with a total surface area of 293.9 cm2 and a priming volume of 192.4 mL. This article describes in brief the developmental process used to advance a new filter design and supports the value of numerical modeling in this undertaking.
A 4DCT imaging-based breathing lung model with relative hysteresis
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long
2016-01-01
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811
A 4DCT imaging-based breathing lung model with relative hysteresis
NASA Astrophysics Data System (ADS)
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long
2016-12-01
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaborszky, J.; Venkatasubramanian, V.
1995-10-01
Taxonomy Theory is the first precise comprehensive theory for large power system dynamics modeled in any detail. The motivation for this project is to show that it can be used, practically, for analyzing a disturbance that actually occurred on a large system, which affected a sizable portion of the Midwest with supercritical Hopf type oscillations. This event is well documented and studied. The report first summarizes Taxonomy Theory with an engineering flavor. Then various computational approaches are sighted and analyzed for desirability to use with Taxonomy Theory. Then working equations are developed for computing a segment of the feasibility boundarymore » that bounds the region of (operating) parameters throughout which the operating point can be moved without losing stability. Then experimental software incorporating large EPRI software packages PSAPAC is developed. After a summary of the events during the subject disturbance, numerous large scale computations, up to 7600 buses, are reported. These results are reduced into graphical and tabular forms, which then are analyzed and discussed. The report is divided into two volumes. This volume illustrates the use of the Taxonomy Theory for computing the feasibility boundary and presents evidence that the event indeed led to a Hopf type oscillation on the system. Furthermore it proves that the Feasibility Theory can indeed be used for practical computation work with very large systems. Volume 2, a separate volume, will show that the disturbance has led to a supercritical (that is stable oscillation) Hopf bifurcation.« less
Modeling Flow Past a Tilted Vena Cava Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, M A; Wang, S L
Inferior vena cava filters are medical devices used to prevent pulmonary embolism (PE) from deep vein thrombosis. In particular, retrievable filters are well-suited for patients who are unresponsive to anticoagulation therapy and whose risk of PE decreased with time. The goal of this work is to use computational fluid dynamics to evaluate the flow past an unoccluded and partially occluded Celect inferior vena cava filter. In particular, the hemodynamic response to thrombus volume and filter tilt is examined, and the results are compared with flow conditions that are known to be thrombogenic. A computer model of the filter inside amore » model vena cava is constructed using high resolution digital photographs and methods of computer aided design. The models are parameterized using the Overture software framework, and a collection of overlapping grids is constructed to discretize the flow domain. The incompressible Navier-Stokes equations are solved, and the characteristics of the flow (i.e., velocity contours and wall shear stresses) are computed. The volume of stagnant and recirculating flow increases with thrombus volume. In addition, as the filter increases tilt, the cava wall adjacent to the tilted filter is subjected to low velocity flow that gives rise to regions of low wall shear stress. The results demonstrate the ease of IVC filter modeling with the Overture software framework. Flow conditions caused by the tilted Celect filter may elevate the risk of intrafilter thrombosis and facilitate vascular remodeling. This latter condition also increases the risk of penetration and potential incorporation of the hook of the filter into the vena caval wall, thereby complicating filter retrieval. Consequently, severe tilt at the time of filter deployment may warrant early clinical intervention.« less
3D robust Chan-Vese model for industrial computed tomography volume data segmentation
NASA Astrophysics Data System (ADS)
Liu, Linghui; Zeng, Li; Luan, Xiao
2013-11-01
Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by the research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is firstly defined according to the intensity difference within a local neighborhood. Then a global energy is defined to integrate local energy with respect to all image points. In a level set formulation, this energy is represented by a variational level set function, where a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates the comparable performance of the 3D robust Chan-Vese (RCV) model. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.
Coupled Structural, Thermal, Phase-Change and Electromagnetic Analysis for Superconductors. Volume 1
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.
1996-01-01
Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromagnetic subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase-change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermal and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principled to the construction of electromagnetic, thermal and mechanical finite elements, and (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal conducting boundary layer. The theoretical development is described in two volumes. This volume, Volume 1, describes mostly formulations for specific problems. Volume 2 describes generalization of those formulations.
The effect of topography on pyroclastic flow mobility
NASA Astrophysics Data System (ADS)
Ogburn, S. E.; Calder, E. S.
2010-12-01
Pyroclastic flows are among the most destructive volcanic phenomena. Hazard mitigation depends upon accurate forecasting of possible flow paths, often using computational models. Two main metrics have been proposed to describe the mobility of pyroclastic flows. The Heim coefficient, height-dropped/run-out (H/L), exhibits an inverse relationship with flow volume. This coefficient corresponds to the coefficient of friction and informs computational models that use Coulomb friction laws. Another mobility measure states that with constant shear stress, planimetric area is proportional to the flow volume raised to the 2/3 power (A∝V^(2/3)). This relationship is incorporated in models using constant shear stress instead of constant friction, and used directly by some empirical models. Pyroclastic flows from Soufriere Hills Volcano, Montserrat; Unzen, Japan; Colima, Mexico; and Augustine, Alaska are well described by these metrics. However, flows in specific valleys exhibit differences in mobility. This study investigates the effect of topography on pyroclastic flow mobility, as measured by the above mentioned mobility metrics. Valley width, depth, and cross-sectional area all influence flow mobility. Investigating the appropriateness of these mobility measures, as well as the computational models they inform, indicates certain circumstances under which each model performs optimally. Knowing which conditions call for which models allows for better model selection or model weighting, and therefore, more realistic hazard predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghomi, Pooyan Shirvani; Zinchenko, Yuriy
2014-08-15
Purpose: To compare methods to incorporate the Dose Volume Histogram (DVH) curves into the treatment planning optimization. Method: The performance of three methods, namely, the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans we generated using the corresponding optimization models. Four Organs at Risk (OARs) and one Tumor were involved in the treatment planning. The OARs and Tumor were discretized into total of 50,221 voxels. The number of beamlets was 943. We used commercially available optimization softwaremore » Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches to replicate the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds a promise of substantial computational speed up.« less
The report briefly describes the fundamental mechanisms and limiting factors involved in the electrostatic precipitation process. It discusses theories and procedures used in the computer model to describe the physical mechanisms, and generally describes the major operations perf...
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.
1982-01-01
A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.
Crew appliance computer program manual, volume 1
NASA Technical Reports Server (NTRS)
Russell, D. J.
1975-01-01
Trade studies of numerous appliance concepts for advanced spacecraft galley, personal hygiene, housekeeping, and other areas were made to determine which best satisfy the space shuttle orbiter and modular space station mission requirements. Analytical models of selected appliance concepts not currently included in the G-189A Generalized Environmental/Thermal Control and Life Support Systems (ETCLSS) Computer Program subroutine library were developed. The new appliance subroutines are given along with complete analytical model descriptions, solution methods, user's input instructions, and validation run results. The appliance components modeled were integrated with G-189A ETCLSS models for shuttle orbiter and modular space station, and results from computer runs of these systems are presented.
Doyle, Heather; Lohfeld, Stefan; Dürselen, Lutz; McHugh, Peter
2015-04-01
Computational model geometries of tibial defects with two types of implanted tissue engineering scaffolds, β-tricalcium phosphate (β-TCP) and poly-ε-caprolactone (PCL)/β-TCP, are constructed from µ-CT scan images of the real in vivo defects. Simulations of each defect under four-point bending and under simulated in vivo axial compressive loading are performed. The mechanical stability of each defect is analysed using stress distribution analysis. The results of this analysis highlights the influence of callus volume, and both scaffold volume and stiffness, on the load-bearing abilities of these defects. Clinically-used image-based methods to predict the safety of removing external fixation are evaluated for each defect. Comparison of these measures with the results of computational analyses indicates that care must be taken in the interpretation of these measures. Copyright © 2015 Elsevier Ltd. All rights reserved.
Applied Computational Electromagnetics Society Journal. Volume 7, Number 1, Summer 1992
1992-01-01
previously-solved computational problem in electrical engineering, physics, or related fields of study. The technical activities promoted by this...in solution technique or in data input/output; identification of new applica- tions for electromagnetics modeling codes and techniques; integration of...papers will represent the computational electromagnetics aspects of research in electrical engineering, physics, or related disciplines. However, papers
Subgrid or Reynolds stress-modeling for three-dimensional turbulence computations
NASA Technical Reports Server (NTRS)
Rubesin, M. W.
1975-01-01
A review is given of recent advances in two distinct computational methods for evaluating turbulence fields, namely, statistical Reynolds stress modeling and turbulence simulation, where large eddies are followed in time. It is shown that evaluation of the mean Reynolds stresses, rather than use of a scalar eddy viscosity, permits an explanation of streamline curvature effects found in several experiments. Turbulence simulation, with a new volume averaging technique and third-order accurate finite-difference computing is shown to predict the decay of isotropic turbulence in incompressible flow with rather modest computer storage requirements, even at Reynolds numbers of aerodynamic interest.
Strategic Control Algorithm Development : Volume 4A. Computer Program Report.
DOT National Transportation Integrated Search
1974-08-01
A description of the strategic algorithm evaluation model is presented, both at the user and programmer levels. The model representation of an airport configuration, environmental considerations, the strategic control algorithm logic, and the airplan...
Solid rocket booster performance evaluation model. Volume 4: Program listing
NASA Technical Reports Server (NTRS)
1974-01-01
All subprograms or routines associated with the solid rocket booster performance evaluation model are indexed in this computer listing. An alphanumeric list of each routine in the index is provided in a table of contents.
Strategic Control Algorithm Development : Volume 4B. Computer Program Report (Concluded)
DOT National Transportation Integrated Search
1974-08-01
A description of the strategic algorithm evaluation model is presented, both at the user and programmer levels. The model representation of an airport configuration, environmental considerations, the strategic control algorithm logic, and the airplan...
Summation of IMS Volume Frequencies.
ERIC Educational Resources Information Center
Gordillo, Frank
A computer program designed to produce summary information on the data processing volume of the Southwest Regional Laboratory's (SWRL) Instructional Management System (IMS) is described. Written in FORTRAN IV for use on an IBM 360 Model 91, the program sorts IMS input data on the basis of run identifier and on the basis of classroom identification…
Coupled Structural, Thermal, Phase-change and Electromagnetic Analysis for Superconductors, Volume 2
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.
1996-01-01
Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromag subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermel and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principled to the construction of electromagnetic, thermal and mechanical finite elements, and (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal conducting boundary layer. The theoretical development is described in two volumes. Volume 1 describes mostly formulation specific problems. Volume 2 describes generalization of those formulations.
High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME
NASA Astrophysics Data System (ADS)
Otis, Richard A.; Liu, Zi-Kui
2017-05-01
One foundational component of the integrated computational materials engineering (ICME) and Materials Genome Initiative is the computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts will be presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput, first-principles calculations and the CALPHAD method along with their potential propagations to downstream ICME modeling and simulations.
Volume of the steady-state space of financial flows in a monetary stock-flow-consistent model
NASA Astrophysics Data System (ADS)
Hazan, Aurélien
2017-05-01
We show that a steady-state stock-flow consistent macro-economic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, which volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to study the dynamics. Several methods to compute the volume are compared, inspired by operations research methods and the analysis of metabolic networks, both exact and approximate. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.
DOT National Transportation Integrated Search
1976-07-01
Several new capabilities have been added to the DYNALIST II computer program. These include: (1) a component matrix generator that operates as a 3-D finite element modeling program where elements consist of rigid bodies, flexural bodies, wheelsets, s...
The Coast Artillery Journal. Volume 66, Number 5, May 1927
1927-05-01
We are losing our sense of propor- tion and computing to ounces from measurements accurate only to pounds. Pursuing this subject further, it is...assumed. 2. The wind effect is supposedly corrected for through the use of the wind computer . Actually the present wind computer does not fur- nish...unsuited for the application of the correction (the Model 1917 Data Computer ) by operators too busy with principal deflections to occupy themselves
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1979-01-01
Lumped volume dynamic equations are derived using an energy state formulation. This technique requires that kinetic and potential energy state functions be written for the physical system being investigated. To account for losses in the system, a Rayleigh dissipation function is formed. Using these functions, a Lagrangian is formed and using Lagrange's equation, the equations of motion for the system are derived. The results of the application of this technique to a lumped volume are used to derive a model for the free piston Stirling engine. The model was simplified and programmed on an analog computer. Results are given comparing the model response with experimental data.
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1979-01-01
Lumped volume dynamic equations are derived using an energy-state formulation. This technique requires that kinetic and potential energy state functions be written for the physical system being investigated. To account for losses in the system, a Rayleigh dissipation function is also formed. Using these functions, a Lagrangian is formed and using Lagrange's equation, the equations of motion for the system are derived. The results of the application of this technique to a lumped volume are used to derive a model for the free-piston Stirling engine. The model was simplified and programmed on an analog computer. Results are given comparing the model response with experimental data.
Mapping Bone Mineral Density Obtained by Quantitative Computed Tomography to Bone Volume Fraction
NASA Technical Reports Server (NTRS)
Pennline, James A.; Mulugeta, Lealem
2017-01-01
Methods for relating or mapping estimates of volumetric Bone Mineral Density (vBMD) obtained by Quantitative Computed Tomography to Bone Volume Fraction (BVF) are outlined mathematically. The methods are based on definitions of bone properties, cited experimental studies and regression relations derived from them for trabecular bone in the proximal femur. Using an experimental range of values in the intertrochanteric region obtained from male and female human subjects, age 18 to 49, the BVF values calculated from four different methods were compared to the experimental average and numerical range. The BVF values computed from the conversion method used data from two sources. One source provided pre bed rest vBMD values in the intertrochanteric region from 24 bed rest subject who participated in a 70 day study. Another source contained preflight vBMD values from 18 astronauts who spent 4 to 6 months on the ISS. To aid the use of a mapping from BMD to BVF, the discussion includes how to formulate them for purpose of computational modeling. An application of the conversions would be used to aid in modeling of time varying changes in vBMD as it relates to changes in BVF via bone remodeling and/or modeling.
Asymmetric Base-Bleed Effect on Aerospike Plume-Induced Base-Heating Environment
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Droege, Alan; DAgostino, Mark; Lee, Young-Ching; Williams, Robert
2004-01-01
A computational heat transfer design methodology was developed to study the dual-engine linear aerospike plume-induced base-heating environment during one power-pack out, in ascent flight. It includes a three-dimensional, finite volume, viscous, chemically reacting, and pressure-based computational fluid dynamics formulation, a special base-bleed boundary condition, and a three-dimensional, finite volume, and spectral-line-based weighted-sum-of-gray-gases absorption computational radiation heat transfer formulation. A separate radiation model was used for diagnostic purposes. The computational methodology was systematically benchmarked. In this study, near-base radiative heat fluxes were computed, and they compared well with those measured during static linear aerospike engine tests. The base-heating environment of 18 trajectory points selected from three power-pack out scenarios was computed. The computed asymmetric base-heating physics were analyzed. The power-pack out condition has the most impact on convective base heating when it happens early in flight. The source of its impact comes from the asymmetric and reduced base bleed.
A three-dimensional semianalytical model of hydraulic fracture growth through weak barriers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luiskutty, C.T.; Tomutes, L.; Palmer, I.D.
1989-08-01
The goal of this research was to develop a fracture model for length/height ratio {le}4 that includes 2D flow (and a line source corresponding to the perforated interval) but makes approximations that allow a semianalytical solution, with large computer-time savings over the fully numerical mode. The height, maximum width, and pressure at the wellbore in this semianalytical model are calculated and compared with the results of the fully three-dimensional (3D) model. There is reasonable agreement in all parameters, the maximum discrepancy being 24%. Comparisons of fracture volume and leakoff volume also show reasonable agreement in volume and fluid efficiencies. Themore » values of length/height ratio, in the four cases in which agreement is found, vary from 1.5 to 3.7. The model offers a useful first-order (or screening) calculation of fracture-height growth through weak barriers (e.g., low stress contrasts). When coupled with the model developed for highly elongated fractures of length/height ratio {ge}4, which are also found to be in basic agreement with the fully numerical model, this new model provides the capability for approximating fracture-height growth through barriers for vertical fracture shapes that vary from penny to highly elongated. The computer time required is estimated to be less than the time required for the fully numerical model by a factor of 10 or more.« less
OPS MCC level B/C formulation requirements: Area targets and space volumes processor
NASA Technical Reports Server (NTRS)
Bishop, M. J., Jr.
1979-01-01
The level B/C mathematical specifications for the area targets and space volumes processor (ATSVP) are described. The processor is designed to compute the acquisition-of-signal (AOS) and loss-of-signal (LOS) times for area targets and space volumes. The characteristics of the area targets and space volumes are given. The mathematical equations necessary to determine whether the spacecraft lies within the area target or space volume are given. These equations provide a detailed model of the target geometry. A semianalytical technique for predicting the AOS and LOS time periods is disucssed. This technique was designed to bound the actual visibility period using a simplified target geometry model and unperturbed orbital motion. Functional overview of the ATSVP is presented and it's detailed logic flow is described.
DOT National Transportation Integrated Search
1978-05-01
The User Delay Cost Model (UDCM) is a Monte Carlo computer simulation of essential aspects of Terminal Control Area (TCA) air traffic movements that would be affected by facility outages. The model can also evaluate delay effects due to other factors...
A hybrid ARIMA and neural network model applied to forecast catch volumes of Selar crumenophthalmus
NASA Astrophysics Data System (ADS)
Aquino, Ronald L.; Alcantara, Nialle Loui Mar T.; Addawe, Rizavel C.
2017-11-01
The Selar crumenophthalmus with the English name big-eyed scad fish, locally known as matang-baka, is one of the fishes commonly caught along the waters of La Union, Philippines. The study deals with the forecasting of catch volumes of big-eyed scad fish for commercial consumption. The data used are quarterly caught volumes of big-eyed scad fish from 2002 to first quarter of 2017. This actual data is available from the open stat database published by the Philippine Statistics Authority (PSA)whose task is to collect, compiles, analyzes and publish information concerning different aspects of the Philippine setting. Autoregressive Integrated Moving Average (ARIMA) models, Artificial Neural Network (ANN) model and the Hybrid model consisting of ARIMA and ANN were developed to forecast catch volumes of big-eyed scad fish. Statistical errors such as Mean Absolute Errors (MAE) and Root Mean Square Errors (RMSE) were computed and compared to choose the most suitable model for forecasting the catch volume for the next few quarters. A comparison of the results of each model and corresponding statistical errors reveals that the hybrid model, ARIMA-ANN (2,1,2)(6:3:1), is the most suitable model to forecast the catch volumes of the big-eyed scad fish for the next few quarters.
MELCOR computer code manuals: Primer and user`s guides, Version 1.8.3 September 1994. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the US Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, andmore » combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR`s phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users` Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.« less
Winter Simulation Conference, Miami Beach, Fla., December 4-6, 1978, Proceedings. Volumes 1 & 2
NASA Technical Reports Server (NTRS)
Highland, H. J. (Editor); Nielsen, N. R.; Hull, L. G.
1978-01-01
The papers report on the various aspects of simulation such as random variate generation, simulation optimization, ranking and selection of alternatives, model management, documentation, data bases, and instructional methods. Simulation studies in a wide variety of fields are described, including system design and scheduling, government and social systems, agriculture, computer systems, the military, transportation, corporate planning, ecosystems, health care, manufacturing and industrial systems, computer networks, education, energy, production planning and control, financial models, behavioral models, information systems, and inventory control.
NASA Technical Reports Server (NTRS)
Bodley, C. S.; Devers, D. A.; Park, C. A.
1975-01-01
A theoretical development and associated digital computer program system is presented. The dynamic system (spacecraft) is modeled as an assembly of rigid and/or flexible bodies not necessarily in a topological tree configuration. The computer program system may be used to investigate total system dynamic characteristics including interaction effects between rigid and/or flexible bodies, control systems, and a wide range of environmental loadings. Additionally, the program system may be used for design of attitude control systems and for evaluation of total dynamic system performance including time domain response and frequency domain stability analyses. Volume 1 presents the theoretical developments including a description of the physical system, the equations of dynamic equilibrium, discussion of kinematics and system topology, a complete treatment of momentum wheel coupling, and a discussion of gravity gradient and environmental effects. Volume 2, is a program users' guide and includes a description of the overall digital program code, individual subroutines and a description of required program input and generated program output. Volume 3 presents the results of selected demonstration problems that illustrate all program system capabilities.
Precht, H; Kitslaar, P H; Broersen, A; Gerke, O; Dijkstra, J; Thygesen, J; Egstrup, K; Lambrechtsen, J
2017-02-01
Investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements in coronary arteries for plaque volumes and intensities. Three patients had three independent dose reduced CCTA performed and reconstructed with 30% ASIR (CTDI vol at 6.7 mGy), 60% ASIR (CTDI vol 4.3 mGy) and Veo (CTDI vol at 1.9 mGy). Coronary plaque analysis was performed for each measured CCTA volumes, plaque burden and intensities. Plaque volume and plaque burden show a decreasing tendency from ASIR to Veo as median volume for ASIR is 314 mm 3 and 337 mm 3 -252 mm 3 for Veo and plaque burden is 42% and 44% for ASIR to 39% for Veo. The lumen and vessel volume decrease slightly from 30% ASIR to 60% ASIR with 498 mm 3 -391 mm 3 for lumen volume and vessel volume from 939 mm 3 to 830 mm 3 . The intensities did not change overall between the different reconstructions for either lumen or plaque. We found a tendency of decreasing plaque volumes and plaque burden but no change in intensities with the use of low dose Veo CCTA (1.9 mGy) compared to dose reduced ASIR CCTA (6.7 mGy & 4.3 mGy), although more studies are warranted. Copyright © 2016 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
Greiner, Joachim; Sankarankutty, Aparna C; Seemann, Gunnar; Seidel, Thomas; Sachse, Frank B
2018-01-01
Computational modeling is an important tool to advance our knowledge on cardiac diseases and their underlying mechanisms. Computational models of conduction in cardiac tissues require identification of parameters. Our knowledge on these parameters is limited, especially for diseased tissues. Here, we assessed and quantified parameters for computational modeling of conduction in cardiac tissues. We used a rabbit model of myocardial infarction (MI) and an imaging-based approach to derive the parameters. Left ventricular tissue samples were obtained from fixed control hearts (animals: 5) and infarcted hearts (animals: 6) within 200 μm (region 1), 250-750 μm (region 2) and 1,000-1,250 μm (region 3) of the MI border. We assessed extracellular space, fibroblasts, smooth muscle cells, nuclei and gap junctions by a multi-label staining protocol. With confocal microscopy we acquired three-dimensional (3D) image stacks with a voxel size of 200 × 200 × 200 nm. Image segmentation yielded 3D reconstructions of tissue microstructure, which were used to numerically derive extracellular conductivity tensors. Volume fractions of myocyte, extracellular, interlaminar cleft, vessel and fibroblast domains in control were (in %) 65.03 ± 3.60, 24.68 ± 3.05, 3.95 ± 4.84, 7.71 ± 2.15, and 2.48 ± 1.11, respectively. Volume fractions in regions 1 and 2 were different for myocyte, myofibroblast, vessel, and extracellular domains. Fibrosis, defined as increase in fibrotic tissue constituents, was (in %) 21.21 ± 1.73, 16.90 ± 9.86, and 3.58 ± 8.64 in MI regions 1, 2, and 3, respectively. For control tissues, image-based computation of longitudinal, transverse and normal extracellular conductivity yielded (in S/m) 0.36 ± 0.11, 0.17 ± 0.07, and 0.1 ± 0.06, respectively. Conductivities were markedly increased in regions 1 ( + 75 , + 171, and + 100%), 2 ( + 53 , + 165, and + 80%), and 3 ( + 42 , + 141, and + 60%) . Volume fractions of the extracellular space including interlaminar clefts strongly correlated with conductivities in control and MI hearts. Our study provides novel quantitative data for computational modeling of conduction in normal and MI hearts. Notably, our study introduces comprehensive statistical information on tissue composition and extracellular conductivities on a microscopic scale in the MI border zone. We suggest that the presented data fill a significant gap in modeling parameters and extend our foundation for computational modeling of cardiac conduction.
Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L
2018-02-01
Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques are enabling visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is suggested for the clustering of data from such visualizations is proposed. The methodology is theoretically justified and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.
Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao; Lu, Jun-Ying; Zeng, Yan-Hong; Meng, Fan-Jie; Cao, Bin; Zi, Xue-Rong; Han, Shu-Ming; Zhang, Yu-Huan
2013-09-01
Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, the precise volume of pleural effusions still involves many challenges and currently does not have a recognized accurate measuring. To explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the pleural effusion. The 64-slice CT volume-rendering technique was used to measure and analyze three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume. No significant differences were found, P = 0.836. For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences, P = 0.989. The following linear regression equation was used to compare the pleural effusion volume (V) (measured by the CT volume-rendering technique) with the pleural effusion greatest depth (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression was used to compare the volume with the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0.000). The 64-slice CT volume-rendering technique can accurately measure the volume in pleural effusion patients, and a linear regression equation can be used to estimate the volume of the free pleural effusion.
1983-07-01
Analysis of trace contaminants project at ORNL. Medium applied to movement of heavy metals through a forested watershed. OAQPS has not reviewed...computer cartography and site design aids; management information systems for facility planning, construction and * operation; and a computer...4 (5) Comprehensive 4 (6) Spills/ Heavy Gas 5 b. Regional 7 c. Reactive Pollutants 7 d. Special Purpose 8 e. Rocket Firing 8 f. Summary of Models by
Tokamak experimental power reactor conceptual design. Volume II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1976-08-01
Volume II contains the following appendices: (1) summary of EPR design parameters, (2) impurity control, (3) plasma computational models, (4) structural support system, (5) materials considerations for the primary energy conversion system, (6) magnetics, (7) neutronics penetration analysis, (8) first wall stress analysis, (9) enrichment of isotopes of hydrogen by cryogenic distillation, and (10) noncircular plasma considerations. (MOW)
Contributions to Engineering Models of Human-Computer Interaction. Volume 1.
1988-05-06
for those readers wishing to replicate my results. Volume II is on file in the Carnegie-Mellon library and is available upon request from the author...a 50 mieec 10 10 10 10 170 170 170 Moo, sf 4 170 Mooc Figure 4-20: Schedule chart of the perception-wait algorithm for the detection span task
Sato, Y; Teixeira, E R; Tsuga, K; Shindoi, N
1999-08-01
More validity of finite element analysis (FEA) in implant biomechanics requires element downsizing. However, excess downsizing needs computer memory and calculation time. To evaluate the effectiveness of a new algorithm established for more valid FEA model construction without downsizing, three-dimensional FEA bone trabeculae models with different element sizes (300, 150 and 75 micron) were constructed. Four algorithms of stepwise (1 to 4 ranks) assignment of Young's modulus accorded with bone volume in the individual cubic element was used and then stress distribution against vertical loading was analysed. The model with 300 micron element size, with 4 ranks of Young's moduli accorded with bone volume in each element presented similar stress distribution to the model with the 75 micron element size. These results show that the new algorithm was effective, and the use of the 300 micron element for bone trabeculae representation was proposed, without critical changes in stress values and for possible savings on computer memory and calculation time in the laboratory.
Computational Investigation of Soot and Radiation in Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Lalit, Harshad
This study delves into computational modeling of soot and infrared radiation for turbulent reacting flows, detailed understanding of both of which is paramount in the design of cleaner engines and pollution control. In the first part of the study, the concept of Stochastic Time and Space Series Analysis (STASS) as a numerical tool to compute time dependent statistics of radiation intensity is introduced for a turbulent premixed flame. In the absence of high fidelity codes for large eddy simulation or direct numerical simulation of turbulent flames, the utility of STASS for radiation imaging of reacting flows to understand the flame structure is assessed by generating images of infrared radiation in spectral bands dominated by radiation from gas phase carbon dioxide and water vapor using an assumed PDF method. The study elucidates the need for time dependent computation of radiation intensity for validation with experiments and the need for accounting for turbulence radiation interactions for correctly predicting radiation intensity and consequently the flame temperature and NOx in a reacting fluid flow. Comparison of single point statistics of infrared radiation intensity with measurements show that STASS can not only predict the flame structure but also estimate the dynamics of thermochemical scalars in the flame with reasonable accuracy. While a time series is used to generate realizations of thermochemical scalars in the first part of the study, in the second part, instantaneous realizations of resolved scale temperature, CO2 and H2O mole fractions and soot volume fractions are extracted from a large eddy simulation (LES) to carry out quantitative imaging of radiation intensity (QIRI) for a turbulent soot generating ethylene diffusion flame. A primary motivation of the study is to establish QIRI as a computational tool for validation of soot models, especially in the absence of conventional flow field and measured scalar data for sooting flames. Realizations of scalars from the LES are used in conjunction with the radiation heat transfer equation and a narrow band radiation model to compute time dependent and time averaged images of infrared radiation intensity in spectral bands corresponding to molecular radiation from gas phase carbon dioxide and soot particles exclusively. While qualitative and quantitative comparisons with measured images in the CO2 radiation band show that the flame structure is correctly computed, images computed in the soot radiation band illustrate that the soot volume fraction is under predicted by the computations. The effect of the soot model and cause of under prediction is investigated further by correcting the soot volume fraction using an empirical state relationship. By comparing default simulations with computations using the state relation, it is shown that while the soot model under-estimates the soot concentration, it correctly computes the intermittency of soot in the flame. The study of sooting flames is extended further by performing a parametric analysis of physical and numerical parameters that affect soot formation and transport in two laboratory scale turbulent sooting flames, one fueled by natural gas and the other by ethylene. The study is focused on investigating the effect of molecular diffusion of species, dilution of fuel with hydrogen gas and the effect of chemical reaction mechanism on the soot concentration in the flame. The effect of species Lewis numbers on soot evolution and transport is investigated by carrying out simulations, first with the default equal diffusivity (ED) assumption and then by incorporating a differential diffusion (DD) model. Computations using the DD model over-estimate the concentration of the soot precursor and soot oxidizer species, leading to inconsistencies in the estimate of the soot concentration. The linear differential diffusion (LDD) model, reported previously to consistently model differential diffusion effects is implemented to correct the over prediction effect of the DD model. It is shown that the effect of species Lewis number on soot evolution is a secondary phenomenon and that soot is primarily transported by advection of the fluid in a turbulent flame. The effect of hydrogen dilution on the soot formation and transport process is also studied. It is noted that the decay of soot volume fraction and flame length with hydrogen addition follows trends observed in laminar sooting flame measurements. While hydrogen enhances mixing shown by the laminar flamelet solutions, the mixing effect does not significantly contribute to differential molecular diffusion effects in the soot nucleation regions downstream of the flame and has a negligible effect on soot transport. The sensitivity of computations of soot volume fraction towards the chemical reaction mechanism is shown. It is concluded that modeling reaction pathways of C3 and C4 species that lead up to Polycyclic Aromatic Hydrocarbon (PAH) molecule formation is paramount for accurate predictions of soot in the flame. (Abstract shortened by ProQuest.).
Lattice study of finite volume effect in HVP for muon g-2
NASA Astrophysics Data System (ADS)
Izubuchi, Taku; Kuramashi, Yoshinobu; Lehner, Christoph; Shintani, Eigo
2018-03-01
We study the finite volume effect of the hadronic vacuum polarization contribution to muon g-2, aμhvp, in lattice QCD by comparison with two different volumes, L4 = (5.4)4 and (8.1)4 fm4, at physical pion. We perform the lattice computation of highly precise vector-vector current correlator with optimized AMA technique on Nf = 2 + 1 PACS gauge configurations in Wilson-clover fermion and stout smeared gluon action at one lattice cut-off, a-1 = 2.33 GeV. We compare two integrals of aμhvp, momentum integral and time-slice summation, on the lattice and numerically show that the different size of finite volume effect appears between two methods. We also discuss the effect of backward-state propagation into the result of aμhvp with the different boundary condition. Our model-independent study suggest that the lattice computation at physical pion is important for correct estimate of finite volume and other lattice systematics in aμhvp.
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.
Methods and computer readable medium for improved radiotherapy dosimetry planning
Wessol, Daniel E.; Frandsen, Michael W.; Wheeler, Floyd J.; Nigg, David W.
2005-11-15
Methods and computer readable media are disclosed for ultimately developing a dosimetry plan for a treatment volume irradiated during radiation therapy with a radiation source concentrated internally within a patient or incident from an external beam. The dosimetry plan is available in near "real-time" because of the novel geometric model construction of the treatment volume which in turn allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of alpha, beta or gamma emissions emanating from an internal radiation source during various radiotherapies, such as brachytherapy or targeted radionuclide therapy, or they are exemplary representations of high-energy photons, electrons, protons or other ionizing particles incident on the treatment volume from an external source. In a preferred embodiment, a medical image of a treatment volume irradiated during radiotherapy having a plurality of pixels of information is obtained.
Velocity gradients and reservoir volumes lessons in computational sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, P.W.
1995-12-31
The sensitivity of reservoir volume estimation from depth converted geophysical time maps to the velocity gradients employed is investigated through a simple model study. The computed volumes are disconcertingly sensitive to gradients, both horizontal and vertical. The need for an accurate method of time to depth conversion is well demonstrated by the model study in which errors in velocity are magnified 40 fold in the computation of the volume. Thus if +/- 10% accuracy in the volume is desired, we must be able to estimate the velocity at the water contact with 0.25% accuracy. Put another way, if the velocitymore » is 8000 feet per second at the well then we have only +/- 20 feet per second leeway in estimating the velocity at the water contact. Very moderate horizontal and vertical gradients would typically indicate a velocity change of a few hundred feet per second if they are in the same direction. Clearly the interpreter needs to by very careful. A methodology is demonstrated which takes into account all the information that is available, velocities, tops, depositional and lithologic spatial patterns, and common sense. It is assumed that through appropriate use of check shot and other time-depth information, that the interpreter has correctly tied the reflection picks to the well tops. Such ties are ordinarily too soft for direct time-depth conversion to give adequate depth ties. The proposed method uses a common compaction law as its basis and incorporates time picks, tops and stratigraphic maps into the depth conversion process. The resulting depth map ties the known well tops in an optimum fashion.« less
NASA Astrophysics Data System (ADS)
Zhan, Shuiqing; Wang, Junfeng; Wang, Zhentao; Yang, Jianhong
2018-02-01
The effects of different cell design and operating parameters on the gas-liquid two-phase flows and bubble distribution characteristics under the anode bottom regions in aluminum electrolysis cells were analyzed using a three-dimensional computational fluid dynamics-population balance model. These parameters include inter-anode channel width, anode-cathode distance (ACD), anode width and length, current density, and electrolyte depth. The simulations results show that the inter-anode channel width has no significant effect on the gas volume fraction, electrolyte velocity, and bubble size. With increasing ACD, the above values decrease and more uniform bubbles can be obtained. Different effects of the anode width and length can be concluded in different cell regions. With increasing current density, the gas volume fraction and electrolyte velocity increase, but the bubble size keeps nearly the same. Increasing electrolyte depth decreased the gas volume fraction and bubble size in particular areas and the electrolyte velocity increased.
Computer design of porous active materials at different dimensional scales
NASA Astrophysics Data System (ADS)
Nasedkin, Andrey
2017-12-01
The paper presents a mathematical and computer modeling of effective properties of porous piezoelectric materials of three types: with ordinary porosity, with metallized pore surfaces, and with nanoscale porosity structure. The described integrated approach includes the effective moduli method of composite mechanics, simulation of representative volumes, and finite element method.
ERIC Educational Resources Information Center
de la Torre, Jose Garcia; Cifre, Jose G. Hernandez; Martinez, M. Carmen Lopez
2008-01-01
This paper describes a computational exercise at undergraduate level that demonstrates the employment of Monte Carlo simulation to study the conformational statistics of flexible polymer chains, and to predict solution properties. Three simple chain models, including excluded volume interactions, have been implemented in a public-domain computer…
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1975-01-01
Information necessary to use the LOVES computer program in its existing state or to modify the program to include studies not properly handled by the basic model is provided. A users guide, a programmers manual, and several supporting appendices are included.
Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.
1983-05-01
As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM) is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines.more » In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input) in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The user's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu
2014-05-12
Conventional semi-infinite solution for extracting blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in estimation of BFI (αD{sub B}) in tissues with small volume and large curvature. We proposed an algorithm integrating Nth-order linear model of autocorrelation function with the Monte Carlo simulation of photon migrations in tissue for the extraction of αD{sub B}. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcome the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo strokemore » model of mouse. Computer simulations shows that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD{sub B} (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noises to DCS data resulted in αD{sub B} variations, the mean values of errors in extracting αD{sub B} were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD{sub B} using both linear algorithm and semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows the potential for the inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.« less
Computational upscaling of Drucker-Prager plasticity from micro-CT images of synthetic porous rock
NASA Astrophysics Data System (ADS)
Liu, Jie; Sarout, Joel; Zhang, Minchao; Dautriat, Jeremie; Veveakis, Emmanouil; Regenauer-Lieb, Klaus
2018-01-01
Quantifying rock physical properties is essential for the mining and petroleum industry. Microtomography provides a new way to quantify the relationship between the microstructure and the mechanical and transport properties of a rock. Studies reporting the use microtomographic images to derive permeability and elastic moduli of rocks are common; only rare studies were devoted to yield and failure parameters using this technique. In this study, we simulate the macroscale plastic properties of a synthetic sandstone sample made of calcite-cemented quartz grains using the microscale information obtained from microtomography. The computations rely on the concept of representative volume elements (RVEs). The mechanical RVE is determined using the upper and lower bounds of finite-element computations for elasticity. We present computational upscaling methods from microphysical processes to extract the plasticity parameters of the RVE and compare results to experimental data. The yield stress, cohesion and internal friction angle of the matrix (solid part) of the rock were obtained with reasonable accuracy. Computations of plasticity of a series of models of different volume-sizes showed almost overlapping stress-strain curves, suggesting that the mechanical RVE determined by elastic computations is also valid for plastic yielding. Furthermore, a series of models were created by self-similarly inflating/deflating the porous models, that is keeping a similar structure while achieving different porosity values. The analysis of these models showed that yield stress, cohesion and internal friction angle linearly decrease with increasing porosity in the porosity range between 8 and 28 per cent. The internal friction angle decreases the most significantly, while cohesion remains stable.
A microphysical parameterization of aqSOA and sulfate formation in clouds
NASA Astrophysics Data System (ADS)
McVay, Renee; Ervens, Barbara
2017-07-01
Sulfate and secondary organic aerosol (cloud aqSOA) can be chemically formed in cloud water. Model implementation of these processes represents a computational burden due to the large number of microphysical and chemical parameters. Chemical mechanisms have been condensed by reducing the number of chemical parameters. Here an alternative is presented to reduce the number of microphysical parameters (number of cloud droplet size classes). In-cloud mass formation is surface and volume dependent due to surface-limited oxidant uptake and/or size-dependent pH. Box and parcel model simulations show that using the effective cloud droplet diameter (proportional to total volume-to-surface ratio) reproduces sulfate and aqSOA formation rates within ≤30% as compared to full droplet distributions; other single diameters lead to much greater deviations. This single-class approach reduces computing time significantly and can be included in models when total liquid water content and effective diameter are available.
NASA Astrophysics Data System (ADS)
Lonsdale, R. D.; Webster, R.
This paper demonstrates the application of a simple finite volume approach to a finite element mesh, combining the economy of the former with the geometrical flexibility of the latter. The procedure is used to model a three-dimensional flow on a mesh of linear eight-node brick (hexahedra). Simulations are performed for a wide range of flow problems, some in excess of 94,000 nodes. The resulting computer code ASTEC that incorporates these procedures is described.
Magdoom, Kulam Najmudeen; Pishko, Gregory L.; Rice, Lori; Pampo, Chris; Siemann, Dietmar W.; Sarntinoranont, Malisa
2014-01-01
Systemic drug delivery to solid tumors involving macromolecular therapeutic agents is challenging for many reasons. Amongst them is their chaotic microvasculature which often leads to inadequate and uneven uptake of the drug. Localized drug delivery can circumvent such obstacles and convection-enhanced delivery (CED) - controlled infusion of the drug directly into the tissue - has emerged as a promising delivery method for distributing macromolecules over larger tissue volumes. In this study, a three-dimensional MR image-based computational porous media transport model accounting for realistic anatomical geometry and tumor leakiness was developed for predicting the interstitial flow field and distribution of albumin tracer following CED into the hind-limb tumor (KHT sarcoma) in a mouse. Sensitivity of the model to changes in infusion flow rate, catheter placement and tissue hydraulic conductivity were investigated. The model predictions suggest that 1) tracer distribution is asymmetric due to heterogeneous porosity; 2) tracer distribution volume varies linearly with infusion volume within the whole leg, and exponentially within the tumor reaching a maximum steady-state value; 3) infusion at the center of the tumor with high flow rates leads to maximum tracer coverage in the tumor with minimal leakage outside; and 4) increasing the tissue hydraulic conductivity lowers the tumor interstitial fluid pressure and decreases the tracer distribution volume within the whole leg and tumor. The model thus predicts that the interstitial fluid flow and drug transport is sensitive to porosity and changes in extracellular space. This image-based model thus serves as a potential tool for exploring the effects of transport heterogeneity in tumors. PMID:24619021
DOT National Transportation Integrated Search
1975-05-01
The report describes an analytical approach to estimation of fuel consumption in rail transportation, and provides sample computer calculations suggesting the sensitivity of fuel usage to various parameters. The model used is based upon careful delin...
DOT National Transportation Integrated Search
1977-01-01
Auto production and operation consume energy, material, capital and labor resources. Numerous substitution possibilities exist within and between resource sectors, corresponding to the broad spectrum of potential design technologies. Alternative auto...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Casey W.; Green, Garrett; Noticewala, Sonal S.
Purpose: Validated models are needed to justify strategies to define planning target volumes (PTVs) for intact cervical cancer used in clinical practice. Our objective was to independently validate a previously published shape model, using data collected prospectively from clinical trials. Methods and Materials: We analyzed 42 patients with intact cervical cancer treated with daily fractionated pelvic intensity modulated radiation therapy and concurrent chemotherapy in one of 2 prospective clinical trials. We collected online cone beam computed tomography (CBCT) scans before each fraction. Clinical target volume (CTV) structures from the planning computed tomography scan were cast onto each CBCT scan aftermore » rigid registration and manually redrawn to account for organ motion and deformation. We applied the 95% isodose cloud from the planning computed tomography scan to each CBCT scan and computed any CTV outside the 95% isodose cloud. The primary aim was to determine the proportion of CTVs that were encompassed within the 95% isodose volume. A 1-sample t test was used to test the hypothesis that the probability of complete coverage was different from 95%. We used mixed-effects logistic regression to assess effects of time and patient variability. Results: The 95% isodose line completely encompassed 92.3% of all CTVs (95% confidence interval, 88.3%-96.4%), not significantly different from the 95% probability anticipated a priori (P=.19). The overall proportion of missed CTVs was small: the grand mean of covered CTVs was 99.9%, and 95.2% of misses were located in the anterior body of the uterus. Time did not affect coverage probability (P=.71). Conclusions: With the clinical implementation of a previously proposed PTV definition strategy based on a shape model for intact cervical cancer, the probability of CTV coverage was high and the volume of CTV missed was low. This PTV expansion strategy is acceptable for clinical trials and practice; however, we recommend daily image guidance to avoid systematic large misses in select patients.« less
NASA Astrophysics Data System (ADS)
Toyokuni, G.; Takenaka, H.
2007-12-01
We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models using analytical ways. In spite of the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside the grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters) calculated by the volume harmonic averaging of elastic moduli and volume arithmetic averaging of density in grid cells. This scheme enables us to put a material discontinuity into an arbitrary position in the spatial grids. Most of the methods used for synthetic seismogram calculation today receives the blessing of the standard Earth models, such as the PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For the FD computation of seismic waveform with such models, we first need accurate treatment of material discontinuities in radius. This study provides a numerical scheme for analytical calculations of the effective parameters for an arbitrary spatial grids in radial direction as to these major four standard Earth models making the best use of their functional features. This scheme can analytically obtain the integral volume averages through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which is opened to utilization in a large variety of FD schemes ranging from 1-D to 3-D, with conventional- and staggered-grids. In the presentation, we show some numerical examples displaying the accuracy of the FD synthetics simulated with the analytical effective parameters.
Computational study of Drucker-Prager plasticity of rock using microtomography
NASA Astrophysics Data System (ADS)
Liu, J.; Sarout, J.; Zhang, M.; Dautriat, J.; Veveakis, M.; Regenauer-Lieb, K.
2016-12-01
Understanding the physics of rocks is essential for the industry of mining and petroleum. Microtomography provides a new way to quantify the relationship between the microstructure and their mechanical and transport properties. Transport and elastic properties have been studied widely while plastic properties are still poorly understood. In this study, we analyse a synthetic sandstone sample for its up-scaled plastic properties from the micro-scale. The computations are based on the representative volume element (RVE). The mechanical RVE was determined by the upper and lower bound finite element computations of elasticity. By comparing with experimental curves, the parameters of the matrix (solid part), which consists of calcite-cemented quartz grains, were investigated and quite accurate values obtained. Analyses deduced the bulk properties of yield stress, cohesion and the angle of friction of the rock with pores. Computations of a series of models of volume-sizes from 240-cube to 400-cube showed almost overlapped stress-strain curves, suggesting that the mechanical RVE determined by elastic computations is valid for plastic yielding. Furthermore, a series of derivative models were created which have similar structure but different porosity values. The analyses of these models showed that yield stress, cohesion and the angle of friction linearly decrease with the porosity increasing in the range of porosity from 8% to 28%. The angle of friction decreases the fastest and cohesion shows the most stable along with porosity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engberg, L; KTH Royal Institute of Technology, Stockholm; Eriksson, K
Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation. Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which are to maximize clinical goal achievement by relating to deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitlymore » balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Constituted by the inherently non-convex dose-at-volume measure, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model. Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. Clinical goals of each case respectively point out three ROI dose-at-volume measures to be considered for plan quality assessment. This is translated in the convex approximation into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives. Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives in aspects of accuracy and plan quality.« less
Haralambidis, Adam; Ari-Demirkaya, Arzu; Acar, Ahu; Küçükkeleş, Nazan; Ateş, Mustafa; Ozkaya, Selin
2009-12-01
The aim of this study was to evaluate the effect of rapid maxillary expansion on the volume of the nasal cavity by using computed tomography. The sample consisted of 24 patients (10 boys, 14 girls) in the permanent dentition who had maxillary constriction and bilateral posterior crossbite. Ten patients had skeletal Class I and 14 had Class II relationships. Skeletal maturity was assessed with the modified cervical vertebral maturation method. Computed tomograms were taken before expansion and at the end of the 3-month retention period, after active expansion. The tomograms were analyzed by Mimics software (version 10.11, Materialise Medical Co, Leuven, Belgium) to reconstruct 3-dimensional images and calculate the volume of the nasal cavities before and after expansion. A significant (P = 0.000) average increase of 11.3% in nasal volume was found. Sex, growth, and skeletal relationship did not influence measurements or response to treatment. A significant difference was found in the volume increase between the Class I and Class II patients, but it was attributed to the longer expansion period of the latter. Therefore, rapid maxillary expansion induces a significant average increase of the nasal volume and consequently can increase nasal permeability and establish a predominant nasal respiration pattern.
A computational method for sharp interface advection.
Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje
2016-11-01
We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM ® extension and is published as open source.
Vivekanandhan, Sapthagirivasan; Subramaniam, Janarthanam; Mariamichael, Anburajan
2016-10-01
Hip fractures due to osteoporosis are increasing progressively across the globe. It is also difficult for those fractured patients to undergo dual-energy X-ray absorptiometry scans due to its complicated protocol and its associated cost. The utilisation of computed tomography for the fracture treatment has become common in the clinical practice. It would be helpful for orthopaedic clinicians, if they could get some additional information related to bone strength for better treatment planning. The aim of our study was to develop an automated system to segment the femoral neck region, extract the cortical and trabecular bone parameters, and assess the bone strength using an isotropic volume construction from clinical computed tomography images. The right hip computed tomography and right femur dual-energy X-ray absorptiometry measurements were taken from 50 south-Indian females aged 30-80 years. Each computed tomography image volume was re-constructed to form isotropic volumes. An automated system by incorporating active contour models was used to segment the neck region. A minimum distance boundary method was applied to isolate the cortical and trabecular bone components. The trabecular bone was enhanced and segmented using trabecular enrichment approach. The cortical and trabecular bone features were extracted and statistically compared with dual-energy X-ray absorptiometry measured femur neck bone mineral density. The extracted bone measures demonstrated a significant correlation with neck bone mineral density (r > 0.7, p < 0.001). The inclusion of cortical measures, along with the trabecular measures extracted after isotropic volume construction and trabecular enrichment approach procedures, resulted in better estimation of bone strength. The findings suggest that the proposed system using the clinical computed tomography images scanned with low dose could eventually be helpful in osteoporosis diagnosis and its treatment planning. © IMechE 2016.
The 1991 version of the plume impingement computer program. Volume 2: User's input guide
NASA Technical Reports Server (NTRS)
Bender, Robert L.; Somers, Richard E.; Prendergast, Maurice J.; Clayton, Joseph P.; Smith, Sheldon D.
1991-01-01
The Plume Impingement Program (PLIMP) is a computer code used to predict impact pressures, forces, moments, heating rates, and contamination on surfaces due to direct impingement flowfields. Typically, it has been used to analyze the effects of rocket exhaust plumes on nearby structures from ground level to the vacuum of space. The program normally uses flowfields generated by the MOC, RAMP2, SPF/2, or SFPGEN computer programs. It is capable of analyzing gaseous and gas/particle flows. A number of simple subshapes are available to model the surfaces of any structure. The original PLIMP program has been modified many times of the last 20 years. The theoretical bases for the referenced major changes, and additional undocumented changes and enhancements since 1988 are summarized in volume 1 of this report. This volume is the User's Input Guide and should be substituted for all previous guides when running the latest version of the program. This version can operate on VAX and UNIX machines with NCAR graphics ability.
2016-08-23
SECURITY CLASSIFICATION OF: Hybrid finite element / finite volume based CaMEL shallow water flow solvers have been successfully extended to study wave...effects on ice floes in a simplified 10 sq-km ocean domain. Our solver combines the merits of both the finite element and finite volume methods and...ES) U.S. Army Research Office P.O. Box 12211 Research Triangle Park, NC 27709-2211 sea ice dynamics, shallow water, finite element , finite volume
Evaluating the accuracy of wear formulae for acetabular cup liners.
Wu, James Shih-Shyn; Hsu, Shu-Ling; Chen, Jian-Horng
2010-02-01
This study proposes two methods for exploring the wear volume of a worn liner. The first method is a numerical method, in which SolidWorks software is used to create models of the worn out regions of liners at various wear directions and depths. The second method is an experimental one, in which a machining center is used to mill polyoxymethylene to manufacture worn and unworn liner models, then the volumes of the models are measured. The results show that the SolidWorks software is a good tool for presenting the wear pattern and volume of a worn liner. The formula provided by Ilchmann is the most suitable for computing liner volume loss, but is not accurate enough. This study suggests that a more accurate wear formula is required. This is crucial for accurate evaluation of the performance of hip components implanted in patients, as well as for designing new hip components.
1990-10-01
involving a heavy artillery barrage, the impact point output alone could consume upwards of 10,000 pages of computer paper. For this reason, AURA provides...but pervasive factor: the asset allocation model must be compatible with the mathematical behavior of the input data. Thus, for example, if assets are...described as expendable during repair or decontamination activities, it must have HOMELINKS which appear in the consuming repair SUBCHAINs
Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J
2008-02-01
Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can predict accurate TKR damage measured in a simulator machine, and second, to investigate how choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affect the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. Surface evolution method was important only during the initial cycles, where variable step was needed to capture rapid geometry changes due to the creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that surface evolution method matters only during the initial "break in" period of the simulation.
Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT
Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster
2016-01-01
Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids as well as selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremities CBCT volume size, this downsampling corresponds to an acceleration of the reconstruction that is more than five times faster than a brute force solution that applies fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view. PMID:27694701
Nonlinear, discrete flood event models, 1. Bayesian estimation of parameters
NASA Astrophysics Data System (ADS)
Bates, Bryson C.; Townley, Lloyd R.
1988-05-01
In this paper (Part 1), a Bayesian procedure for parameter estimation is applied to discrete flood event models. The essence of the procedure is the minimisation of a sum of squares function for models in which the computed peak discharge is nonlinear in terms of the parameters. This objective function is dependent on the observed and computed peak discharges for several storms on the catchment, information on the structure of observation error, and prior information on parameter values. The posterior covariance matrix gives a measure of the precision of the estimated parameters. The procedure is demonstrated using rainfall and runoff data from seven Australian catchments. It is concluded that the procedure is a powerful alternative to conventional parameter estimation techniques in situations where a number of floods are available for parameter estimation. Parts 2 and 3 will discuss the application of statistical nonlinearity measures and prediction uncertainty analysis to calibrated flood models. Bates (this volume) and Bates and Townley (this volume).
NASA Technical Reports Server (NTRS)
Marshall, Jochen; Milos, Frank; Fredrich, Joanne; Rasky, Daniel J. (Technical Monitor)
1997-01-01
Laser Scanning Confocal Microscopy (LSCM) has been used to obtain digital images of the complicated 3-D (three-dimensional) microstructures of rigid, fibrous thermal protection system (TPS) materials. These orthotropic materials are comprised of refractory ceramic fibers with diameters in the range of 1 to 10 microns and have open porosities of 0.8 or more. Algorithms are being constructed to extract quantitative microstructural information from the digital data so that it may be applied to specific heat and mass transport modeling efforts; such information includes, for example, the solid and pore volume fractions, the internal surface area per volume, fiber diameter distributions, and fiber orientation distributions. This type of information is difficult to obtain in general, yet it is directly relevant to many computational efforts which seek to model macroscopic thermophysical phenomena in terms of microscopic mechanisms or interactions. Two such computational efforts for fibrous TPS materials are: i) the calculation of radiative transport properties; ii) the modeling of gas permeabilities.
NASA Technical Reports Server (NTRS)
Janz, R. F.
1974-01-01
The systems cost/performance model was implemented as a digital computer program to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses. The computer is described along with the operating environment in which the program was written and checked, the program specifications such as discussions of logic and computational flow, the different subsystem models involved in the design of the spacecraft, and routines involved in the nondesign area such as costing and scheduling of the design. Preliminary results for the DSCS-II design are also included.
Volume effects of late term normal tissue toxicity in prostate cancer radiotherapy
NASA Astrophysics Data System (ADS)
Bonta, Dacian Viorel
Modeling of volume effects for treatment toxicity is paramount for optimization of radiation therapy. This thesis proposes a new model for calculating volume effects in gastro-intestinal and genito-urinary normal tissue complication probability (NTCP) following radiation therapy for prostate carcinoma. The radiobiological and the pathological basis for this model and its relationship to other models are detailed. A review of the radiobiological experiments and published clinical data identified salient features and specific properties a biologically adequate model has to conform to. The new model was fit to a set of actual clinical data. In order to verify the goodness of fit, two established NTCP models and a non-NTCP measure for complication risk were fitted to the same clinical data. The method of fit for the model parameters was maximum likelihood estimation. Within the framework of the maximum likelihood approach I estimated the parameter uncertainties for each complication prediction model. The quality-of-fit was determined using the Aikaike Information Criterion. Based on the model that provided the best fit, I identified the volume effects for both types of toxicities. Computer-based bootstrap resampling of the original dataset was used to estimate the bias and variance for the fitted parameter values. Computer simulation was also used to estimate the population size that generates a specific uncertainty level (3%) in the value of predicted complication probability. The same method was used to estimate the size of the patient population needed for accurate choice of the model underlying the NTCP. The results indicate that, depending on the number of parameters of a specific NTCP model, 100 (for two parameter models) and 500 patients (for three parameter models) are needed for accurate parameter fit. Correlation of complication occurrence in patients was also investigated. The results suggest that complication outcomes are correlated in a patient, although the correlation coefficient is rather small.
NASA Technical Reports Server (NTRS)
Austin, F.; Markowitz, J.; Goldenberg, S.; Zetkov, G. A.
1973-01-01
The formulation of a mathematical model for predicting the dynamic behavior of rotating flexible space station configurations was conducted. The overall objectives of the study were: (1) to develop the theoretical techniques for determining the behavior of a realistically modeled rotating space station, (2) to provide a versatile computer program for the numerical analysis, and (3) to present practical concepts for experimental verification of the analytical results. The mathematical model and its associated computer program are described.
Visual Debugging of Object-Oriented Systems With the Unified Modeling Language
2004-03-01
to be “the systematic and imaginative use of the technology of interactive computer graphics and the disciplines of graphic design , typography ... Graphics volume 23 no 6, pp893-901, 1999. [SHN98] Shneiderman, B. Designing the User Interface. Strategies for Effective Human-Computer Interaction...System Design Objectives ................................................................................ 44 3.3 System Architecture
CORY: A Computer Program for Determining Dimension Stock Yields
Charles C Brunner; Marshall S. White; Fred M. Lamb; James G. Schroeder
1989-01-01
CORY is a computer program that calculates random-width, fixed-length cutting yields and best sawing sequences for either rip- or crosscut-first operations. It differs from other yield calculating programs by evaluating competing cuttings through conflict resolution models. Comparisons with Program YIELD resulted in a 9 percent greater cutting volume and a 98 percent...
A Software Hub for High Assurance Model-Driven Development and Analysis
2007-01-23
verification of UML models in TLPVS. In Thomas Baar, Alfred Strohmeier, Ana Moreira, and Stephen J. Mellor, editors, UML 2004 - The Unified Modeling...volume 3785 of Lecture Notes in Computer Science, pages 52–65, Manchester, UK, Nov 2005. Springer. [GH04] Günter Graw and Peter Herrmann. Transformation
A study of application of remote sensing to river forecasting. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1975-01-01
A project is described whose goal was to define, implement and evaluate a pilot demonstration test to show the practicability of applying remotely sensed data to operational river forecasting in gaged or previously ungaged watersheds. A secondary objective was to provide NASA with documentation describing the computer programs that comprise the streamflow forecasting simulation model used. A computer-based simulation model was adapted to a streamflow forecasting application and implemented in an IBM System/360 Model 44 computer, operating in a dedicated mode, with operator interactive control through a Model 2250 keyboard/graphic CRT terminal. The test site whose hydrologic behavior was simulated is a small basin (365 square kilometers) designated Town Creek near Geraldine, Alabama.
NASA Astrophysics Data System (ADS)
Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.
2007-03-01
Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
Advanced vehicle systems assessment. Volume 5: Appendices
NASA Technical Reports Server (NTRS)
Hardy, K.
1985-01-01
An appendix to the systems assessment for the electric hybrid vehicle project is presented. Included are battery design, battery cost, aluminum vehicle construction, IBM PC computer programs and battery discharge models.
NASA Astrophysics Data System (ADS)
Mathieu, Jean-Philippe; Inal, Karim; Berveiller, Sophie; Diard, Olivier
2010-11-01
Local approach to brittle fracture for low-alloyed steels is discussed in this paper. A bibliographical introduction intends to highlight general trends and consensual points of the topic and evokes debatable aspects. French RPV steel 16MND5 (equ. ASTM A508 Cl.3), is then used as a model material to study the influence of temperature on brittle fracture. A micromechanical modelling of brittle fracture at the elementary volume scale already used in previous work is then recalled. It involves a multiscale modelling of microstructural plasticity which has been tuned on experimental inter-phase and inter-granular stresses heterogeneities measurements. Fracture probability of the elementary volume can then be computed using a randomly attributed defect size distribution based on realistic carbides repartition. This defect distribution is then deterministically correlated to stress heterogeneities simulated within the microstructure using a weakest-link hypothesis on the elementary volume, which results in a deterministic stress to fracture. Repeating the process allows to compute Weibull parameters on the elementary volume. This tool is then used to investigate the physical mechanisms that could explain the already experimentally observed temperature dependence of Beremin's parameter for 16MND5 steel. It is showed that, assuming that the hypothesis made in this work about cleavage micro-mechanisms are correct, effective equivalent surface energy (i.e. surface energy plus plastically dissipated energy when blunting the crack tip) for propagating a crack has to be temperature dependent to explain Beremin's parameters temperature evolution.
Industrial machinery noise impact modeling, volume 1
NASA Astrophysics Data System (ADS)
Hansen, C. H.; Kugler, B. A.
1981-07-01
The development of a machinery noise computer model which may be used to assess the effect of occupational noise on the health and welfare of industrial workers is discussed. The purpose of the model is to provide EPA with the methodology to evaluate the personnel noise problem, to identify the equipment types responsible for the exposure and to assess the potential benefits of a given noise control action. Due to its flexibility in design and application, the model and supportive computer program can be used by other federal agencies, state governments, labor and industry as an aid in the development of noise abatement programs.
PuMA: the Porous Microstructure Analysis software
NASA Astrophysics Data System (ADS)
Ferguson, Joseph C.; Panerai, Francesco; Borner, Arnaud; Mansour, Nagi N.
2018-01-01
The Porous Microstructure Analysis (PuMA) software has been developed in order to compute effective material properties and perform material response simulations on digitized microstructures of porous media. PuMA is able to import digital three-dimensional images obtained from X-ray microtomography or to generate artificial microstructures. PuMA also provides a module for interactive 3D visualizations. Version 2.1 includes modules to compute porosity, volume fractions, and surface area. Two finite difference Laplace solvers have been implemented to compute the continuum tortuosity factor, effective thermal conductivity, and effective electrical conductivity. A random method has been developed to compute tortuosity factors from the continuum to rarefied regimes. Representative elementary volume analysis can be performed on each property. The software also includes a time-dependent, particle-based model for the oxidation of fibrous materials. PuMA was developed for Linux operating systems and is available as a NASA software under a US & Foreign release.
Estimation of Local Bone Loads for the Volume of Interest.
Kim, Jung Jin; Kim, Youkyung; Jang, In Gwun
2016-07-01
Computational bone remodeling simulations have recently received significant attention with the aid of state-of-the-art high-resolution imaging modalities. They have been performed using localized finite element (FE) models rather than full FE models due to the excessive computational costs of full FE models. However, these localized bone remodeling simulations remain to be investigated in more depth. In particular, applying simplified loading conditions (e.g., uniform and unidirectional loads) to localized FE models have a severe limitation in a reliable subject-specific assessment. In order to effectively determine the physiological local bone loads for the volume of interest (VOI), this paper proposes a novel method of estimating the local loads when the global musculoskeletal loads are given. The proposed method is verified for the three VOI in a proximal femur in terms of force equilibrium, displacement field, and strain energy density (SED) distribution. The effect of the global load deviation on the local load estimation is also investigated by perturbing a hip joint contact force (HCF) in the femoral head. Deviation in force magnitude exhibits the greatest absolute changes in a SED distribution due to its own greatest deviation, whereas angular deviation perpendicular to a HCF provides the greatest relative change. With further in vivo force measurements and high-resolution clinical imaging modalities, the proposed method will contribute to the development of reliable patient-specific localized FE models, which can provide enhanced computational efficiency for iterative computing processes such as bone remodeling simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, andmore » combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR`s phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.« less
Normal reference values for bladder wall thickness on CT in a healthy population.
Fananapazir, Ghaneh; Kitich, Aleksandar; Lamba, Ramit; Stewart, Susan L; Corwin, Michael T
2018-02-01
To determine normal bladder wall thickness on CT in patients without bladder disease. Four hundred and nineteen patients presenting for trauma with normal CTs of the abdomen and pelvis were included in our retrospective study. Bladder wall thickness was assessed, and bladder volume was measured using both the ellipsoid formula and an automated technique. Patient age, gender, and body mass index were recorded. Linear regression models were created to account for bladder volume, age, gender, and body mass index, and the multiple correlation coefficient with bladder wall thickness was computed. Bladder volume and bladder wall thickness were log-transformed to achieve approximate normality and homogeneity of variance. Variables that did not contribute substantively to the model were excluded, and a parsimonious model was created and the multiple correlation coefficient was calculated. Expected bladder wall thickness was estimated for different bladder volumes, and 1.96 standard deviation above expected provided the upper limit of normal on the log scale. Age, gender, and bladder volume were associated with bladder wall thickness (p = 0.049, 0.024, and < 0.001, respectively). The linear regression model had an R 2 of 0.52. Age and gender were negligible in contribution to the model, and a parsimonious model using only volume was created for both the ellipsoid and automated volumes (R 2 = 0.52 and 0.51, respectively). Bladder wall thickness correlates with bladder wall volume. The study provides reference bladder wall thicknesses on CT utilizing both the ellipsoid formula and automated bladder volumes.
Nasrabad, Afshin Eskandari; Laghaei, Rozita; Eu, Byung Chan
2005-04-28
In previous work on the density fluctuation theory of transport coefficients of liquids, it was necessary to use empirical self-diffusion coefficients to calculate the transport coefficients (e.g., shear viscosity of carbon dioxide). In this work, the necessity of empirical input of the self-diffusion coefficients in the calculation of shear viscosity is removed, and the theory is thus made a self-contained molecular theory of transport coefficients of liquids, albeit it contains an empirical parameter in the subcritical regime. The required self-diffusion coefficients of liquid carbon dioxide are calculated by using the modified free volume theory for which the generic van der Waals equation of state and Monte Carlo simulations are combined to accurately compute the mean free volume by means of statistical mechanics. They have been computed as a function of density along four different isotherms and isobars. A Lennard-Jones site-site interaction potential was used to model the molecular carbon dioxide interaction. The density and temperature dependence of the theoretical self-diffusion coefficients are shown to be in excellent agreement with experimental data when the minimum critical free volume is identified with the molecular volume. The self-diffusion coefficients thus computed are then used to compute the density and temperature dependence of the shear viscosity of liquid carbon dioxide by employing the density fluctuation theory formula for shear viscosity as reported in an earlier paper (J. Chem. Phys. 2000, 112, 7118). The theoretical shear viscosity is shown to be robust and yields excellent density and temperature dependence for carbon dioxide. The pair correlation function appearing in the theory has been computed by Monte Carlo simulations.
Computational study of noise in a large signal transduction network.
Intosalmi, Jukka; Manninen, Tiina; Ruohonen, Keijo; Linne, Marja-Leena
2011-06-21
Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased on all frequencies when the system volume was increased. We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies. © 2011 Intosalmi et al; licensee BioMed Central Ltd.
Three-dimensional spiral CT during arterial portography: comparison of three rendering techniques.
Heath, D G; Soyer, P A; Kuszyk, B S; Bliss, D F; Calhoun, P S; Bluemke, D A; Choti, M A; Fishman, E K
1995-07-01
The three most common techniques for three-dimensional reconstruction are surface rendering, maximum-intensity projection (MIP), and volume rendering. Surface-rendering algorithms model objects as collections of geometric primitives that are displayed with surface shading. The MIP algorithm renders an image by selecting the voxel with the maximum intensity signal along a line extended from the viewer's eye through the data volume. Volume-rendering algorithms sum the weighted contributions of all voxels along the line. Each technique has advantages and shortcomings that must be considered during selection of one for a specific clinical problem and during interpretation of the resulting images. With surface rendering, sharp-edged, clear three-dimensional reconstruction can be completed on modest computer systems; however, overlapping structures cannot be visualized and artifacts are a problem. MIP is computationally a fast technique, but it does not allow depiction of overlapping structures, and its images are three-dimensionally ambiguous unless depth cues are provided. Both surface rendering and MIP use less than 10% of the image data. In contrast, volume rendering uses nearly all of the data, allows demonstration of overlapping structures, and engenders few artifacts, but it requires substantially more computer power than the other techniques.
NASA Astrophysics Data System (ADS)
Benedek, Judit; Papp, Gábor; Kalmár, János
2018-04-01
Beyond rectangular prism polyhedron, as a discrete volume element, can also be used to model the density distribution inside 3D geological structures. The calculation of the closed formulae given for the gravitational potential and its higher-order derivatives, however, needs twice more runtime than that of the rectangular prism computations. Although the more detailed the better principle is generally accepted it is basically true only for errorless data. As soon as errors are present any forward gravitational calculation from the model is only a possible realization of the true force field on the significance level determined by the errors. So if one really considers the reliability of input data used in the calculations then sometimes the "less" can be equivalent to the "more" in statistical sense. As a consequence the processing time of the related complex formulae can be significantly reduced by the optimization of the number of volume elements based on the accuracy estimates of the input data. New algorithms are proposed to minimize the number of model elements defined both in local and in global coordinate systems. Common gravity field modelling programs generate optimized models for every computation points ( dynamic approach), whereas the static approach provides only one optimized model for all. Based on the static approach two different algorithms were developed. The grid-based algorithm starts with the maximum resolution polyhedral model defined by 3-3 points of each grid cell and generates a new polyhedral surface defined by points selected from the grid. The other algorithm is more general; it works also for irregularly distributed data (scattered points) connected by triangulation. Beyond the description of the optimization schemes some applications of these algorithms in regional and local gravity field modelling are presented too. The efficiency of the static approaches may provide even more than 90% reduction in computation time in favourable situation without the loss of reliability of the calculated gravity field parameters.
NASA Astrophysics Data System (ADS)
Cho, Yi Je; Lee, Wook Jin; Park, Yong Ho
2014-11-01
Aspects of numerical results from computational experiments on representative volume element (RVE) problems using finite element analyses are discussed. Two different boundary conditions (BCs) are examined and compared numerically for volume elements with different sizes, where tests have been performed on the uniaxial tensile deformation of random particle reinforced composites. Structural heterogeneities near model boundaries such as the free-edges of particle/matrix interfaces significantly influenced the overall numerical solutions, producing force and displacement fluctuations along the boundaries. Interestingly, this effect was shown to be limited to surface regions within a certain distance of the boundaries, while the interior of the model showed almost identical strain fields regardless of the applied BCs. Also, the thickness of the BC-affected regions remained constant with varying volume element sizes in the models. When the volume element size was large enough compared to the thickness of the BC-affected regions, the structural response of most of the model was found to be almost independent of the applied BC such that the apparent properties converged to the effective properties. Finally, the mechanism that leads a RVE model for random heterogeneous materials to be representative is discussed in terms of the size of the volume element and the thickness of the BC-affected region.
Alp, Murat; Cucinotta, Francis A.
2017-01-01
Changes to cognition, including memory, following radiation exposure are a concern for cosmic ray exposures to astronauts and in Hadron therapy with proton and heavy ion beams. The purpose of the present work is to develop computational methods to evaluate microscopic energy deposition (ED) in volumes representative of neuron cell structures, including segments of dendrites and spines, using a stochastic track structure model. A challenge for biophysical models of neuronal damage is the large sizes (>100 μm) and variability in volumes of possible dendritic segments and pre-synaptic elements (spines and filopodia). We consider cylindrical and spherical microscopic volumes of varying geometric parameters and aspect ratios from 0.5 to 5 irradiated by protons, and 3He and 12C particles at energies corresponding to a distance of 1 cm to the Bragg peak, which represent particles of interest in Hadron therapy as well as space radiation exposure. We investigate the optimal axis length of dendritic segments to evaluate microscopic ED and hit probabilities along the dendritic branches at a given macroscopic dose. Because of large computation times to analyze ED in volumes of varying sizes, we developed an analytical method to find the mean primary dose in spheres that can guide numerical methods to find the primary dose distribution for cylinders. Considering cylindrical segments of varying aspect ratio at constant volume, we assess the chord length distribution, mean number of hits and ED profiles by primary particles and secondary electrons (δ-rays). For biophysical modeling applications, segments on dendritic branches are proposed to have equal diameters and axes lengths along the varying diameter of a dendritic branch. PMID:28554507
NASA Astrophysics Data System (ADS)
Alp, Murat; Cucinotta, Francis A.
2017-05-01
Changes to cognition, including memory, following radiation exposure are a concern for cosmic ray exposures to astronauts and in Hadron therapy with proton and heavy ion beams. The purpose of the present work is to develop computational methods to evaluate microscopic energy deposition (ED) in volumes representative of neuron cell structures, including segments of dendrites and spines, using a stochastic track structure model. A challenge for biophysical models of neuronal damage is the large sizes (> 100 μm) and variability in volumes of possible dendritic segments and pre-synaptic elements (spines and filopodia). We consider cylindrical and spherical microscopic volumes of varying geometric parameters and aspect ratios from 0.5 to 5 irradiated by protons, and 3He and 12C particles at energies corresponding to a distance of 1 cm to the Bragg peak, which represent particles of interest in Hadron therapy as well as space radiation exposure. We investigate the optimal axis length of dendritic segments to evaluate microscopic ED and hit probabilities along the dendritic branches at a given macroscopic dose. Because of large computation times to analyze ED in volumes of varying sizes, we developed an analytical method to find the mean primary dose in spheres that can guide numerical methods to find the primary dose distribution for cylinders. Considering cylindrical segments of varying aspect ratio at constant volume, we assess the chord length distribution, mean number of hits and ED profiles by primary particles and secondary electrons (δ-rays). For biophysical modeling applications, segments on dendritic branches are proposed to have equal diameters and axes lengths along the varying diameter of a dendritic branch.
Alp, Murat; Cucinotta, Francis A
2017-05-01
Changes to cognition, including memory, following radiation exposure are a concern for cosmic ray exposures to astronauts and in Hadron therapy with proton and heavy ion beams. The purpose of the present work is to develop computational methods to evaluate microscopic energy deposition (ED) in volumes representative of neuron cell structures, including segments of dendrites and spines, using a stochastic track structure model. A challenge for biophysical models of neuronal damage is the large sizes (> 100µm) and variability in volumes of possible dendritic segments and pre-synaptic elements (spines and filopodia). We consider cylindrical and spherical microscopic volumes of varying geometric parameters and aspect ratios from 0.5 to 5 irradiated by protons, and 3 He and 12 C particles at energies corresponding to a distance of 1cm to the Bragg peak, which represent particles of interest in Hadron therapy as well as space radiation exposure. We investigate the optimal axis length of dendritic segments to evaluate microscopic ED and hit probabilities along the dendritic branches at a given macroscopic dose. Because of large computation times to analyze ED in volumes of varying sizes, we developed an analytical method to find the mean primary dose in spheres that can guide numerical methods to find the primary dose distribution for cylinders. Considering cylindrical segments of varying aspect ratio at constant volume, we assess the chord length distribution, mean number of hits and ED profiles by primary particles and secondary electrons (δ-rays). For biophysical modeling applications, segments on dendritic branches are proposed to have equal diameters and axes lengths along the varying diameter of a dendritic branch. Copyright © 2017. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Olsson, O.
2018-01-01
We present a novel heuristic derived from a probabilistic cost model for approximate N-body simulations. We show that this new heuristic can be used to guide tree construction towards higher quality trees with improved performance over current N-body codes. This represents an important step beyond the current practice of using spatial partitioning for N-body simulations, and enables adoption of a range of state-of-the-art algorithms developed for computer graphics applications to yield further improvements in N-body simulation performance. We outline directions for further developments and review the most promising such algorithms.
Sun, Jirun; Eidelman, Naomi; Lin-Gibson, Sheng
2009-03-01
The objectives of this study were to (1) demonstrate X-ray micro-computed tomography (microCT) as a viable method for determining the polymerization shrinkage and microleakage on the same sample accurately and non-destructively, and (2) investigate the effect of sample geometry (e.g., C-factor and volume) on polymerization shrinkage and microleakage. Composites placed in a series of model cavities of controlled C-factors and volumes were imaged using microCT to determine their precise location and volume before and after photopolymerization. Shrinkage was calculated by comparing the volume of composites before and after polymerization and leakage was predicted based on gap formation between composites and cavity walls as a function of position. Dye penetration experiments were used to validate microCT results. The degree of conversion (DC) of composites measured using FTIR microspectroscopy in reflectance mode was nearly identical for composites filled in all model cavity geometries. The shrinkage of composites calculated based on microCT results was statistically identical regardless of sample geometry. Microleakage, on the other hand, was highly dependent on the C-factor as well as the composite volume, with higher C-factors and larger volumes leading to a greater probability of microleakage. Spatial distribution of microleakage determined by microCT agreed well with results determined by dye penetration. microCT has proven to be a powerful technique in quantifying polymerization shrinkage and corresponding microleakage for clinically relevant cavity geometries.
Engineer Modeling Study. Volume II. Users Manual.
1982-09-01
Distribution Center, Digital Equip- ment Corporation, 1980). The following paragraphs briefly describe each of the major input sections...abbreviation 3. A sequence number for post-processing 4. Clock time 5. Order number pointer (six digits ) 6. Job number pointer (six digits ) 7. Unit number...KIT) Users Manual (Boeing Computer % Services, Inc., 1977). S VAX/VMS Users Manual. Volume 3A (Software Distribution Center, Digital Equipment
Computational Flow Modeling of Hydrodynamics in Multiphase Trickle-Bed Reactors
NASA Astrophysics Data System (ADS)
Lopes, Rodrigo J. G.; Quinta-Ferreira, Rosa M.
2008-05-01
This study aims to incorporate most recent multiphase models in order to investigate the hydrodynamic behavior of a TBR in terms of pressure drop and liquid holdup. Taking into account transport phenomena such as mass and heat transfer, an Eulerian k-fluid model was developed resulting from the volume averaging of the continuity and momentum equations and solved for a 3D representation of the catalytic bed. Computational fluid dynamics (CFD) model predicts hydrodynamic parameters quite well if good closures for fluid/fluid and fluid/particle interactions are incorporated in the multiphase model. Moreover, catalytic performance is investigated with the catalytic wet oxidation of a phenolic pollutant.
The present state and future directions of PDF methods
NASA Technical Reports Server (NTRS)
Pope, S. B.
1992-01-01
The objectives of the workshop are presented in viewgraph format, as is this entire article. The objectives are to discuss the present status and the future direction of various levels of engineering turbulence modeling related to Computational Fluid Dynamics (CFD) computations for propulsion; to assure that combustion is an essential part of propulsion; and to discuss Probability Density Function (PDF) methods for turbulent combustion. Essential to the integration of turbulent combustion models is the development of turbulent model, chemical kinetics, and numerical method. Some turbulent combustion models typically used in industry are the k-epsilon turbulent model, the equilibrium/mixing limited combustion, and the finite volume codes.
Modeling the October 2005 lahars at Panabaj (Guatemala)
NASA Astrophysics Data System (ADS)
Charbonnier, S. J.; Connor, C. B.; Connor, L. J.; Sheridan, M. F.; Oliva Hernández, J. P.; Richardson, J. A.
2018-01-01
An extreme rainfall event in October of 2005 triggered two deadly lahars on the flanks of Tolimán volcano (Guatemala) that caused many fatalities in the village of Panabaj. We mapped the deposits of these lahars, then developed computer simulations of the lahars using the geologic data and compared simulated area inundated by the flows to mapped area inundated. Computer simulation of the two lahars was dramatically improved after calibration with geological data. Specifically, detailed field measurements of flow inundation area, flow thickness, flow direction, and velocity estimates, collected after lahar emplacement, were used to calibrate the rheological input parameters for the models, including deposit volume, yield strength, sediment and water concentrations, and Manning roughness coefficients. Simulations of the two lahars, with volumes of 240,200 ± 55,400 and 126,000 ± 29,000 m3, using the FLO-2D computer program produced models of lahar runout within 3% of measured runouts and produced reasonable estimates of flow thickness and velocity along the lengths of the simulated flows. We compare areas inundated using the Jaccard fit, model sensitivity, and model precision metrics, all related to Bayes' theorem. These metrics show that false negatives (areas inundated by the observed lahar where not simulated) and false positives (areas not inundated by the observed lahar where inundation was simulated) are reduced using a model calibrated by rheology. The metrics offer a procedure for tuning model performance that will enhance model accuracy and make numerical models a more robust tool for natural hazard reduction.
A Lumped Computational Model for Sodium Sulfur Battery Analysis
NASA Astrophysics Data System (ADS)
Wu, Fan
Due to the cost of materials and time consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy and charge throughout the battery has been developed. The computation processes are coupled with the use of Faraday's law, and solutions for the species concentrations, electrical potential and current are produced in a time marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi- dimensional simulations and experimental results from literatures, and simulation results using the proposed model is presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
NASA Technical Reports Server (NTRS)
1973-01-01
A computer programmer's manual for a digital computer which will permit rapid and accurate parametric analysis of current and advanced attitude control propulsion systems is presented. The concept is for a cold helium pressurized, subcritical cryogen fluid supplied, bipropellant gas-fed attitude control propulsion system. The cryogen fluids are stored as liquids under low pressure and temperature conditions. The mathematical model provides a generalized form for the procedural technique employed in setting up the analysis program.
Dayton Aircraft Cabin Fire Model, Version 3, Volume I. Physical Description.
1982-06-01
contact to any surface directly above a burning element, provided that the current flame length makes contact possible. For fires originating on the...no extension of the flames horizontally beneath the surface is considered. The equation for computing the flame length is presented in Section 5. For...high as 0.3. The values chosen for DACFIR3 are 0.15 for Ec and 0.10 for E P. The Steward model is also used to compute flame length , hf, for the fire
Systems cost/performance analysis; study 2.3. Volume 3: Programmer's manual and user's guide
NASA Technical Reports Server (NTRS)
1975-01-01
The implementation of the entire systems cost/performance model as a digital computer program was studied. A discussion of the operating environment in which the program was written and checked, the program specifications such as discussions of logic and computational flow, the different subsystem models involved in the design of the spacecraft, and routines involved in the nondesign area such as costing and scheduling of the design were covered. Preliminary results for the DSCS-2 design are also included.
Turbulent Bubbly Flow in a Vertical Pipe Computed By an Eddy-Resolving Reynolds Stress Model
2014-09-19
the numerical code OpenFOAM R©. 1 Introduction Turbulent bubbly flows are encountered in many industrially relevant applications, such as chemical in...performed using the OpenFOAM -2.2.2 computational code utilizing a cell- center-based finite volume method on an unstructured numerical grid. The...the mean Courant number is always below 0.4. The utilized turbulence models were implemented into the so-called twoPhaseEulerFoam solver in OpenFOAM , to
Railroad Classification Yard Technology Manual: Volume II : Yard Computer Systems
DOT National Transportation Integrated Search
1981-08-01
This volume (Volume II) of the Railroad Classification Yard Technology Manual documents the railroad classification yard computer systems methodology. The subjects covered are: functional description of process control and inventory computer systems,...
Jacob, Joseph; Bartholmai, Brian J; Rajagopalan, Srinivasan; Brun, Anne Laure; Egashira, Ryoko; Karwoski, Ronald; Kokosi, Maria; Wells, Athol U; Hansell, David M
2016-11-23
To evaluate computer-based computer tomography (CT) analysis (CALIPER) against visual CT scoring and pulmonary function tests (PFTs) when predicting mortality in patients with connective tissue disease-related interstitial lung disease (CTD-ILD). To identify outcome differences between distinct CTD-ILD groups derived following automated stratification of CALIPER variables. A total of 203 consecutive patients with assorted CTD-ILDs had CT parenchymal patterns evaluated by CALIPER and visual CT scoring: honeycombing, reticular pattern, ground glass opacities, pulmonary vessel volume, emphysema, and traction bronchiectasis. CT scores were evaluated against pulmonary function tests: forced vital capacity, diffusing capacity for carbon monoxide, carbon monoxide transfer coefficient, and composite physiologic index for mortality analysis. Automated stratification of CALIPER-CT variables was evaluated in place of and alongside forced vital capacity and diffusing capacity for carbon monoxide in the ILD gender, age physiology (ILD-GAP) model using receiver operating characteristic curve analysis. Cox regression analyses identified four independent predictors of mortality: patient age (P < 0.0001), smoking history (P = 0.0003), carbon monoxide transfer coefficient (P = 0.003), and pulmonary vessel volume (P < 0.0001). Automated stratification of CALIPER variables identified three morphologically distinct groups which were stronger predictors of mortality than all CT and functional indices. The Stratified-CT model substituted automated stratified groups for functional indices in the ILD-GAP model and maintained model strength (area under curve (AUC) = 0.74, P < 0.0001), ILD-GAP (AUC = 0.72, P < 0.0001). Combining automated stratified groups with the ILD-GAP model (stratified CT-GAP model) strengthened predictions of 1- and 2-year mortality: ILD-GAP (AUC = 0.87 and 0.86, respectively); stratified CT-GAP (AUC = 0.89 and 0.88, respectively). CALIPER-derived pulmonary vessel volume is an independent predictor of mortality across all CTD-ILD patients. Furthermore, automated stratification of CALIPER CT variables represents a novel method of prognostication at least as robust as PFTs in CTD-ILD patients.
Kernel Regression Estimation of Fiber Orientation Mixtures in Diffusion MRI
Cabeen, Ryan P.; Bastin, Mark E.; Laidlaw, David H.
2016-01-01
We present and evaluate a method for kernel regression estimation of fiber orientations and associated volume fractions for diffusion MR tractography and population-based atlas construction in clinical imaging studies of brain white matter. This is a model-based image processing technique in which representative fiber models are estimated from collections of component fiber models in model-valued image data. This extends prior work in nonparametric image processing and multi-compartment processing to provide computational tools for image interpolation, smoothing, and fusion with fiber orientation mixtures. In contrast to related work on multi-compartment processing, this approach is based on directional measures of divergence and includes data-adaptive extensions for model selection and bilateral filtering. This is useful for reconstructing complex anatomical features in clinical datasets analyzed with the ball-and-sticks model, and our framework’s data-adaptive extensions are potentially useful for general multi-compartment image processing. We experimentally evaluate our approach with both synthetic data from computational phantoms and in vivo clinical data from human subjects. With synthetic data experiments, we evaluate performance based on errors in fiber orientation, volume fraction, compartment count, and tractography-based connectivity. With in vivo data experiments, we first show improved scan-rescan reproducibility and reliability of quantitative fiber bundle metrics, including mean length, volume, streamline count, and mean volume fraction. We then demonstrate the creation of a multi-fiber tractography atlas from a population of 80 human subjects. In comparison to single tensor atlasing, our multi-fiber atlas shows more complete features of known fiber bundles and includes reconstructions of the lateral projections of the corpus callosum and complex fronto-parietal connections of the superior longitudinal fasciculus I, II, and III. PMID:26691524
NASA Technical Reports Server (NTRS)
Kopasakis, George; Connolly, Joseph W.; Cheng, Larry
2015-01-01
This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable for the modeling of any axial flow compressor design accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
Lin, Hsin-Hon; Peng, Shin-Lei; Wu, Jay; Shih, Tian-Yu; Chuang, Keh-Shih; Shih, Cheng-Ting
2017-05-01
Osteoporosis is a disease characterized by a degradation of bone structures. Various methods have been developed to diagnose osteoporosis by measuring bone mineral density (BMD) of patients. However, BMDs from these methods were not equivalent and were incomparable. In addition, partial volume effect introduces errors in estimating bone volume from computed tomography (CT) images using image segmentation. In this study, a two-compartment model (TCM) was proposed to calculate bone volume fraction (BV/TV) and BMD from CT images. The TCM considers bones to be composed of two sub-materials. Various equivalent BV/TV and BMD can be calculated by applying corresponding sub-material pairs in the TCM. In contrast to image segmentation, the TCM prevented the influence of the partial volume effect by calculating the volume percentage of sub-material in each image voxel. Validations of the TCM were performed using bone-equivalent uniform phantoms, a 3D-printed trabecular-structural phantom, a temporal bone flap, and abdominal CT images. By using the TCM, the calculated BV/TVs of the uniform phantoms were within percent errors of ±2%; the percent errors of the structural volumes with various CT slice thickness were below 9%; the volume of the temporal bone flap was close to that from micro-CT images with a percent error of 4.1%. No significant difference (p >0.01) was found between the areal BMD of lumbar vertebrae calculated using the TCM and measured using dual-energy X-ray absorptiometry. In conclusion, the proposed TCM could be applied to diagnose osteoporosis, while providing a basis for comparing various measurement methods.
Water immersion and its computer simulation as analogs of weightlessness
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1982-01-01
Experimental studies and computer simulations of water immersion are summarized and discussed with regard to their utility as analogs of weightlessness. Emphasis is placed on describing and interpreting the renal, endocrine, fluid, and circulatory changes that take place during immersion. A mathematical model, based on concepts of fluid volume regulation, is shown to be well suited to simulate the dynamic responses to water immersion. Further, it is shown that such a model provides a means to study specific mechanisms and pathways involved in the immersion response. A number of hypotheses are evaluated with the model related to the effects of dehydration, venous pressure disturbances, the control of ADH, and changes in plasma-interstitial volume. By inference, it is suggested that most of the model's responses to water immersion are plausible predictions of the acute changes expected, but not yet measured, during space flight. One important prediction of the model is that previous attempts to measure a diuresis during space flight failed because astronauts may have been dehydrated and urine samples were pooled over 24-hour periods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirkovic, D; Titt, U; Mohan, R
2016-06-15
Purpose: To evaluate effects of motion and variable relative biological effectiveness (RBE) in a lung cancer patient treated with passively scattered proton therapy using dose volume histograms associated with patient dose computed using three different methods. Methods: A proton treatment plan of a lung cancer patient optimized using clinical treatment planning system (TPS) was used to construct a detailed Monte Carlo (MC) model of the beam delivery system and the patient specific aperture and compensator. A phase space file containing all particles transported through the beam line was collected at the distal surface of the range compensator and subsequently transportedmore » through two different patient models. The first model was based on the average CT used by the TPS and the second model included all 10 phases of the corresponding 4DCT. The physical dose and proton linear energy transfer (LET) were computed in each voxel of two models and used to compute constant and variable RBE MC dose on average CT and 4D CT. The MC computed doses were compared to the TPS dose using dose volume histograms for relevant structures. Results: The results show significant differences in doses to the target and critical structures suggesting the need for more accurate proton dose computation methods. In particular, the 4D dose shows reduced coverage of the target and higher dose to the spinal cord, while variable RBE dose shows higher lung dose. Conclusion: The methodology developed in this pilot study is currently used for the analysis of a cohort of ∼90 lung patients from a clinical trial comparing proton and photon therapy for lung cancer. The results from this study will help us in determining the clinical significance of more accurate dose computation models in proton therapy.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bondar, M.L., E-mail: m.bondar@erasmusmc.nl; Hoogeman, M.S.; Mens, J.W.
2012-08-01
Purpose: To design and evaluate individualized nonadaptive and online-adaptive strategies based on a pretreatment established motion model for the highly deformable target volume in cervical cancer patients. Methods and Materials: For 14 patients, nine to ten variable bladder filling computed tomography (CT) scans were acquired at pretreatment and after 40 Gy. Individualized model-based internal target volumes (mbITVs) accounting for the cervix and uterus motion due to bladder volume changes were generated by using a motion-model constructed from two pretreatment CT scans (full and empty bladder). Two individualized strategies were designed: a nonadaptive strategy, using an mbITV accounting for the full-rangemore » of bladder volume changes throughout the treatment; and an online-adaptive strategy, using mbITVs of bladder volume subranges to construct a library of plans. The latter adapts the treatment online by selecting the plan-of-the-day from the library based on the measured bladder volume. The individualized strategies were evaluated by the seven to eight CT scans not used for mbITVs construction, and compared with a population-based approach. Geometric uniform margins around planning cervix-uterus and mbITVs were determined to ensure adequate coverage. For each strategy, the percentage of the cervix-uterus, bladder, and rectum volumes inside the planning target volume (PTV), and the clinical target volume (CTV)-to-PTV volume (volume difference between PTV and CTV) were calculated. Results: The margin for the population-based approach was 38 mm and for the individualized strategies was 7 to 10 mm. Compared with the population-based approach, the individualized nonadaptive strategy decreased the CTV-to-PTV volume by 48% {+-} 6% and the percentage of bladder and rectum inside the PTV by 5% to 45% and 26% to 74% (p < 0.001), respectively. Replacing the individualized nonadaptive strategy by an online-adaptive, two-plan library further decreased the percentage of bladder and rectum inside the PTV (0% to 10% and -1% to 9%; p < 0.004) and the CTV-to-PTV volume (4-96 ml). Conclusions: Compared with population-based margins, an individualized PTV results in better organ-at-risk sparing. Online-adaptive radiotherapy further improves organ-at-risk sparing.« less
Editorial: Computational Creativity, Concept Invention, and General Intelligence
NASA Astrophysics Data System (ADS)
Besold, Tarek R.; Kühnberger, Kai-Uwe; Veale, Tony
2015-12-01
Over the last decade, computational creativity as a field of scientific investigation and computational systems engineering has seen growing popularity. Still, the levels of development between projects aiming at systems for artistic production or performance and endeavours addressing creative problem-solving or models of creative cognitive capacities is diverging. While the former have already seen several great successes, the latter still remain in their infancy. This volume collects reports on work trying to close the accrued gap.
1988-06-30
casting. 68 Figure 1-9: Line printer representation of roll solidification. 69 Figure I1-1: Test casting model. 76 Figure 11-2: Division of test casting...writing new casting analysis and design routines. The new routines would take advantage of advanced criteria for predicting casting soundness and cast...properties and technical advances in computer hardware and software. 11 2. CONCLUSIONS UPCAST, a comprehensive software package, has been developed for
ERIC Educational Resources Information Center
Heslin, J. Alexander, Jr.
In senior-level undergraduate research courses in Computer Information Systems (CIS), students are required to read and assimilate a large volume of current research literature. One course objective is to demonstrate to the student that there are patterns or models or paradigms of research. A new approach in identifying research paradigms is…
User's manual for a computer program for simulating intensively managed allowable cut.
Robert W. Sassaman; Ed Holt; Karl Bergsvik
1972-01-01
Detailed operating instructions are described for SIMAC, a computerized forest simulation model which calculates the allowable cut assuming volume regulation for forests with intensively managed stands. A sample problem illustrates the required inputs and expected output. SIMAC is written in FORTRAN IV and runs on a CDC 6400 computer with a SCOPE 3.3 operating system....
Software Assurance Curriculum Project Volume 2: Undergraduate Course Outlines
2010-08-01
Contents Acknowledgments iii Abstract v 1 An Undergraduate Curriculum Focus on Software Assurance 1 2 Computer Science I 7 3 Computer Science II...confidence that can be integrated into traditional software development and acquisition process models . Thus, in addition to a technology focus...testing throughout the software development life cycle ( SDLC ) AP Security and complexity—system development challenges: security failures
Solid rocket booster thermal radiation model. Volume 2: User's manual
NASA Technical Reports Server (NTRS)
Lee, A. L.
1976-01-01
A user's manual was prepared for the computer program of a solid rocket booster (SRB) thermal radiation model. The following information was included: (1) structure of the program, (2) input information required, (3) examples of input cards and output printout, (4) program characteristics, and (5) program listing.
DOT National Transportation Integrated Search
2004-03-20
A means of quantifying the cluttering effects of symbols is needed to evaluate the impact of displaying an increasing volume of information on aviation displays such as head-up displays. Human visual perception has been successfully modeled by algori...
Jade: using on-demand cloud analysis to give scientists back their flow
NASA Astrophysics Data System (ADS)
Robinson, N.; Tomlinson, J.; Hilson, A. J.; Arribas, A.; Powell, T.
2017-12-01
The UK's Met Office generates 400 TB weather and climate data every day by running physical models on its Top 20 supercomputer. As data volumes explode, there is a danger that analysis workflows become dominated by watching progress bars, and not thinking about science. We have been researching how we can use distributed computing to allow analysts to process these large volumes of high velocity data in a way that's easy, effective and cheap.Our prototype analysis stack, Jade, tries to encapsulate this. Functionality includes: An under-the-hood Dask engine which parallelises and distributes computations, without the need to retrain analysts Hybrid compute clusters (AWS, Alibaba, and local compute) comprising many thousands of cores Clusters which autoscale up/down in response to calculation load using Kubernetes, and balances the cluster across providers based on the current price of compute Lazy data access from cloud storage via containerised OpenDAP This technology stack allows us to perform calculations many orders of magnitude faster than is possible on local workstations. It is also possible to outperform dedicated local compute clusters, as cloud compute can, in principle, scale to much larger scales. The use of ephemeral compute resources also makes this implementation cost efficient.
Differential pencil beam dose computation model for photons.
Mohan, R; Chui, C; Lidofsky, L
1986-01-01
Differential pencil beam (DPB) is defined as the dose distribution relative to the position of the first collision, per unit collision density, for a monoenergetic pencil beam of photons in an infinite homogeneous medium of unit density. We have generated DPB dose distribution tables for a number of photon energies in water using the Monte Carlo method. The three-dimensional (3D) nature of the transport of photons and electrons is automatically incorporated in DPB dose distributions. Dose is computed by evaluating 3D integrals of DPB dose. The DPB dose computation model has been applied to calculate dose distributions for 60Co and accelerator beams. Calculations for the latter are performed using energy spectra generated with the Monte Carlo program. To predict dose distributions near the beam boundaries defined by the collimation system as well as blocks, we utilize the angular distribution of incident photons. Inhomogeneities are taken into account by attenuating the primary photon fluence exponentially utilizing the average total linear attenuation coefficient of intervening tissue, by multiplying photon fluence by the linear attenuation coefficient to yield the number of collisions in the scattering volume, and by scaling the path between the scattering volume element and the computation point by an effective density.
Solid rocket booster performance evaluation model. Volume 2: Users manual
NASA Technical Reports Server (NTRS)
1974-01-01
This users manual for the solid rocket booster performance evaluation model (SRB-II) contains descriptions of the model, the program options, the required program inputs, the program output format and the program error messages. SRB-II is written in FORTRAN and is operational on both the IBM 370/155 and the MSFC UNIVAC 1108 computers.
Global Weather Prediction and High-End Computing at NASA
NASA Technical Reports Server (NTRS)
Lin, Shian-Jiann; Atlas, Robert; Yeh, Kao-San
2003-01-01
We demonstrate current capabilities of the NASA finite-volume General Circulation Model an high-resolution global weather prediction, and discuss its development path in the foreseeable future. This model can be regarded as a prototype of a future NASA Earth modeling system intended to unify development activities cutting across various disciplines within the NASA Earth Science Enterprise.
Fananapazir, Ghaneh; Benzl, Robert; Corwin, Michael T; Chen, Ling-Xin; Sageshima, Junichiro; Stewart, Susan L; Troppmann, Christoph
2018-07-01
Purpose To determine whether the predonation computed tomography (CT)-based volume of the future remnant kidney is predictive of postdonation renal function in living kidney donors. Materials and Methods This institutional review board-approved, retrospective, HIPAA-compliant study included 126 live kidney donors who had undergone predonation renal CT between January 2007 and December 2014 as well as 2-year postdonation measurement of estimated glomerular filtration rate (eGFR). The whole kidney volume and cortical volume of the future remnant kidney were measured and standardized for body surface area (BSA). Bivariate linear associations between the ratios of whole kidney volume to BSA and cortical volume to BSA were obtained. A linear regression model for 2-year postdonation eGFR that incorporated donor age, sex, and either whole kidney volume-to-BSA ratio or cortical volume-to-BSA ratio was created, and the coefficient of determination (R 2 ) for the model was calculated. Factors not statistically additive in assessing 2-year eGFR were removed by using backward elimination, and the coefficient of determination for this parsimonious model was calculated. Results Correlation was slightly better for cortical volume-to-BSA ratio than for whole kidney volume-to-BSA ratio (r = 0.48 vs r = 0.44, respectively). The linear regression model incorporating all donor factors had an R 2 of 0.66. The only factors that were significantly additive to the equation were cortical volume-to-BSA ratio and predonation eGFR (P = .01 and P < .01, respectively), and the final parsimonious linear regression model incorporating these two variables explained almost the same amount of variance (R 2 = 0.65) as did the full model. Conclusion The cortical volume of the future remnant kidney helped predict postdonation eGFR at 2 years. The cortical volume-to-BSA ratio should thus be considered for addition as an important variable to living kidney donor evaluation and selection guidelines. © RSNA, 2018.
Melt Electrospinning Writing of Highly Ordered Large Volume Scaffold Architectures.
Wunner, Felix M; Wille, Marie-Luise; Noonan, Thomas G; Bas, Onur; Dalton, Paul D; De-Juan-Pardo, Elena M; Hutmacher, Dietmar W
2018-05-01
The additive manufacturing of highly ordered, micrometer-scale scaffolds is at the forefront of tissue engineering and regenerative medicine research. The fabrication of scaffolds for the regeneration of larger tissue volumes, in particular, remains a major challenge. A technology at the convergence of additive manufacturing and electrospinning-melt electrospinning writing (MEW)-is also limited in thickness/volume due to the accumulation of excess charge from the deposited material repelling and hence, distorting scaffold architectures. The underlying physical principles are studied that constrain MEW of thick, large volume scaffolds. Through computational modeling, numerical values variable working distances are established respectively, which maintain the electrostatic force at a constant level during the printing process. Based on the computational simulations, three voltage profiles are applied to determine the maximum height (exceeding 7 mm) of a highly ordered large volume scaffold. These thick MEW scaffolds have fully interconnected pores and allow cells to migrate and proliferate. To the best of the authors knowledge, this is the first study to report that z-axis adjustment and increasing the voltage during the MEW process allows for the fabrication of high-volume scaffolds with uniform morphologies and fiber diameters. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro
2017-09-01
We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on peak standardized uptake value (SUV peak ). The efficacy of our method was verified using 18 F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUV peak . The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUV max ). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUV max and the SUV peak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUV max and SUV peak , respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of the changes in tumor volume associated with the reconstruction model.
QuEST for malware type-classification
NASA Astrophysics Data System (ADS)
Vaughan, Sandra L.; Mills, Robert F.; Grimaila, Michael R.; Peterson, Gilbert L.; Oxley, Mark E.; Dube, Thomas E.; Rogers, Steven K.
2015-05-01
Current cyber-related security and safety risks are unprecedented, due in no small part to information overload and skilled cyber-analyst shortages. Advances in decision support and Situation Awareness (SA) tools are required to support analysts in risk mitigation. Inspired by human intelligence, research in Artificial Intelligence (AI) and Computational Intelligence (CI) have provided successful engineering solutions in complex domains including cyber. Current AI approaches aggregate large volumes of data to infer the general from the particular, i.e. inductive reasoning (pattern-matching) and generally cannot infer answers not previously programmed. Whereas humans, rarely able to reason over large volumes of data, have successfully reached the top of the food chain by inferring situations from partial or even partially incorrect information, i.e. abductive reasoning (pattern-completion); generating a hypothetical explanation of observations. In order to achieve an engineering advantage in computational decision support and SA we leverage recent research in human consciousness, the role consciousness plays in decision making, modeling the units of subjective experience which generate consciousness, qualia. This paper introduces a novel computational implementation of a Cognitive Modeling Architecture (CMA) which incorporates concepts of consciousness. We apply our model to the malware type-classification task. The underlying methodology and theories are generalizable to many domains.
Coupled discrete element and finite volume solution of two classical soil mechanics problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Feng; Drumm, Eric; Guiochon, Georges A
One dimensional solutions for the classic critical upward seepage gradient/quick condition and the time rate of consolidation problems are obtained using coupled routines for the finite volume method (FVM) and discrete element method (DEM), and the results compared with the analytical solutions. The two phase flow in a system composed of fluid and solid is simulated with the fluid phase modeled by solving the averaged Navier-Stokes equation using the FVM and the solid phase is modeled using the DEM. A framework is described for the coupling of two open source computer codes: YADE-OpenDEM for the discrete element method and OpenFOAMmore » for the computational fluid dynamics. The particle-fluid interaction is quantified using a semi-empirical relationship proposed by Ergun [12]. The two classical verification problems are used to explore issues encountered when using coupled flow DEM codes, namely, the appropriate time step size for both the fluid and mechanical solution processes, the choice of the viscous damping coefficient, and the number of solid particles per finite fluid volume.« less
An object-oriented computational model to study cardiopulmonary hemodynamic interactions in humans.
Ngo, Chuong; Dahlmanns, Stephan; Vollmer, Thomas; Misgeld, Berno; Leonhardt, Steffen
2018-06-01
This work introduces an object-oriented computational model to study cardiopulmonary interactions in humans. Modeling was performed in object-oriented programing language Matlab Simscape, where model components are connected with each other through physical connections. Constitutive and phenomenological equations of model elements are implemented based on their non-linear pressure-volume or pressure-flow relationship. The model includes more than 30 physiological compartments, which belong either to the cardiovascular or respiratory system. The model considers non-linear behaviors of veins, pulmonary capillaries, collapsible airways, alveoli, and the chest wall. Model parameters were derisved based on literature values. Model validation was performed by comparing simulation results with clinical and animal data reported in literature. The model is able to provide quantitative values of alveolar, pleural, interstitial, aortic and ventricular pressures, as well as heart and lung volumes during spontaneous breathing and mechanical ventilation. Results of baseline simulation demonstrate the consistency of the assigned parameters. Simulation results during mechanical ventilation with PEEP trials can be directly compared with animal and clinical data given in literature. Object-oriented programming languages can be used to model interconnected systems including model non-linearities. The model provides a useful tool to investigate cardiopulmonary activity during spontaneous breathing and mechanical ventilation. Copyright © 2018 Elsevier B.V. All rights reserved.
Computational Fluid Dynamics of Whole-Body Aircraft
NASA Astrophysics Data System (ADS)
Agarwal, Ramesh
1999-01-01
The current state of the art in computational aerodynamics for whole-body aircraft flowfield simulations is described. Recent advances in geometry modeling, surface and volume grid generation, and flow simulation algorithms have led to accurate flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics has emerged as a crucial enabling technology for the design and development of flight vehicles. Examples illustrating the current capability for the prediction of transport and fighter aircraft flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Editor)
1986-01-01
The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.
Accuracy of CBCT for volumetric measurement of simulated periapical lesions.
Ahlowalia, M S; Patel, S; Anwar, H M S; Cama, G; Austin, R S; Wilson, R; Mannocci, F
2013-06-01
To compare the accuracy of cone beam computed tomography (CBCT) and micro-computed tomography (μCT) when measuring the volume of bone cavities. Ten irregular-shaped cavities of varying dimensions were created in bovine bone specimens using a rotary diamond bur. The samples were then scanned using the Accuitomo 3D CBCT scanner. The scanned information was converted to the Digital Imaging and Communication in Medicine (DICOM) format ready for analysis. Once formatted, 10 trained and calibrated examiners segmented the scans and measured the volumes of the lesions. Intra/interexaminer agreement was assessed by each examiner re-segmenting each scan after a 2-week interval. Micro-CT scans were analysed by a single examiner. To achieve a physical reading of the artificially created cavities, replicas were created using dimensionally stable silicone impression material. After measuring the mass of each impression sample, the volume was calculated by dividing the mass of each sample by the density of the set impression material. Further corroboration of these measurements was obtained by employing Archimedes' principle to measure the volume of each impression sample. Intraclass correlation was used to assess agreement. Both CBCT (mean volume: 175.9 mm3) and μCT (mean volume: 163.1 mm3) showed a high degree of agreement (intraclass correlation coefficient >0.9) when compared to both weighed and 'Archimedes' principle' measurements (mean volume: 177.7 and 182.6 mm3, respectively). Cone beam computed tomography is an accurate means of measuring volume of artificially created bone cavities in an ex vivo model. This may provide a valuable tool for monitoring the healing rate of apical periodontitis; further investigations are warranted. © 2012 International Endodontic Journal. Published by Blackwell Publishing Ltd.
NASA Technical Reports Server (NTRS)
Heffley, R. K.; Jewell, W. F.; Whitbeck, R. F.; Schulman, T. M.
1980-01-01
The effects of spurious delays in real time digital computing systems are examined. Various sources of spurious delays are defined and analyzed using an extant simulator system as an example. A specific analysis procedure is set forth and four cases are viewed in terms of their time and frequency domain characteristics. Numerical solutions are obtained for three single rate one- and two-computer examples, and the analysis problem is formulated for a two-rate, two-computer example.
Synthesized interstitial lung texture for use in anthropomorphic computational phantoms
NASA Astrophysics Data System (ADS)
Becchetti, Marc F.; Solomon, Justin B.; Segars, W. Paul; Samei, Ehsan
2016-04-01
A realistic model of the anatomical texture from the pulmonary interstitium was developed with the goal of extending the capability of anthropomorphic computational phantoms (e.g., XCAT, Duke University), allowing for more accurate image quality assessment. Contrast-enhanced, high dose, thorax images for a healthy patient from a clinical CT system (Discovery CT750HD, GE healthcare) with thin (0.625 mm) slices and filtered back- projection (FBP) were used to inform the model. The interstitium which gives rise to the texture was defined using 24 volumes of interest (VOIs). These VOIs were selected manually to avoid vasculature, bronchi, and bronchioles. A small scale Hessian-based line filter was applied to minimize the amount of partial-volumed supernumerary vessels and bronchioles within the VOIs. The texture in the VOIs was characterized using 8 Haralick and 13 gray-level run length features. A clustered lumpy background (CLB) model with added noise and blurring to match CT system was optimized to resemble the texture in the VOIs using a genetic algorithm with the Mahalanobis distance as a similarity metric between the texture features. The most similar CLB model was then used to generate the interstitial texture to fill the lung. The optimization improved the similarity by 45%. This will substantially enhance the capabilities of anthropomorphic computational phantoms, allowing for more realistic CT simulations.
The application of cloud computing to scientific workflows: a study of cost and performance.
Berriman, G Bruce; Deelman, Ewa; Juve, Gideon; Rynge, Mats; Vöckler, Jens-S
2013-01-28
The current model of transferring data from data centres to desktops for analysis will soon be rendered impractical by the accelerating growth in the volume of science datasets. Processing will instead often take place on high-performance servers co-located with data. Evaluations of how new technologies such as cloud computing would support such a new distributed computing model are urgently needed. Cloud computing is a new way of purchasing computing and storage resources on demand through virtualization technologies. We report here the results of investigations of the applicability of commercial cloud computing to scientific computing, with an emphasis on astronomy, including investigations of what types of applications can be run cheaply and efficiently on the cloud, and an example of an application well suited to the cloud: processing a large dataset to create a new science product.
Spacelab user implementation assessment study. Volume 4: SUIAS appendixes
NASA Technical Reports Server (NTRS)
1975-01-01
The capital investment for the integration and checkout of Spacelab payloads is assessed. Detailed data pertaining to this assessment and a computer cost model utilized in the compilation of programmatic resource requirements are delineated.
Development of a hip joint model for finite volume simulations.
Cardiff, P; Karač, A; FitzPatrick, D; Ivanković, A
2014-01-01
This paper establishes a procedure for numerical analysis of a hip joint using the finite volume method. Patient-specific hip joint geometry is segmented directly from computed tomography and magnetic resonance imaging datasets and the resulting bone surfaces are processed into a form suitable for volume meshing. A high resolution continuum tetrahedral mesh has been generated, where a sandwich model approach is adopted; the bones are represented as a stiffer cortical shells surrounding more flexible cancellous cores. Cartilage is included as a uniform thickness extruded layer and the effect of layer thickness is investigated. To realistically position the bones, gait analysis has been performed giving the 3D positions of the bones for the full gait cycle. Three phases of the gait cycle are examined using a finite volume based custom structural contact solver implemented in open-source software OpenFOAM.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Tuann, S. Y.; Lee, C. R.
1982-01-01
The six-volume report: describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.
NASA Technical Reports Server (NTRS)
Price, R.; Gady, S.; Heinemann, K.; Nelson, E. S.; Mulugeta, L.; Ethier, C. R.; Samuels, B. C.; Feola, A.; Vera, J.; Myers, J. G.
2015-01-01
A recognized side effect of prolonged microgravity exposure is visual impairment and intracranial pressure (VIIP) syndrome. The medical understanding of this phenomenon is at present preliminary, although it is hypothesized that the headward shift of bodily fluids in microgravity may be a contributor. Computational models can be used to provide insight into the origins of VIIP. In order to further investigate this phenomenon, NASAs Digital Astronaut Project (DAP) is developing an integrated computational model of the human body which is divided into the eye, the cerebrovascular system, and the cardiovascular system. This presentation will focus on the development and testing of the computational model of an integrated model of the cardiovascular system (CVS) and central nervous system (CNS) that simulates the behavior of pressures, volumes, and flows within these two physiological systems.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-18
... figure to compute volume regulation percentages for 2010- 11 crop Natural (sun-dried) Seedless (NS... compute volume regulation percentages for 2010-11 crop Natural (sun-dried) Seedless (NS) raisins covered...
DOT National Transportation Integrated Search
1981-09-01
Volume III is the third and last volume of a three volume document describing the computer program HEVSIM. This volume includes appendices which list the HEVSIM program, sample part data, some typical outputs and updated nomenclature.
A scattering model for defoliated vegetation
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1986-01-01
A scattering model for defoliated vegetation is conceived as a layer of dielectric, finite-length cylinders with specified size and orientation distributions above an irregular ground surface. The scattering phase matrix of a single cylinder is computed, then the radiative transfer technique is applied to link volume scattering from vegetation to surface scattering from the soil surface. Polarized and depolarized scattering are computed and the effects of the cylinder size and orientation distributions are illustrated. It is found that size and orientation distributions have significant effects on the backscattered signal. The model is compared with scattering from defoliated trees and agricultural crops.
Auto-recognition of surfaces and auto-generation of material removal volume for finishing process
NASA Astrophysics Data System (ADS)
Kataraki, Pramod S.; Salman Abu Mansor, Mohd
2018-03-01
Auto-recognition of a surface and auto-generation of material removal volumes for the so recognised surfaces has become a need to achieve successful downstream manufacturing activities like automated process planning and scheduling. Few researchers have contributed to generation of material removal volume for a product but resulted in material removal volume discontinuity between two adjacent material removal volumes generated from two adjacent faces that form convex geometry. The need for limitation free material removal volume generation was attempted and an algorithm that automatically recognises computer aided design (CAD) model’s surface and also auto-generate material removal volume for finishing process of the recognised surfaces was developed. The surfaces of CAD model are successfully recognised by the developed algorithm and required material removal volume is obtained. The material removal volume discontinuity limitation that occurred in fewer studies is eliminated.
Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz
2016-01-01
This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in Cuda and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
A computational method for sharp interface advection
Bredmose, Henrik; Jasak, Hrvoje
2016-01-01
We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619
Nelson, D A; Curlee, J S; Curran, A R; Ziriax, J M; Mason, P A
2005-12-01
The localized thermal insulation value expresses a garment's thermal resistance over the region which is covered by the garment, rather than over the entire surface of a subject or manikin. The determination of localized garment insulation values is critical to the development of high-resolution models of sensible heat exchange. A method is presented for determining and validating localized garment insulation values, based on whole-body insulation values (clo units) and using computer-aided design and thermal analysis software. Localized insulation values are presented for a catalog consisting of 106 garments and verified using computer-generated models. The values presented are suitable for use on volume element-based or surface element-based models of heat transfer involving clothed subjects.
NASA Astrophysics Data System (ADS)
Pawlik, Marzena; Lu, Yiling
2018-05-01
Computational micromechanics is a useful tool to predict properties of carbon fibre reinforced polymers. In this paper, a representative volume element (RVE) is used to investigate a fuzzy fibre reinforced polymer. The fuzzy fibre results from the introduction of nanofillers in the fibre surface. The composite being studied contains three phases, namely: the T650 carbon fibre, the carbon nanotubes (CNTs) reinforced interphase and the epoxy resin EPIKOTE 862. CNTs are radially grown on the surface of the carbon fibre, and thus resultant interphase composed of nanotubes and matrix is transversely isotropic. Transversely isotropic properties of the interphase are numerically implemented in the ANSYS FEM software using element orientation command. Obtained numerical predictions are compared with the available analytical models. It is found that the CNTs interphase significantly increased the transverse mechanical properties of the fuzzy fibre reinforced polymer. This extent of enhancement changes monotonically with the carbon fibre volume fraction. This RVE model enables to investigate different orientation of CNTs in the fuzzy fibre model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaurov, Alexander A., E-mail: kaurov@uchicago.edu
The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emergedmore » from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.« less
Flowfield computation of entry vehicles
NASA Technical Reports Server (NTRS)
Prabhu, Dinesh K.
1990-01-01
The equations governing the multidimensional flow of a reacting mixture of thermally perfect gasses were derived. The modeling procedures for the various terms of the conservation laws are discussed. A numerical algorithm, based on the finite-volume approach, to solve these conservation equations was developed. The advantages and disadvantages of the present numerical scheme are discussed from the point of view of accuracy, computer time, and memory requirements. A simple one-dimensional model problem was solved to prove the feasibility and accuracy of the algorithm. A computer code implementing the above algorithm was developed and is presently being applied to simple geometries and conditions. Once the code is completely debugged and validated, it will be used to compute the complete unsteady flow field around the Aeroassist Flight Experiment (AFE) body.
NASA Astrophysics Data System (ADS)
Smith, Katharine A.; Schlag, Zachary; North, Elizabeth W.
2018-07-01
Coupled three-dimensional circulation and biogeochemical models predict changes in water properties that can be used to define fish habitat, including physiologically important parameters such as temperature, salinity, and dissolved oxygen. However, methods for calculating the volume of habitat defined by the intersection of multiple water properties are not well established for coupled three-dimensional models. The objectives of this research were to examine multiple methods for calculating habitat volume from three-dimensional model predictions, select the most robust approach, and provide an example application of the technique. Three methods were assessed: the "Step," "Ruled Surface", and "Pentahedron" methods, the latter of which was developed as part of this research. Results indicate that the analytical Pentahedron method is exact, computationally efficient, and preserves continuity in water properties between adjacent grid cells. As an example application, the Pentahedron method was implemented within the Habitat Volume Model (HabVol) using output from a circulation model with an Arakawa C-grid and physiological tolerances of juvenile striped bass (Morone saxatilis). This application demonstrates that the analytical Pentahedron method can be successfully applied to calculate habitat volume using output from coupled three-dimensional circulation and biogeochemical models, and it indicates that the Pentahedron method has wide application to aquatic and marine systems for which these models exist and physiological tolerances of organisms are known.
29 CFR 794.123 - Method of computing annual volume of sales.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Method of computing annual volume of sales. 794.123 Section... of Sales § 794.123 Method of computing annual volume of sales. (a) Where the enterprise, during the... gross volume of sales in excess of the amount specified in the statute, it is plain that its annual...
A 4DCT imaging-based breathing lung model with relative hysteresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for bothmore » models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.« less
Simulation model for wind energy storage systems. Volume II. Operation manual. [SIMWEST code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, A.W.; Edsinger, R.W.; Burroughs, J.D.
1977-08-01
The effort developed a comprehensive computer program for the modeling of wind energy/storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic). An acronym for the program is SIMWEST (Simulation Model for Wind Energy Storage). The level of detail of SIMWEST is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. Volume II, the SIMWEST operation manual, describes the usage of the SIMWEST program, the designmore » of the library components, and a number of simple example simulations intended to familiarize the user with the program's operation. Volume II also contains a listing of each SIMWEST library subroutine.« less
NASA Technical Reports Server (NTRS)
1974-01-01
The manual for the use of the computer program SYSTID under the Univac operating system is presented. The computer program is used in the simulation and evaluation of the space shuttle orbiter electric power supply. The models described in the handbook are those which were available in the original versions of SYSTID. The subjects discussed are: (1) program description, (2) input language, (3) node typing, (4) problem submission, and (5) basic and power system SYSTID libraries.
NASA Technical Reports Server (NTRS)
1981-01-01
Relevant differences between the MPPM resident IBM 370computer and the NASA Sigma 9 computer are described as well as the MPPM system itself and its development. Problems encountered and solutions used to overcome these difficulties during installation of the MPPM system at MSFC are discussed. Remaining work on the installation effort is summarized. The relevant hardware features incorporated in the program are described and their implications on the transportability of the MPPM source code are examined.
NASA Computational Fluid Dynamics Conference. Volume 1: Sessions 1-6
NASA Technical Reports Server (NTRS)
1989-01-01
Presentations given at the NASA Computational Fluid Dynamics (CFD) Conference held at the NASA Ames Research Center, Moffett Field, California, March 7-9, 1989 are given. Topics covered include research facility overviews of CFD research and applications, validation programs, direct simulation of compressible turbulence, turbulence modeling, advances in Runge-Kutta schemes for solving 3-D Navier-Stokes equations, grid generation and invicid flow computation around aircraft geometries, numerical simulation of rotorcraft, and viscous drag prediction for rotor blades.
Volunteered Cloud Computing for Disaster Management
NASA Astrophysics Data System (ADS)
Evans, J. D.; Hao, W.; Chettri, S. R.
2014-12-01
Disaster management relies increasingly on interpreting earth observations and running numerical models; which require significant computing capacity - usually on short notice and at irregular intervals. Peak computing demand during event detection, hazard assessment, or incident response may exceed agency budgets; however some of it can be met through volunteered computing, which distributes subtasks to participating computers via the Internet. This approach has enabled large projects in mathematics, basic science, and climate research to harness the slack computing capacity of thousands of desktop computers. This capacity is likely to diminish as desktops give way to battery-powered mobile devices (laptops, smartphones, tablets) in the consumer market; but as cloud computing becomes commonplace, it may offer significant slack capacity -- if its users are given an easy, trustworthy mechanism for participating. Such a "volunteered cloud computing" mechanism would also offer several advantages over traditional volunteered computing: tasks distributed within a cloud have fewer bandwidth limitations; granular billing mechanisms allow small slices of "interstitial" computing at no marginal cost; and virtual storage volumes allow in-depth, reversible machine reconfiguration. Volunteered cloud computing is especially suitable for "embarrassingly parallel" tasks, including ones requiring large data volumes: examples in disaster management include near-real-time image interpretation, pattern / trend detection, or large model ensembles. In the context of a major disaster, we estimate that cloud users (if suitably informed) might volunteer hundreds to thousands of CPU cores across a large provider such as Amazon Web Services. To explore this potential, we are building a volunteered cloud computing platform and targeting it to a disaster management context. Using a lightweight, fault-tolerant network protocol, this platform helps cloud users join parallel computing projects; automates reconfiguration of their virtual machines; ensures accountability for donated computing; and optimizes the use of "interstitial" computing. Initial applications include fire detection from multispectral satellite imagery and flood risk mapping through hydrological simulations.
Estimation of tunnel blockage from wall pressure signatures: A review and data correlation
NASA Technical Reports Server (NTRS)
Hackett, J. E.; Wilsden, D. J.; Lilley, D. E.
1979-01-01
A method is described for estimating low speed wind tunnel blockage, including model volume, bubble separation and viscous wake effects. A tunnel-centerline, source/sink distribution is derived from measured wall pressure signatures using fast algorithms to solve the inverse problem in three dimensions. Blockage may then be computed throughout the test volume. Correlations using scaled models or tests in two tunnels were made in all cases. In many cases model reference area exceeded 10% of the tunnel cross-sectional area. Good correlations were obtained regarding model surface pressures, lift drag and pitching moment. It is shown that blockage-induced velocity variations across the test section are relatively unimportant but axial gradients should be considered when model size is determined.
OCD: The offshore and coastal dispersion model. Volume 2. Appendices
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiCristofaro, D.C.; Hanna, S.R.
1989-11-01
The Offshore and Coastal Dispersion (OCD) Model has been developed to simulate the effect of offshore emissions from point, area, or line sources on the air quality of coastal regions. The OCD model was adapted from the EPA guideline model MPTER (EPA, 1980). Modifications were made to incorporate overwater plume transport and dispersion as well as changes that occur as the plume crosses the shoreline. This is a revised OCD model, the fourth version to date. The volume is an appendices for the OCD documentation, included are three appendices: Appendix A the OCD computer program, Appendix B an Analysis Post-processor,more » Appendix C Offshore Meteorological data Collection Instrumentation, also included are general References.« less
Solute solver 'what if' module for modeling urea kinetics.
Daugirdas, John T
2016-11-01
The publicly available Solute Solver module allows calculation of a variety of two-pool urea kinetic measures of dialysis adequacy using pre- and postdialysis plasma urea and estimated dialyzer clearance or estimated urea distribution volumes as inputs. However, the existing program does not have a 'what if' module, which would estimate the plasma urea values as well as commonly used measures of hemodialysis adequacy for a patient with a given urea distribution volume and urea nitrogen generation rate dialyzed according to a particular dialysis schedule. Conventional variable extracellular volume 2-pool urea kinetic equations were used. A javascript-HTML Web form was created that can be used on any personal computer equipped with internet browsing software, to compute commonly used Kt/V-based measures of hemodialysis adequacy for patients with differing amounts of residual kidney function and following a variety of treatment schedules. The completed Web form calculator may be particularly useful in computing equivalent continuous clearances for incremental hemodialysis strategies. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khangaonkar, Tarang; Long, Wen; Xu, Wenwei
The Salish Sea consisting of Puget Sound and Georgia Basin in U.S and Canadian waters has been the subject of several independent data collection and modeling studies. However, these interconnected basins and their hydrodynamic interactions have not received attention as a contiguous unit. The Strait of Juan de Fuca is the primary pathway through which Pacific Ocean water enters the Salish Sea but the role played by Johnstone Strait and the complex channels northeast of Vancouver Island, connecting the Salish Sea and the Pacific Ocean, on overall Salish Sea circulation has not been characterized. In this paper we present amore » modeling-based assessment of the two-layer circulation and transport through the multiple interconnected sub-basins within the Salish Sea including the effect of exchange via Johnstone Strait and Discovery Islands. The Salish Sea Model previously developed using the finite volume community ocean model (FVCOM) was expanded over the continental shelf for this assessment encircling Vancouver Island, including Discovery Islands, Johnstone Strait, Broughton Archipelago and the associated waterways. A computational technique was developed to allow summation of volume fluxes across arbitrary transects through unstructured finite volume cells. Tidally averaged volume fluxes were computed at multiple transects. The results were used to validate the classic model of Circulation in Embracing Sills for Puget Sound and to provide quantitative estimates of the lateral distribution of tidally averaged transport through the system. Sensitivity tests with and without exchanges through Johnstone Strait demonstrate that it is a pathway for Georgia Basin runoff and Fraser River water to exit the Salish Sea and for Pacific Ocean inflow. However the relative impact of this exchange on circulation and flushing in Puget Sound Basin is small.« less
Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography
Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji
2013-01-01
OBJECTIVES To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. METHODS We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at 1-mm slice thicknesses. We calculated the lobar volume and the emphysematous lobar volume < −950 HU of each lobe using (i) the slice-by-slice method (reference standard), (ii) number of segments method, and (iii) semi-automatic and (iv) automatic computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes. We also compared the relative errors among the three measurement methods. RESULTS Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number of segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The number of segments method relative error was significantly greater than those for semi-automatic and automatic computer-aided diagnosis (P < 0.001). The computational time for automatic computer-aided diagnosis was 1/2 to 2/3 than that of semi-automatic computer-aided diagnosis. CONCLUSIONS A novel lobar volumetry computer-aided diagnosis system could more precisely measure lobar volumes than the conventional number of segments method. Because semi-automatic computer-aided diagnosis and automatic computer-aided diagnosis were complementary, in clinical use, it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis failed. PMID:23526418
[Measurement of intracranial hematoma volume by personal computer].
DU, Wanping; Tan, Lihua; Zhai, Ning; Zhou, Shunke; Wang, Rui; Xue, Gongshi; Xiao, An
2011-01-01
To explore the method for intracranial hematoma volume measurement by the personal computer. Forty cases of various intracranial hematomas were measured by the computer tomography with quantitative software and personal computer with Photoshop CS3 software, respectively. the data from the 2 methods were analyzed and compared. There was no difference between the data from the computer tomography and the personal computer (P>0.05). The personal computer with Photoshop CS3 software can measure the volume of various intracranial hematomas precisely, rapidly and simply. It should be recommended in the clinical medicolegal identification.
Coupled Finite Volume and Finite Element Method Analysis of a Complex Large-Span Roof Structure
NASA Astrophysics Data System (ADS)
Szafran, J.; Juszczyk, K.; Kamiński, M.
2017-12-01
The main goal of this paper is to present coupled Computational Fluid Dynamics and structural analysis for the precise determination of wind impact on internal forces and deformations of structural elements of a longspan roof structure. The Finite Volume Method (FVM) serves for a solution of the fluid flow problem to model the air flow around the structure, whose results are applied in turn as the boundary tractions in the Finite Element Method problem structural solution for the linear elastostatics with small deformations. The first part is carried out with the use of ANSYS 15.0 computer system, whereas the FEM system Robot supports stress analysis in particular roof members. A comparison of the wind pressure distribution throughout the roof surface shows some differences with respect to that available in the engineering designing codes like Eurocode, which deserves separate further numerical studies. Coupling of these two separate numerical techniques appears to be promising in view of future computational models of stochastic nature in large scale structural systems due to the stochastic perturbation method.
Forest management opportunities for Michigan, 1981-1990.
W. Brad Smith; John S. Jr. Spencer
1985-01-01
Uses a computer model to identify commercial forest that would benefit from forest management and the volume of removals that would result. Targets 63% of the state's commercial area for treatment--primarily harvest, stand conversions, and thinning.
NASA Astrophysics Data System (ADS)
McCune, Matthew; Shafiee, Ashkan; Forgacs, Gabor; Kosztin, Ioan
2014-03-01
Cellular Particle Dynamics (CPD) is an effective computational method for describing and predicting the time evolution of biomechanical relaxation processes of multicellular systems. A typical example is the fusion of spheroidal bioink particles during post bioprinting structure formation. In CPD cells are modeled as an ensemble of cellular particles (CPs) that interact via short-range contact interactions, characterized by an attractive (adhesive interaction) and a repulsive (excluded volume interaction) component. The time evolution of the spatial conformation of the multicellular system is determined by following the trajectories of all CPs through integration of their equations of motion. CPD was successfully applied to describe and predict the fusion of 3D tissue construct involving identical spherical aggregates. Here, we demonstrate that CPD can also predict tissue formation involving uneven spherical aggregates whose volumes decrease during the fusion process. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.
Zhang, Baofeng; Kilburg, Denise; Eastman, Peter; Pande, Vijay S; Gallicchio, Emilio
2017-04-15
We present an algorithm to efficiently compute accurate volumes and surface areas of macromolecules on graphical processing unit (GPU) devices using an analytic model which represents atomic volumes by continuous Gaussian densities. The volume of the molecule is expressed by means of the inclusion-exclusion formula, which is based on the summation of overlap integrals among multiple atomic densities. The surface area of the molecule is obtained by differentiation of the molecular volume with respect to atomic radii. The many-body nature of the model makes a port to GPU devices challenging. To our knowledge, this is the first reported full implementation of this model on GPU hardware. To accomplish this, we have used recursive strategies to construct the tree of overlaps and to accumulate volumes and their gradients on the tree data structures so as to minimize memory contention. The algorithm is used in the formulation of a surface area-based non-polar implicit solvent model implemented as an open source plug-in (named GaussVol) for the popular OpenMM library for molecular mechanics modeling. GaussVol is 50 to 100 times faster than our best optimized implementation for the CPUs, achieving speeds in excess of 100 ns/day with 1 fs time-step for protein-sized systems on commodity GPUs. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
High resolution global flood hazard map from physically-based hydrologic and hydraulic models.
NASA Astrophysics Data System (ADS)
Begnudelli, L.; Kaheil, Y.; McCollum, J.
2017-12-01
The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D Finite-Volume Shallow-Water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30m SRTM is now available worldwide along with higher accuracy and/or resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free surface elevation inside cells and computing inter-cell fluxes. This approach almost achieves computational speed typical of the coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak corresponds to the return period corresponding to the hazard map being produced (e.g. 100 years, 500 years). Each numerical simulation models one river reach, except for the longest reaches which are split in smaller parts. Here we show results for selected river basins worldwide.
DOT National Transportation Integrated Search
1975-12-01
Frequency domain computer programs developed or acquired by TSC for the analysis of rail vehicle dynamics are described in two volumes. Volume I defines the general analytical capabilities required for computer programs applicable to single rail vehi...
NASA Technical Reports Server (NTRS)
Wolfe, M. G.
1978-01-01
Contents: (1) general study guidelines and assumptions; (2) launch vehicle performance and cost assumptions; (3) satellite programs 1959 to 1979; (4) initiative mission and design characteristics; (5) satellite listing; (6) spacecraft design model; (7) spacecraft cost model; (8) mission cost model; and (9) nominal and optimistic budget program cost summaries.
Information-driven trade and price-volume relationship in artificial stock markets
NASA Astrophysics Data System (ADS)
Liu, Xinghua; Liu, Xin; Liang, Xiaobei
2015-07-01
The positive relation between stock price changes and trading volume (price-volume relationship) as a stylized fact has attracted significant interest among finance researchers and investment practitioners. However, until now, consensus has not been reached regarding the causes of the relationship based on real market data because extracting valuable variables (such as information-driven trade volume) from real data is difficult. This lack of general consensus motivates us to develop a simple agent-based computational artificial stock market where extracting the necessary variables is easy. Based on this model and its artificial data, our tests have found that the aggressive trading style of informed agents can produce a price-volume relationship. Therefore, the information spreading process is not a necessary condition for producing price-volume relationship.
DOT National Transportation Integrated Search
1981-09-01
Volume III is the third and last volume of a three volume document describing the computer program HEVSIM. This volume includes appendices which list the HEVSIM program, sample part data, some typical outputs and updated nomenclature.
NASA Astrophysics Data System (ADS)
Ovanesyan, Zaven
Highly charged cylindrical and spherical objects (macroions) are probably the simplest structures for modeling nucleic acids, proteins and nanoparticles. Their ubiquitous presence within biophysical systems ensures that Coulomb forces are among the most important interactions that regulate the behavior of these systems. In these systems, ions position themselves in a strongly correlated manner near the surface of a macroion and form electrical double layers (EDLs). These EDLs play an important role in many biophysical and biochemical processes. For instance, the macroion's net charge can change due to the binding of many multivalent ions to its surface. Thus, proper description of EDLs near the surface of a macroion may reveal a counter-intuitive charge inversion behavior, which can generate attraction between like-charged objects. This is relevant for the variety of fields such as self-assembly of DNA and RNA folding, as well as for protein aggregation and neurodegenerative diseases. Certainly, the key factors that contribute to these phenomena cannot be properly understood without an accurate solvation model. With recent advancements in computer technologies, the possibility to use computational tools for fundamental understanding of the role of EDLs around biomolecules and nanoparticles on their physical and chemical properties is becoming more feasible. Establishing the impact of the excluded volume and ion-ion correlations, ionic strength and pH of the electrolyte on the EDL around biomolecules and nanoparticles, and how changes in these properties consequently affect the Zeta potential and surface charge density are still not well understood. Thus, modeling and understanding the role of these properties on EDLs will provide more insights on the stability, adsorption, binding and function of biomolecules and nanoparticles. Existing mean-field theories such as Poisson Boltzmann (PB) often neglect the ion-ion correlations, solvent and ion excluded volume effects, which are important details for proper description of EDL properties. In this thesis, we implement an efficient and accurate classical solvation density functional theory (CDSFT) for EDLs of spherical macroions and cylindrical polyelectrolytes embedded in aqueous electrolytes. This approach extends the capabilities of mean field approximations by taking into account electrostatic ion-ion correlations, size asymmetry and excluded volume effects without compromising the computational cost. We apply the computational tool to study the structural and thermodynamic properties of the ionic atmosphere around B-DNA and spherical nanoparticles. We demonstrate that the presence of solvent molecules at experimental concentration and size values has a significant impact on the layering of ions. This layering directly influences the integrated charge and mean electrostatic potential in the diffuse region of the spherical electrical double layer (SEDL) and have a noticeable impact on the behavior of zeta potential (ZP). Recently, we have extended the aforementioned CSDFT to account for the charge-regulated mechanisms of the macroion surface on the structural and thermodynamic properties of spherical EDLs. In the approach, the CSDFT is combined with a surface complexation model to account for ion correlation and excluded volume effects on the surface titration of spherical macroions. We apply the proposed computational approach to describe the role that the ion size and solvent excluded volume play on the surface titration properties of silica nanoparticles. We analyze the effects of the nanoparticle size, pH and salt concentration of the aqueous solution on the nanoparticle's surface charge and zeta potential. The results reveal that surface charge density and zeta potential significantly depend on excluded volume and ion-ion correlation effects as well as on pH for monovalent ion species at high salt concentrations. Overall, our results are in good agreement with Monte Carlo simulations and available experimental data. We discuss future directions of this work, which includes extension of the solvation model for studying the flexibility properties of rigid peptides and globular proteins, and describes benefits that this research can potentially bring to scientific and non scientific communities.
Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin
2008-11-01
We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.
NASA Astrophysics Data System (ADS)
Missiaen, Jean-Michel; Raharijaona, Jean-Joël; Delannay, Francis
2016-11-01
A model is developed to compute the capillary pressure for the migration of the liquid phase out or into a uniform solid-liquid-vapor system. The capillary pressure is defined as the reduction of the overall interface energy per volume increment of the transferred fluid phase. The model takes into account the particle size of the solid particle aggregate, the packing configuration (coordination number, porosity), the volume fractions of the different phases, and the values of the interface energies in the system. The model is used for analyzing the stability of the composition profile during processing of W-Cu functionally graded materials combining a composition gradient with a particle size gradient. The migration pressure is computed with the model in two stages: (1) just after the melting of copper, i.e., when sintering and shape accommodation of the W particle aggregate can still be neglected and (2) at high temperature, when the system is close to full density with equilibrium particle shape. The model predicts well the different stages of liquid-phase migration observed experimentally.
Air-gas exchange reevaluated: clinically important results of a computer simulation.
Shunmugam, Manoharan; Shunmugam, Sudhakaran; Williamson, Tom H; Laidlaw, D Alistair
2011-10-21
The primary aim of this study was to evaluate the efficiency of air-gas exchange techniques and the factors that influence the final concentration of an intraocular gas tamponade. Parameters were varied to find the optimum method of performing an air-gas exchange in ideal circumstances. A computer model of the eye was designed using 3D software with fluid flow analysis capabilities. Factors such as angular distance between ports, gas infusion gauge, exhaust vent gauge and depth were varied in the model. Flow rate and axial length were also modulated to simulate faster injections and more myopic eyes, respectively. The flush volume of gas required to achieve a 97% intraocular gas fraction concentration were compared. Modulating individual factors did not reveal any clinically significant difference in the angular distance between ports, exhaust vent size, and depth or rate of gas injection. In combination, however, there was a 28% increase in air-gas exchange efficiency comparing the most efficient with the least efficient studied parameters in this model. The gas flush volume required to achieve a 97% gas fill also increased proportionately at a ratio of 5.5 to 6.2 times the volume of the eye. A 35-mL flush is adequate for eyes up to 25 mm in axial length; however, eyes longer than this would require a much greater flush volume, and surgeons should consider using two separate 50-mL gas syringes to ensure optimal gas concentration for eyes greater than 25 mm in axial length.
NASA Technical Reports Server (NTRS)
Smith, J. H.
1980-01-01
A quick reference for obtaining estimates of available solar insolation for numerous locations and array angles is presented. A model and a computer program are provided which considered the effects of array shadowing reflector augmentation as design variables.
Economic Analysis. Computer Simulation Models.
ERIC Educational Resources Information Center
Sterling Inst., Washington, DC. Educational Technology Center.
A multimedia course in economic analysis was developed and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and development model.) This volume of the text discusses the simulation of behavioral relationships among variable elements in an economy and presents…
Development of seismic tomography software for hybrid supercomputers
NASA Astrophysics Data System (ADS)
Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton
2015-04-01
Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on supercomputers using multicore CPUs only, with preliminary performance tests showing good parallel efficiency on large numerical grids. Porting of the algorithms to hybrid supercomputers is currently ongoing.
NASA Astrophysics Data System (ADS)
Skripnyak, Vladimir A.; Skripnyak, Natalia V.; Skripnyak, Evgeniya G.; Skripnyak, Vladimir V.
2017-01-01
Inelastic deformation and damage at the mesoscale level of ultrafine grained (UFG) light alloys with distribution of grain size were investigated in wide loading conditions by experimental and computer simulation methods. The computational multiscale models of representative volume element (RVE) with the unimodal and bimodal grain size distributions were developed using the data of structure researches aluminum and magnesium UFG alloys. The critical fracture stress of UFG alloys on mesoscale level depends on relative volumes of coarse grains. Microcracks nucleation at quasi-static and dynamic loading is associated with strain localization in UFG partial volumes with bimodal grain size distribution. Microcracks arise in the vicinity of coarse and ultrafine grains boundaries. It is revealed that the occurrence of bimodal grain size distributions causes the increasing of UFG alloys ductility, but decreasing of the tensile strength.
Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu
1995-01-01
As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independent of the other processors. The global image composing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.
Coupled CFD-PBE Predictions of Renal Stone Size Distributions in the Nephron in Microgravity
NASA Technical Reports Server (NTRS)
Kassemi, Mohammad; Griffin, Elise; Thompson, David
2016-01-01
In this paper, a deterministic model is developed to assess the risk of critical renal stone formation for astronauts during space travel. A Population Balance Equation (PBE) model is used to compute the size distribution of a population of nucleating, growing and agglomerating renal calculi as they are transported through different sections of the nephron. The PBE model is coupled to a Computational Fluid Dynamics (CFD) model that solves for steady state flow of urine and transport of renal calculi along with the concentrations of ionic species, calcium and oxalate, in the nephron using an Eulerian two-phase mathematical framework. Parametric simulation are performed to study stone size enhancement and steady state volume fraction distributions in the four main sections of the nephron under weightlessness conditions. Contribution of agglomeration to the stone size distribution and effect of wall friction on the stone volume fraction distributions are carefully examined. Case studies using measured astronaut urinary calcium and oxalate concentrations in microgravity as input indicate that under nominal conditions the largest stone sizes developed in Space will be still considerably below the critical range for problematic stone development. However, results also indicate that the highest stone volume fraction occurs next to the tubule and duct walls. This suggests that there is an increased potential for wall adhesion with the possibility of evolution towards critical stone sizes.
ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing
Rusakov, Dmitri A.; Savtchenko, Leonid P.
2017-01-01
Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programing skills. Here we put forward a newly developed simulation environment ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kudryashov, Nikolay A.; Shilnikov, Kirill E.
Numerical computation of the three dimensional problem of the freezing interface propagation during the cryosurgery coupled with the multi-objective optimization methods is used in order to improve the efficiency and safety of the cryosurgery operations performing. Prostate cancer treatment and cutaneous cryosurgery are considered. The heat transfer in soft tissue during the thermal exposure to low temperature is described by the Pennes bioheat model and is coupled with an enthalpy method for blurred phase change computations. The finite volume method combined with the control volume approximation of the heat fluxes is applied for the cryosurgery numerical modeling on the tumormore » tissue of a quite arbitrary shape. The flux relaxation approach is used for the stability improvement of the explicit finite difference schemes. The method of the additional heating elements mounting is studied as an approach to control the cellular necrosis front propagation. Whereas the undestucted tumor tissue and destucted healthy tissue volumes are considered as objective functions, the locations of additional heating elements in cutaneous cryosurgery and cryotips in prostate cancer cryotreatment are considered as objective variables in multi-objective problem. The quasi-gradient method is proposed for the searching of the Pareto front segments as the multi-objective optimization problem solutions.« less
Space and Earth Sciences, Computer Systems, and Scientific Data Analysis Support, Volume 1
NASA Technical Reports Server (NTRS)
Estes, Ronald H. (Editor)
1993-01-01
This Final Progress Report covers the specific technical activities of Hughes STX Corporation for the last contract triannual period of 1 June through 30 Sep. 1993, in support of assigned task activities at Goddard Space Flight Center (GSFC). It also provides a brief summary of work throughout the contract period of performance on each active task. Technical activity is presented in Volume 1, while financial and level-of-effort data is presented in Volume 2. Technical support was provided to all Division and Laboratories of Goddard's Space Sciences and Earth Sciences Directorates. Types of support include: scientific programming, systems programming, computer management, mission planning, scientific investigation, data analysis, data processing, data base creation and maintenance, instrumentation development, and management services. Mission and instruments supported include: ROSAT, Astro-D, BBXRT, XTE, AXAF, GRO, COBE, WIND, UIT, SMM, STIS, HEIDI, DE, URAP, CRRES, Voyagers, ISEE, San Marco, LAGEOS, TOPEX/Poseidon, Pioneer-Venus, Galileo, Cassini, Nimbus-7/TOMS, Meteor-3/TOMS, FIFE, BOREAS, TRMM, AVHRR, and Landsat. Accomplishments include: development of computing programs for mission science and data analysis, supercomputer applications support, computer network support, computational upgrades for data archival and analysis centers, end-to-end management for mission data flow, scientific modeling and results in the fields of space and Earth physics, planning and design of GSFC VO DAAC and VO IMS, fabrication, assembly, and testing of mission instrumentation, and design of mission operations center.
NASA Technical Reports Server (NTRS)
Lin, Shian-Jiann; DaSilva, Arlindo; Atlas, Robert (Technical Monitor)
2001-01-01
Toward the development of a finite-volume Data Assimilation System (fvDAS), a consistent finite-volume methodology is developed for interfacing the NASA/DAO's Physical Space Statistical Analysis System (PSAS) to the joint NASA/NCAR finite volume CCM3 (fvCCM3). To take advantage of the Lagrangian control-volume vertical coordinate of the fvCCM3, a novel "shaving" method is applied to the lowest few model layers to reflect the surface pressure changes as implied by the final analysis. Analysis increments (from PSAS) to the upper air variables are then consistently put onto the Lagrangian layers as adjustments to the volume-mean quantities during the analysis cycle. This approach is demonstrated to be superior to the conventional method of using independently computed "tendency terms" for surface pressure and upper air prognostic variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dongsheng; Lavender, Curt
2015-05-08
Improving yield strength and asymmetry is critical to expand applications of magnesium alloys in industry for higher fuel efficiency and lower CO 2 production. Grain refinement is an efficient method for strengthening low symmetry magnesium alloys, achievable by precipitate refinement. This study provides guidance on how precipitate engineering will improve mechanical properties through grain refinement. Precipitate refinement for improving yield strengths and asymmetry is simulated quantitatively by coupling a stochastic second phase grain refinement model and a modified polycrystalline crystal viscoplasticity φ-model. Using the stochastic second phase grain refinement model, grain size is quantitatively determined from the precipitate size andmore » volume fraction. Yield strengths, yield asymmetry, and deformation behavior are calculated from the modified φ-model. If the precipitate shape and size remain constant, grain size decreases with increasing precipitate volume fraction. If the precipitate volume fraction is kept constant, grain size decreases with decreasing precipitate size during precipitate refinement. Yield strengths increase and asymmetry approves to one with decreasing grain size, contributed by increasing precipitate volume fraction or decreasing precipitate size.« less
Determining blood and plasma volumes using bioelectrical response spectroscopy
NASA Technical Reports Server (NTRS)
Siconolfi, S. F.; Nusynowitz, M. L.; Suire, S. S.; Moore, A. D. Jr; Leig, J.
1996-01-01
We hypothesized that an electric field (inductance) produced by charged blood components passing through the many branches of arteries and veins could assess total blood volume (TBV) or plasma volume (PV). Individual (N = 29) electrical circuits (inductors, two resistors, and a capacitor) were determined from bioelectrical response spectroscopy (BERS) using a Hewlett Packard 4284A Precision LCR Meter. Inductance, capacitance, and resistance from the circuits of 19 subjects modeled TBV (sum of PV and computed red cell volume) and PV (based on 125I-albumin). Each model (N = 10, cross validation group) had good validity based on 1) mean differences (-2.3 to 1.5%) between the methods that were not significant and less than the propagated errors (+/- 5.2% for TBV and PV), 2) high correlations (r > 0.92) with low SEE (< 7.7%) between dilution and BERS assessments, and 3) Bland-Altman pairwise comparisons that indicated "clinical equivalency" between the methods. Given the limitation of this study (10 validity subjects), we concluded that BERS models accurately assessed TBV and PV. Further evaluations of the models' validities are needed before they are used in clinical or research settings.
A new fractional order derivative based active contour model for colon wall segmentation
NASA Astrophysics Data System (ADS)
Chen, Bo; Li, Lihong C.; Wang, Huafeng; Wei, Xinzhou; Huang, Shan; Chen, Wensheng; Liang, Zhengrong
2018-02-01
Segmentation of colon wall plays an important role in advancing computed tomographic colonography (CTC) toward a screening modality. Due to the low contrast of CT attenuation around colon wall, accurate segmentation of the boundary of both inner and outer wall is very challenging. In this paper, based on the geodesic active contour model, we develop a new model for colon wall segmentation. First, tagged materials in CTC images were automatically removed via a partial volume (PV) based electronic colon cleansing (ECC) strategy. We then present a new fractional order derivative based active contour model to segment the volumetric colon wall from the cleansed CTC images. In this model, the regionbased Chan-Vese model is incorporated as an energy term to the whole model so that not only edge/gradient information but also region/volume information is taken into account in the segmentation process. Furthermore, a fractional order differentiation derivative energy term is also developed in the new model to preserve the low frequency information and improve the noise immunity of the new segmentation model. The proposed colon wall segmentation approach was validated on 16 patient CTC scans. Experimental results indicate that the present scheme is very promising towards automatically segmenting colon wall, thus facilitating computer aided detection of initial colonic polyp candidates via CTC.
1983-09-01
6ENFRAL. ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS **%(GEMA CS) Computer Code Documentation ii( Version 3 ). A the BDM Corporation Dr...ANALYSIS FnlTcnclRpr F COMPLEX SYSTEM (GmCS) February 81 - July 83- I TR CODE DOCUMENTATION (Version 3 ) 6.PROMN N.REPORT NUMBER 5. CONTRACT ORGAT97...the ti and t2 directions on the source patch. 3 . METHOD: The electric field at a segment observation point due to the source patch j is given by 1-- lnA
Kirkwood-Buff integrals of finite systems: shape effects
NASA Astrophysics Data System (ADS)
Dawass, Noura; Krüger, Peter; Simon, Jean-Marc; Vlugt, Thijs J. H.
2018-06-01
The Kirkwood-Buff (KB) theory provides an important connection between microscopic density fluctuations in liquids and macroscopic properties. Recently, Krüger et al. derived equations for KB integrals for finite subvolumes embedded in a reservoir. Using molecular simulation of finite systems, KB integrals can be computed either from density fluctuations inside such subvolumes, or from integrals of radial distribution functions (RDFs). Here, based on the second approach, we establish a framework to compute KB integrals for subvolumes with arbitrary convex shapes. This requires a geometric function w(x) which depends on the shape of the subvolume, and the relative position inside the subvolume. We present a numerical method to compute w(x) based on Umbrella Sampling Monte Carlo (MC). We compute KB integrals of a liquid with a model RDF for subvolumes with different shapes. KB integrals approach the thermodynamic limit in the same way: for sufficiently large volumes, KB integrals are a linear function of area over volume, which is independent of the shape of the subvolume.
Passive and active ventricular elastances of the left ventricle
Zhong, Liang; Ghista, Dhanjoo N; Ng, Eddie YK; Lim, Soo T
2005-01-01
Background Description of the heart as a pump has been dominated by models based on elastance and compliance. Here, we are presenting a somewhat new concept of time-varying passive and active elastance. The mathematical basis of time-varying elastance of the ventricle is presented. We have defined elastance in terms of the relationship between ventricular pressure and volume, as: dP = EdV + VdE, where E includes passive (Ep) and active (Ea) elastance. By incorporating this concept in left ventricular (LV) models to simulate filling and systolic phases, we have obtained the time-varying expression for Ea and the LV-volume dependent expression for Ep. Methods and Results Using the patient's catheterization-ventriculogram data, the values of passive and active elastance are computed. Ea is expressed as: ; Epis represented as: . Ea is deemed to represent a measure of LV contractility. Hence, Peak dP/dt and ejection fraction (EF) are computed from the monitored data and used as the traditional measures of LV contractility. When our computed peak active elastance (Ea,max) is compared against these traditional indices by linear regression, a high degree of correlation is obtained. As regards Ep, it constitutes a volume-dependent stiffness property of the LV, and is deemed to represent resistance-to-filling. Conclusions Passive and active ventricular elastance formulae can be evaluated from a single-beat P-V data by means of a simple-to-apply LV model. The active elastance (Ea) can be used to characterize the ventricle's contractile state, while passive elastance (Ep) can represent a measure of resistance-to-filling. PMID:15707494
Kasaven, C P; McIntyre, G T; Mossey, P A
2017-01-01
Our objective was to assess the accuracy of virtual and printed 3-dimensional models derived from cone-beam computed tomographic (CT) scans to measure the volume of alveolar clefts before bone grafting. Fifteen subjects with unilateral cleft lip and palate had i-CAT cone-beam CT scans recorded at 0.2mm voxel and sectioned transversely into slices 0.2mm thick using i-CAT Vision. Volumes of alveolar clefts were calculated using first a validated algorithm; secondly, commercially-available virtual 3-dimensional model software; and finally 3-dimensional printed models, which were scanned with microCT and analysed using 3-dimensional software. For inter-observer reliability, a two-way mixed model intraclass correlation coefficient (ICC) was used to evaluate the reproducibility of identification of the cranial and caudal limits of the clefts among three observers. We used a Friedman test to assess the significance of differences among the methods, and probabilities of less than 0.05 were accepted as significant. Inter-observer reliability was almost perfect (ICC=0.987). There were no significant differences among the three methods. Virtual and printed 3-dimensional models were as precise as the validated computer algorithm in the calculation of volumes of the alveolar cleft before bone grafting, but virtual 3-dimensional models were the most accurate with the smallest 95% CI and, subject to further investigation, could be a useful adjunct in clinical practice. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
38th JANNAF Combustion Subcommittee Meeting. Volume 1
NASA Technical Reports Server (NTRS)
Fry, Ronald S. (Editor); Eggleston, Debra S. (Editor); Gannaway, Mary T. (Editor)
2002-01-01
This volume, the first of two volumes, is a collection of 55 unclassified/unlimited-distribution papers which were presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 38th Combustion Subcommittee (CS), 26 th Airbreathing Propulsion Subcommittee (APS), 20th Propulsion Systems Hazards Subcommittee (PSHS), and 21 Modeling and Simulation Subcommittee. The meeting was held 8-12 April 2002 at the Bayside Inn at The Sandestin Golf & Beach Resort and Eglin Air Force Base, Destin, Florida. Topics cover five major technology areas including: 1) Combustion - Propellant Combustion, Ingredient Kinetics, Metal Combustion, Decomposition Processes and Material Characterization, Rocket Motor Combustion, and Liquid & Hybrid Combustion; 2) Liquid Rocket Engines - Low Cost Hydrocarbon Liquid Rocket Engines, Liquid Propulsion Turbines, Liquid Propulsion Pumps, and Staged Combustion Injector Technology; 3) Modeling & Simulation - Development of Multi- Disciplinary RBCC Modeling, Gun Modeling, and Computational Modeling for Liquid Propellant Combustion; 4) Guns Gun Propelling Charge Design, and ETC Gun Propulsion; and 5) Airbreathing - Scramjet an Ramjet- S&T Program Overviews.
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas
2004-08-01
Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for state variables discretization. As an important variable discretization of reservoir storage volume has a pronounced effect on the computational efforts. The error caused by storage volume discretization is examined by considering it as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals. This is achieved by using fuzzy arithmetic operations with fuzzy numbers. In this approach, instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulated model with optimal release policies derived from fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP having much finer discretized space. It is believed that this advantage in the fuzzy SDP model is due to the smooth transitions between storage intervals which benefit from soft boundaries.
Pressure induced ageing of polymers
NASA Technical Reports Server (NTRS)
Emri, I.; Knauss, W. G.
1988-01-01
The nonlinearly viscoelastic response of an amorphous homopolymer is considered under aspects of time dependent free volume behavior. In contrast to linearly viscoelastic solids, this model couples shear and volume deformation through a shift function which influences the rate of molecular relaxation or creep. Sample computations produce all those qualitative features one observes normally in uniaxial tension including the rate dependent formation of a yield point as a consequence of the history of an imposed pressure.
A comparative approach to computer aided design model of a dog femur.
Turamanlar, O; Verim, O; Karabulut, A
2016-01-01
Computer assisted technologies offer new opportunities in medical imaging and rapid prototyping in biomechanical engineering. Three dimensional (3D) modelling of soft tissues and bones are becoming more important. The accuracy of the analysis in modelling processes depends on the outline of the tissues derived from medical images. The aim of this study is the evaluation of the accuracy of 3D models of a dog femur derived from computed tomography data by using point cloud method and boundary line method on several modelling software. Solidworks, Rapidform and 3DSMax software were used to create 3D models and outcomes were evaluated statistically. The most accurate 3D prototype of the dog femur was created with stereolithography method using rapid prototype device. Furthermore, the linearity of the volumes of models was investigated between software and the constructed models. The difference between the software and real models manifests the sensitivity of the software and the devices used in this manner.
1978-09-01
Models HELP Ductile Material HEMP Brittle Material PUFF Iron Aluminum Eulerian Codea Tap«.r«»H Flyor Pl^«-» rmp«^» tO. ABITRACT (Conllmjm M r«v... HEMP ) code with those obtained by the Eulerian (HELP) code 5.3 Relative void volume of damage regions at three times after impact in the 1145...plate calculation 5.5 Relative void volume of material in the 1145 aluminum target at 1.46 us after impact as computed by the Lagrangian ( HEMP
Haufe, Stefan; Huang, Yu; Parra, Lucas C
2015-08-01
In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization are the high acquisition costs of structural magnetic resonance images. Here, we build a highly detailed (0.5mm(3) resolution, 6 tissue type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, through using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.
A trait-based test for habitat filtering: Convex hull volume
Cornwell, W.K.; Schwilk, D.W.; Ackerly, D.D.
2006-01-01
Community assembly theory suggests that two processes affect the distribution of trait values within communities: competition and habitat filtering. Within a local community, competition leads to ecological differentiation of coexisting species, while habitat filtering reduces the spread of trait values, reflecting shared ecological tolerances. Many statistical tests for the effects of competition exist in the literature, but measures of habitat filtering are less well-developed. Here, we present convex hull volume, a construct from computational geometry, which provides an n-dimensional measure of the volume of trait space occupied by species in a community. Combined with ecological null models, this measure offers a useful test for habitat filtering. We use convex hull volume and a null model to analyze California woody-plant trait and community data. Our results show that observed plant communities occupy less trait space than expected from random assembly, a result consistent with habitat filtering. ?? 2006 by the Ecological Society of America.
Mondoñedo, Jarred R; Suki, Béla
2017-02-01
Lung volume reduction surgery (LVRS) and bronchoscopic lung volume reduction (bLVR) are palliative treatments aimed at reducing hyperinflation in advanced emphysema. Previous work has evaluated functional improvements and survival advantage for these techniques, although their effects on the micromechanical environment in the lung have yet to be determined. Here, we introduce a computational model to simulate a force-based destruction of elastic networks representing emphysema progression, which we use to track the response to lung volume reduction via LVRS and bLVR. We find that (1) LVRS efficacy can be predicted based on pre-surgical network structure; (2) macroscopic functional improvements following bLVR are related to microscopic changes in mechanical force heterogeneity; and (3) both techniques improve aspects of survival and quality of life influenced by lung compliance, albeit while accelerating disease progression. Our model predictions yield unique insights into the microscopic origins underlying emphysema progression before and after lung volume reduction.
Mondoñedo, Jarred R.
2017-01-01
Lung volume reduction surgery (LVRS) and bronchoscopic lung volume reduction (bLVR) are palliative treatments aimed at reducing hyperinflation in advanced emphysema. Previous work has evaluated functional improvements and survival advantage for these techniques, although their effects on the micromechanical environment in the lung have yet to be determined. Here, we introduce a computational model to simulate a force-based destruction of elastic networks representing emphysema progression, which we use to track the response to lung volume reduction via LVRS and bLVR. We find that (1) LVRS efficacy can be predicted based on pre-surgical network structure; (2) macroscopic functional improvements following bLVR are related to microscopic changes in mechanical force heterogeneity; and (3) both techniques improve aspects of survival and quality of life influenced by lung compliance, albeit while accelerating disease progression. Our model predictions yield unique insights into the microscopic origins underlying emphysema progression before and after lung volume reduction. PMID:28182686
Analysis of Aerospike Plume Induced Base-Heating Environment
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1998-01-01
Computational analysis is conducted to study the effect of an aerospike engine plume on X-33 base-heating environment during ascent flight. To properly account for the effect of forebody and aftbody flowfield such as shocks and to allow for potential plume-induced flow-separation, thermo-flowfield of trajectory points is computed. The computational methodology is based on a three-dimensional finite-difference, viscous flow, chemically reacting, pressure-base computational fluid dynamics formulation, and a three-dimensional, finite-volume, spectral-line based weighted-sum-of-gray-gases radiation absorption model computational heat transfer formulation. The predicted convective and radiative base-heat fluxes are presented.
Combining Modeling and Gaming for Predictive Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riensche, Roderick M.; Whitney, Paul D.
2012-08-22
Many of our most significant challenges involve people. While human behavior has long been studied, there are recent advances in computational modeling of human behavior. With advances in computational capabilities come increases in the volume and complexity of data that humans must understand in order to make sense of and capitalize on these modeling advances. Ultimately, models represent an encapsulation of human knowledge. One inherent challenge in modeling is efficient and accurate transfer of knowledge from humans to models, and subsequent retrieval. The simulated real-world environment of games presents one avenue for these knowledge transfers. In this paper we describemore » our approach of combining modeling and gaming disciplines to develop predictive capabilities, using formal models to inform game development, and using games to provide data for modeling.« less
Vehicle Component Characterization. Volume 2 : Data Appendices.
DOT National Transportation Integrated Search
1987-01-01
This study developed a set of data which could be used in computer crash occupant simulation models to study automobile crashworthiness. The data generated has been used to develop a data base on the National Highway Traffic Safety Administration's V...
Vehicle Component Characterization. Volume 1 : Project Results.
DOT National Transportation Integrated Search
1987-01-01
This study developed a set of data which could be used in computer crash occupant simulation models to study automobile crashworthiness. The data generated has been used to develop a data base on the National Highway Traffic Safety Administration's V...
NASA Technical Reports Server (NTRS)
Kline, S. J. (Editor); Cantwell, B. J. (Editor); Lilley, G. M.
1982-01-01
Computational techniques for simulating turbulent flows were explored, together with the results of experimental investigations. Particular attention was devoted to the possibility of defining a universal closure model, applicable for all turbulence situations; however, conclusions were drawn that zonal models, describing localized structures, were the most promising techniques to date. The taxonomy of turbulent flows was summarized, as were algebraic, differential, integral, and partial differential methods for numerical depiction of turbulent flows. Numerous comparisons of theoretically predicted and experimentally obtained data for wall pressure distributions, velocity profiles, turbulent kinetic energy profiles, Reynolds shear stress profiles, and flows around transonic airfoils were presented. Simplifying techniques for reducing the necessary computational time for modeling complex flowfields were surveyed, together with the industrial requirements and applications of computational fluid dynamics techniques.
Isotani, Shuji; Shimoyama, Hirofumi; Yokota, Isao; Noma, Yasuhiro; Kitamura, Kousuke; China, Toshiyuki; Saito, Keisuke; Hisasue, Shin-ichi; Ide, Hisamitsu; Muto, Satoru; Yamaguchi, Raizo; Ukimura, Osamu; Gill, Inderbir S; Horie, Shigeo
2015-10-01
The predictive model of postoperative renal function may impact on planning nephrectomy. To develop the novel predictive model using combination of clinical indices with computer volumetry to measure the preserved renal cortex volume (RCV) using multidetector computed tomography (MDCT), and to prospectively validate performance of the model. Total 60 patients undergoing radical nephrectomy from 2011 to 2013 participated, including a development cohort of 39 patients and an external validation cohort of 21 patients. RCV was calculated by voxel count using software (Vincent, FUJIFILM). Renal function before and after radical nephrectomy was assessed via the estimated glomerular filtration rate (eGFR). Factors affecting postoperative eGFR were examined by regression analysis to develop the novel model for predicting postoperative eGFR with a backward elimination method. The predictive model was externally validated and the performance of the model was compared with that of the previously reported models. The postoperative eGFR value was associated with age, preoperative eGFR, preserved renal parenchymal volume (RPV), preserved RCV, % of RPV alteration, and % of RCV alteration (p < 0.01). The significant correlated variables for %eGFR alteration were %RCV preservation (r = 0.58, p < 0.01) and %RPV preservation (r = 0.54, p < 0.01). We developed our regression model as follows: postoperative eGFR = 57.87 - 0.55(age) - 15.01(body surface area) + 0.30(preoperative eGFR) + 52.92(%RCV preservation). Strong correlation was seen between postoperative eGFR and the calculated estimation model (r = 0.83; p < 0.001). The external validation cohort (n = 21) showed our model outperformed previously reported models. Combining MDCT renal volumetry and clinical indices might yield an important tool for predicting postoperative renal function.
NASA Astrophysics Data System (ADS)
Du, Jinsong; Chen, Chao; Lesur, Vincent; Lane, Richard; Wang, Huilin
2015-06-01
We examined the mathematical and computational aspects of the magnetic potential, vector and gradient tensor fields of a tesseroid in a geocentric spherical coordinate system (SCS). This work is relevant for 3-D modelling that is performed with lithospheric vertical scales and global, continent or large regional horizontal scales. The curvature of the Earth is significant at these scales and hence, a SCS is more appropriate than the usual Cartesian coordinate system (CCS). The 3-D arrays of spherical prisms (SP; `tesseroids') can be used to model the response of volumes with variable magnetic properties. Analytical solutions do not exist for these model elements and numerical or mixed numerical and analytical solutions must be employed. We compared various methods for calculating the response in terms of accuracy and computational efficiency. The methods were (1) the spherical coordinate magnetic dipole method (MD), (2) variants of the 3-D Gauss-Legendre quadrature integration method (3-D GLQI) with (i) different numbers of nodes in each of the three directions, and (ii) models where we subdivided each SP into a number of smaller tesseroid volume elements, (3) a procedure that we term revised Gauss-Legendre quadrature integration (3-D RGLQI) where the magnetization direction which is constant in a SCS is assumed to be constant in a CCS and equal to the direction at the geometric centre of each tesseroid, (4) the Taylor's series expansion method (TSE) and (5) the rectangular prism method (RP). In any realistic application, both the accuracy and the computational efficiency factors must be considered to determine the optimum approach to employ. In all instances, accuracy improves with increasing distance from the source. It is higher in the percentage terms for potential than the vector or tensor response. The tensor errors are the largest, but they decrease more quickly with distance from the source. In our comparisons of relative computational efficiency, we found that the magnetic potential takes less time to compute than the vector response, which in turn takes less time to compute than the tensor gradient response. The MD method takes less time to compute than either the TSE or RP methods. The efficiency of the (GLQI and) RGLQI methods depends on the polynomial order, but the response typically takes longer to compute than it does for the other methods. The optimum method is a complex function of the desired accuracy, the size of the volume elements, the element latitude and the distance between the source and the observation. For a model of global extent with typical model element size (e.g. 1 degree horizontally and 10 km radially) and observations at altitudes of 10s to 100s of km, a mixture of methods based on the horizontal separation of the source and observation separation would be the optimum approach. To demonstrate the RGLQI method described within this paper, we applied it to the computation of the response for a global magnetization model for observations at 300 and 30 km altitude.
Burrowes, Kelly S; Hunter, Peter J; Tawhai, Merryn H
2005-11-01
A computational model of blood flow through the human pulmonary arterial tree has been developed to investigate the relative influence of branching structure and gravity on blood flow distribution in the human lung. Geometric models of the largest arterial vessels and lobar boundaries were first derived using multidetector row x-ray computed tomography (MDCT) scans. Further accompanying arterial vessels were generated from the MDCT vessel endpoints into the lobar volumes using a volume-filling branching algorithm. Equations governing the conservation of mass and momentum were solved within the geometric model to calculate pressure, velocity, and vessel radius. Blood flow results in the anatomically based model, with and without gravity, and in a symmetric geometric model were compared to investigate their relative contributions to blood flow heterogeneity. Results showed a persistent blood flow gradient and flow heterogeneity in the absence of gravitational forces in the anatomically based model. Comparison with flow results in the symmetric model revealed that the asymmetric vascular branching structure was largely responsible for producing this heterogeneity. Analysis of average results in varying slice thicknesses illustrated a clear flow gradient because of gravity in "lower resolution" data (thicker slices), but on examination of higher resolution data, a trend was less obvious. Results suggest that although gravity does influence flow distribution, the influence of the tree branching structure is also a dominant factor. These results are consistent with high-resolution experimental studies that have demonstrated gravity to be only a minor determinant of blood flow distribution.
The Shock and Vibration Digest. Volume 18, Number 7
1986-07-01
long-term dynamic irregularity of a soluble Los Alamos, NM, July 21-23, 1981 quantum mechanical model known as the Jaynes - Cummings model . The analysis...substructure models are obtained % substructure computation can be performed by approximating each state space vector as a independently of the other...Non- and rotational residual flexibilities at the inter- linear joint behavior is modeled by an equivalent face. Data were taken in the form of
Generalized Advanced Propeller Analysis System (GAPAS). Volume 2: Computer program user manual
NASA Technical Reports Server (NTRS)
Glatt, L.; Crawford, D. R.; Kosmatka, J. B.; Swigart, R. J.; Wong, E. W.
1986-01-01
The Generalized Advanced Propeller Analysis System (GAPAS) computer code is described. GAPAS was developed to analyze advanced technology multi-bladed propellers which operate on aircraft with speeds up to Mach 0.8 and altitudes up to 40,000 feet. GAPAS includes technology for analyzing aerodynamic, structural, and acoustic performance of propellers. The computer code was developed for the CDC 7600 computer and is currently available for industrial use on the NASA Langley computer. A description of all the analytical models incorporated in GAPAS is included. Sample calculations are also described as well as users requirements for modifying the analysis system. Computer system core requirements and running times are also discussed.
Dawn: A Simulation Model for Evaluating Costs and Tradeoffs of Big Data Science Architectures
NASA Astrophysics Data System (ADS)
Cinquini, L.; Crichton, D. J.; Braverman, A. J.; Kyo, L.; Fuchs, T.; Turmon, M.
2014-12-01
In many scientific disciplines, scientists and data managers are bracing for an upcoming deluge of big data volumes, which will increase the size of current data archives by a factor of 10-100 times. For example, the next Climate Model Inter-comparison Project (CMIP6) will generate a global archive of model output of approximately 10-20 Peta-bytes, while the upcoming next generation of NASA decadal Earth Observing instruments are expected to collect tens of Giga-bytes/day. In radio-astronomy, the Square Kilometre Array (SKA) will collect data in the Exa-bytes/day range, of which (after reduction and processing) around 1.5 Exa-bytes/year will be stored. The effective and timely processing of these enormous data streams will require the design of new data reduction and processing algorithms, new system architectures, and new techniques for evaluating computation uncertainty. Yet at present no general software tool or framework exists that will allow system architects to model their expected data processing workflow, and determine the network, computational and storage resources needed to prepare their data for scientific analysis. In order to fill this gap, at NASA/JPL we have been developing a preliminary model named DAWN (Distributed Analytics, Workflows and Numerics) for simulating arbitrary complex workflows composed of any number of data processing and movement tasks. The model can be configured with a representation of the problem at hand (the data volumes, the processing algorithms, the available computing and network resources), and is able to evaluate tradeoffs between different possible workflows based on several estimators: overall elapsed time, separate computation and transfer times, resulting uncertainty, and others. So far, we have been applying DAWN to analyze architectural solutions for 4 different use cases from distinct science disciplines: climate science, astronomy, hydrology and a generic cloud computing use case. This talk will present preliminary results and discuss how DAWN can be evolved into a powerful tool for designing system architectures for data intensive science.
NASA Technical Reports Server (NTRS)
Claus, Steven J.; Loos, Alfred C.
1989-01-01
RTM is a FORTRAN '77 computer code which simulates the infiltration of textile reinforcements and the kinetics of thermosetting polymer resin systems. The computer code is based on the process simulation model developed by the author. The compaction of dry, woven textile composites is simulated to describe the increase in fiber volume fraction with increasing compaction pressure. Infiltration is assumed to follow D'Arcy's law for Newtonian viscous fluids. The chemical changes which occur in the resin during processing are simulated with a thermo-kinetics model. The computer code is discussed on the basis of the required input data, output files and some comments on how to interpret the results. An example problem is solved and a complete listing is included.
DOT National Transportation Integrated Search
1981-09-01
Volume II is the second volume of a three volume document describing the computer program HEVSIM for use with buses and heavy duty trucks. This volume is a user's manual describing how to prepare data input and execute the program. A strong effort ha...
Earthquake-origin expansion of the Earth inferred from a spherical-Earth elastic dislocation theory
NASA Astrophysics Data System (ADS)
Xu, Changyi; Sun, Wenke
2014-12-01
In this paper, we propose an approach to compute the coseismic Earth's volume change based on a spherical-Earth elastic dislocation theory. We present a general expression of the Earth's volume change for three typical dislocations: the shear, tensile and explosion sources. We conduct a case study for the 2004 Sumatra earthquake (Mw9.3), the 2010 Chile earthquake (Mw8.8), the 2011 Tohoku-Oki earthquake (Mw9.0) and the 2013 Okhotsk Sea earthquake (Mw8.3). The results show that mega-thrust earthquakes make the Earth expand and earthquakes along a normal fault make the Earth contract. We compare the volume changes computed for finite fault models and a point source of the 2011 Tohoku-Oki earthquake (Mw9.0). The big difference of the results indicates that the coseismic changes in the Earth's volume (or the mean radius) are strongly dependent on the earthquakes' focal mechanism, especially the depth and the dip angle. Then we estimate the cumulative volume changes by historical earthquakes (Mw ≥ 7.0) since 1960, and obtain an Earth mean radius expanding rate about 0.011 mm yr-1.
Effect of restoration volume on stresses in a mandibular molar: a finite element study.
Wayne, Jennifer S; Chande, Ruchi; Porter, H Christian; Janus, Charles
2014-10-01
There can be significant disagreement among dentists when planning treatment for a tooth with a failing medium-to-large--sized restoration. The clinician must determine whether the restoration should be replaced or treated with a crown, which covers and protects the remaining weakened tooth structure during function. The purpose of this study was to evaluate the stresses generated in different sized amalgam restorations via a computational modeling approach and reveal whether a predictable pattern emerges. A computer tomography scan was performed of an extracted mandibular first molar, and the resulting images were imported into a medical imaging software package for tissue segmentation. The software was used to separate the enamel, dentin, and pulp cavity through density thresholding and surface rendering. These tissue structures then were imported into 3-dimensional computer-aided design software in which material properties appropriate to the tissues in the model were assigned. A static finite element analysis was conducted to investigate the stresses that result from normal occlusal forces. Five models were analyzed, 1 with no restoration and 4 with increasingly larger restoration volume proportions: a normal-sized tooth, a small-sized restoration, 2 medium-sized restorations, and 1 large restoration as determined from bitewing radiographs and occlusal surface digital photographs. The resulting von Mises stresses for dentin-enamel of the loaded portion of the tooth grew progressively greater as the size of the restoration increased. The average stress in the normal, unrestored tooth was 4.13 MPa, whereas the smallest restoration size increased this stress to 5.52 MPa. The largest restoration had a dentin-enamel stress of 6.47 MPa. A linear correlation existed between restoration size and dentin-enamel stress, with an R(2) of 0.97. A larger restoration volume proportion resulted in higher dentin-enamel stresses under static loading. A comparison of the von Mises stresses to the yield strengths of the materials revealed a relationship between a tooth's restoration volume proportion and the potential for failure, although factors other than restoration volume proportion may also impact the stresses generated in moderate-sized restorations. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Reheating-volume measure for random-walk inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winitzki, Sergei; Yukawa Institute of Theoretical Physics, Kyoto University, Kyoto
2008-09-15
The recently proposed 'reheating-volume' (RV) measure promises to solve the long-standing problem of extracting probabilistic predictions from cosmological multiverse scenarios involving eternal inflation. I give a detailed description of the new measure and its applications to generic models of eternal inflation of random-walk type. For those models I derive a general formula for RV-regulated probability distributions that is suitable for numerical computations. I show that the results of the RV cutoff in random-walk type models are always gauge invariant and independent of the initial conditions at the beginning of inflation. In a toy model where equal-time cutoffs lead to themore » 'youngness paradox', the RV cutoff yields unbiased results that are distinct from previously proposed measures.« less
Atomization simulations using an Eulerian-VOF-Lagrangian method
NASA Technical Reports Server (NTRS)
Chen, Yen-Sen; Shang, Huan-Min; Liaw, Paul; Chen, C. P.
1994-01-01
This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model which can be employed to analyze general multiphase flow problems with free surface mechanism. The gas-liquid interface mass, momentum and energy conservations are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach into a predictor-corrector algorithm and to demonstrate the effectiveness of the present innovative approach by simulating benchmark problems including the coaxial jet atomization.
NASA Astrophysics Data System (ADS)
Maire, Pierre-Henri; Abgrall, Rémi; Breil, Jérôme; Loubère, Raphaël; Rebourcet, Bernard
2013-02-01
In this paper, we describe a cell-centered Lagrangian scheme devoted to the numerical simulation of solid dynamics on two-dimensional unstructured grids in planar geometry. This numerical method, utilizes the classical elastic-perfectly plastic material model initially proposed by Wilkins [M.L. Wilkins, Calculation of elastic-plastic flow, Meth. Comput. Phys. (1964)]. In this model, the Cauchy stress tensor is decomposed into the sum of its deviatoric part and the thermodynamic pressure which is defined by means of an equation of state. Regarding the deviatoric stress, its time evolution is governed by a classical constitutive law for isotropic material. The plasticity model employs the von Mises yield criterion and is implemented by means of the radial return algorithm. The numerical scheme relies on a finite volume cell-centered method wherein numerical fluxes are expressed in terms of sub-cell force. The generic form of the sub-cell force is obtained by requiring the scheme to satisfy a semi-discrete dissipation inequality. Sub-cell force and nodal velocity to move the grid are computed consistently with cell volume variation by means of a node-centered solver, which results from total energy conservation. The nominally second-order extension is achieved by developing a two-dimensional extension in the Lagrangian framework of the Generalized Riemann Problem methodology, introduced by Ben-Artzi and Falcovitz [M. Ben-Artzi, J. Falcovitz, Generalized Riemann Problems in Computational Fluid Dynamics, Cambridge Monogr. Appl. Comput. Math. (2003)]. Finally, the robustness and the accuracy of the numerical scheme are assessed through the computation of several test cases.
ERIC Educational Resources Information Center
Goldman, Charles I.
The manual is part of a series to assist in planning procedures for local and State vocational agencies. It details steps required to process a local education agency's data after the data have been coded onto keypunch forms. Program, course, and overhead data are input into a computer data base and error checks are performed. A computer model is…
ERIC Educational Resources Information Center
Vergnaud, Gerard, Ed.; Rogalski, Janine, Ed.; Artique, Michele, Ed.
This proceedings of the annual conference of the International Group for the Psychology of Mathematics Education (PME) includes the following research papers: "A Model of Understanding Two-Digit Numeration and Computation" (H. Murray & A. Olivier); "The Computer Produces a Special Graphic Situation of Learning the Change of Coordinate System" (S.…
NASA Technical Reports Server (NTRS)
Saltsman, J. F.
1973-01-01
The relations between clad creep strain and fuel volume swelling are shown for cylindrical UO2 fuel pins with a Nb-1Zr clad. These relations were obtained by using the computer code CYGRO-2. These clad-strain - fuel-volume-swelling relations may be used with any fuel-volume-swelling model, provided the fuel volume swelling is isotropic and independent of the clad restraints. The effects of clad temperature (over a range from 118 to 1642 K (2010 to 2960 R)), pin diameter, clad thickness and central hole size in the fuel have been investigated. In all calculations the irradiation time was 500 hours. The burnup rate was varied.
Hurricane Forecasting with the High-resolution NASA Finite-volume General Circulation Model
NASA Technical Reports Server (NTRS)
Atlas, R.; Reale, O.; Shen, B.-W.; Lin, S.-J.; Chern, J.-D.; Putman, W.; Lee, T.; Yeh, K.-S.; Bosilovich, M.; Radakovich, J.
2004-01-01
A high-resolution finite-volume General Circulation Model (fvGCM), resulting from a development effort of more than ten years, is now being run operationally at the NASA Goddard Space Flight Center and Ames Research Center. The model is based on a finite-volume dynamical core with terrain-following Lagrangian control-volume discretization and performs efficiently on massive parallel architectures. The computational efficiency allows simulations at a resolution of a quarter of a degree, which is double the resolution currently adopted by most global models in operational weather centers. Such fine global resolution brings us closer to overcoming a fundamental barrier in global atmospheric modeling for both weather and climate, because tropical cyclones and even tropical convective clusters can be more realistically represented. In this work, preliminary results of the fvGCM are shown. Fifteen simulations of four Atlantic tropical cyclones in 2002 and 2004 are chosen because of strong and varied difficulties presented to numerical weather forecasting. It is shown that the fvGCM, run at the resolution of a quarter of a degree, can produce very good forecasts of these tropical systems, adequately resolving problems like erratic track, abrupt recurvature, intense extratropical transition, multiple landfall and reintensification, and interaction among vortices.
NASA Astrophysics Data System (ADS)
Li, Hua; Wang, Xiaogui; Yan, Guoping; Lam, K. Y.; Cheng, Sixue; Zou, Tao; Zhuo, Renxi
2005-03-01
In this paper, a novel multiphysic mathematical model is developed for simulation of swelling equilibrium of ionized temperature sensitive hydrogels with the volume phase transition, and it is termed the multi-effect-coupling thermal-stimulus (MECtherm) model. This model consists of the steady-state Nernst-Planck equation, Poisson equation and swelling equilibrium governing equation based on the Flory's mean field theory, in which two types of polymer-solvent interaction parameters, as the functions of temperature and polymer-network volume fraction, are specified with or without consideration of the hydrogen bond interaction. In order to examine the MECtherm model consisting of nonlinear partial differential equations, a meshless Hermite-Cloud method is used for numerical solution of one-dimensional swelling equilibrium of thermal-stimulus responsive hydrogels immersed in a bathing solution. The computed results are in very good agreements with experimental data for the variation of volume swelling ratio with temperature. The influences of the salt concentration and initial fixed-charge density are discussed in detail on the variations of volume swelling ratio of hydrogels, mobile ion concentrations and electric potential of both interior hydrogels and exterior bathing solution.
Simulation of a G-tolerance curve using the pulsatile cardiovascular model
NASA Technical Reports Server (NTRS)
Solomon, M.; Srinivasan, R.
1985-01-01
A computer simulation study, performed to assess the ability of the cardiovascular model to reproduce the G tolerance curve (G level versus tolerance time) is reported. A composite strength duration curve derived from experimental data obtained in human centrifugation studies was used for comparison. The effects of abolishing automomic control and of blood volume loss on G tolerance were also simulated. The results provide additional validation of the model. The need for the presence of autonomic reflexes even at low levels of G is pointed out. The low margin of safety with a loss of blood volume indicated by the simulation results underscores the necessity for protective measures during Shuttle reentry.
Thermally developed peristaltic propulsion of magnetic solid particles in biorheological fluids
NASA Astrophysics Data System (ADS)
Bhatti, M. M.; Zeeshan, A.; Tripathi, D.; Ellahi, R.
2018-04-01
In this article, effects of heat and mass transfer on MHD peristaltic motion of solid particles in a dusty fluid are investigated. The effects of nonlinear thermal radiation and Hall current are also taken into account. The relevant flow analysis is modelled for fluid phase and dust phase in wave frame by means of Casson fluid model. Computation of solutions is presented for velocity profile, temperature profile and concentration profile. The effects of all the physical parameters such as particle volume fraction, Hartmann number, Hall Effect, Prandtl number, Eckert number, Schmidt number and Soret number are discussed mathematically and graphically. It is noted that the influence of magnetic field and particle volume fraction opposes the flow. Also, the impact of particle volume fraction is quite opposite on temperature and concentration profile. This model is applicable in smart drug delivery systems and bacteria movement in urine flow through the ureter.
Chung, Beom Sun; Chung, Min Suk; Shin, Byeong Seok; Kwon, Koojoo
2018-02-19
The hand anatomy, including the complicated hand muscles, can be grasped by using computer-assisted learning tools with high quality two-dimensional images and three-dimensional models. The purpose of this study was to present up-to-date software tools that promote learning of stereoscopic morphology of the hand. On the basis of horizontal sectioned images and outlined images of a male cadaver, vertical planes, volume models, and surface models were elaborated. Software to browse pairs of the sectioned and outlined images in orthogonal planes and software to peel and rotate the volume models, as well as a portable document format (PDF) file to select and rotate the surface models, were produced. All of the software tools were downloadable free of charge and usable off-line. The three types of tools for viewing multiple aspects of the hand could be adequately employed according to individual needs. These new tools involving the realistic images of a cadaver and the diverse functions are expected to improve comprehensive knowledge of the hand shape. © 2018 The Korean Academy of Medical Sciences.
2018-01-01
Background The hand anatomy, including the complicated hand muscles, can be grasped by using computer-assisted learning tools with high quality two-dimensional images and three-dimensional models. The purpose of this study was to present up-to-date software tools that promote learning of stereoscopic morphology of the hand. Methods On the basis of horizontal sectioned images and outlined images of a male cadaver, vertical planes, volume models, and surface models were elaborated. Software to browse pairs of the sectioned and outlined images in orthogonal planes and software to peel and rotate the volume models, as well as a portable document format (PDF) file to select and rotate the surface models, were produced. Results All of the software tools were downloadable free of charge and usable off-line. The three types of tools for viewing multiple aspects of the hand could be adequately employed according to individual needs. Conclusion These new tools involving the realistic images of a cadaver and the diverse functions are expected to improve comprehensive knowledge of the hand shape. PMID:29441756
Analytic models of ducted turbomachinery tone noise sources. Volume 1: Analysis
NASA Technical Reports Server (NTRS)
Clark, T. L.; Ganz, U. W.; Graf, G. A.; Westall, J. S.
1974-01-01
The analytic models developed for computing the periodic sound pressure of subsonic fans and compressors in an infinite, hardwall annular duct with uniform flow are described. The basic sound-generating mechanism is the scattering into sound waves of velocity disturbances appearing to the rotor or stator blades as a series of harmonic gusts. The models include component interactions and rotor alone.
European Scientific Notes. Volume 35, Number 6.
1981-06-30
center where the operator can relationships are promoted in four ways: regulate the test conditions (air storage 211 +* , FSN 35-6 (1981) tanks, model ...tunnel models , for itself asymmetric aileron deflection and post- and its clients, using computer-controlled stall aerodynamics are in progress...those attending the Discussion nonaqueous solutions, (6) new theoretical was somewhat reduced by the fact that models , (7) Fermi-level concepts in solu
NASA Astrophysics Data System (ADS)
Kees, C. E.; Miller, C. T.; Dimakopoulos, A.; Farthing, M.
2016-12-01
The last decade has seen an expansion in the development and application of 3D free surface flow models in the context of environmental simulation. These models are based primarily on the combination of effective algorithms, namely level set and volume-of-fluid methods, with high-performance, parallel computing. These models are still computationally expensive and suitable primarily when high-fidelity modeling near structures is required. While most research on algorithms and implementations has been conducted in the context of finite volume methods, recent work has extended a class of level set schemes to finite element methods on unstructured methods. This work considers models of three-phase flow in domains containing air, water, and granular phases. These multi-phase continuum mechanical formulations show great promise for applications such as analysis of coastal and riverine structures. This work will consider formulations proposed in the literature over the last decade as well as new formulations derived using the thermodynamically constrained averaging theory, an approach to deriving and closing macroscale continuum models for multi-phase and multi-component processes. The target applications require the ability to simulate wave breaking and structure over-topping, particularly fully three-dimensional, non-hydrostatic flows that drive these phenomena. A conservative level set scheme suitable for higher-order finite element methods is used to describe the air/water phase interaction. The interaction of these air/water flows with granular materials, such as sand and rubble, must also be modeled. The range of granular media dynamics targeted including flow and wave transmision through the solid media as well as erosion and deposition of granular media and moving bed dynamics. For the granular phase we consider volume- and time-averaged continuum mechanical formulations that are discretized with the finite element method and coupled to the underlying air/water flow via operator splitting (fractional step) schemes. Particular attention will be given to verification and validation of the numerical model and important qualitative features of the numerical methods including phase conservation, wave energy dissipation, and computational efficiency in regimes of interest.
Surface models of the male urogenital organs built from the Visible Korean using popular software
Shin, Dong Sun; Park, Jin Seo; Shin, Byeong-Seok
2011-01-01
Unlike volume models, surface models, which are empty three-dimensional images, have a small file size, so they can be displayed, rotated, and modified in real time. Thus, surface models of male urogenital organs can be effectively applied to an interactive computer simulation and contribute to the clinical practice of urologists. To create high-quality surface models, the urogenital organs and other neighboring structures were outlined in 464 sectioned images of the Visible Korean male using Adobe Photoshop; the outlines were interpolated on Discreet Combustion; then an almost automatic volume reconstruction followed by surface reconstruction was performed on 3D-DOCTOR. The surface models were refined and assembled in their proper positions on Maya, and a surface model was coated with actual surface texture acquired from the volume model of the structure on specially programmed software. In total, 95 surface models were prepared, particularly complete models of the urinary and genital tracts. These surface models will be distributed to encourage other investigators to develop various kinds of medical training simulations. Increasingly automated surface reconstruction technology using commercial software will enable other researchers to produce their own surface models more effectively. PMID:21829759
NASA Technical Reports Server (NTRS)
Hammett, J. C.; Hayes, C. H.; Price, J. M.; Robinson, J. K.; Teal, G. A.; Thomson, J. M.; Tilley, D. M.; Welch, C. T.
1983-01-01
Normal modes of the blades and nozzles of the HPFTP and HPOTP are defined and potential driving forces for the blades are identified. The computer models used in blade analyses are described, with results. Similar information is given for the nozzles.
Numerical modeling of spray combustion with an advanced VOF method
NASA Technical Reports Server (NTRS)
Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul
1995-01-01
This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model which can be employed to analyze general multiphase flow problems with free surface mechanism. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach into a predictor-corrector algorithm and to demonstrate the effectiveness of the present approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.
Computational Intelligence‐Assisted Understanding of Nature‐Inspired Superhydrophobic Behavior
Zhang, Xia; Ding, Bei; Dixon, Sebastian C.
2017-01-01
Abstract In recent years, state‐of‐the‐art computational modeling of physical and chemical systems has shown itself to be an invaluable resource in the prediction of the properties and behavior of functional materials. However, construction of a useful computational model for novel systems in both academic and industrial contexts often requires a great depth of physicochemical theory and/or a wealth of empirical data, and a shortage in the availability of either frustrates the modeling process. In this work, computational intelligence is instead used, including artificial neural networks and evolutionary computation, to enhance our understanding of nature‐inspired superhydrophobic behavior. The relationships between experimental parameters (water droplet volume, weight percentage of nanoparticles used in the synthesis of the polymer composite, and distance separating the superhydrophobic surface and the pendant water droplet in adhesive force measurements) and multiple objectives (water droplet contact angle, sliding angle, and adhesive force) are built and weighted. The obtained optimal parameters are consistent with the experimental observations. This new approach to materials modeling has great potential to be applied more generally to aid design, fabrication, and optimization for myriad functional materials. PMID:29375975
Computational Intelligence-Assisted Understanding of Nature-Inspired Superhydrophobic Behavior.
Zhang, Xia; Ding, Bei; Cheng, Ran; Dixon, Sebastian C; Lu, Yao
2018-01-01
In recent years, state-of-the-art computational modeling of physical and chemical systems has shown itself to be an invaluable resource in the prediction of the properties and behavior of functional materials. However, construction of a useful computational model for novel systems in both academic and industrial contexts often requires a great depth of physicochemical theory and/or a wealth of empirical data, and a shortage in the availability of either frustrates the modeling process. In this work, computational intelligence is instead used, including artificial neural networks and evolutionary computation, to enhance our understanding of nature-inspired superhydrophobic behavior. The relationships between experimental parameters (water droplet volume, weight percentage of nanoparticles used in the synthesis of the polymer composite, and distance separating the superhydrophobic surface and the pendant water droplet in adhesive force measurements) and multiple objectives (water droplet contact angle, sliding angle, and adhesive force) are built and weighted. The obtained optimal parameters are consistent with the experimental observations. This new approach to materials modeling has great potential to be applied more generally to aid design, fabrication, and optimization for myriad functional materials.
Pin on flat wear volume prediction of UHMWPE against cp Ti for orthopedic applications
NASA Astrophysics Data System (ADS)
Handoko, Suyitno, Dharmastiti, Rini; Magetsari, Rahadyan
2018-04-01
Tribological assessment of orthopedic biomaterials requires a lot of testing time. Researchers must test the biomaterials in millions of cycles at low frequency (1 Hz) to mimic the in vivo conditions. It is a problem because product designs and developments could not wait longer for wear data to predict the lifetime of their products. The problem can be solved with the use of computation techniques to model the wear phenomena and provide predicted data. The aim of this research is to predict the wear volume of the commonly used ultra high molecular weight polyethylene (UHMWPE) sliding against commercially pure titanium (cp Ti) in the unidirectional pin on flat tests. The 9 mm diameter UHMWPE pin and cp Ti plate contact mechanics were modeled using Abaqus. Contact pressure was set at 3 MPa. Outputs of the computations (contact pressure and contact area) were used to calculate the wear volume with Archard law. A custom Python script was made to automate the process. The results were then compared with experimental data for validations. The predicted data were in a good trend with numerical errors from 0.3% up to 26%.
NASA Astrophysics Data System (ADS)
Daude, F.; Galon, P.
2018-06-01
A Finite-Volume scheme for the numerical computations of compressible single- and two-phase flows in flexible pipelines is proposed based on an approximate Godunov-type approach. The spatial discretization is here obtained using the HLLC scheme. In addition, the numerical treatment of abrupt changes in area and network including several pipelines connected at junctions is also considered. The proposed approach is based on the integral form of the governing equations making it possible to tackle general equations of state. A coupled approach for the resolution of fluid-structure interaction of compressible fluid flowing in flexible pipes is considered. The structural problem is solved using Euler-Bernoulli beam finite elements. The present Finite-Volume method is applied to ideal gas and two-phase steam-water based on the Homogeneous Equilibrium Model (HEM) in conjunction with a tabulated equation of state in order to demonstrate its ability to tackle general equations of state. The extensive application of the scheme for both shock tube and other transient flow problems demonstrates its capability to resolve such problems accurately and robustly. Finally, the proposed 1-D fluid-structure interaction model appears to be computationally efficient.
An engineering closure for heavily under-resolved coarse-grid CFD in large applications
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Yu, Fujiang; Jordan, Thomas
2016-11-01
Even though high performance computation allows very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly merely resolve the large scales thus speeding up simulation time. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns as often observed in heat exchangers or porous structures. It is proposed to use inviscid Euler equations on a very coarse numerical mesh. This coarse mesh needs not to conform to the geometry in all details. To reinstall physics on all smaller scales cheap subgrid models are employed. Subgrid models are systematically constructed by analyzing well-resolved generic representative simulations. By varying the flow conditions in these simulations correlations are obtained. These comprehend for each individual coarse mesh cell a volume force vector and volume porosity. Moreover, for all vertices, surface porosities are derived. CGCFD is related to the immersed boundary method as both exploit volume forces and non-body conformal meshes. Yet, CGCFD differs with respect to the coarser mesh and the use of Euler equations. We will describe the methodology based on a simple test case and the application of the method to a 127 pin wire-wrap fuel bundle.
NASA Astrophysics Data System (ADS)
Zhou, Jie E.; Yan, Yongke; Priya, Shashank; Wang, Yu U.
2017-01-01
Quantitative relationships between processing, microstructure, and properties in textured ferroelectric polycrystals and the underlying responsible mechanisms are investigated by phase field modeling and computer simulation. This study focuses on three important aspects of textured ferroelectric ceramics: (i) grain microstructure evolution during templated grain growth processing, (ii) crystallographic texture development as a function of volume fraction and seed size of the templates, and (iii) dielectric and piezoelectric properties of the obtained template-matrix composites of textured polycrystals. Findings on the third aspect are presented here, while an accompanying paper of this work reports findings on the first two aspects. In this paper, the competing effects of crystallographic texture and template seed volume fraction on the dielectric and piezoelectric properties of ferroelectric polycrystals are investigated. The phase field model of ferroelectric composites consisting of template seeds embedded in matrix grains is developed to simulate domain evolution, polarization-electric field (P-E), and strain-electric field (ɛ-E) hysteresis loops. The coercive field, remnant polarization, dielectric permittivity, piezoelectric coefficient, and dissipation factor are studied as a function of grain texture and template seed volume fraction. It is found that, while crystallographic texture significantly improves the polycrystal properties towards those of single crystals, a higher volume fraction of template seeds tends to decrease the electromechanical properties, thus canceling the advantage of ferroelectric polycrystals textured by templated grain growth processing. This competing detrimental effect is shown to arise from the composite effect, where the template phase possesses material properties inferior to the matrix phase, causing mechanical clamping and charge accumulation at inter-phase interfaces between matrix and template inclusions. The computational results are compared with complementary experiments, where good agreement is obtained.
Flow behaviour in normal and Meniere’s disease of endolymphatic fluid inside the inner ear
NASA Astrophysics Data System (ADS)
Paisal, Muhammad Sufyan Amir; Azmi Wahab, Muhamad; Taib, Ishkrizat; Mat Isa, Norasikin; Ramli, Yahaya; Seri, Suzairin Md; Darlis, Nofrizalidris; Osman, Kahar; Khudzari, Ahmad Zahran Md; Nordin, Normayati
2017-09-01
Meniere’s disease is a rare disorder that affects the inner ear which might be more severe if not treated. This is due to fluctuating pressure of the fluid in the endolymphatic sac and dysfunction of cochlea which causing the stretching of vestibular membrane. However, the pattern of the flow recirculation in endolymphatic region is still not fully understood. Thus, this study aims to investigate the correlation between the increasing volume of endolymphatic fluid and flow characteristics such as velocity, pressure and wall shear stress. Three dimensional model of simplified endolymphatic region is modeled using computer aided design (CAD) software and simulated using computational fluid dynamic (CFD) software. There are three different models are investigated; normal (N) model, Meniere’s disease model with less severity (M1) and Meniere’s disease model with high severity (M2). From the observed, the pressure drop between inlet and outlet of inner ear becomes decreases as the outlet pressure along with endolymphatic volume increases. However, constant flow rate imposed at the inlet of endolymphatic showing the lowest velocity. Flow recirculation near to endolymphatic region is occurred as the volume in endolympathic increases. Overall, high velocity is monitored near to cochlear duct, ductus reuniens and endolymphatic duct. Hence, these areas show high distributions of wall shear stress (WSS) that indicating a high probability of endolymphatic wall membrane dilation. Thus, more severe conditions of Meniere’s disease, more complex of flow characteristic is occurred. This phenomenon presenting high probability of rupture is predicted at the certain area in the anatomy of vestibular system.
Mathematical modeling of fluid-electrolyte alterations during weightlessness
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1984-01-01
Fluid electrolyte metabolism and renal endocrine control as it pertains to adaptation to weightlessness were studied. The mathematical models that have been particularly useful are discussed. However, the focus of the report is on the physiological meaning of the computer studies. A discussion of the major ground based analogs of weightlessness are included; for example, head down tilt, water immersion, and bed rest, and a comparison of findings. Several important zero g phenomena are described, including acute fluid volume regulation, blood volume regulation, circulatory changes, longer term fluid electrolyte adaptations, hormonal regulation, and body composition changes. Hypotheses are offered to explain the major findings in each area and these are integrated into a larger hypothesis of space flight adaptation. A conceptual foundation for fluid electrolyte metabolism, blood volume regulation, and cardiovascular regulation is reported.
NASA Astrophysics Data System (ADS)
Lehmann, Peter; von Ruette, Jonas; Fan, Linfeng; Or, Dani
2014-05-01
Rapid debris flows initiated by rainfall induced shallow landslides present a highly destructive natural hazard in steep terrain. The impact and run-out paths of debris flows depend on the volume, composition and initiation zone of released material and are requirements to make accurate debris flow predictions and hazard maps. For that purpose we couple the mechanistic 'Catchment-scale Hydro-mechanical Landslide Triggering (CHLT)' model to compute timing, location, and landslide volume with simple approaches to estimate debris flow runout distances. The runout models were tested using two landslide inventories obtained in the Swiss Alps following prolonged rainfall events. The predicted runout distances were in good agreement with observations, confirming the utility of such simple models for landscape scale estimates. In a next step debris flow paths were computed for landslides predicted with the CHLT model for a certain range of soil properties to explore its effect on runout distances. This combined approach offers a more complete spatial picture of shallow landslide and subsequent debris flow hazards. The additional information provided by CHLT model concerning location, shape, soil type and water content of the released mass may also be incorporated into more advanced models of runout to improve predictability and impact of such abruptly-released mass.
Hossain, Md Shakhawath; Bergstrom, D J; Chen, X B
2015-11-01
The in vitro chondrocyte cell culture process in a perfusion bioreactor provides enhanced nutrient supply as well as the flow-induced shear stress that may have a positive influence on the cell growth. Mathematical and computational modelling of such a culture process, by solving the coupled flow, mass transfer and cell growth equations simultaneously, can provide important insight into the biomechanical environment of a bioreactor and the related cell growth process. To do this, a two-way coupling between the local flow field and cell growth is required. Notably, most of the computational and mathematical models to date have not taken into account the influence of the cell growth on the local flow field and nutrient concentration. The present research aimed at developing a mathematical model and performing a numerical simulation using the lattice Boltzmann method to predict the chondrocyte cell growth without a scaffold on a flat plate placed inside a perfusion bioreactor. The model considers the two-way coupling between the cell growth and local flow field, and the simulation has been performed for 174 culture days. To incorporate the cell growth into the model, a control-volume-based surface growth modelling approach has been adopted. The simulation results show the variation of local fluid velocity, shear stress and concentration distribution during the culture period due to the growth of the cell phase and also illustrate that the shear stress can increase the cell volume fraction to a certain extent.
Patel, Amit R; Fatemi, Omid; Norton, Patrick T; West, J Jason; Helms, Adam S; Kramer, Christopher M; Ferguson, John D
2008-06-01
Left atrial (LA) volume determines prognosis and response to therapy for atrial fibrillation. Integration of electroanatomic maps with three-dimensional images rendered from computed tomography and magnetic resonance imaging (MRI) is used to facilitate atrial fibrillation ablation. The purpose of this study was to measure LA volume changes and regional motion during the cardiac cycle that might affect the accuracy of image integration and to determine their relationship to standard LA volume measurements. MRI was performed in 30 patients with paroxysmal atrial fibrillation. LA time-volume curves were generated and used to divide LA ejection fraction into pumping ejection fraction and conduit ejection fraction and to determine maximum LA volume (LA(max)) and preatrial contraction volume. LA volume was measured using an MRI angiogram and traditional geometric models from echocardiography (area-length model and ellipsoid model). In-plane displacement of the pulmonary veins, anterior left atrium, mitral annulus, and LA appendage was measured. LA(max) was 107 +/- 36 mL and occurred at 42% +/- 5% of the R-R interval. Preatrial contraction volume was 86 +/- 34 mL and occurred at 81% +/- 4% of the R-R interval. LA ejection fraction was 45% +/- 10%, and pumping ejection fraction was 31% +/- 10%. LA volume measurements made from MRI angiogram, area-length model, and ellipsoid model underestimated LA(max) by 21 +/- 25 mL, 16 +/- 26 mL, and 35 +/- 22 mL, respectively. Anterior LA, mitral annulus, and LA appendage were significantly displaced during the cardiac cycle (8.8 +/- 2.0 mm, 13.2 +/- 3.8 mm, and 10.2 +/- 3.4 mm, respectively); the pulmonary veins were not displaced. LA volume changes significantly during the cardiac cycle, and substantial regional variation in LA motion exists. Standard measurements of LA volume significantly underestimate LA(max) compared to the gold standard measure of three-dimensional volumetrics.
Scheimpflug with computational imaging to extend the depth of field of iris recognition systems
NASA Astrophysics Data System (ADS)
Sinharoy, Indranil
Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits the iris image capture to a small volume-the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with the modern computational imaging techniques to extend the capture volume? The solution we found is, surprisingly, simple; yet, it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration, and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporates the pupil parameters. The model revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image for the same DOF and brightness level. The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.
Birch, Sharla M.; Lenox, Mark W.; Kornegay, Joe N.; Shen, Li; Ai, Huisi; Ren, Xiaowei; Goodlett, Charles R.; Cudd, Tim A.; Washburn, Shannon E.
2015-01-01
Identification of facial dysmorphology is essential for the diagnosis of fetal alcohol syndrome (FAS); however, most children with fetal alcohol spectrum disorders (FASD) do not meet the dysmorphology criterion. Additional objective indicators are needed to help identify the broader spectrum of children affected by prenatal alcohol exposure. Computed tomography (CT) was used in a sheep model of prenatal binge alcohol exposure to test the hypothesis that quantitative measures of craniofacial bone volumes and linear distances could identify alcohol-exposed lambs. Pregnant sheep were randomly assigned to four groups: heavy binge alcohol, 2.5 g/kg/day (HBA); binge alcohol, 1.75 g/kg/day (BA); saline control (SC); and normal control (NC). Intravenous alcohol (BA; HBA) or saline (SC) infusions were given three consecutive days per week from gestation day 4–41, and a CT scan was performed on postnatal day 182. The volumes of eight skull bones, cranial circumference, and 19 linear measures of the face and skull were compared among treatment groups. Lambs from both alcohol groups showed significant reduction in seven of the eight skull bones and total skull bone volume, as well as cranial circumference. Alcohol exposure also decreased four of the 19 craniofacial measures. Discriminant analysis showed that alcohol-exposed and control lambs could be classified with high accuracy based on total skull bone volume, frontal, parietal, or mandibular bone volumes, cranial circumference, or interorbital distance. Total skull volume was significantly more sensitive than cranial circumference in identifying the alcohol-exposed lambs when alcohol-exposed lambs were classified using the typical FAS diagnostic cutoff of ≤10th percentile. This first demonstration of the usefulness of CT-derived craniofacial measures in a sheep model of FASD following binge-like alcohol exposure during the first trimester suggests that volumetric measurement of cranial bones may be a novel biomarker for binge alcohol exposure during the first trimester to help identify non-dysmorphic children with FASD. PMID:26496796
Greenland Regional and Ice Sheet-wide Geometry Sensitivity to Boundary and Initial conditions
NASA Astrophysics Data System (ADS)
Logan, L. C.; Narayanan, S. H. K.; Greve, R.; Heimbach, P.
2017-12-01
Ice sheet and glacier model outputs require inputs from uncertainly known initial and boundary conditions, and other parameters. Conservation and constitutive equations formalize the relationship between model inputs and outputs, and the sensitivity of model-derived quantities of interest (e.g., ice sheet volume above floatation) to model variables can be obtained via the adjoint model of an ice sheet. We show how one particular ice sheet model, SICOPOLIS (SImulation COde for POLythermal Ice Sheets), depends on these inputs through comprehensive adjoint-based sensitivity analyses. SICOPOLIS discretizes the shallow-ice and shallow-shelf approximations for ice flow, and is well-suited for paleo-studies of Greenland and Antarctica, among other computational domains. The adjoint model of SICOPOLIS was developed via algorithmic differentiation, facilitated by the source transformation tool OpenAD (developed at Argonne National Lab). While model sensitivity to various inputs can be computed by costly methods involving input perturbation simulations, the time-dependent adjoint model of SICOPOLIS delivers model sensitivities to initial and boundary conditions throughout time at lower cost. Here, we explore both the sensitivities of the Greenland Ice Sheet's entire and regional volumes to: initial ice thickness, precipitation, basal sliding, and geothermal flux over the Holocene epoch. Sensitivity studies such as described here are now accessible to the modeling community, based on the latest version of SICOPOLIS that has been adapted for OpenAD to generate correct and efficient adjoint code.
Daugirdas, John T
2017-07-01
The protein catabolic rate normalized to body size (PCRn) often is computed in dialysis units to obtain information about protein ingestion. However, errors can manifest when inappropriate modeling methods are used. We used a variable volume 2-pool urea kinetic model to examine the percent errors in PCRn due to use of a 1-pool urea kinetic model or after omission of residual urea clearance (Kru). When a single-pool model was used, 2 sources of errors were identified. The first, dependent on the ratio of dialyzer urea clearance to urea distribution volume (K/V), resulted in a 7% inflation of the PCRn when K/V was in the range of 6 mL/min per L. A second, larger error appeared when Kt/V values were below 1.0 and was related to underestimation of urea distribution volume (due to overestimation of effective clearance) by the single-pool model. A previously reported prediction equation for PCRn was valid, but data suggest that it should be modified using 2-pool eKt/V and V coefficients instead of single-pool values. A third source of error, this one unrelated to use of a single-pool model, namely omission of Kru, was shown to result in an underestimation of PCRn, such that each ml/minute Kru per 35 L of V caused a 5.6% underestimate in PCRn. Marked overestimation of PCRn can result due to inappropriate use of a single-pool urea kinetic model, particularly when Kt/V <1.0 (as in short daily dialysis), or after omission of residual native kidney clearance. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid
2017-12-01
Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ˜600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ˜0.25 s/excitation source.
Functional Requirements of a Target Description System for Vulnerability Analysis
1979-11-01
called GIFT .1,2 Together the COMGEOM description model and GIFT codes make up the BRL’s target description system. The significance of a target...and modifying target descriptions are described. 1 Lawrence W. Bain, Jr. and Mathew J. Reisinger, "The GIFT Code User Manual; Volume 1...34The GIFT Code User Manual; Volume II, The Output Options," unpublished draft of BRL report. II. UNDERLYING PHILOSOPHY The BRL has a computer
1990-02-01
copies Pl ,...,P. of a multiple module fp resolve nondeterminism (local or global) in an identical manner. 5. The copies PI,...,P, axe physically...recovery block. A recovery block consists of a conventional block (like in ALGOL or PL /I) which is provided with a means of error detection, called an...improved failures model for communicating processes. In Proceeding. NSF- SERC Seminar on Concurrency, volume 197 of Lecture Notes in Computer Science
1978-09-01
generally recognized that the best possible configura- tion for engines operating at high speeds and at high-pressure levels is probably the single...engines is invariably accom- plished by the operation of computer simulation models that generate specific numerical data rather than the generalized re...lationships common to other forms of prime mover based on units of mass or volume. Thus, providing such generalized relation- ships for a Stirling
A 3-D Finite-Volume Non-hydrostatic Icosahedral Model (NIM)
NASA Astrophysics Data System (ADS)
Lee, Jin
2014-05-01
The Nonhydrostatic Icosahedral Model (NIM) formulates the latest numerical innovation of the three-dimensional finite-volume control volume on the quasi-uniform icosahedral grid suitable for ultra-high resolution simulations. NIM's modeling goal is to improve numerical accuracy for weather and climate simulations as well as to utilize the state-of-art computing architecture such as massive parallel CPUs and GPUs to deliver routine high-resolution forecasts in timely manner. NIM dynamic corel innovations include: * A local coordinate system remapped spherical surface to plane for numerical accuracy (Lee and MacDonald, 2009), * Grid points in a table-driven horizontal loop that allow any horizontal point sequence (A.E. MacDonald, et al., 2010), * Flux-Corrected Transport formulated on finite-volume operators to maintain conservative positive definite transport (J.-L, Lee, ET. Al., 2010), *Icosahedral grid optimization (Wang and Lee, 2011), * All differentials evaluated as three-dimensional finite-volume integrals around the control volume. The three-dimensional finite-volume solver in NIM is designed to improve pressure gradient calculation and orographic precipitation over complex terrain. NIM dynamical core has been successfully verified with various non-hydrostatic benchmark test cases such as internal gravity wave, and mountain waves in Dynamical Cores Model Inter-comparisons Projects (DCMIP). Physical parameterizations suitable for NWP are incorporated into NIM dynamical core and successfully tested with multimonth aqua-planet simulations. Recently, NIM has started real data simulations using GFS initial conditions. Results from the idealized tests as well as real-data simulations will be shown in the conference.
Two-compartment modeling of tissue microcirculation revisited.
Brix, Gunnar; Salehi Ravesh, Mona; Griebel, Jürgen
2017-05-01
Conventional two-compartment modeling of tissue microcirculation is used for tracer kinetic analysis of dynamic contrast-enhanced (DCE) computed tomography or magnetic resonance imaging studies although it is well-known that the underlying assumption of an instantaneous mixing of the administered contrast agent (CA) in capillaries is far from being realistic. It was thus the aim of the present study to provide theoretical and computational evidence in favor of a conceptually alternative modeling approach that makes it possible to characterize the bias inherent to compartment modeling and, moreover, to approximately correct for it. Starting from a two-region distributed-parameter model that accounts for spatial gradients in CA concentrations within blood-tissue exchange units, a modified lumped two-compartment exchange model was derived. It has the same analytical structure as the conventional two-compartment model, but indicates that the apparent blood flow identifiable from measured DCE data is substantially overestimated, whereas the three other model parameters (i.e., the permeability-surface area product as well as the volume fractions of the plasma and interstitial distribution space) are unbiased. Furthermore, a simple formula was derived to approximately compute a bias-corrected flow from the estimates of the apparent flow and permeability-surface area product obtained by model fitting. To evaluate the accuracy of the proposed modeling and bias correction method, representative noise-free DCE curves were analyzed. They were simulated for 36 microcirculation and four input scenarios by an axially distributed reference model. As analytically proven, the considered two-compartment exchange model is structurally identifiable from tissue residue data. The apparent flow values estimated for the 144 simulated tissue/input scenarios were considerably biased. After bias-correction, the deviations between estimated and actual parameter values were (11.2 ± 6.4) % (vs. (105 ± 21) % without correction) for the flow, (3.6 ± 6.1) % for the permeability-surface area product, (5.8 ± 4.9) % for the vascular volume and (2.5 ± 4.1) % for the interstitial volume; with individual deviations of more than 20% being the exception and just marginal. Increasing the duration of CA administration only had a statistically significant but opposite effect on the accuracy of the estimated flow (declined) and intravascular volume (improved). Physiologically well-defined tissue parameters are structurally identifiable and accurately estimable from DCE data by the conceptually modified two-compartment model in combination with the bias correction. The accuracy of the bias-corrected flow is nearly comparable to that of the three other (theoretically unbiased) model parameters. As compared to conventional two-compartment modeling, this feature constitutes a major advantage for tracer kinetic analysis of both preclinical and clinical DCE imaging studies. © 2017 American Association of Physicists in Medicine.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.
1982-01-01
The six-volume report: describes the theory of a three dimensional (3-D) mathematical thermal discharge model and a related one dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free-surface model also provides surface height variations with time.
Stereolithographic volume evaluation of healing and shaping after rhinoplasty operations.
Tatlidede, Soner; Turgut, Gürsel; Gönen, Emre; Kayali, Mahmut Ulvi; Baş, Lütfü
2009-07-01
Nasal edema and volume changes are unavoidable processes during the healing period after rhinoplasty. Various applications were reported regarding the prevention of early edema; however, the literature shows no study focused on the course of the nasal edema and volume changes up-to-date. We aimed to study the nasal volume changes during the first year of postoperative healing period and to form a recovery and volume change diagram with the obtained data. We prepared standard frames and nasal molds of 7 rhinoplasty patients at regular time intervals (preoperative period and at the postoperative 1st, 2nd, 4th, 8th, 12th, 24th, and 52nd weeks). Plaster nasal models were created by using these molds. Volumes of models were measured by computed tomographic scanning and three-dimensional image processing programs. According to our results, the nasal edema reaches its maximum level at the postoperative fourth week and then rapidly decreases until its minimum level at the eighth week. In contrast with the general opinion, the nasal volume begins to increase smoothly reaching to a level minimally below the preoperative value by the end of the first year.
NASA Technical Reports Server (NTRS)
Combs, L. P.
1974-01-01
A computer program for analyzing rocket engine performance was developed. The program is concerned with the formation, distribution, flow, and combustion of liquid sprays and combustion product gases in conventional rocket combustion chambers. The capabilities of the program to determine the combustion characteristics of the rocket engine are described. Sample data code sheets show the correct sequence and formats for variable values and include notes concerning options to bypass the input of certain data. A seperate list defines the variables and indicates their required dimensions.
A comparison of two central difference schemes for solving the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Maksymiuk, C. M.; Swanson, R. C.; Pulliam, T. H.
1990-01-01
Five viscous transonic airfoil cases were computed by two significantly different computational fluid dynamics codes: An explicit finite-volume algorithm with multigrid, and an implicit finite-difference approximate-factorization method with Eigenvector diagonalization. Both methods are described in detail, and their performance on the test cases is compared. The codes utilized the same grids, turbulence model, and computer to provide the truest test of the algorithms. The two approaches produce very similar results, which, for attached flows, also agree well with experimental results; however, the explicit code is considerably faster.
Plasma properties in electron-bombardment ion thrusters
NASA Technical Reports Server (NTRS)
Matossian, J. N.; Beattie, J. R.
1987-01-01
The paper describes a technique for computing volume-averaged plasma properties within electron-bombardment ion thrusters, using spatially varying Langmuir-probe measurements. Average values of the electron densities are defined by integrating the spatially varying Maxwellian and primary electron densities over the ionization volume, and then dividing by the volume. Plasma properties obtained in the 30-cm-diameter J-series and ring-cusp thrusters are analyzed by the volume-averaging technique. The superior performance exhibited by the ring-cusp thruster is correlated with a higher average Maxwellian electron temperature. The ring-cusp thruster maintains the same fraction of primary electrons as does the J-series thruster, but at a much lower ion production cost. The volume-averaged predictions for both thrusters are compared with those of a detailed thruster performance model.
Parsing partial molar volumes of small molecules: a molecular dynamics study.
Patel, Nisha; Dubins, David N; Pomès, Régis; Chalikian, Tigran V
2011-04-28
We used molecular dynamics (MD) simulations in conjunction with the Kirkwood-Buff theory to compute the partial molar volumes for a number of small solutes of various chemical natures. We repeated our computations using modified pair potentials, first, in the absence of the Coulombic term and, second, in the absence of the Coulombic and the attractive Lennard-Jones terms. Comparison of our results with experimental data and the volumetric results of Monte Carlo simulation with hard sphere potentials and scaled particle theory-based computations led us to conclude that, for small solutes, the partial molar volume computed with the Lennard-Jones potential in the absence of the Coulombic term nearly coincides with the cavity volume. On the other hand, MD simulations carried out with the pair interaction potentials containing only the repulsive Lennard-Jones term produce unrealistically large partial molar volumes of solutes that are close to their excluded volumes. Our simulation results are in good agreement with the reported schemes for parsing partial molar volume data on small solutes. In particular, our determined interaction volumes() and the thickness of the thermal volume for individual compounds are in good agreement with empirical estimates. This work is the first computational study that supports and lends credence to the practical algorithms of parsing partial molar volume data that are currently in use for molecular interpretations of volumetric data.
Joly, Florian; Soulez, Gilles; Garcia, Damien; Lessard, Simon; Kauffmann, Claude
2018-01-01
Abdominal aortic aneurysms (AAA) are localized, commonly-occurring dilations of the aorta. When equilibrium between blood pressure (loading) and wall mechanical resistance is lost, rupture ensues, and patient death follows, if not treated immediately. Experimental and numerical analyses of flow patterns in arteries show direct correlations between wall shear stress and wall mechano-adaptation with the development of zones prone to thrombus formation. For further insights into AAA flow topology/growth interaction, a workout of patient-specific computational flow dynamics (CFD) is proposed to compute finite-time Lyapunov exponents and extract Lagrangian-coherent structures (LCS). This computational model was first compared with 4-D phase-contrast magnetic resonance imaging (MRI) in 5 patients. To better understand the impact of flow topology and transport on AAA growth, hyperbolic, repelling LCS were computed in 1 patient during 8-year follow-up, including 9 volumetric morphologic AAA measures by computed tomography-angiography (CTA). LCS defined barriers to Lagrangian jet cores entering AAA. Domains enclosed between LCS and the aortic wall were considered to be stagnation zones. Their evolution was studied during AAA growth. Good correlation - 2-D cross-correlation coefficients of 0.65, 0.86 and 0.082 (min, max, SD) - was obtained between numerical simulations and 4-D MRI acquisitions in 6 specific cross-sections from 4 patients. In follow-up study, LCS divided AAA lumens into 3 dynamically-isolated zones: 2 stagnation volumes lying in dilated portions of the AAA, and circulating volume connecting the inlet to the outlet. The volume of each zone was tracked over time. Although circulating volume remained unchanged during 8-year follow-up, the AAA lumen and main stagnation zones grew significantly (8 cm 3 /year and 6 cm 3 /year, respectively). This study reveals that transient transport topology can be quantified in patient-specific AAA during disease progression by CTA, in parallel with lumen morphology. It is anticipated that analysis of the main AAA stagnation zones by patient-specific CFD on a yearly basis could help to predict AAA growth and rupture. Copyright © 2017 Elsevier Ltd. All rights reserved.
Volume calculation of CT lung lesions based on Halton low-discrepancy sequences
NASA Astrophysics Data System (ADS)
Li, Shusheng; Wang, Liansheng; Li, Shuo
2017-03-01
Volume calculation from the Computed Tomography (CT) lung lesions data is a significant parameter for clinical diagnosis. The volume is widely used to assess the severity of the lung nodules and track its progression, however, the accuracy and efficiency of previous studies are not well achieved for clinical uses. It remains to be a challenging task due to its tight attachment to the lung wall, inhomogeneous background noises and large variations in sizes and shape. In this paper, we employ Halton low-discrepancy sequences to calculate the volume of the lung lesions. The proposed method directly compute the volume without the procedure of three-dimension (3D) model reconstruction and surface triangulation, which significantly improves the efficiency and reduces the complexity. The main steps of the proposed method are: (1) generate a certain number of random points in each slice using Halton low-discrepancy sequences and calculate the lesion area of each slice through the proportion; (2) obtain the volume by integrating the areas in the sagittal direction. In order to evaluate our proposed method, the experiments were conducted on the sufficient data sets with different size of lung lesions. With the uniform distribution of random points, our proposed method achieves more accurate results compared with other methods, which demonstrates the robustness and accuracy for the volume calculation of CT lung lesions. In addition, our proposed method is easy to follow and can be extensively applied to other applications, e.g., volume calculation of liver tumor, atrial wall aneurysm, etc.
Lim, Ki Moo; Lee, Jeong Sang; Gyeong, Min-Soo; Choi, Jae-Sung; Choi, Seong Wook
2013-01-01
To quantify the reduction in workload during intra-aortic balloon pump (IABP) therapy, indirect parameters are used, such as the mean arterial pressure during diastole, product of heart rate and peak systolic pressure, and pressure-volume area. Therefore, we investigated the cardiac energy consumption during IABP therapy using a cardiac electromechanics model. We incorporated an IABP function into a previously developed electromechanical model of the ventricle with a lumped model of the circulatory system and investigated the cardiac energy consumption at different IABP inflation volumes. When the IABP was used at inflation level 5, the cardiac output and stroke volume increased 11%, the ejection fraction increased 21%, the stroke work decreased 1%, the mean arterial pressure increased 10%, and the ATP consumption decreased 12%. These results show that although the ATP consumption is decreased significantly, stroke work is decreased only slightly, which indicates that the IABP helps the failed ventricle to pump blood efficiently. PMID:23341718
Lim, Ki Moo; Lee, Jeong Sang; Gyeong, Min-Soo; Choi, Jae-Sung; Choi, Seong Wook; Shim, Eun Bo
2013-01-01
To quantify the reduction in workload during intra-aortic balloon pump (IABP) therapy, indirect parameters are used, such as the mean arterial pressure during diastole, product of heart rate and peak systolic pressure, and pressure-volume area. Therefore, we investigated the cardiac energy consumption during IABP therapy using a cardiac electromechanics model. We incorporated an IABP function into a previously developed electromechanical model of the ventricle with a lumped model of the circulatory system and investigated the cardiac energy consumption at different IABP inflation volumes. When the IABP was used at inflation level 5, the cardiac output and stroke volume increased 11%, the ejection fraction increased 21%, the stroke work decreased 1%, the mean arterial pressure increased 10%, and the ATP consumption decreased 12%. These results show that although the ATP consumption is decreased significantly, stroke work is decreased only slightly, which indicates that the IABP helps the failed ventricle to pump blood efficiently.
Radiative transfer modeling applied to sea water constituent determination. [Gulf of Mexico
NASA Technical Reports Server (NTRS)
Faller, K. H.
1979-01-01
Optical radiation from the sea is influenced by pigments dissolved in the water and contained in discrete organisms suspended in the sea, and by pigmented and unpigmented inorganic and organic particles. The problem of extracting the information concerning these pigments and particulates from the optical properties of the sea is addressed and the properties which determine characteristics of the radiation that a remote sensor will detect and measure are considered. The results of the application of the volume scattering function model to the data collected in the Gulf of Mexico and its environs indicate that the size distribution of the concentrations of particles found in the sea can be predicted from measurements of the volume scattering function. Furthermore, with the volume scattering function model and knowledge of the absorption spectra of dissolved pigments, the radiative transfer model can compute a distribution of particle sizes and indices of refraction and concentration of dissolved pigments that give an upwelling light spectrum that closely matches measurements of that spectrum at sea.
1981-06-01
design of manufacturing systems, "ilidation and verification of ICAM modules, integration of ICAM modules and the orderly transition of ICAM modules into...Function Model of "Manufacture Product" (MFGO) VIII - Composite Function Model of " Design Product" (DESIGNO) IX - Composite Information Model of...User Interface Requirements; and the Architecture of Design . This work was performed during the period of 29 September 1978 through 10
Fractal Effects in Lanchester Models of Combat
2009-08-01
Lanchester models. 8. References 1 Aircraft in Warfare : The Dawn of the Fourth Arm, F W Lanchester , Constable & Co., London, 1916. 2 The Calculus of...Intermediate Asymptotics, G I Barenblatt, CUP, 1996. 14 Lanchester Models of Warfare Volumes 1 and 2, J G Taylor, Operations Research Society of America...Nagabhushana, Computers and Operations Research, 21, 615-628, 1994. 20 DSTO-TR-2331 18 Lanchester Type Models of Warfare , H K Weiss, Proc.First
Understanding Thiel Embalming in Pig Kidneys to Develop a New Circulation Model
Willaert, Wouter; De Vos, Marie; Van Hoof, Tom; Delrue, Louke; Pattyn, Piet; D’Herde, Katharina
2015-01-01
The quality of tissue preservation in Thiel embalmed bodies varies. Research on the administered embalming volume and its vascular distribution may elucidate one of the mechanisms of tissue preservation and allow for new applications of Thiel embalming. Vascular embalming with (group 1, n = 15) or without (group 2, n = 20) contrast agent was initiated in pig kidneys. The distribution of Thiel embalming solution in group 1 was visualized using computed tomography. The kidneys in both groups were then immersed in concentrated salt solutions to reduce their weight and volume. Afterwards, to mimic a lifelike circulation in the vessels, group 2 underwent pump-driven reperfusion for 120 minutes with either paraffinum perliquidum or diluted polyethylene glycol. The circulation was imaged with computed tomography. All of the kidneys were adequately preserved. The embalming solution spread diffusely in the kidney, but fluid accumulation was present. Subsequent immersion in concentrated salt solutions reduced weight (P < 0.01) and volume (P < 0.01). Reperfusion for 120 minutes was established in group 2. Paraffinum perliquidum filled both major vessels and renal tissue, whereas diluted polyethylene glycol spread widely in the kidney. There were no increases in weight (P = 0.26) and volume (P = 0.79); and pressure further decreased (P = 0.032) after more than 60 minutes of reperfusion with paraffinum perliquidum, whereas there were increases in weight (P = 0.005), volume (P = 0.032) and pressure (P < 0.0001) after reperfusion with diluted polyethylene glycol. Arterial embalming of kidneys results in successful preservation due to complete parenchymatous spreading. More research is needed to determine whether other factors affect embalming quality. Dehydration is an effective method to regain the organs’ initial status. Prolonged vascular reperfusion with paraffinum perliquidum can be established in this model without increases in weight, volume and pressure. PMID:25806527
A Simplified Model for Detonation Based Pressure-Gain Combustors
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
2010-01-01
A time-dependent model is presented which simulates the essential physics of a detonative or otherwise constant volume, pressure-gain combustor for gas turbine applications. The model utilizes simple, global thermodynamic relations to determine an assumed instantaneous and uniform post-combustion state in one of many envisioned tubes comprising the device. A simple, second order, non-upwinding computational fluid dynamic algorithm is then used to compute the (continuous) flowfield properties during the blowdown and refill stages of the periodic cycle which each tube undergoes. The exhausted flow is averaged to provide mixed total pressure and enthalpy which may be used as a cycle performance metric for benefits analysis. The simplicity of the model allows for nearly instantaneous results when implemented on a personal computer. The results compare favorably with higher resolution numerical codes which are more difficult to configure, and more time consuming to operate.
A simulation model for wind energy storage systems. Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Warren, A. W.; Edsinger, R. W.; Chan, Y. K.
1977-01-01
A comprehensive computer program for the modeling of wind energy and storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic) was developed. The level of detail of Simulation Model for Wind Energy Storage (SIMWEST) is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. The first program is a precompiler which generates computer models (in FORTRAN) of complex wind source storage application systems, from user specifications using the respective library components. The second program provides the techno-economic system analysis with the respective I/O, the integration of systems dynamics, and the iteration for conveyance of variables. SIMWEST program, as described, runs on the UNIVAC 1100 series computers.
Turbulent heat transfer performance of single stage turbine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amano, R.S.; Song, B.
1999-07-01
To increase the efficiency and the power of modern power plant gas turbines, designers are continually trying to raise the maximum turbine inlet temperature. Here, a numerical study based on the Navier-Stokes equations on a three-dimensional turbulent flow in a single stage turbine stator/rotor passage has been conducted and reported in this paper. The full Reynolds-stress closure model (RSM) was used for the computations and the results were also compared with the computations made by using the Launder-Sharma low-Reynolds-number {kappa}-{epsilon} model. The computational results obtained using these models were compared in order to investigate the turbulence effect in the near-wallmore » region. The set of the governing equations in a generalized curvilinear coordinate system was discretized by using the finite volume method with non-staggered grids. The numerical modeling was performed to interact between the stator and rotor blades.« less
Volume accumulator design analysis computer codes
NASA Technical Reports Server (NTRS)
Whitaker, W. D.; Shimazaki, T. T.
1973-01-01
The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Nwadike, E. V.
1982-01-01
The six-volume report: describes the theory of a three dimensional (3-D) mathematical thermal discharge model and a related one dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorate (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter. These models allow computation of time dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by an adopted algorithm for a VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by the variables' error. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. Especially, how to quantify the uncertainty of VCs is proposed by a confidence interval based on truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
Automated aortic calcification detection in low-dose chest CT images
NASA Astrophysics Data System (ADS)
Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.
2014-03-01
The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose noncontrast, non-ECG gated, chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered as true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume score is respectively 98.46% and 98.28% correlated with the reference mass and volume score.
A finite area scheme for shallow granular flows on three-dimensional surfaces
NASA Astrophysics Data System (ADS)
Rauter, Matthias
2017-04-01
Shallow granular flow models have become a popular tool for the estimation of natural hazards, such as landslides, debris flows and avalanches. The shallowness of the flow allows to reduce the three-dimensional governing equations to a quasi two-dimensional system. Three-dimensional flow fields are replaced by their depth-integrated two-dimensional counterparts, which yields a robust and fast method [1]. A solution for a simple shallow granular flow model, based on the so-called finite area method [3] is presented. The finite area method is an adaption of the finite volume method [4] to two-dimensional curved surfaces in three-dimensional space. This method handles the three dimensional basal topography in a simple way, making the model suitable for arbitrary (but mildly curved) topography, such as natural terrain. Furthermore, the implementation into the open source software OpenFOAM [4] is shown. OpenFOAM is a popular computational fluid dynamics application, designed so that the top-level code mimics the mathematical governing equations. This makes the code easy to read and extendable to more sophisticated models. Finally, some hints on how to get started with the code and how to extend the basic model will be given. I gratefully acknowledge the financial support by the OEAW project "beyond dense flow avalanches". Savage, S. B. & Hutter, K. 1989 The motion of a finite mass of granular material down a rough incline. Journal of Fluid Mechanics 199, 177-215. Ferziger, J. & Peric, M. 2002 Computational methods for fluid dynamics, 3rd edn. Springer. Tukovic, Z. & Jasak, H. 2012 A moving mesh finite volume interface tracking method for surface tension dominated interfacial fluid flow. Computers & fluids 55, 70-84. Weller, H. G., Tabor, G., Jasak, H. & Fureby, C. 1998 A tensorial approach to computational continuum mechanics using object-oriented techniques. Computers in physics 12(6), 620-631.
Frimpter, M.H.; Donohue, J.J.; Rapacz, M.V.; Beye, H.G.
1990-01-01
A mass-balance accounting model can be used to guide the management of septic systems and fertilizers to control the degradation of groundwater quality in zones of an aquifer that contributes water to public supply wells. The nitrate nitrogen concentration of the mixture in the well can be predicted for steady-state conditions by calculating the concentration that results from the total weight of nitrogen and total volume of water entering the zone of contribution to the well. These calculations will allow water-quality managers to predict the nitrate concentrations that would be produced by different types and levels of development, and to plan development accordingly. Computations for different development schemes provide a technical basis for planners and managers to compare water quality effects and to select alternatives that limit nitrate concentration in wells. Appendix A contains tables of nitrate loads and water volumes from common sources for use with the accounting model. Appendix B describes the preparation of a spreadsheet for the nitrate loading calculations with a software package generally available for desktop computers. (USGS)
Analytical evaluation of ILM sensors. Volume 2: Appendices
NASA Technical Reports Server (NTRS)
Kirk, R. J.
1975-01-01
The applicability of various sensing concepts to independent landing monitor systems was analyzed. Microwave landing system MLS accuracy requirements are presented along with a description of MLS airborne equipment. Computer programs developed during the analysis are described and include: a mathematical computer model for use in the performance assessment of reconnaissance sensor systems; a theoretical formulation of electromagnetic scattering to generate data at high incidence angles; atmospheric attenuation of microwaves; and microwave radiometry, programs
2015-07-01
performance computing time from the US Department of Defense (DOD) High Performance Computing Modernization program at the US Army Research Laboratory...Approved OMB No. 0704-0188 Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time ...dimensional, compressible, Reynolds-averaged Navier-Stokes (RANS) equations are solved using a finite volume method. A point-implicit time - integration
Quantitative vibro-acoustography of tissue-like objects by measurement of resonant modes
NASA Astrophysics Data System (ADS)
Mazumder, Dibbyan; Umesh, Sharath; Mohan Vasu, Ram; Roy, Debasish; Kanhirodan, Rajan; Asokan, Sundarrajan
2017-01-01
We demonstrate a simple and computationally efficient method to recover the shear modulus pertaining to the focal volume of an ultrasound transducer from the measured vibro-acoustic spectral peaks. A model that explains the transport of local deformation information with the acoustic wave acting as a carrier is put forth. It is also shown that the peaks correspond to the natural frequencies of vibration of the focal volume, which may be readily computed by solving an eigenvalue problem associated with the vibrating region. Having measured the first natural frequency with a fibre Bragg grating sensor, and armed with an expedient means of computing the same, we demonstrate a simple procedure, based on the method of bisection, to recover the average shear modulus of the object in the ultrasound focal volume. We demonstrate this recovery for four homogeneous agarose slabs of different stiffness and verify the accuracy of the recovery using independent rheometer-based measurements. Extension of the method to anisotropic samples through the measurement of a more complete set of resonant modes and the recovery of an elasticity tensor distribution, as is done in resonant ultrasound spectroscopy, is suggested.
Molecular dynamics simulation of diffusion of gases in a carbon-nanotube-polymer composite
NASA Astrophysics Data System (ADS)
Lim, Seong Y.; Sahimi, Muhammad; Tsotsis, Theodore T.; Kim, Nayong
2007-07-01
Extensive molecular dynamics (MD) simulations were carried out to compute the solubilities and self-diffusivities of CO2 and CH4 in amorphous polyetherimide (PEI) and mixed-matrix PEI generated by inserting single-walled carbon nanotubes into the polymer. Atomistic models of PEI and its composites were generated using energy minimizations, MD simulations, and the polymer-consistent force field. Two types of polymer composite were generated by inserting (7,0) and (12,0) zigzag carbon nanotubes into the PEI structure. The morphologies of PEI and its composites were characterized by their densities, radial distribution functions, and the accessible free volumes, which were computed with probe molecules of different sizes. The distributions of the cavity volumes were computed using the Voronoi tessellation method. The computed self-diffusivities of the gases in the polymer composites are much larger than those in pure PEI. We find, however, that the increase is not due to diffusion of the gases through the nanotubes which have smooth energy surfaces and, therefore, provide fast transport paths. Instead, the MD simulations indicate a squeezing effect of the nanotubes on the polymer matrix that changes the composite polymers’ free-volume distributions and makes them more sharply peaked. The presence of nanotubes also creates several cavities with large volumes that give rise to larger diffusivities in the polymer composites. This effect is due to the repulsive interactions between the polymer and the nanotubes. The solubilities of the gases in the polymer composites are also larger than those in pure PEI, hence indicating larger gas permeabilities for mixed-matrix PEI than PEI itself.
Geometric measures of large biomolecules: surface, volume, and pockets.
Mach, Paul; Koehl, Patrice
2011-11-15
Geometry plays a major role in our attempts to understand the activity of large molecules. For example, surface area and volume are used to quantify the interactions between these molecules and the water surrounding them in implicit solvent models. In addition, the detection of pockets serves as a starting point for predictive studies of biomolecule-ligand interactions. The alpha shape theory provides an exact and robust method for computing these geometric measures. Several implementations of this theory are currently available. We show however that these implementations fail on very large macromolecular systems. We show that these difficulties are not theoretical; rather, they are related to the architecture of current computers that rely on the use of cache memory to speed up calculation. By rewriting the algorithms that implement the different steps of the alpha shape theory such that we enforce locality, we show that we can remediate these cache problems; the corresponding code, UnionBall has an apparent O(n) behavior over a large range of values of n (up to tens of millions), where n is the number of atoms. As an example, it takes 136 sec with UnionBall to compute the contribution of each atom to the surface area and volume of a viral capsid with more than five million atoms on a commodity PC. UnionBall includes functions for computing analytically the surface area and volume of the intersection of two, three and four spheres that are fully detailed in an appendix. UnionBall is available as an OpenSource software. Copyright © 2011 Wiley Periodicals, Inc.
Geometric Measures of Large Biomolecules: Surface, Volume and Pockets
Mach, Paul; Koehl, Patrice
2011-01-01
Geometry plays a major role in our attempt to understand the activity of large molecules. For example, surface area and volume are used to quantify the interactions between these molecules and the water surrounding them in implicit solvent models. In addition, the detection of pockets serves as a starting point for predictive studies of biomolecule-ligand interactions. The alpha shape theory provides an exact and robust method for computing these geometric measures. Several implementations of this theory are currently available. We show however that these implementations fail on very large macromolecular systems. We show that these difficulties are not theoretical; rather, they are related to the architecture of current computers that rely on the use of cache memory to speed up calculation. By rewriting the algorithms that implement the different steps of the alpha shape theory such that we enforce locality, we show that we can remediate these cache problems; the corresponding code, UnionBall has an apparent (n) behavior over a large range of values of n (up to tens of millions), where n is the number of atoms. As an example, it takes 136 seconds with UnionBall to compute the contribution of each atom to the surface area and volume of a viral capsid with more than five million atoms on a commodity PC. UnionBall includes functions for computing the surface area and volume of the intersection of two, three and four spheres that are fully detailed in an appendix. UnionBall is available as an OpenSource software. PMID:21823134
Legg, A.D.; Bannerman, R.T.; Panuska, John
1996-01-01
The quality of runoff from residential lawns is a concern for municipal stormwater management programs. Land-use based computer models are increasingly being used to assess the impact of lawn runoff on urban watersheds. To accurately model the runoff for residential lawns, the variation in the relation of rainfall to runoff from lawns must be understood. The study described in this report measures the runoff parameters from 20 residential lawns in Madison, Wisconsin, using a rainfall simulator. It was determined that the saturated hydraulic conductivity does not vary significantly within a single residential lawn, but does vary significantly from one lawn to another. This variation is recognized in the entire rainfall-runoff relation from one lawn to another. The age of a lawn, or the years since development and turf establishment, is used as a surrogate of several lawn and soil characteristics to describe the variability in lawn runoff volumes. Runoff volumes from newly developed lawns are significantly greater than runoff from older lawns. This is an important consideration when modeling runoff for new developments. For older lawns, the date since lawn establishment does not explain the variation in the rainfall-runoff relation. In order for simple land-use based computer models to adequately account for the volume of runoff from pervious landscapes, field data from individual lawns would be necessary. A more realistic, alternative method may be to consider a basin-scale analysis of runoff from pervious landscapes.
Constructing a patient-specific computer model of the upper airway in sleep apnea patients.
Dhaliwal, Sandeep S; Hesabgar, Seyyed M; Haddad, Seyyed M H; Ladak, Hanif; Samani, Abbas; Rotenberg, Brian W
2018-01-01
The use of computer simulation to develop a high-fidelity model has been proposed as a novel and cost-effective alternative to help guide therapeutic intervention in sleep apnea surgery. We describe a computer model based on patient-specific anatomy of obstructive sleep apnea (OSA) subjects wherein the percentage and sites of upper airway collapse are compared to findings on drug-induced sleep endoscopy (DISE). Basic science computer model generation. Three-dimensional finite element techniques were undertaken for model development in a pilot study of four OSA patients. Magnetic resonance imaging was used to capture patient anatomy and software employed to outline critical anatomical structures. A finite-element mesh was applied to the volume enclosed by each structure. Linear and hyperelastic soft-tissue properties for various subsites (tonsils, uvula, soft palate, and tongue base) were derived using an inverse finite-element technique from surgical specimens. Each model underwent computer simulation to determine the degree of displacement on various structures within the upper airway, and these findings were compared to DISE exams performed on the four study patients. Computer simulation predictions for percentage of airway collapse and site of maximal collapse show agreement with observed results seen on endoscopic visualization. Modeling the upper airway in OSA patients is feasible and holds promise in aiding patient-specific surgical treatment. NA. Laryngoscope, 128:277-282, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
Age and Pathway Diagnostics for a Stratospheric General Circulation Model
NASA Technical Reports Server (NTRS)
Schoeberl, Mark R.; Douglass, Anne R.; Polansky, Brian
2004-01-01
Using a variety of age diagnostic experiments we examine the stratospheric age spectrum of the Goddard Finite Volume Generd Circulation Model. Pulse tracer release age-of-air computations are compared to forward and backward trajectory computations. These comparisons show good agreement, and the age-of-air also compares well with observed long lived tracers. Pathway diagnostics show how air arrives in the lowermost stratosphere and the age structure of that region. Using tracers with different lifetimes we can estimate the age spectrum - this technique should be useful in diagnosing transport from various trace gas observations.
NASA Technical Reports Server (NTRS)
1974-01-01
Studies were conducted to develop appropriate space shuttle electrical power distribution and control (EPDC) subsystem simulation models and to apply the computer simulations to systems analysis of the EPDC. A previously developed software program (SYSTID) was adapted for this purpose. The following objectives were attained: (1) significant enhancement of the SYSTID time domain simulation software, (2) generation of functionally useful shuttle EPDC element models, and (3) illustrative simulation results in the analysis of EPDC performance, under the conditions of fault, current pulse injection due to lightning, and circuit protection sizing and reaction times.
NASA Technical Reports Server (NTRS)
Ramsey, J. W., Jr.; Taylor, J. T.; Wilson, J. F.; Gray, C. E., Jr.; Leatherman, A. D.; Rooker, J. R.; Allred, J. W.
1976-01-01
The results of extensive computer (finite element, finite difference and numerical integration), thermal, fatigue, and special analyses of critical portions of a large pressurized, cryogenic wind tunnel (National Transonic Facility) are presented. The computer models, loading and boundary conditions are described. Graphic capability was used to display model geometry, section properties, and stress results. A stress criteria is presented for evaluation of the results of the analyses. Thermal analyses were performed for major critical and typical areas. Fatigue analyses of the entire tunnel circuit are presented.
European Science Notes. Volume 41, Number 10,
1987-10-01
the following topics: laminar/turbulent transition in boundary layers; coherent structures in the modeling of turbulent boundary layers, wakes, and jets...of the labeling of a model protein, human immu- indicator. The amount of oxygen produced noglobulin (hIgG), with acridinium ester, can easily be...has concerned cations, and Computer Science. Research model reduction of large-scale systems in the controls area is conducted in the and state and
1991-01-01
EXPERIENCE IN DEVELOPING INTEGRATED OPTICAL DEVICES, NONLINEAR MAGNETIC-OPTIC MATERIALS, HIGH FREQUENCY MODULATORS, COMPUTER-AIDED MODELING AND SOPHISTICATED... HIGH -LEVEL PRESENTATION AND DISTRIBUTED CONTROL MODELS FOR INTEGRATING HETEROGENEOUS MECHANICAL ENGINEERING APPLICATIONS AND TOOLS. THE DESIGN IS FOCUSED...STATISTICALLY ACCURATE WORST CASE DEVICE MODELS FOR CIRCUIT SIMULATION. PRESENT METHODS OF WORST CASE DEVICE DESIGN ARE AD HOC AND DO NOT ALLOW THE
Probabilistic Assessment of Hypobaric Decompression Sickness Treatment Success
NASA Technical Reports Server (NTRS)
Conkin, Johnny; Abercromby, Andrew F. J.; Dervay, Joseph P.; Feiveson, Alan H.; Gernhardt, Michael L.; Norcross, Jason R.; Ploutz-Snyder, Robert; Wessel, James H., III
2014-01-01
The Hypobaric Decompression Sickness (DCS) Treatment Model links a decrease in computed bubble volume from increased pressure (DeltaP), increased oxygen (O2) partial pressure, and passage of time during treatment to the probability of symptom resolution [P(symptom resolution)]. The decrease in offending volume is realized in 2 stages: a) during compression via Boyle's Law and b) during subsequent dissolution of the gas phase via the O2 window. We established an empirical model for the P(symptom resolution) while accounting for multiple symptoms within subjects. The data consisted of 154 cases of hypobaric DCS symptoms along with ancillary information from tests on 56 men and 18 women. Our best estimated model is P(symptom resolution) = 1 / (1+exp(-(ln(Delta P) - 1.510 + 0.795×AMB - 0.00308×Ts) / 0.478)), where (DeltaP) is pressure difference (psid), AMB = 1 if ambulation took place during part of the altitude exposure, otherwise AMB = 0; and where Ts is the elapsed time in mins from start of the altitude exposure to recognition of a DCS symptom. To apply this model in future scenarios, values of DeltaP as inputs to the model would be calculated from the Tissue Bubble Dynamics Model based on the effective treatment pressure: (DeltaP) = P2 - P1 | = P1×V1/V2 - P1, where V1 is the computed volume of a spherical bubble in a unit volume of tissue at low pressure P1 and V2 is computed volume after a change to a higher pressure P2. If 100% ground level O2 (GLO) was breathed in place of air, then V2 continues to decrease through time at P2 at a faster rate. This calculated value of (DeltaP then represents the effective treatment pressure at any point in time. Simulation of a "pain-only" symptom at 203 min into an ambulatory extravehicular activity (EVA) at 4.3 psia on Mars resulted in a P(symptom resolution) of 0.49 (0.36 to 0.62 95% confidence intervals) on immediate return to 8.2 psia in the Multi-Mission Space Exploration Vehicle. The P(symptom resolution) increased to near certainty (0.99) after 2 hrs of GLO at 8.2 psia or with less certainty on immediate pressurization to 14.7 psia [0.90 (0.83 - 0.95)]. Given the low probability of DCS during EVA and the prompt treatment of a symptom with guidance from the model, it is likely that the symptom and gas phase will resolve with minimum resources and minimal impact on astronaut health, safety, and productivity.
Guyot, Y; Papantoniou, I; Luyten, F P; Geris, L
2016-02-01
The main challenge in tissue engineering consists in understanding and controlling the growth process of in vitro cultured neotissues toward obtaining functional tissues. Computational models can provide crucial information on appropriate bioreactor and scaffold design but also on the bioprocess environment and culture conditions. In this study, the development of a 3D model using the level set method to capture the growth of a microporous neotissue domain in a dynamic culture environment (perfusion bioreactor) was pursued. In our model, neotissue growth velocity was influenced by scaffold geometry as well as by flow- induced shear stresses. The neotissue was modeled as a homogenous porous medium with a given permeability, and the Brinkman equation was used to calculate the flow profile in both neotissue and void space. Neotissue growth was modeled until the scaffold void volume was filled, thus capturing already established experimental observations, in particular the differences between scaffold filling under different flow regimes. This tool is envisaged as a scaffold shape and bioprocess optimization tool with predictive capacities. It will allow controlling fluid flow during long-term culture, whereby neotissue growth alters flow patterns, in order to provide shear stress profiles and magnitudes across the whole scaffold volume influencing, in turn, the neotissue growth.
Mathematical and computational model for the analysis of micro hybrid rocket motor
NASA Astrophysics Data System (ADS)
Stoia-Djeska, Marius; Mingireanu, Florin
2012-11-01
The hybrid rockets use a two-phase propellant system. In the present work we first develop a simplified model of the coupling of the hybrid combustion process with the complete unsteady flow, starting from the combustion port and ending with the nozzle. The physical and mathematical model are adapted to the simulations of micro hybrid rocket motors. The flow model is based on the one-dimensional Euler equations with source terms. The flow equations and the fuel regression rate law are solved in a coupled manner. The platform of the numerical simulations is an implicit fourth-order Runge-Kutta second order cell-centred finite volume method. The numerical results obtained with this model show a good agreement with published experimental and numerical results. The computational model developed in this work is simple, computationally efficient and offers the advantage of taking into account a large number of functional and constructive parameters that are used by the engineers.
2014-01-01
Background This study aimed to evaluate the accuracy of surgical outcomes in free iliac crest mandibular reconstructions that were carried out with virtual surgical plans and rapid prototyping templates. Methods This study evaluated eight patients who underwent mandibular osteotomy and reconstruction with free iliac crest grafts using virtual surgical planning and designed guiding templates. Operations were performed using the prefabricated guiding templates. Postoperative three-dimensional computer models were overlaid and compared with the preoperatively designed models in the same coordinate system. Results Compared to the virtual osteotomy, the mean error of distance of the actual mandibular osteotomy was 2.06 ± 0.86 mm. When compared to the virtual harvested grafts, the mean error volume of the actual harvested grafts was 1412.22 ± 439.24 mm3 (9.12% ± 2.84%). The mean error between the volume of the actual harvested grafts and the shaped grafts was 2094.35 ± 929.12 mm3 (12.40% ± 5.50%). Conclusions The use of computer-aided rapid prototyping templates for virtual surgical planning appears to positively influence the accuracy of mandibular reconstruction. PMID:24957053
Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 3D was developed to solve the three-dimensional, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This User's Guide describes the program's features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
A 2D nonlinear multiring model for blood flow in large elastic arteries
NASA Astrophysics Data System (ADS)
Ghigo, Arthur R.; Fullana, Jose-Maria; Lagrée, Pierre-Yves
2017-12-01
In this paper, we propose a two-dimensional nonlinear ;multiring; model to compute blood flow in axisymmetric elastic arteries. This model is designed to overcome the numerical difficulties of three-dimensional fluid-structure interaction simulations of blood flow without using the over-simplifications necessary to obtain one-dimensional blood flow models. This multiring model is derived by integrating over concentric rings of fluid the simplified long-wave Navier-Stokes equations coupled to an elastic model of the arterial wall. The resulting system of balance laws provides a unified framework in which both the motion of the fluid and the displacement of the wall are dealt with simultaneously. The mathematical structure of the multiring model allows us to use a finite volume method that guarantees the conservation of mass and the positivity of the numerical solution and can deal with nonlinear flows and large deformations of the arterial wall. We show that the finite volume numerical solution of the multiring model provides at a reasonable computational cost an asymptotically valid description of blood flow velocity profiles and other averaged quantities (wall shear stress, flow rate, ...) in large elastic and quasi-rigid arteries. In particular, we validate the multiring model against well-known solutions such as the Womersley or the Poiseuille solutions as well as against steady boundary layer solutions in quasi-rigid constricted and expanded tubes.
An, Gary
2009-01-01
The sheer volume of biomedical research threatens to overwhelm the capacity of individuals to effectively process this information. Adding to this challenge is the multiscale nature of both biological systems and the research community as a whole. Given this volume and rate of generation of biomedical information, the research community must develop methods for robust representation of knowledge in order for individuals, and the community as a whole, to "know what they know." Despite increasing emphasis on "data-driven" research, the fact remains that researchers guide their research using intuitively constructed conceptual models derived from knowledge extracted from publications, knowledge that is generally qualitatively expressed using natural language. Agent-based modeling (ABM) is a computational modeling method that is suited to translating the knowledge expressed in biomedical texts into dynamic representations of the conceptual models generated by researchers. The hierarchical object-class orientation of ABM maps well to biomedical ontological structures, facilitating the translation of ontologies into instantiated models. Furthermore, ABM is suited to producing the nonintuitive behaviors that often "break" conceptual models. Verification in this context is focused at determining the plausibility of a particular conceptual model, and qualitative knowledge representation is often sufficient for this goal. Thus, utilized in this fashion, ABM can provide a powerful adjunct to other computational methods within the research process, as well as providing a metamodeling framework to enhance the evolution of biomedical ontologies.
Umeda, Yasuyuki; Ishida, Fujimaro; Tsuji, Masanori; Furukawa, Kazuhiro; Shiba, Masato; Yasuda, Ryuta; Toma, Naoki; Sakaida, Hiroshi; Suzuki, Hidenori
2017-01-01
This study aimed to predict recurrence after coil embolization of unruptured cerebral aneurysms with computational fluid dynamics (CFD) using porous media modeling (porous media CFD). A total of 37 unruptured cerebral aneurysms treated with coiling were analyzed using follow-up angiograms, simulated CFD prior to coiling (control CFD), and porous media CFD. Coiled aneurysms were classified into stable or recurrence groups according to follow-up angiogram findings. Morphological parameters, coil packing density, and hemodynamic variables were evaluated for their correlations with aneurysmal recurrence. We also calculated residual flow volumes (RFVs), a novel hemodynamic parameter used to quantify the residual aneurysm volume after simulated coiling, which has a mean fluid domain > 1.0 cm/s. Follow-up angiograms showed 24 aneurysms in the stable group and 13 in the recurrence group. Mann-Whitney U test demonstrated that maximum size, dome volume, neck width, neck area, and coil packing density were significantly different between the two groups (P < 0.05). Among the hemodynamic parameters, aneurysms in the recurrence group had significantly larger inflow and outflow areas in the control CFD and larger RFVs in the porous media CFD. Multivariate logistic regression analyses demonstrated that RFV was the only independently significant factor (odds ratio, 1.06; 95% confidence interval, 1.01-1.11; P = 0.016). The study findings suggest that RFV collected under porous media modeling predicts the recurrence of coiled aneurysms.
Computer vision system for egg volume prediction using backpropagation neural network
NASA Astrophysics Data System (ADS)
Siswantoro, J.; Hilman, M. Y.; Widiasri, M.
2017-11-01
Volume is one of considered aspects in egg sorting process. A rapid and accurate volume measurement method is needed to develop an egg sorting system. Computer vision system (CVS) provides a promising solution for volume measurement problem. Artificial neural network (ANN) has been used to predict the volume of egg in several CVSs. However, volume prediction from ANN could have less accuracy due to inappropriate input features or inappropriate ANN structure. This paper proposes a CVS for predicting the volume of egg using ANN. The CVS acquired an image of egg from top view and then processed the image to extract its 1D and 2 D size features. The features were used as input for ANN in predicting the volume of egg. The experiment results show that the proposed CSV can predict the volume of egg with a good accuracy and less computation time.
Tree volume and biomass equations for the Lake States.
Jerold T. Hahn
1984-01-01
Presents species specific equations and methods for computing tree height, cubic foot, and board foot volume, and biomass for the Lake States (Michigan, Minnesota, and Wisconsin). Height equations compute either total or merchantable height to a variable top d.o.b. from d.b.h., site index, and basal area. Volumes and biomass are computed from d.b.h. and height.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Henry; Wang, Cong; Winterfeld, Philip
An efficient modeling approach is described for incorporating arbitrary 3D, discrete fractures, such as hydraulic fractures or faults, into modeling fracture-dominated fluid flow and heat transfer in fractured geothermal reservoirs. This technique allows 3D discrete fractures to be discretized independently from surrounding rock volume and inserted explicitly into a primary fracture/matrix grid, generated without including 3D discrete fractures in prior. An effective computational algorithm is developed to discretize these 3D discrete fractures and construct local connections between 3D fractures and fracture/matrix grid blocks of representing the surrounding rock volume. The constructed gridding information on 3D fractures is then added tomore » the primary grid. This embedded fracture modeling approach can be directly implemented into a developed geothermal reservoir simulator via the integral finite difference (IFD) method or with TOUGH2 technology This embedded fracture modeling approach is very promising and computationally efficient to handle realistic 3D discrete fractures with complicated geometries, connections, and spatial distributions. Compared with other fracture modeling approaches, it avoids cumbersome 3D unstructured, local refining procedures, and increases computational efficiency by simplifying Jacobian matrix size and sparsity, while keeps sufficient accuracy. Several numeral simulations are present to demonstrate the utility and robustness of the proposed technique. Our numerical experiments show that this approach captures all the key patterns about fluid flow and heat transfer dominated by fractures in these cases. Thus, this approach is readily available to simulation of fractured geothermal reservoirs with both artificial and natural fractures.« less
NASA Technical Reports Server (NTRS)
Demerdash, Nabeel A. O.; Wang, Ren-Hong
1988-01-01
The main purpose of this project is the development of computer-aided models for purposes of studying the effects of various design changes on the parameters and performance characteristics of the modified Lundell class of alternators (MLA) as components of a solar dynamic power system supplying electric energy needs in the forthcoming space station. Key to this modeling effort is the computation of magnetic field distribution in MLAs. Since the nature of the magnetic field is three-dimensional, the first step in the investigation was to apply the finite element method to discretize volume, using the tetrahedron as the basic 3-D element. Details of the stator 3-D finite element grid are given. A preliminary look at the early stage of a 3-D rotor grid is presented.
Interactive computer aided technology, evolution in the design/manufacturing process
NASA Technical Reports Server (NTRS)
English, C. H.
1975-01-01
A powerful computer-operated three dimensional graphic system and associated auxiliary computer equipment used in advanced design, production design, and manufacturing was described. This system has made these activities more productive than when using older and more conventional methods to design and build aerospace vehicles. With the use of this graphic system, designers are now able to define parts using a wide variety of geometric entities, define parts as fully surface 3-dimensional models as well as "wire-frame" models. Once geometrically defined, the designer is able to take section cuts of the surfaced model and automatically determine all of the section properties of the planar cut, lightpen detect all of the surface patches and automatically determine the volume and weight of the part. Further, his designs are defined mathematically at a degree of accuracy never before achievable.
Pitcher, Brandon; Alaqla, Ali; Noujeim, Marcel; Wealleans, James A; Kotsakis, Georgios; Chrepa, Vanessa
2017-03-01
Cone-beam computed tomographic (CBCT) analysis allows for 3-dimensional assessment of periradicular lesions and may facilitate preoperative periapical cyst screening. The purpose of this study was to develop and assess the predictive validity of a cyst screening method based on CBCT volumetric analysis alone or combined with designated radiologic criteria. Three independent examiners evaluated 118 presurgical CBCT scans from cases that underwent apicoectomies and had an accompanying gold standard histopathological diagnosis of either a cyst or granuloma. Lesion volume, density, and specific radiologic characteristics were assessed using specialized software. Logistic regression models with histopathological diagnosis as the dependent variable were constructed for cyst prediction, and receiver operating characteristic curves were used to assess the predictive validity of the models. A conditional inference binary decision tree based on a recursive partitioning algorithm was constructed to facilitate preoperative screening. Interobserver agreement was excellent for volume and density, but it varied from poor to good for the radiologic criteria. Volume and root displacement were strong predictors for cyst screening in all analyses. The binary decision tree classifier determined that if the volume of the lesion was >247 mm 3 , there was 80% probability of a cyst. If volume was <247 mm 3 and root displacement was present, cyst probability was 60% (78% accuracy). The good accuracy and high specificity of the decision tree classifier renders it a useful preoperative cyst screening tool that can aid in clinical decision making but not a substitute for definitive histopathological diagnosis after biopsy. Confirmatory studies are required to validate the present findings. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
This manual describes how to use the Emulation Simulation Computer Model (ESCM). Based on G189A, ESCM computes the transient performance of a Space Station atmospheric revitalization subsystem (ARS) with CO2 removal provided by a solid amine water desorbed subsystem called SAWD. Many performance parameters are computed some of which are cabin CO2 partial pressure, relative humidity, temperature, O2 partial pressure, and dew point. The program allows the user to simulate various possible combinations of man loading, metabolic profiles, cabin volumes and certain hypothesized failures that could occur.
NASA Astrophysics Data System (ADS)
Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina
2015-11-01
A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variabilities in these steps the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6% and slightly higher if considering the root mean square error. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
QSPR modeling: graph connectivity indices versus line graph connectivity indices
Basak; Nikolic; Trinajstic; Amic; Beslo
2000-07-01
Five QSPR models of alkanes were reinvestigated. Properties considered were molecular surface-dependent properties (boiling points and gas chromatographic retention indices) and molecular volume-dependent properties (molar volumes and molar refractions). The vertex- and edge-connectivity indices were used as structural parameters. In each studied case we computed connectivity indices of alkane trees and alkane line graphs and searched for the optimum exponent. Models based on indices with an optimum exponent and on the standard value of the exponent were compared. Thus, for each property we generated six QSPR models (four for alkane trees and two for the corresponding line graphs). In all studied cases QSPR models based on connectivity indices with optimum exponents have better statistical characteristics than the models based on connectivity indices with the standard value of the exponent. The comparison between models based on vertex- and edge-connectivity indices gave in two cases (molar volumes and molar refractions) better models based on edge-connectivity indices and in three cases (boiling points for octanes and nonanes and gas chromatographic retention indices) better models based on vertex-connectivity indices. Thus, it appears that the edge-connectivity index is more appropriate to be used in the structure-molecular volume properties modeling and the vertex-connectivity index in the structure-molecular surface properties modeling. The use of line graphs did not improve the predictive power of the connectivity indices. Only in one case (boiling points of nonanes) a better model was obtained with the use of line graphs.
NASA Astrophysics Data System (ADS)
Newman, Gregory A.
2014-01-01
Many geoscientific applications exploit electrostatic and electromagnetic fields to interrogate and map subsurface electrical resistivity—an important geophysical attribute for characterizing mineral, energy, and water resources. In complex three-dimensional geologies, where many of these resources remain to be found, resistivity mapping requires large-scale modeling and imaging capabilities, as well as the ability to treat significant data volumes, which can easily overwhelm single-core and modest multicore computing hardware. To treat such problems requires large-scale parallel computational resources, necessary for reducing the time to solution to a time frame acceptable to the exploration process. The recognition that significant parallel computing processes must be brought to bear on these problems gives rise to choices that must be made in parallel computing hardware and software. In this review, some of these choices are presented, along with the resulting trade-offs. We also discuss future trends in high-performance computing and the anticipated impact on electromagnetic (EM) geophysics. Topics discussed in this review article include a survey of parallel computing platforms, graphics processing units to multicore CPUs with a fast interconnect, along with effective parallel solvers and associated solver libraries effective for inductive EM modeling and imaging.
Computer Aided Design of Polyhedron Solids to Model Air in Com-Geom Descriptions
1983-08-01
34The GIFT Code User Manual, Volume I, Introduction and Input Requirements," BRL Report No. 1802, July 1975 (Unclassified). (AD B0060Z7LK 2G...Kuehl, L. Bain and M. Reisinger, "The GIFT Code User Manual, Volume II, The Output Options," BRL Report ARBRL-TR-02189, September 1979...is generated from the GIFT code under op- tion XSECT. This option produces plot files which define cross- sectional views of the COM-GEOM
3D liver volume reconstructed for palpation training.
Tibamoso, Gerardo; Perez-Gutierrez, Byron; Uribe-Quevedo, Alvaro
2013-01-01
Virtual Reality systems for medical procedures such as the palpation of different organs, requires fast, robust, accurate and reliable computational methods for providing realism during interaction with the 3D biological models. This paper presents the segmentation, reconstruction and palpation simulation of a healthy liver volume as a tool for training. The chosen method considers the mechanical characteristics and liver properties for correctly simulating palpation interactions, which results appropriate as a complementary tool for training medical students in familiarizing with the liver anatomy.
The Â-genus as a Projective Volume form on the Derived Loop Space
NASA Astrophysics Data System (ADS)
Grady, Ryan
2018-06-01
In the present work, we extend our previous work with Gwilliam by realizing \\hat {A}(X) as the projective volume form associated to the BV operator in our quantization of a one-dimensional sigma model. We also discuss the associated integration/expectation map. We work in the formalism of L ∞ spaces, objects of which are computationally convenient presentations for derived stacks. Both smooth and complex geometry embed into L ∞ spaces and we specialize our results in both of these cases.
Brain tissues volume measurements from 2D MRI using parametric approach
NASA Astrophysics Data System (ADS)
L'vov, A. A.; Toropova, O. A.; Litovka, Yu. V.
2018-04-01
The purpose of the paper is to propose a fully automated method of volume assessment of structures within human brain. Our statistical approach uses maximum interdependency principle for decision making process of measurements consistency and unequal observations. Detecting outliers performed using maximum normalized residual test. We propose a statistical model which utilizes knowledge of tissues distribution in human brain and applies partial data restoration for precision improvement. The approach proposes completed computationally efficient and independent from segmentation algorithm used in the application.
Volume dependence of baryon number cumulants and their ratios
Almási, Gábor A.; Pisarski, Robert D.; Skokov, Vladimir V.
2017-03-17
Here, we explore the influence of finite-volume effects on cumulants of baryon/quark number fluctuations in a nonperturbative chiral model. In order to account for soft modes, we use the functional renormalization group in a finite volume, using a smooth regulator function in momentum space. We compare the results for a smooth regulator with those for a sharp (or Litim) regulator, and show that in a finite volume, the latter produces spurious artifacts. In a finite volume there are only apparent critical points, about which we compute the ratio of the fourth- to the second-order cumulant of quark number fluctuations. Finally,more » when the volume is sufficiently small the system has two apparent critical points; as the system size decreases, the location of the apparent critical point can move to higher temperature and lower chemical potential.« less
Enabling Grid Computing resources within the KM3NeT computing model
NASA Astrophysics Data System (ADS)
Filippidis, Christos
2016-04-01
KM3NeT is a future European deep-sea research infrastructure hosting a new generation neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.
Accurate Characterization of the Pore Volume in Microporous Crystalline Materials
2017-01-01
Pore volume is one of the main properties for the characterization of microporous crystals. It is experimentally measurable, and it can also be obtained from the refined unit cell by a number of computational techniques. In this work, we assess the accuracy and the discrepancies between the different computational methods which are commonly used for this purpose, i.e, geometric, helium, and probe center pore volumes, by studying a database of more than 5000 frameworks. We developed a new technique to fully characterize the internal void of a microporous material and to compute the probe-accessible and -occupiable pore volume. We show that, unlike the other definitions of pore volume, the occupiable pore volume can be directly related to the experimentally measured pore volumes from nitrogen isotherms. PMID:28636815
Accurate Characterization of the Pore Volume in Microporous Crystalline Materials
Ongari, Daniele; Boyd, Peter G.; Barthel, Senja; ...
2017-06-21
Pore volume is one of the main properties for the characterization of microporous crystals. It is experimentally measurable, and it can also be obtained from the refined unit cell by a number of computational techniques. In this work, we assess the accuracy and the discrepancies between the different computational methods which are commonly used for this purpose, i.e, geometric, helium, and probe center pore volumes, by studying a database of more than 5000 frameworks. We developed a new technique to fully characterize the internal void of a microporous material and to compute the probe-accessible and -occupiable pore volume. Lasty, wemore » show that, unlike the other definitions of pore volume, the occupiable pore volume can be directly related to the experimentally measured pore volumes from nitrogen isotherms.« less
Experimental Study and CFD Simulation of a 2D Circulating Fluidized Bed
NASA Astrophysics Data System (ADS)
Kallio, S.; Guldén, M.; Hermanson, A.
Computational fluid dynamics (CFD) gains popularity in fluidized bed modeling. For model validation, there is a need of detailed measurements under well-defined conditions. In the present study, experiments were carried out in a 40 em wide and 3 m high 2D circulating fluidized bed. Two experiments were simulated by means of the Eulerian multiphase models of the Fluent CFD software. The vertical pressure and solids volume fraction profiles and the solids circulation rate obtained from the simulation were compared to the experimental results. In addition, lateral volume fraction profiles could be compared. The simulated CFB flow patterns and the profiles obtained from simulations were in general in a good agreement with the experimental results.
Development of Computational Simulation Tools to Model Weapon Propulsors
2004-01-01
Calculation in Permanent Magnet Motors with Rotor Eccentricity: With Slotting Effect Considered," IEEE Transactions on Magnetics, Volume 34, No. 4, 2253-2266...1998). [3] Lieu, Dennis K., Kim, Ungtae. "Magnetic Field Calculation in Permanent Magnet Motors with Rotor Eccentricity: Without Slotting Effect
DOT National Transportation Integrated Search
1978-01-01
A system analysis was completed of the general deterrence of driving while intoxicated (DWI). Elements which influence DWI decisions were identified and interrelated in a system model; then, potential countermeasures which might be employed in DWI ge...
AOPs and Biomarkers: Bridging High Throughput Screening and Regulatory Decision Making
As high throughput screening (HTS) plays a larger role in toxicity testing, camputational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models designed to quantify potential adverse effects based on HTS data will b...
Bootstrapping Least Squares Estimates in Biochemical Reaction Networks
Linder, Daniel F.
2015-01-01
The paper proposes new computational methods of computing confidence bounds for the least squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large volume limit of a reaction network, to network’s partially observed trajectory treated as a continuous-time, pure jump Markov process. In the large volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maire, Pierre-Henri, E-mail: maire@celia.u-bordeaux1.fr; Abgrall, Rémi, E-mail: remi.abgrall@math.u-bordeau1.fr; Breil, Jérôme, E-mail: breil@celia.u-bordeaux1.fr
2013-02-15
In this paper, we describe a cell-centered Lagrangian scheme devoted to the numerical simulation of solid dynamics on two-dimensional unstructured grids in planar geometry. This numerical method, utilizes the classical elastic-perfectly plastic material model initially proposed by Wilkins [M.L. Wilkins, Calculation of elastic–plastic flow, Meth. Comput. Phys. (1964)]. In this model, the Cauchy stress tensor is decomposed into the sum of its deviatoric part and the thermodynamic pressure which is defined by means of an equation of state. Regarding the deviatoric stress, its time evolution is governed by a classical constitutive law for isotropic material. The plasticity model employs themore » von Mises yield criterion and is implemented by means of the radial return algorithm. The numerical scheme relies on a finite volume cell-centered method wherein numerical fluxes are expressed in terms of sub-cell force. The generic form of the sub-cell force is obtained by requiring the scheme to satisfy a semi-discrete dissipation inequality. Sub-cell force and nodal velocity to move the grid are computed consistently with cell volume variation by means of a node-centered solver, which results from total energy conservation. The nominally second-order extension is achieved by developing a two-dimensional extension in the Lagrangian framework of the Generalized Riemann Problem methodology, introduced by Ben-Artzi and Falcovitz [M. Ben-Artzi, J. Falcovitz, Generalized Riemann Problems in Computational Fluid Dynamics, Cambridge Monogr. Appl. Comput. Math. (2003)]. Finally, the robustness and the accuracy of the numerical scheme are assessed through the computation of several test cases.« less
Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei
2013-08-01
A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.
Fovargue, Daniel E.; Mitran, Sorin; Smith, Nathan B.; Sankin, Georgy N.; Simmons, Walter N.; Zhong, Pei
2013-01-01
A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model. PMID:23927200
Evaluating the feasibility of the KDIGO CKD referral recommendations.
Singh, Karandeep; Waikar, Sushrut S; Samal, Lipika
2017-07-07
In 2012, the international nephrology organization Kidney Disease Improving Global Outcomes (KDIGO) released recommendations for nephrology referral for chronic kidney disease (CKD) patients. The feasibility of adhering to these recommendations is unknown. We conducted a retrospective analysis of the primary care population at Brigham and Women's Hospital (BWH). We translated referral recommendations based upon serum creatinine, estimated glomerular filtration rate (eGFR), and albuminuria into a set of computable criteria in order to project referral volume if the KDIGO referral recommendations were to be implemented. Using electronic health record data, we evaluated each patient using the computable criteria at the times that the patient made clinic visits in 2013. We then compared the projected referral volume with baseline nephrology clinic volume. Out of 56,461 primary care patients at BWH, we identified 5593 (9.9%) who had CKD based on albuminuria or estimated GFR. Referring patients identified by the computable criteria would have resulted in 2240 additional referrals to nephrology. In 2013, this would represent a 38.0% (2240/5892) increase in total nephrology patient volume and 67.3% (2240/3326) increase in new referral volume. This is the first study to examine the projected impact of implementing the 2012 KDIGO referral recommendations. Given the large increase in the number of referrals, this study is suggestive that implementing the KDIGO referral guidelines may not be feasible under current practice models due to a supply-demand mismatch. We need to consider new strategies on how to deliver optimal care to CKD patients using the available workforce in the U.S. health care system.
The Preliminary Design of a Standardized Spacecraft Bus for Small Tactical Satellites (Volume 2)
1996-11-01
this requirement, conditions of the model need to be modified to provide some flexibility to the original solution set. In the business world this...time The mission modules modeled in the Modsat computer model are necessarily "generic" in nature to provide both flexibility in design evaluation and...methods employed during the study, the scope of the problem, the value system used to evaluate alternatives, tradeoff studies performed, modeling tools
NASA Technical Reports Server (NTRS)
Hollis, Brian R.
1995-01-01
A FORTRAN computer code for the reduction and analysis of experimental heat transfer data has been developed. This code can be utilized to determine heat transfer rates from surface temperature measurements made using either thin-film resistance gages or coaxial surface thermocouples. Both an analytical and a numerical finite-volume heat transfer model are implemented in this code. The analytical solution is based on a one-dimensional, semi-infinite wall thickness model with the approximation of constant substrate thermal properties, which is empirically corrected for the effects of variable thermal properties. The finite-volume solution is based on a one-dimensional, implicit discretization. The finite-volume model directly incorporates the effects of variable substrate thermal properties and does not require the semi-finite wall thickness approximation used in the analytical model. This model also includes the option of a multiple-layer substrate. Fast, accurate results can be obtained using either method. This code has been used to reduce several sets of aerodynamic heating data, of which samples are included in this report.
[Clinical evaluation of heavy-particle radiotherapy using dose volume histogram (DVH)].
Terahara, A; Nakano, T; Tsujii, H
1998-01-01
Radiotherapy with heavy particles such as proton and heavy-charged particles is a promising modality for treatment of localized malignant tumors because of the good dose distribution. A dose calculation and radiotherapy planning system which is essential for this kind of treatment has been developed in recent years. It has the capability to compute the dose volume histogram (DVH) which contains dose-volume information for the target volume and other interesting volumes. Recently, DVH is commonly used to evaluate and compare dose distributions in radiotherapy with both photon and heavy particles, and it shows that a superior dose distribution is obtained in heavy particle radiotherapy. DVH is also utilized for the evaluation of dose distribution related to clinical outcomes. Besides models such as normal tissue complication probability (NTCP) and tumor control probability (TCP), which can be calculated from DVH are proposed by several authors, they are applied to evaluate dose distributions themselves and to evaluate them in relation to clinical results. DVH is now a useful and important tool, but further studies are needed to use DVH and these models practically for clinical evaluation of heavy-particle radiotherapy.
A DDC Bibliography on Computers in Information Sciences. Volume II. Information Sciences Series.
ERIC Educational Resources Information Center
Defense Documentation Center, Alexandria, VA.
The unclassified and unlimited bibliography compiles references dealing specifically with the role of computers in information sciences. The volume contains 239 annotated references grouped under three major headings: Artificial and Programming Languages, Computer Processing of Analog Data, and Computer Processing of Digital Data. The references…
Jiao, Y; Chen, R; Ke, X; Cheng, L; Chu, K; Lu, Z; Herskovits, E H
2011-01-01
Autism spectrum disorder (ASD) is a neurodevelopmental disorder, of which Asperger syndrome and high-functioning autism are subtypes. Our goal is: 1) to determine whether a diagnostic model based on single-nucleotide polymorphisms (SNPs), brain regional thickness measurements, or brain regional volume measurements can distinguish Asperger syndrome from high-functioning autism; and 2) to compare the SNP, thickness, and volume-based diagnostic models. Our study included 18 children with ASD: 13 subjects with high-functioning autism and 5 subjects with Asperger syndrome. For each child, we obtained 25 SNPs for 8 ASD-related genes; we also computed regional cortical thicknesses and volumes for 66 brain structures, based on structural magnetic resonance (MR) examination. To generate diagnostic models, we employed five machine-learning techniques: decision stump, alternating decision trees, multi-class alternating decision trees, logistic model trees, and support vector machines. For SNP-based classification, three decision-tree-based models performed better than the other two machine-learning models. The performance metrics for three decision-tree-based models were similar: decision stump was modestly better than the other two methods, with accuracy = 90%, sensitivity = 0.95 and specificity = 0.75. All thickness and volume-based diagnostic models performed poorly. The SNP-based diagnostic models were superior to those based on thickness and volume. For SNP-based classification, rs878960 in GABRB3 (gamma-aminobutyric acid A receptor, beta 3) was selected by all tree-based models. Our analysis demonstrated that SNP-based classification was more accurate than morphometry-based classification in ASD subtype classification. Also, we found that one SNP--rs878960 in GABRB3--distinguishes Asperger syndrome from high-functioning autism.
Computed 88% TCP dose for SBRT of NSCLC from tumour hypoxia modelling
NASA Astrophysics Data System (ADS)
Ruggieri, Ruggero; Stavreva, Nadejda; Naccarato, Stefania; Stavrev, Pavel
2013-07-01
In small NSCLC, 88% local control at three years from SBRT was reported both for schedule (20-22 Gy ×3) (Fakiris et al 2009 Int. J. Radiat. Oncol. Biol. Phys. 75 677-82), actually close to (18-20 Gy ×3) if density correction is properly applied, and for schedules (18 Gy ×3) and (11 Gy ×5) (Palma et al 2012 Int. J. Radiat. Oncol. Biol. Phys. 82 1149-56). Here, we compare our computed iso-TCP = 88% dose per fraction (d88) for three and five fractions (n) with such clinically adopted ones. Our TCP model accounts for tumour repopulation, at rate λ (d-1), reoxygenation of chronic hypoxia (ch-), at rate a (d-1) and fluctuating oxygenation of acute hypoxia (ah-), with hypoxic fraction (C) of the acutely hypoxic fractional volume (AHF). Out of the eight free parameters whose values we had fitted to in vivo animal data (Ruggieri et al 2012 Int. J. Radiat. Oncol. Biol. Phys. 83 1603-8), we here maintained (a(d-1), C, OERch, OERah/OERch, AHF, CHF) = (0.026, 0.17, 1.9, 2.2, 0.033, 0.145) while rescaling the initial total number of clonogens (No) according to the ratio of NSCLC on animal median tumour volumes. From the clinical literature, the usually assumed (αo/βo(Gy), λ(d-1)) = (10, 0.217) for the well-oxygenated (o-)cells were taken. By normal (lognormal) random sampling of all parameter values over their 95% C.I., the uncertainty on present d88(n) computations was estimated. Finally, SBRT intra-tumour dose heterogeneity was simulated by a 1.3 dose boost ratio on 50% of tumour volume. Computed d88(±1σ) were 19.0 (16.3; 21.7) Gy, for n = 3; 10.4 (8.7; 12.1) Gy, for n = 5; 5.8 (5.2; 6.4) Gy, for n = 8; 4.0 (3.6; 4.3) Gy, for n = 12. Furthermore, the iso-TCP = 88% total dose, D88(n) = d88(n)*n, exhibited a relative minimum around n = 8. Computed d88(n = 3, 5) are strictly consistent with the clinically adopted ones, which confirms the validity of LQ-model-based TCP predictions at the doses used in SBRT if a highly radioresistant cell subpopulation is properly modelled. The computed minimum D88(n) around n = 8 suggests the adoption of 6 ≤ n ≤ 10 instead of n = 3 in SBRT of small NSCLC tumours.
Level-Set Simulation of Viscous Free Surface Flow Around a Commercial Hull Form
2005-04-15
Abstract The viscous free surface flow around a 3600 TEU KRISO Container Ship is computed using the finite volume based multi-block RANS code, WAVIS...developed at KRISO . The free surface is captured with the Level-set method and the realizable k-ε model is employed for turbulence closure. The...computations are done for a 3600 TEU container ship of Korea Research Institute of Ships & Ocean Engineering, KORDI (hereafter, KRISO ) selected as
ESnet: Large-Scale Science and Data Management ( (LBNL Summer Lecture Series)
Johnston, Bill
2017-12-09
Summer Lecture Series 2004: Bill Johnston of Berkeley Lab's Computing Sciences is a distinguished networking and computing researcher. He managed the Energy Sciences Network (ESnet), a leading-edge, high-bandwidth network funded by DOE's Office of Science. Used for everything from videoconferencing to climate modeling, and flexible enough to accommodate a wide variety of data-intensive applications and services, ESNet's traffic volume is doubling every year and currently surpasses 200 terabytes per month.
NASA Technical Reports Server (NTRS)
Demerdash, N. A. O.; Nehl, T. W.
1979-01-01
A description and user's guide of the computer program developed to simulate the dynamics of an electromechanical actuator for aerospace applications are presented. The effects of the stator phase currents on the permanent magnets of the rotor are examined. The voltage and current waveforms present in the power conditioner network during the motoring, regenerative braking, and plugging modes of operation are presented and discussed.
Morphological phenotyping of mouse hearts using optical coherence tomography
NASA Astrophysics Data System (ADS)
Cua, Michelle; Lin, Eric; Lee, Ling; Sheng, Xiaoye; Wong, Kevin S. K.; Tibbits, Glen F.; Beg, Mirza Faisal; Sarunic, Marinko V.
2014-11-01
Transgenic mouse models have been instrumental in the elucidation of the molecular mechanisms behind many genetically based cardiovascular diseases such as Marfan syndrome (MFS). However, the characterization of their cardiac morphology has been hampered by the small size of the mouse heart. In this report, we adapted optical coherence tomography (OCT) for imaging fixed adult mouse hearts, and applied tools from computational anatomy to perform morphometric analyses. The hearts were first optically cleared and imaged from multiple perspectives. The acquired volumes were then corrected for refractive distortions, and registered and stitched together to form a single, high-resolution OCT volume of the whole heart. From this volume, various structures such as the valves and myofibril bundles were visualized. The volumetric nature of our dataset also allowed parameters such as wall thickness, ventricular wall masses, and luminal volumes to be extracted. Finally, we applied the entire acquisition and processing pipeline in a preliminary study comparing the cardiac morphology of wild-type mice and a transgenic mouse model of MFS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ongari, Daniele; Boyd, Peter G.; Barthel, Senja
Pore volume is one of the main properties for the characterization of microporous crystals. It is experimentally measurable, and it can also be obtained from the refined unit cell by a number of computational techniques. In this work, we assess the accuracy and the discrepancies between the different computational methods which are commonly used for this purpose, i.e, geometric, helium, and probe center pore volumes, by studying a database of more than 5000 frameworks. We developed a new technique to fully characterize the internal void of a microporous material and to compute the probe-accessible and -occupiable pore volume. Lasty, wemore » show that, unlike the other definitions of pore volume, the occupiable pore volume can be directly related to the experimentally measured pore volumes from nitrogen isotherms.« less
Tanaka, Yutaka; Saito, Shigeru; Sasuga, Saeko; Takahashi, Azuma; Aoyama, Yusuke; Obama, Kazuto; Umezu, Mitsuo; Iwasaki, Kiyotaka
2018-05-01
Quantitative assessment of post-transcatheter aortic valve replacement (TAVR) aortic regurgitation (AR) remains challenging. We developed patient-specific anatomical models with pulsatile flow circuit and investigated factors associated with AR after TAVR. Based on pre-procedural computed tomography (CT) data of the six patients who underwent transfemoral TAVR using a 23-mm SAPIEN XT, anatomically and mechanically equivalent aortic valve models were developed. Forward flow and heart rate of each patient in two days after TAVR were duplicated under mean aortic pressure of 80mmHg. Paravalvular leakage (PVL) volume in basal and additional conditions was measured for each model using an electromagnetic flow sensor. Incompletely apposed tract between the transcatheter and aortic valves was examined using a micro-CT. PVL volume in each patient-specific model was consistent with each patient's PVL grade, and was affected by hemodynamic conditions. PVL and total regurgitation volume increased with the mean aortic pressure, whereas closing volume did not change. In contrast, closing volume increased proportionately with heart rate, but PVL did not change. The minimal cross-sectional gap had a positive correlation with the PVL volumes (r=0.89, P=0.02). The gap areas typically occurred in the vicinity of the bulky calcified nodules under the native commissure. PVL volume, which could be affected by hemodynamic conditions, was significantly associated with the minimal cross-sectional gap area between the aortic annulus and the stent frame. These data may improve our understanding of the mechanism of the occurrence of post-TAVR PVL. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Higher order turbulence closure models
NASA Technical Reports Server (NTRS)
Amano, Ryoichi S.; Chai, John C.; Chen, Jau-Der
1988-01-01
Theoretical models are developed and numerical studies conducted on various types of flows including both elliptic and parabolic. The purpose of this study is to find better higher order closure models for the computations of complex flows. This report summarizes three new achievements: (1) completion of the Reynolds-stress closure by developing a new pressure-strain correlation; (2) development of a parabolic code to compute jets and wakes; and, (3) application to a flow through a 180 deg turnaround duct by adopting a boundary fitted coordinate system. In the above mentioned models near-wall models are developed for pressure-strain correlation and third-moment, and incorporated into the transport equations. This addition improved the results considerably and is recommended for future computations. A new parabolic code to solve shear flows without coordinate tranformations is developed and incorporated in this study. This code uses the structure of the finite volume method to solve the governing equations implicitly. The code was validated with the experimental results available in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Oishik, E-mail: oishik-sen@uiowa.edu; Gaul, Nicholas J., E-mail: nicholas-gaul@ramdosolutions.com; Choi, K.K., E-mail: kyung-choi@uiowa.edu
Macro-scale computations of shocked particulate flows require closure laws that model the exchange of momentum/energy between the fluid and particle phases. Closure laws are constructed in this work in the form of surrogate models derived from highly resolved mesoscale computations of shock-particle interactions. The mesoscale computations are performed to calculate the drag force on a cluster of particles for different values of Mach Number and particle volume fraction. Two Kriging-based methods, viz. the Dynamic Kriging Method (DKG) and the Modified Bayesian Kriging Method (MBKG) are evaluated for their ability to construct surrogate models with sparse data; i.e. using the leastmore » number of mesoscale simulations. It is shown that if the input data is noise-free, the DKG method converges monotonically; convergence is less robust in the presence of noise. The MBKG method converges monotonically even with noisy input data and is therefore more suitable for surrogate model construction from numerical experiments. This work is the first step towards a full multiscale modeling of interaction of shocked particle laden flows.« less
Glass transition temperature of polymer nano-composites with polymer and filler interactions
NASA Astrophysics Data System (ADS)
Hagita, Katsumi; Takano, Hiroshi; Doi, Masao; Morita, Hiroshi
2012-02-01
We systematically studied versatile coarse-grained model (bead spring model) to describe filled polymer nano-composites for coarse-grained (Kremer-Grest model) molecular dynamics simulations. This model consists of long polymers, crosslink, and fillers. We used the hollow structure as the filler to describe rigid spherical fillers with small computing costs. Our filler model consists of surface particles of icosahedra fullerene structure C320 and a repulsive force from the center of the filler is applied to the surface particles in order to make a sphere and rigid. The filler's diameter is 12 times of beads of the polymers. As the first test of our model, we study temperature dependence of volumes of periodic boundary conditions under constant pressures through NPT constant Andersen algorithm. It is found that Glass transition temperature (Tg) decrease with increasing filler's volume fraction for the case of repulsive interaction between polymer and fillers and Tg weakly increase for attractive interaction.